Love4Boobies wrote: Hi. My proposal was to ... provide binary compatibility via bytecoded drivers. Although it would make UDI a bit more complicated, people should think of this in a similar way as they do about ACPICA: there's a reference implementation that everyone ports.
gravaera wrote: The difference between ACPICA and UDI is that ACPICA is meant to be provided on many different platforms and many different CPUs with the very same code; that is why it interprets a bytecoded language (AML). ACPICA itself is not abstracted. That is, it does not call into an operating system which has already abstracted everything for it; it is essentially highly machine-dependent in itself.
The reason OSes port ACPICA is not that it is less of an abstraction; it's that it's a huge amount of work that people would rather not waste time on. I can't see how Project UDI should be any different in this respect. UDI, just like ACPI, is supposed to work on many different platforms and many different types of CPUs.
gravaera wrote: UDI has a set of OS services it can call on, all of which return information in the same format. This means that there is no need to bytecode, since any platform on which a UDI driver runs will emulate the same behaviour with respect to the outputs of service calls. Therefore drivers can be compiled to run natively and expect a certain minimum amount of standard behaviour. This is the whole purpose of the UDI environment, which shields the driver from the embedding platform. The environment abstracts all of that from the driver so it can focus on its device.
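To make the environment idea concrete, here is a minimal sketch in C. Everything in it (env_region_t, env_map_device, env_read32, env_log, mydev_probe, the register values) is invented for illustration; these are not actual UDI 1.01 interfaces:

    /* Minimal sketch of the driver/environment split. All names are
     * hypothetical, not UDI 1.01 interfaces. */
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    typedef struct env_region { uint32_t regs[4]; } env_region_t;

    /* --- Environment side: ported per platform, knows the hardware. --- */
    static env_region_t fake_device = { .regs = { 0xBEEF0001u, 0, 0, 0 } };

    static env_region_t *env_map_device(const char *devnode)
    {
        /* A real environment would look the device up and map its MMIO. */
        return strcmp(devnode, "mydev0") == 0 ? &fake_device : NULL;
    }

    static uint32_t env_read32(env_region_t *r, uint32_t word)
    {
        return r->regs[word];   /* endianness/MMIO details hidden here */
    }

    static void env_log(const char *msg) { puts(msg); }

    /* --- Driver side: portable, never touches the platform directly. --- */
    static int mydev_probe(void)
    {
        env_region_t *regs = env_map_device("mydev0");
        if (regs == NULL || env_read32(regs, 0) != 0xBEEF0001u)
            return -1;
        env_log("mydev: device identified");
        return 0;
    }

    int main(void) { return mydev_probe() == 0 ? 0 : 1; }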
That's not the point of the bytecode at all. (Already clarified on IRC; anyone who's still confused should re-read my post.)
gravaera wrote: Therefore, the environment and the UDI source compatibility together provide *more than enough* independence for anyone who needs it, and with a simple re-compile, you can have a driver for your platform, assuming the device will work with the driver.
What about proprietary drivers, which users cannot simply recompile?
gravaera wrote: That's another reason why going so far is almost nonsensical: devices themselves, unless implemented on a bus which is sufficiently abstracted (such as PCI), are generally tied to a particular chipset. And many chipsets may have the same device tied to different MMIO ranges, without a self-enumerating bus to provide the MMIO range for the driver. This means that you will have to have different drivers for each chipset, even for the same device, or #ifdefs for each chipset the device is known to be on, anyway. There are too many cases like this for a bytecoded language to be in any way necessary. In fact, given that this is the common case on most platforms (custom, embedded, etc.), the practicality of trying to make drivers that abstracted is reduced to almost zero.
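A sketch of the per-chipset situation being described. The chipset names and addresses are made up; compile with, e.g., -DCHIPSET_FOO1000. Without an enumerable bus, the base address has to be baked in at compile time:

    /* Hypothetical example: the same UART IP block, hard-wired at different
     * addresses on different (fictional) chipsets, with no bus to ask. */
    #include <stdint.h>

    #if defined(CHIPSET_FOO1000)
    #  define UART0_BASE 0x10009000u   /* fictional address */
    #elif defined(CHIPSET_BAR2000)
    #  define UART0_BASE 0x4806A000u   /* fictional address */
    #else
    #  error "unknown chipset: nothing can tell us where the UART lives"
    #endif

    #define UART_TXDATA (*(volatile uint32_t *)(UART0_BASE + 0x0))

    void uart_putc(char c)
    {
        /* The device logic is identical everywhere; only the base differs. */
        UART_TXDATA = (uint32_t)c;
    }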
MCA alone was available for 3 architectures. PCI and USB are available for many more, and they are not the only two buses. And even if what you said were true, I can't see why bytecode wouldn't work: the driver needn't be aware of any ports or MMIO ranges; the environment needs to be.
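Put another way, the per-chipset knowledge can live in the environment, which is ported once per platform, while the driver only ever sees an opaque handle. This continues the hypothetical sketch from above (mydev_attach and the env_* services are still invented names):

    /* The environment owns the base addresses; the portable driver
     * receives an already-mapped register window. Names are illustrative. */
    #include <stdint.h>

    typedef struct env_region env_region_t;   /* opaque to the driver */
    extern uint32_t env_read32(env_region_t *r, uint32_t word);
    extern void     env_write32(env_region_t *r, uint32_t word, uint32_t val);

    /* The environment calls this with the register window it resolved from
     * its own per-platform configuration, not from anything in the driver. */
    int mydev_attach(env_region_t *regs)
    {
        env_write32(regs, 1, 1u);                   /* hypothetical enable bit */
        return env_read32(regs, 2) == 0 ? 0 : -1;   /* hypothetical status word */
    }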
gravaera wrote: Remember, we're not talking about ordinary software; this is about drivers. These, no matter how hard you try, will never be fully portable for all time, since there are different companies and vendors with different budgets and different levels of skill among their workers (and thus different levels of compliance and ability, even with the implementation of a self-enumerating bus).
What does the budget of the company have to do with the driver interface?
gravaera wrote: To be perfectly honest, I didn't understand a lot of what you said there, since you seem to be trying to make this too platform-specific, i.e. much ado about nothing. Things like taking advantage of SSE, 3DNow!, etc. are really supposed to be up to the embedding OS. All you need is a platform features enumeration API, and that's it. IIRC, the UDI 1.1 spec even mentions that it does not allow floating-point operations.
"UDI shouldn't do fancy stuff because there are architectures out there that can't do it." If that were the case, then UDI itself should be forever banned. Also, I don't know about UDI 1.1 as it does not yet exist but the 1.01 specification says that a FPU can be used when present.
gravaera wrote: One thing to note: even if you use a bytecoded language to distribute the drivers and then compile them down to native instructions, you will end up having to ask the embedding OS whether or not advanced instruction sets like SSE are available before executing code that uses them. And even if you run the bytecode under a JIT or whatever, you'll STILL have to ask the embedding OS, and in the end you have a huge JIT compiler in the environment just to end up doing the same thing as natively compiled drivers.
What's the problem with asking the OS about the architecture? Of course you ask it. And I have no clue where you got the JIT idea from; I talked about an AOT compiler that is part of the install process.
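For what it's worth, the runtime query that native drivers need is tiny anyway. A hosted sketch using GCC/Clang's __builtin_cpu_supports builtin (x86 targets only; copy_sse2 and copy_generic are hypothetical stand-ins for an optimized path and a fallback):

    /* Sketch: selecting an optimized path at runtime. __builtin_cpu_supports
     * is a real GCC/Clang builtin; the copy_* functions are hypothetical. */
    #include <stddef.h>
    #include <stdio.h>

    static void copy_generic(char *dst, const char *src, size_t n)
    {
        while (n--) *dst++ = *src++;
    }

    /* Imagine an SSE2-optimized version here; for the sketch it just
     * falls through to the generic one. */
    static void copy_sse2(char *dst, const char *src, size_t n)
    {
        copy_generic(dst, src, n);
    }

    int main(void)
    {
        char src[16] = "hello, world";
        char dst[16];

        __builtin_cpu_init();   /* safe to call before querying features */
        if (__builtin_cpu_supports("sse2"))
            copy_sse2(dst, src, sizeof src);
        else
            copy_generic(dst, src, sizeof src);

        puts(dst);
        return 0;
    }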
The rest of the post is a rant, so I won't discuss it any further.
JohnnyTheDon wrote: As far as bytecode goes, you could use LLVM, which would solve the SSE/3DNow!/optimization problem quite nicely. This would add virtually no complexity (you simply distribute drivers as LLVM bitcode files; users can then use LLVM to compile them to native code tailored to their processor) and would still allow you to provide native drivers in an architecture-specific binary format.
I've thought about this. I need to check the license; we don't want it to be restrictive.
JohnnyTheDon wrote: As long as you are using a standard C ABI between the drivers and the OS, it doesn't really matter how exactly they are compiled. I agree that there is no need for a new bytecode system, built into UDI 2.0, that every OS would need to implement, but there is a definite need, as Love4Boobies stated, for a way to perform processor-specific optimization of drivers. If the UDI 2.0 spec simply suggests (or requires) LLVM as the method for compiling and distributing drivers, the problem is solved.
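For reference, the flow being suggested is roughly the following with current LLVM tools (driver.c and driver.bc are placeholder names):

    # Vendor side: compile the driver once to LLVM bitcode and ship driver.bc.
    clang -O2 -emit-llvm -c driver.c -o driver.bc

    # Install side: lower the bitcode to an object file tuned for the host CPU.
    llc -O2 -mcpu=native -filetype=obj driver.bc -o driver.o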
Again, we'd be losing proprietary drivers.