Love4Boobies wrote: Hi.
My proposal was to ... provide binary compatibility via bytecoded drivers. Although it would make UDI a bit more complicated, people should think of this in a similar way as they do about ACPICA: there's a reference implementation that everyone ports.
The difference between the ACPI case and UDI is that ACPI's AML is meant to execute on many different platforms and many different CPUs as the very same code; that is exactly why it is a bytecoded language, with ACPICA as the reference interpreter that everyone ports. AML is not abstracted: it does not call on an operating system which has already abstracted everything for it. It is essentially, in itself, highly machine dependent.
UDI, by contrast, gives the driver a set of OS services it can call on, all of which return information in the same format. This means there is no need for bytecode, since any platform on which a UDI driver runs will present the same behaviour with respect to the outputs of service calls. Drivers can therefore be compiled to run natively and still expect a certain minimum amount of standard behaviour. This is the whole purpose of the UDI environment, which shields the driver from the embedding platform: the environment abstracts all of that away from the driver so it can focus on its device.
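To make that concrete, here is roughly what a driver-side service call looks like. This is a minimal sketch based on my reading of the UDI 1.01 core spec's udi_mem_alloc; treat the exact signatures and flag names as illustrative rather than authoritative.

#include <udi.h>  /* the one API every conforming environment provides */

/* Completion callback: the environment hands the result back in the
 * same form on every host platform. */
static void my_alloc_done(udi_cb_t *gcb, void *new_mem)
{
        /* use new_mem; nothing here depends on the host OS allocator */
}

static void my_init_step(udi_cb_t *gcb)
{
        /* Asynchronous, non-blocking request to the environment;
         * identical semantics regardless of the embedding OS. */
        udi_mem_alloc(my_alloc_done, gcb, 512, UDI_MEM_NOZERO);
}

The driver never learns how the host's allocator or interrupt model actually works; the environment fills all of that in.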
Therefore, the environment and UDI's source-level compatibility together provide *more than enough* independence for anyone who needs it: with a simple re-compile, you can have a driver for your platform, assuming the device is one the driver actually supports.
That's another reason why going so far is almost nonsensical: devices themselves, unless implemented on a bus which is sufficiently abstracted (such as PCI), are generally tied to a particular chipset. Many chipsets may have the same device tied to different MMIO ranges, with no self-enumerating bus to provide the MMIO range to the driver. This means that you will need different drivers for each chipset, even for the same device, or #ifdefs for each chipset the device is known to be on, anyway. There are too many cases like this for a bytecoded language to be in any way necessary. In fact, given that this is the common case on most platforms (custom, embedded, etc.), the practicality of trying to make drivers that abstracted is reduced to almost zero.
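For a concrete illustration (every identifier below is invented), imagine the same UART IP block wired to different MMIO bases on two hypothetical SoCs, with no enumerable bus to report the address:

/* Hypothetical chipsets and addresses, purely for illustration. */
#if defined(CHIPSET_FOO_SOC)
#  define MYUART_MMIO_BASE 0x10009000UL
#elif defined(CHIPSET_BAR_SOC)
#  define MYUART_MMIO_BASE 0x4006A000UL
#else
#  error "Unknown chipset: the device's MMIO range cannot be discovered"
#endif

No distribution format, bytecoded or otherwise, removes the need for that per-chipset knowledge; somebody has to bake it in at build time either way.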
Remember, we're not talking about application software here; this is about drivers. These, no matter how hard you try, will never be fully portable for all time, since there are different companies and vendors with different budgets and different levels of skill in their engineering teams (and thus different levels of compliance and correctness, even where a self-enumerating bus is implemented).
Love4Boobies wrote: Nobody needs to implement a UDI environment from scratch; it's a complicated task, which is exactly why there should be a reference implementation that everyone ports. If you look at the driver packaging section in the specification, it mentions that a driver ABI can define generic drivers and optimized drivers for specific CPU models; the environment would pick the right one (the generic one if the optimized one is missing). This is quite complicated for many CPU architectures, x86(-64) in particular, since there are just so many CPU models. A flag-based approach would work (say, each driver carries flags indicating whether it uses SSE, MMX, 3DNow!, etc.), but then we'd miss out on micro-optimizations (e.g., an 80386 has a prefetch queue while a Core i7 does not; LEA and shifts are good for most CPUs but not for P4s; etc.), and I don't even want to start on CPU bugs (even if this issue is considerably less common, someone would need to check plenty of errata; better to do it just once in the reference implementation). UDI 2.0 would eliminate the need for ABI specifications. An additional advantage of UDI 2.0 is that, with a safe bytecode, we could safely use UDI drivers on MMU-less architectures and even link them into kernel space on microkernels (it's for the OS implementer to decide whether he trusts the AOT compiler or not).
To be perfectly honest, I didn't understand a lot of what you said there, since you seem to be trying to make this far too platform-specific, i.e., much ado about nothing. Things like taking advantage of SSE, 3DNow!, etc., are really supposed to be up to the embedding OS. All you need is a platform feature enumeration API, and that's it. IIRC, the UDI 1.1 spec even mentions that it does not allow floating-point operations.
One thing to note: even if you use a bytecoded language to distribute the drivers and then compile them down to native instructions, you will still end up having to ask the embedding OS whether or not the extended instruction sets are available before executing code that uses SSE, etc. And even if you run the bytecode under a JIT or whatever, you'll STILL have to ask the embedding OS; in the end, you now have a huge JIT compiler in the environment, just to end up doing the same thing as natively compiled drivers.
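Either way, the resulting dispatch looks something like the sketch below. The only real API here is GCC/Clang's __builtin_cpu_supports (a CPUID wrapper, standing in for whatever feature-enumeration call the embedding OS exposes); the copy routines are hypothetical:

#include <stddef.h>

/* Portable fallback path. */
static void copy_generic(unsigned char *dst, const unsigned char *src, size_t n)
{
        while (n--)
                *dst++ = *src++;
}

/* Stand-in for a hand-tuned SSE2 path; the tuned body is elided here. */
static void copy_sse2(unsigned char *dst, const unsigned char *src, size_t n)
{
        copy_generic(dst, src, n);
}

/* This check has to happen whether the driver arrived as native code,
 * AOT-compiled bytecode, or JIT output. */
static void copy_dispatch(unsigned char *dst, const unsigned char *src, size_t n)
{
        if (__builtin_cpu_supports("sse2"))
                copy_sse2(dst, src, n);
        else
                copy_generic(dst, src, n);
}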
Since I can't quite discern the logic behind the other arguments, I'll have to dismiss them as nitpicking and clutching at straws. I apologise for the strong terms used.
---
In short, UDI does NOT need a bytecoded distribution form. I am also strongly of the opinion that UDI MUST remain in compiled-language source form. I believe we ALL know that were it not for the open-source, open-standards movement, this community would not exist. Had it not been for those fighting to make sure that things like specifications are released, we would have NO information. Therefore, allowing that much abstraction, and making room for pure binary distribution with no need to release source for unknown platforms, is going to undermine the open-source side of UDI, however small that side is.
Without specifications and source, we end up at the mercy of crappy code written by outsourced workers, and of crappy, buggy vendor code with no hope of recompiling it or uploading patches to the central repo. There are those who do not see this as a big deal: "Ask the vendors to fix the bugs." Think a bit harder and realize why letting vendors get away with pure binary drivers, and nothing more, is a bad thing. Think about how hard it would be to get even your crummy little "Hello world" kernel going without that information, advanced kernel or not.
And to those who say something like, "What are you talking about? It won't be that bad": yes, it will. Have you seen the kind of CRAP companies like HP produce? A driver for a printer. A printer. And it runs at least five different processes while idle, plus N more while operational. No. I will not support anything which will make that kind of mess easier to produce in UDI.
I do not support bytecoded drivers, and again, I assert that if someone wants to move the spec in such a radical direction, they should just go and make their own interface altogether. I support UDI because it clearly provides all the needed functionality in its current state, and allows for extensions. That is all that is necessary.
EDIT:
Instead of making another post, I'll say it here: since there is no need for a bytecoded language, there is, by extension, no need for "UDI 2.0". UDI 1.1 in its current state is sufficient. What is needed right now are more metalanguage specifications, not another version of the core spec. The core spec is more than sane; anyone who reads it can tell that. The main change proposed between UDI 1.1 and 2.0 is the bytecode idea. This change is no longer on the table, and if something like it comes up, as far as I'm concerned, it is going to be some other, unrelated driver interface.
--Thanks
gravaera