Hi:
Love4Boobies, maybe you need to re-explain what you're talking about since I'm not the only person who thinks you're proposing a new language:
Owen wrote:
You're proposing
- Rewriting the UDI specification, which is already a lot of work just to read
- Designing a new language to go with this specification
- Designing a new bytecode to go with this specification
- Implementing a language to bytecode compiler
- Implementing a bytecode to native compiler
and saying that it's not a lot of work?
For what benefit? What benefit does
forcing bytecode on people bring over simply allowing it?
Why does it have to be mandatory? Why can't it just be a "virtual architecture" and language binding? Why does it require a spec rewrite?
Next, I'll continue with the essentials: proving why a UDI bytecode is a waste of time:
Love4Boobies wrote:
My point is merely that instruction sets are evolving and we cannot base our decision on whether someone finds SSE in particular useful or not.
UDI 1.01 already provides for the identification of drivers which use floating point operations: it explicitly allows a driver to mark the regions of its code that use FP. However you look at it, if a platform's CPU has no FPU available, then no matter what bytecode you use, you are not going to get FP operations on that platform, regardless of how awesome your bytecode is. The bytecode would have to do exactly what equivalent C code does under the UDI 1.01 spec: identify its FP-utilizing regions in the .udiprops section.
From there, when the environment loads the driver, it sees the indication of FP use and decides whether or not to load it; the decision rests with the environment. Whether you use a bytecode in JIT style or invent some new language to be compiled down to native code, this is the most sound way to do it. The alternative is to leave the duty of detecting FPU availability to the driver itself, via an enumeration API, which introduces a pile of unnecessary branches into the driver's hot path. The current UDI spec is yet again seen to be perfectly sound, and once again, bytecode is seen to be unneeded.
And again, this point was brought up on IRC, and demolished soundly. Again, if a manufacturer cares about older CPUs without FPUs, it can provide the driver in source form, and have you build it from source. This way, you can tell your compiler to use SSE, SSE2, or whatever instruction set when compiling floating point operations. GCC has this ability.
Again, most vendors do not care about per-model optimizations and will simply tell you that SSE2 is required for their driver.
The 80486 added things of its own. Should we stick to 80386? Intel just added AES support recently and it's still Intel 64, not Intel 128. VIA had AES for a long time and the instructions are quite different.
So how many devices do you know about whose drivers will require AES extensions? Please do tell.
Has anyone explained why proprietary drivers and/or micro-optimizations are unimportant?
Yes. I explained why micro-optimizations are a waste of time in two separate posts, one of which addressed that exact point specifically.
Now I'll move on to "Why proprietary drivers are unimportant".
They are not unimportant; in fact, they are essential. But a bytecode will not in any way make proprietary development easier. You claim that keeping the current specification will make us "lose out on proprietary drivers", yet UDI 1.01 drivers are already binary portable across implementations on the same architecture.
The next point is that UDI 1.01 allows drivers to be packaged with one binary per architecture. Of course, a vendor may instead choose to put up a simple page on its website with multiple links, each to a driver compiled for a different architecture, so that you need not download one archive containing every build. I should not have had to mention this.
After that comes the fact that the bytecode language proposed for UDI 2.0 would still be discernible with the use of some tool or other, and more easily so than a native distribution of machine code. A manufacturer is already not going to want to convert its existing C codebase into UDILANG, let alone once it realizes that doing so adds nothing to the level of obscurity it currently enjoys from a compiled driver. So your point about proprietary drivers has nothing inherently to do with a bytecoded language. A bytecoded language is in no way more attractive to a vendor than the current specification, either economically or strategically.
Also, I'd like to point out this small thing here, which is actually quite major in meaning:
Has anyone explained why proprietary drivers ... are unimportant?
Not using a bytecode, and not considering a bytecode important, is not *in any way* implicitly linked to discouraging proprietary drivers. Proprietary drivers will remain important for as long as important vendors distribute them. But I do not like the way you weasel in a relationship between "proprietary driver unavailability" and "opposition to bytecode in UDI" when you are fully aware that there is NO RELATIONSHIP. You know English well enough to have been aware of the intent behind your wording when you typed that; if you were not aware of it, I am showing it to you now.
If anything, drivers compiled to native code and distributed as binaries rather than bytecode give proprietary vendors more assurance of secrecy. And if vendors are going to distribute their drivers as binaries anyway, they are not going to write them in UDILANG first before compiling and releasing them. They will, like every other normal company, use good, stable, well-known C and follow the UDI core specification, as multiple people have stated earlier in this thread when they told you about the obvious infeasibility of forcing a new language on developers.
I await your next point, though I know full well that in the discussion on IRC, in the presence of at least three other people, I already disproved all of them.