berkus wrote:An LLVM-based JIT wouldn't even need any changes to the UDI spec, I believe.
You cannot compile C to portable LLVM bitcode. Too much information gets lost in the process. To elaborate:
Source code contains platform dependent typedefs
Anything selected by a preprocessor conditional gets removed
Endian related information gets lost
Things like sizeof get baked into the code
Random pointer arithmetic gets baked in
Some of these (the last two) could be worked around with a custom bytecode format; the first three can't. Of particular note:
UDI contains some variable sized types
We shouldn't expect all drivers to be built only on UDI's types. Many companies have existing portability layers in their drivers; they will not pick up the UDI types
If UDI loses its C API:
We will lose the good will of developers who already know C - in particular, device driver developers. Many of them, particularly from a hardware background, may not know many languages beyond C and assembler
Companies will not be willing to pay to train employees in this new language
Companies have established source bases which are shared between drivers for various platforms. We cannot expect them to port these.
From a technical point of view, bytecode drivers offer the advantage of allowing the compiler to optimize specifically for a system's processor at install time. However, this comes at the cost of the developer's ability to hand-optimize with his/her own assembly routines. Given how well compilers actually optimize (particularly numerically intensive code, i.e. exactly where the speed-up is most needed), this would probably work out as a net loss. The other technical merit of bytecode is that it is hypothetically cross-platform (whether it would remain so in practice remains to be determined).
Now, from our point of view:
Creating a bytecode, language, language compiler and bytecode compiler, debugging and optimizing them, and properly and efficiently connecting them to an operating system's driver interface is a lot of extra work: the existing UDI specification is already large and complicated, and this makes it larger.
We already have a pretty good specification with which to work. It has some issues - for example, I would like to see it respecified against C99 - but these are not big enough to warrant a breaking change. Any changes needed can be made by minor revisions, deprecation of obsoleted interfaces, and introduction of new ones. We should provide reference code in order to translate the old interfaces to the new ones.
My view is that making a bytecode language part of the core specification would be ill-advised. It may have technical merits, but technical merits alone will not win UDI support. UDI exists primarily to solve a social issue: incompatible driver interfaces and duplicated work. We should not make it irrelevant by creating new social issues for it.
Herein lies the specification for ODI 1.0 (Open Driver Interface).
All drivers must support the following 4 methods:
1) init - initialises the driver
2) destroy - frees all resources used by the driver
3) write - writes data to the device
4) read - reads data from the device
* callers of read and write methods must supply a pointer to a buffer and its size.
Seriously folks, a camel is a horse designed by committee.
jasonc122 wrote:Herein lies the specification for ODI 1.0 (Open Driver Interface). ...
This is hardly the way to go about designing a driver interface: they are much more complicated than this, and much more powerful. Simply being able to read and write would be useless for a graphics driver or network card driver, for example. You need a way to handle various classes of devices and controllers and buses and even architectures. Still, the name isn't too bad.
jasonc122 wrote:Seriously folks, a camel is a horse designed by committee.
I suppose, but there's nothing to do but wait for a specification to appear from one of the more usable OS projects here (I can count them on my right hand), if nobody tries to design one deliberately. Horses aren't designed by committee, but they also aren't designed by one person.
Owen wrote:However, this also incurs the loss of the ability of the developer to hand optimize with his/her own assembly routines.
Well, using assembly in drivers is not a very good idea; it goes against the UDI philosophy because it makes the driver unportable. The UDI specification even prohibits the use of the standard library (even a freestanding implementation).
Perhaps we should look into the UEFI bytecode and modify it according to our needs (unless the license restricts us from doing so).
"Computers in the future may weigh no more than 1.5 tons.", Popular Mechanics (1949)
[ Project UDI ]
After some consideration and research, I think the entire debate has little to no merit.
* The standard has not been updated for 10 years. It's just gathering dust. It is highly unlikely that any of the original participants are still interested in the project.
* There are hardly any drivers for the specification, aside from a few sample cases and perhaps a few written by hobbyists. Any links to actual existing drivers?
* I hardly think that any hardware company will really care about a specification that has little practical use in the real world. Windows and Mac steal most of the show.
Considering all this, I think we should focus on the needs of this community. Most of us here are hobbyists working on our OS projects in our spare time. Therefore, simplicity in both implementation and understanding is paramount.
Introducing bytecoded drivers, new languages, tools, etc. will increase the complexity and learning curve exponentially, rendering the whole project pointless because most people will simply opt out.
I am working on my own project and I am seriously considering implementing the existing standard.
albeva wrote:After some consideration and research, I think the entire debate has little to no merit.
* The standard has not been updated for 10 years. It's just gathering dust. It is highly unlikely that any of the original participants are still interested in the project.
Maybe in your research you should have at least read this thread.
That doesn't invalidate what I have said. It would be cool and fine if the industry caught on, but ultimately I think we should consider the needs of this community and of amateur OS developers, who perhaps don't have enough time, or simply the experience, to deal with an extraordinarily complicated solution.
You are correct. However, I think we need to resolve the problems before getting the community to adopt it. No one wants to implement it and then have to switch to something new; we have an actual flaw that we need to fix.
You missed the entire point of the thread. We know that the spec hasn't been updated, we know that there are not many drivers.
The problem we have here is a bit of a chicken-and-egg problem: people don't use UDI because there are no drivers, and there are no drivers because no-one uses UDI.
Once this issue is solved, it is hoped that the more "open" hardware manufacturers will start releasing UDI drivers, potentially expanding their market and (if someone creates a Windows-UDI interface and/or a Linux-UDI interface) reducing the amount of work they need to do.
Personally I fail to see what is fundamentally wrong with the current specification. The issue around bytecoded drivers is more a matter of preference and taste. It would add another layer of abstraction.
Realistically speaking, what would the language look like? How long would it even take to produce the specification, a reference implementation, and testing? I'm sorry if I sound pessimistic, but many such grand ideas pop up and disappear, never to be heard of again.
Proprietary kernel modules are a bad thing. Maybe companies with oodles of money and resources can efficiently debug some random kernel blob, but I seriously doubt anyone here would welcome that task.
Proprietary software in userland is one thing, but kernel modules poking and probing around are just asking for trouble, and make you continuously reliant on that vendor.
NVidia is one company that likes playing the IP card, but practically every other vendor has released documentation or source for their GPUs.
This brings up an interesting point: recently that same company hinted that they would no longer release proprietary driver updates targeting newer versions of X on older graphics card families for Linux (and other Unixes), and I believe they no longer release new versions for Windows either.
So you can no longer use the latest version of the operating system; you must continue using outdated software, or buy a newer graphics card.
Hardware documentation is vital. All this insane competition from vendors is just counterproductive: do it with marketing, and let the people who buy your hardware actually use it without locking them in.
My view is that we shouldn't force anything on the user. I should be able to write a proprietary VGA driver just as I'm allowed to write a proprietary text editor or movie player.
To respond to albeva: the problem is with our ABIs. The way instruction sets and micro-optimizations are handled at the moment was good at the time, but now it's somewhat of a "bottleneck" (see previous posts for more details). Bytecoded drivers are not really an extra layer as far as the OS is concerned; they are merely a way to package the drivers.
I disagree, because not only do you need to implement the driver-specific stuff, you also need to handle AOT or JIT (or whatever) compilation and ensure that it works. The complexity just goes up. Of course, arguably, a well-designed reference implementation could ease the burden.
However, the second and more important point is that you need a completely new toolset for driver development. I'm talking about a compiler, a debugger, the bytecode design, and a VM plus JIT/AOT compiler or interpreter, with many CPU/platform-specific modules for the latter to use. How likely is it that hardware vendors will accept a new language and toolset?
albeva wrote:I disagree, because not only do you need to implement the driver-specific stuff, you also need to handle AOT or JIT (or whatever) compilation and ensure that it works. The complexity just goes up. Of course, arguably, a well-designed reference implementation could ease the burden.
No JIT, only AOT, and that is part of the installation. If the reference implementation provides it, people can port it just as they port ACPICA to add ACPI support (as virtually every OS out there does). On the other hand, I don't see any other way around it. If someone can come up with a better solution that does not prohibit proprietary drivers, I'm open to it.
However, the second and more important point is that you need a completely new toolset for driver development. I'm talking about a compiler, a debugger, the bytecode design, and a VM plus JIT/AOT compiler or interpreter, with many CPU/platform-specific modules for the latter to use. How likely is it that hardware vendors will accept a new language and toolset?
For a bytecode that C can compile to, a new language would not be needed. The "managed" part was just brainstorming.
I'll take your word for it. Don't get me wrong, though - I'm not against your ideas, and the potential benefits would be well worth it. However, I am pessimistic about such projects in general, as very rarely does anything come out of them other than endless bickering.
Nevertheless, we would still need a new compiler, since LLVM/GCC wouldn't do because too much information is lost. This entails a very sophisticated and comprehensive bytecode design to account for all the possibilities - and, of course, an AOT compiler. Would the resulting binary be compatible with the current specification?