Re: Should I get on the UDI train?
Posted: Sat Jul 24, 2010 11:17 pm
by Love4Boobies
Even if you were, I'm not against you being against my ideas. It is often productive; no one is always right.
I'm not sure whether it would be fully compatible (though the name UDI 2.0 would imply that). As long as we use a portable bytecode (which is inherently portable), we might as well drop some of the portability provisions from the specification, since they're just extra complexity. I have made some notes about this, but we first need to actually decide.
As for the bytecode, we could look into existing ones and modify one to suit our needs. LLVM can still be used even if we don't use its bitcode.
Re: Should I get on the UDI train?
Posted: Sat Jul 24, 2010 11:23 pm
by albeva
LLVM IR does support metadata that you could make use of in the AOT compiler. But to do that, I guess Clang would have to be modified. Or perhaps port TCC ...
Re: Should I get on the UDI train?
Posted: Sun Jul 25, 2010 3:20 am
by Combuster
Come on, people.
You don't even practically need to have a bitcode compiler on your system. You only need one system in existence that can create a native driver from a bytecoded driver and then recast it back into a UDI package - pretty much like assembling LLVM bitcode into native code.
Therefore, you can run all UDI drivers without having a bytecode compiler yet, whether that is due to porting or storage reasons. Especially since such a tool is theoretically possible (it's exactly what an AOT compiler is for), there's no need for UDI implementations to even accept bytecoded drivers. You can just tell the user to go elsewhere for now. Just treat it as a separate architecture that happens to be compatible with all other architectures.
Similarly, you can very well choose between AOT and JIT for your local implementation. After all, you only have to provide the driver to an implementation-specific utility, behind which it becomes a black box. The specification does not care, and you should not care.
Bytecode problem solved, AOT problem solved, Toolchain/Proprietary driver problem solved. Nobody forced into submission, everybody happy.
Re: Should I get on the UDI train?
Posted: Sun Jul 25, 2010 6:51 am
by Owen
Love4Boobies wrote:Owen wrote:However, this also incurs the loss of the ability of the developer to hand optimize with his/her own assembly routines.
Well, using assembly in drivers is not a very good idea, it goes against the UDI philosophy because it makes the driver unportable. The UDI specification even prohibits the use of the standard library (even a freestanding implementation).
Perhaps we should look into the UEFI bytecode and modify it according to our needs (unless the license restricts us from doing so).
Nothing about incorporating hand optimized assembly routines makes a driver unportable. Nothing stops them being provided as a fast alternative to a C routine.
Combuster wrote:Come on, people.
You don't even practically need to have a bitcode compiler on your system. You only need one system in existence that can create a native driver from a bytecoded driver and then recast it back into a UDI package - pretty much like assembling LLVM bitcode into native code.
Therefore, you can run all UDI drivers without having a bytecode compiler yet, whether that is due to porting or storage reasons. Especially since such a tool is theoretically possible (it's exactly what an AOT compiler is for), there's no need for UDI implementations to even accept bytecoded drivers. You can just tell the user to go elsewhere for now. Just treat it as a separate architecture that happens to be compatible with all other architectures.
Similarly, you can very well choose between AOT and JIT for your local implementation. After all, you only have to provide the driver to an implementation-specific utility, behind which it becomes a black box. The specification does not care, and you should not care.
Bytecode problem solved, AOT problem solved, Toolchain/Proprietary driver problem solved. Nobody forced into submission, everybody happy.
Precisely what I've been saying: Supporting bytecoded drivers does not require a new specification revision. It should be possible to compile them down to architecture-dependent binaries, but nothing requires a specific system to do so - it is quite feasible that some systems will support faster backends for supported drivers.
In other words:
- We can stick with the current revision (It's fine)
- We can keep natively compiled drivers (No issue with that)
- If someone wants to, they can create a bytecode system
(And, by the way, I'm still not convinced that bytecode compilation gives much better performance. Well-designed drivers - which UDI encourages - are not often the bottleneck.)
Re: Should I get on the UDI train?
Posted: Sun Jul 25, 2010 8:22 am
by Love4Boobies
Owen wrote:Nothing about incorporating hand optimized assembly routines makes a driver unportable. Nothing stops them being provided as a fast alternative to a C routine.
It makes the source code unportable and this is one of the reasons UDI exists in the first place (there are others, sure). The binary distribution of the driver would still work.
Re: Should I get on the UDI train?
Posted: Sun Jul 25, 2010 8:39 am
by Owen
Love4Boobies wrote:Owen wrote:Nothing about incorporating hand optimized assembly routines makes a driver unportable. Nothing stops them being provided as a fast alternative to a C routine.
It makes the source code unportable and this is one of the reasons UDI exists in the first place (there are others, sure). The binary distribution of the driver would still work.
"Nothing stops them being provided as a fast alternative to a C routine."
In other words: The driver selects at compile time whether to use the C or assembly implementation, based upon information like target architecture.
Incorporating assembly into a driver or application does not automatically make it unportable. It only becomes unportable when no portable alternative exists.
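As a minimal sketch of what I mean (hypothetical function and macro names, not anything defined by the UDI spec):

/* Hypothetical checksum helper in a driver. A hand-optimized assembly
 * routine (checksum_x86_64_asm, assumed to live in a separate .S file)
 * is selected at compile time; every other target gets the portable C code. */
#if defined(__x86_64__) && defined(HAVE_ASM_CHECKSUM)
unsigned int checksum_x86_64_asm(const unsigned char *buf, unsigned long len);
#define checksum(buf, len) checksum_x86_64_asm((buf), (len))
#else
/* Portable fallback: plain C, builds on any architecture. */
static unsigned int checksum(const unsigned char *buf, unsigned long len)
{
    unsigned int sum = 0;
    unsigned long i;

    for (i = 0; i < len; i++)
        sum += buf[i];
    return sum;
}
#endif

The portable version is always there; the assembly is just an optional fast path, so the source distribution still builds everywhere.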
Re: Should I get on the UDI train?
Posted: Sun Jul 25, 2010 8:44 am
by gravaera
Hi,
There seem to be a lot of people here who are skeptical about bytecode, or against it outright, and a small number of stubborn 2.0 supporters who keep re-tabling the same arguments regardless of how many times those arguments are shown to be dubious or demolished completely.
I've said this before, but it seems to need repeating: The UDI environment provided by the host implementation is the embodiment of pretty much all of the notions which are brought up by the bytecode crowd. It provides abstraction, isolation, and a very large amount of portability by relying on common behaviour across implementations. In this manner, drivers are made binary compatible across similar platforms, and source compatible across architecturally different platforms.
The UDI environment provides all of the platform specific augmentations needed to ensure that the UDI driver does not need to care. This is what it's for. The specification is well designed. It is fine just as it is. It needs no bytecode for added portability. This is the whole purpose of the UDI environment.
Also, as for the "we'll be missing out on proprietary drivers" argument, it's obvious that a manufacturer can, with the current specification, provide a binary packaged driver. Nothing stops them from doing this, and realistically nothing will. UDI 2.0 does not in any way help them accomplish it either; they can do it even better in C with a compiled binary distribution.
And again, the bytecoded driver proposal differs enough in scope, method, and scale from the current spec that it amounts to a whole other interface altogether. I have said this already, but there is therefore no need to take the current spec and radically restructure it to such a degree. This set of radical changes can be proposed, developed, and promoted completely separately from UDI, as a completely separate driver interface specification - because that is what it is.
Re: Should I get on the UDI train?
Posted: Sun Jul 25, 2010 9:21 am
by Love4Boobies
Actually, there are a lot of people here that do understand the problem; you just read whatever you like. As I've told you on IRC before you started making a scene, it is impossible to provide optimized binary packages (which UDI was designed for) without having huge packages and recasting the ABIs every couple of months when new CPU models are released. Given this fact, people will only provide generic binaries, which will work, but... I don't see why we shouldn't take advantage of optimized binaries since we can. An extra advantage is that we would stop needing to define ABIs altogether.
Re: Should I get on the UDI train?
Posted: Sun Jul 25, 2010 9:53 am
by Owen
Love4Boobies wrote:Actually, there are a lot of people here that do understand the problem; you just read whatever you like. As I've told you on IRC before you started making a scene, it is impossible to provide optimized binary packages (which UDI was designed for) without having huge packages and recasting the ABIs every couple of months when new CPU models are released. Given this fact, people will only provide generic binaries, which will work, but... I don't see why we shouldn't take advantage of optimized binaries since we can. An extra advantage is that we would stop needing to define ABIs altogether.
You keep repeating yourself.
Please tell me what drivers you expect to use that will consume enough CPU for all of this effort to be worthwhile.
In particular, the common "high bandwidth" drivers you have mentioned on IRC fall into two classes:
- Devices like graphics cards, which are normally accompanied by libraries that applications should (dynamically) link against in order to gain access to the device. In these cases, any expensive computation is done up front in the application library, and so is irrelevant to the optimizations allowed within the device driver. Additionally, these application libraries can't really be bytecoded because they are accessed through well-defined C APIs.
- Devices like 10 gigabit Ethernet and Fibre Channel adaptors. However, the drivers for these tend to do little more than collect data from/into DMA buffers for forwarding to/from higher up the system stack. In other words, they have only tiny overhead anyway.
The only compelling case I see is things like sound card software mixers, and my opinion is that these belong higher up the stack (i.e. the kind of thing you would expect to be implemented by the operating system).
There is no performance need for what you are proposing, and there is no reason why implementing it would need to obsolete the existing interfaces.
As for the cost of updating and maintaining platform ABIs: it's negligible. This should be obvious from any cursory examination of how often the platforms gain architecture extensions; generally only once a year or so.
Please: Bytecoding requires a huge amount of work for tiny gain.
Re: Should I get on the UDI train?
Posted: Sun Jul 25, 2010 9:57 am
by Love4Boobies
It doesn't require as much work as you think. The specification wouldn't require anyone to write super-optimized compilers; they could write a plain translator if they wished (and remember, one would be provided by the reference implementation, so it ends up being no work for the OS developer). The specification would merely give the opportunity to write optimized drivers. And they would indeed make a difference.
I've given a good example before: the 8086 does not allow "shl ax, 2", only "shl ax, 1". An x86 CPU cannot execute x86-64 instructions even though the instruction set is almost the same; it's basically an extension. Do you think there isn't a huge number of similar cases across different architectures?
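To spell the example out (an illustrative fragment only, not driver code):

/* The portable source is identical everywhere: */
unsigned int scale4(unsigned int x)
{
    return x << 2;      /* multiply by 4 with a left shift */
}
/* Targeting the 8086, a compiler has to emit two "shl ax, 1" instructions
 * (or load the count into CL first), because a shift by an immediate count
 * greater than 1 only appeared with the 80186. Targeting a later x86 it is
 * a single shift instruction. Which sequence ends up in the binary is fixed
 * at the moment the driver is compiled - exactly the kind of choice a late,
 * per-machine translation step could still make. */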
Re: Should I get on the UDI train?
Posted: Sun Jul 25, 2010 10:11 am
by Owen
Love4Boobies wrote:It doesn't require as much work as you think. The specification wouldn't require anyone to write super-optimized compilers; they could write a plain translator if they wished (and remember, one would be provided by the reference implementation, so it ends up being no work for the OS developer). The specification would merely give the opportunity to write optimized drivers. And they would indeed make a difference.
I've given a good example before: the 8086 does not allow "shl ax, 2", only "shl ax, 1". An x86 CPU cannot execute x86-64 instructions even though the instruction set is almost the same; it's basically an extension. Do you think there isn't a huge number of similar cases across different architectures?
You're proposing:
- Rewriting the UDI specification, which is already a lot of work just to read
- Designing a new language to go with this specification
- Designing a new bytecode to go with this specification
- Implementing a language to bytecode compiler
- Implementing a bytecode to native compiler
and saying that it's not a lot of work?
For what benefit? What benefit does forcing bytecode on people bring over simply allowing it?
Why does it have to be mandatory? Why can't it just be a "virtual architecture" and language binding? Why does it require a spec rewrite?
Re: Should I get on the UDI train?
Posted: Sun Jul 25, 2010 10:12 am
by gravaera
Hi:
As I said before, the 2.0 crowd is bringing up the same arguments repeatedly, no matter how impractical they are, and no matter how many times they are demolished.
This thing about per-CPU-model optimizations being somehow better enabled by using a bytecode has been discussed already on IRC. It was brought up multiple times, and rebutted each time. There are people who were in the channel who can testify to that. But in order to expediently ensure that it is solidly discredited, I'll list the points yet again.
"it is impossible to provide optimized binary packages (which UDI was designed for) without having huge packages and recasting the ABIs every couple of months when new CPU models are released"
1. Nobody is going to be writing a different driver per CPU-model and distributing it. No manufacturer who chooses to distribute in binary form will *ever* provide a huge archive with binaries for each model of CPU. Otherwise, you'd have a separate binary for i486, i586, i686, etc, etc, and then optimized packages for AMD's x86-32 line, and then optimized packages for each AMD-64 CPU model, and so on.
This is what an architectural specification is for. Engineers sit for hours and draft a software architecture specification to ensure that code written according to it will work on all models which conform to that architecture. And where you say that it is necessary to optimize per model, I would like to put forward the simple notion that source distributions of drivers can be compiled with per-model optimizations turned on via compiler flags. This is what GCC's -mtune option is for.
Also, the idea that a bytecoded language is somehow easier to compile with CPU-specific optimizations than a solid language with lots of old, working, and tested compilers, like C, is not sound. To make this point even clearer: maintaining drivers is a hard enough job as it is. Nobody here is going to write a kernel for each different model of the Intel line of CPUs; that would be both impractical and time-consuming, with little net benefit. In the same way, nobody is going to be writing drivers for each model of CPU.
If a vendor wants SO BADLY for everyone to be able to have a version of the driver tailored to his or her CPU, that vendor can provide the driver in source form and let me compile it locally with -mtune set for my CPU, in GCC or whatever compiler I choose (there is a small sketch of this at the end of this post).
2. Why are you calling normal, arch-specific binaries - which comply with a particular architecture, are the norm, and are distributed all around the world - "generic" binaries, as if that were something disadvantageous? Is it not normal for people to develop for an architecture and deploy in binary form? Why this sudden use of "generic" to describe the practice, as if it now carried some exotic connotation?
3. There is no need to define an ABI for every model. No vendor will ever be writing per-model drivers. Nor will any OS developer who has a copy of the driver source ever waste his or her time adding #ifdefs to driver files so they can switch between CPU models on the command line for their OS. It's perfectly fine to use a single ABI (or two, for different endiannesses) for any architecture.
4. A bytecode language does not in any way make the process of compiling with per-model optimizations any easier or more efficient. A bytecode compiler called with its own '-mtune' option to work on UDILANG, or whatever you want to call it, is no different from, say, GCC being called on a C source file with '-mtune'.
So here we see that a bytecoded language would be no different from a good, solid language like C in the case that was presented - per-CPU-model optimizations - which is in itself a questionable argument to bring up anyway. And I'll mention again that this is not the first time this argument has been shown to be impractical.
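And just to make the point concrete, here is a minimal sketch (hypothetical file and function names) of how the very same portable C source gets per-model tuning purely from compiler flags, with no bytecode anywhere:

/* copy_words.c - hypothetical helper of the kind a UDI driver might contain.
 * Plain, portable C with no inline assembly; per-model tuning comes entirely
 * from the build, for example:
 *   gcc -O2 -mtune=core2  -c copy_words.c   (scheduled for Intel Core 2)
 *   gcc -O2 -mtune=k8     -c copy_words.c   (scheduled for AMD K8)
 *   gcc -O2 -mtune=native -c copy_words.c   (tuned for the build machine)
 * The generated code differs per target; the source and the ABI do not. */
void copy_words(unsigned long *dst, const unsigned long *src, unsigned long n)
{
    unsigned long i;

    for (i = 0; i < n; i++)
        dst[i] = src[i];
}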
Re: Should I get on the UDI train?
Posted: Sun Jul 25, 2010 10:14 am
by gravaera
Love4Boobies wrote:
I've given a good example before: the 8086 does not allow "shl ax, 2", only "shl ax, 1". An x86 CPU cannot execute x86-64 instructions even though the instruction set is almost the same; it's basically an extension. Do you think there isn't a huge number of similar cases across different architectures?
And here's another impractical argument: talking about a 16-bit CPU. Where does an 8086 even come in?
Re: Should I get on the UDI train?
Posted: Sun Jul 25, 2010 10:16 am
by Love4Boobies
Strawman. This is not an argument; it's an example of what can happen. Why do you pretend not to understand just because you don't like it? I've mentioned before that we are not to focus on the 8086.
Re: Should I get on the UDI train?
Posted: Sun Jul 25, 2010 10:22 am
by Combuster
Love4Boobies wrote:I've given a good example before: the 8086 does not allow "shl ax, 2", only "shl ax, 1". An x86 CPU cannot execute x86-64 instructions even though the instruction set is almost the same; it's basically an extension. Do you think there isn't a huge number of similar cases across different architectures?
That's a very poor example. You're pointing out the differences between the i8086, i386 and x86-64. Those are different architectures for a reason: 32-bit OSes don't like 16-bit drivers and 64-bit OSes don't like 32-bit drivers, because they involve a completely different configuration - something heavily avoided due to speed issues, namely the ABI conversion needed for each call.
As for other platforms: only ARM has truly different operating modes, so you'd expect to have two ARM targets: ARM and THUMB. Just as with real mode/protected mode/long mode, each requires a different compiler backend.
Edit: and you are a hypocrite for suggesting the i8086 and then flaming someone else for doing the same.
Tell me, can you make an argument that a) has not been given before, and b) does not assume somebody's ignorance? I dare you.