
Re: Should I get on the UDI train?

Posted: Sun Jul 25, 2010 10:33 am
by Owen
Combuster wrote:As for other platforms: only ARM has truly different operating modes, so you'd expect to have two ARM targets: ARM and THUMB. Just as real-mode/protected mode/long mode, each requires a different compiler backend.
ARM and Thumb are one and the same platform with different instruction sets. AAPCS (the ABI) is the same for both, and all* Thumb-supporting processors support interworking between them. As I see it, Thumb use would mostly depend upon the target architecture version; i.e. permitted in ARMv5**-targeted drivers and not in ARMv4.

(Admittedly things get more complex if we intend to support all ARMv7M platforms, but I don't think the processors not supporting ARM mode are really the kind which would be running UDI-based operating systems anyway)

* Excluding the aforementioned ARMv7Ms, and noting that said interworking system has changed slightly over time in an upwards compatible manner
** The UDI implementation would have to note some caveats with regard to Thumb interworking; for example, when running on ARMv6 and earlier it cannot use Thumb and support ARMv4 drivers (or rather, it cannot implement any driver-visible functions in Thumb)

(Also note that any such standard would not support ARMv3 and earlier. They are thoroughly obsolete)

Re: Should I get on the UDI train?

Posted: Sun Jul 25, 2010 12:03 pm
by Love4Boobies
Combuster wrote:
Love4Boobies wrote:I've given a good example before: the 8086 does not allow "shl ax, 2" only "shl ax, 1". An x86 cannot execute x86-64 instructions even though the instruction set is almost the same; it's basically an extension. Do you think there isn't a huge amount of similar cases for different architectures?
That's a very poor example. You're pointing out the differences between i8086, i386 and x86-64. Those are different architectures for a reason: 32-bit OSes don't like 16-bit drivers and 64-bit OSes don't like 32-bit drivers, because they involve a completely different configuration. Something heavily avoided due to speed issues: ABI conversion for each call.
My point is merely that instruction sets are evolving and we cannot base our decision on whether someone finds SSE in particular useful or not.
Edit: and you are a hypocrite for suggesting i8086 and then flaming someone else for doing the same
The 8086 was the example I used because it was the first one off the top of my head, even if it's 16-bit. Do you think such things happen only when you go from 16 to 32 bits or more? The 80486 added things of its own. Should we stick to 80386? Intel just added AES support recently and it's still Intel 64, not Intel 128. VIA had AES for a long time and the instructions are quite different. Hell, there are huge incompatibilities even between Intel's and AMD's SSE. I also cannot give any guarantee for anything else that might happen in the future for any manufacturer.
Tell me, can you make an argument that a) has not been given before, and b) does not assume somebody's ignorance? I dare you.
Why should I not assume somebody's ignorance? Look at what gravaera thinks I've proposed... Did I say that we should design a new language? I specifically said what I proposed. Are the arguments I made no good? Has anyone explained why proprietary drivers and/or micro-optimizations are unimportant?

Re: Should I get on the UDI train?

Posted: Sun Jul 25, 2010 2:48 pm
by Owen
Love4Boobies wrote:Why should I not assume somebody's ignorance? Look at what gravaera thinks I've proposed... Did I say that we should design a new language? I specifically said what I proposed. Are the arguments I made no good? Has anyone explained why proprietary drivers and/or micro-optimizations are unimportant?
Yes. I said why micro-optimizations are unimportant and you completely ignored it

Re: Should I get on the UDI train?

Posted: Sun Jul 25, 2010 2:49 pm
by gravaera
Hi:

Love4Boobies, maybe you need to re-explain what you're talking about since I'm not the only person who thinks you're proposing a new language:
Owen wrote: You're proposing
  • Rewriting the UDI specification, which is already a lot of work just to read
  • Designing a new language to go with this specification
  • Designing a new bytecode to go with this specification
  • Implementing a language to bytecode compiler
  • Implementing a bytecode to native compiler
and saying that it's not a lot of work?

For what benefit? What benefit does forcing bytecode on people bring over simply allowing it?

Why does it have to be mandatory? Why can't it just be a "virtual architecture" and language binding? Why does it require a spec rewrite?
Next, I'll continue with the essentials: proving why a UDI bytecode is a waste of time:
Love4Boobies wrote: My point is merely that instruction sets are evolving and we cannot base our decision on whether someone finds SSE in particular useful or not.
UDI 1.01 already provides for identification of drivers which use floating point operations. It clearly allows for region identification of FP-utilizing code. However you take it, if a platform's CPU does not have an FPU available for use, then no matter what bytecode you use, you're not going to get FP operations on that platform, regardless of how awesome your bytecode is. It will then have to do the same thing as equivalent code written in C according to the UDI 1.01 spec: identify its FP-utilizing region in the .udiprops section.

From there, when the environment is loading the driver, it will see the indication of FP use and decide whether or not to load it. It will then be up to the environment. Whether you use a bytecode in JIT style or invent some new language which is then compiled down to native, this is the most sound way to do it; either that, or you leave the duty of detecting the availability of an FPU to the driver itself by calling an enumeration API. That would introduce a bunch of unnecessary branches in the driver's hot path. The current UDI spec is yet again seen to be perfectly sound, and once again, bytecode is seen to be unneeded.

And again, this point was brought up on IRC, and demolished soundly. Again, if a manufacturer cares about older CPUs without FPUs, it can provide the driver in source form, and have you build it from source. This way, you can tell your compiler to use SSE, SSE2, or whatever instruction set when compiling floating point operations. GCC has this ability.

Again, most vendors do not care about per-model optimizations and will simply tell you that SSE2 is required for this driver.
The 80486 added things of its own. Should we stick to 80386? Intel just added AES support recently and it's still Intel 64, not Intel 128. VIA had AES for a long time and the instructions are quite different
So how many devices do you know about whose drivers will require AES extensions? Please do tell.
Has anyone explained why proprietary drivers and/or micro-optimizations are unimportant?
Yes, I explained why micro-optimizations are a waste of time twice in two posts, and in one post where I specifically addressed that point.

Now I'll move on to "Why proprietary drivers are unimportant".

They are not unimportant. They are in fact very essential. But a bytecode will not in any way provide added ease for proprietary development. You claim that the use of the current specification will make you "lose out on proprietary drivers". Yet UDI 1.01 is binary portable across implementations of the same architecture.

The next thing is that UDI 1.01 allows for the packaging of drivers such that you may have one for each architecture. Of course, a vendor may also choose to have, on their website, a simple page with multiple links, each to a differently compiled driver for a different architecture so you don't have to download the whole archive with all the builds. I should not have had to mention this.

After that comes the fact that the bytecode language proposed with UDI 2.0 would still be discernible with the use of some tool or other, and more so than a native distribution of machine code. A manufacturer is already not going to want to convert its existing C codebase into UDILANG; much less so when they realize that it adds nothing to the current level of obscurity they enjoy from a compiled driver. So your point about proprietary drivers has nothing to do inherently with a bytecoded language. A bytecoded language is not in any way more attractive to a vendor than the current specification, either economically or strategically.

Also, I'd like to point out this small thing here, which is actually quite major in meaning:
Has anyone explained why proprietary drivers ... are unimportant?
Not using a bytecode, and not considering a bytecode to be important is not *in any way* implicitly linked to discouraging proprietary drivers. Proprietary drivers will remain important for as long as important vendors distribute them. But I do not like the way you weasel in a relationship between "proprietary driver unavailability" and "the opposition of bytecode in UDI" when you are fully aware of the fact that there is NO RELATIONSHIP. And you know English well enough to have been aware of the intent behind your wording when you typed that. If you were not aware of it, then I'm showing it to you now.

If anything, native-language drivers distributed as binaries rather than bytecode provide more assurance of privacy to proprietary vendors. And if they are going to distribute their drivers as binaries directly, they are not going to write them in UDILANG first before compiling and releasing them. They will, like every other normal company, use good, stable, well-known old C and follow the UDI core specification, much as has been stated before in this thread by multiple people who told you about the obvious infeasibility of enforcing a new language on developers.

I await your next point, although I know full well that in the discussion on IRC, in the presence of at least three other people, I have already disproved all of them.

Re: Should I get on the UDI train?

Posted: Sun Jul 25, 2010 5:29 pm
by Love4Boobies
I did not ignore your comment. In fact, I went further and explained how certain useful instructions are sometimes added later to an architecture (read my previous 2 posts, I think).
gravaera wrote:Love4Boobies, maybe you need to re-explain what you're talking about since I'm not the only person who thinks you're proposing a new language:
Indeed, I don't know why I mentioned you instead of Owen. My apologies. It's irrelevant to the point, though.
UDI 1.01 already provides for identification of drivers which use floating point operations. It clearly allows for region identification of FP-utilizing code. However you take it, if a platform's CPU does not have an FPU available for use, then no matter what bytecode you use, you're not going to get FP operations on that platform, regardless of how awesome your bytecode is. It will then have to do the same thing as equivalent code written in C according to the UDI 1.01 spec: identify its FP-utilizing region in the .udiprops section.
We're not really talking about FPUs in particular. But as long as you've brought it up, 80386s can have 8087, 80287, or 80387 FPUs and there is no sane way for a binary driver to know which it can use. With bytecode you can even go as far as to emulate an FPU in its absence if you so wish. It's not mandatory, but it gives you this opportunity.
Whether you use a bytecode in JIT style,
As mentioned many times before, I believe JIT is a very poor idea for drivers.
or invent some new language which will then be compiled down to native
My strongest wish is to find a portable bytecode that can be used with C. This is why I said we should look into something similar to the UEFI bytecode.
And again, this point was brought up on IRC, and demolished soundly.
The point you made on IRC was demolished only in your opinion; I didn't see anyone else so much as agree with you. You went further to say that we shouldn't care about the environment being safe and that we should rely on vendors testing for bugs and on users knowing whether their drivers contain viruses. I strongly disagree with all of this.
Again, if a manufacturer cares about older CPUs without FPUs, it can provide the driver in source form, and have you build it from source. This way, you can tell your compiler to use SSE, SSE2, or whatever instruction set when compiling floating point operations. GCC has this ability.
The driver packaging system still does not.
So how many devices do you know about whose drivers will require AES extensions? Please do tell.
I can imagine that they would be useful in a variety of cases. I, however, do not understand why you are trying to be so particular. Even if AES were completely useless, can you guarantee that the most generic instruction set will always work just fine, no matter the architecture or CPU model? I cannot.
They are not unimportant. They are in fact very essential. But a bytecode will not in any way provide added ease for proprietary development. You claim that the use of the current specification will make you "lose out on proprietary drivers". Yet UDI 1.01 is binary portable across implementations of the same architecture.
...

Did I ever say that the current specification doesn't work for proprietary drivers? No, I said that source code distributions don't work for proprietary drivers. I'm sure you can understand why.
After that comes the fact that the bytecode language proposed with UDI 2.0 would still be discernible with the use of some tool or other, and more so than a native distribution of machine code.
I'm glad that you never said that I proposed such a language... Oh, wait! :roll:
A manufacturer is already not going to want to convert its existing C codebase into UDILANG; much less so when they realize that it adds nothing to the current level of obscurity they enjoy from a compiled driver. So your point about proprietary drivers has nothing to do inherently with a bytecoded language. A bytecoded language is not in any way more attractive to a vendor than the current specification, either economically or strategically.
I'm sorry, there's no such thing as a bytecoded language. I think you are confusing the term "bytecode" with "managed".
Also, I'd like to point out this small thing here, which is actually quite major in meaning:
Has anyone explained why proprietary drivers ... are unimportant?
Not using a bytecode, and not considering a bytecode to be important is not *in any way* implicitly linked to discouraging proprietary drivers. Proprietary drivers will remain important for as long as important vendors distribute them. But I do not like the way you weasel in a relationship between "proprietary driver unavailability" and "the opposition of bytecode in UDI" when you are fully aware of the fact that there is NO RELATIONSHIP. And you know English well enough to have been aware of the intent behind your wording when you typed that. If you were not aware of it, then I'm showing it to you now.
That was in reply to your source code proposal mostly. If binary distributions do not indeed satisfy everyone then it's either source code, bytecode or another solution that I have yet to hear.

The rest of your post talks about UDILANG again...

Re: Should I get on the UDI train?

Posted: Mon Jul 26, 2010 1:07 am
by jal
Love4Boobies wrote:We're not really talking about FPUs in particular. But as long as you've brought it up, 80386s can have 8087, 80287, or 80387 FPUs and there is no sane way for a binary driver to know which it can use. With bytecode you can even go as far as to emulate an FPU in its absence if you so wish. It's not mandatory, but it gives you this opportunity.
The funny thing is, you are right about that one. OK, you are lagging 25 years behind, but still right. But seriously: do you expect anyone (really, anyone) to care about the ancient 80386 and whether it has an 8087, 80287, or 80387 FPU? And don't tell us again it's just an example, because all the examples you bring up are of this ancient nature.

Yes, we have different varieties of the same architecture, but these are limited. If you want to provide a driver for an x86-64 platform, and you really need the latest SIMD instructions for your driver (which, as Owen clearly explained, you don't), it is as simple as either including both Intel and AMD code in the same binary and switching between them dynamically (nothing that a few function pointers can't solve) or providing two binaries: one for Intel and one for AMD. You do not need bytecode at all.

You are proposing a solution to a problem that simply is not there. You are fighting strawman, ghosts of your imagination.


JAL

Re: Should I get on the UDI train?

Posted: Mon Jul 26, 2010 1:37 am
by Combuster
Love4Boobies wrote:
I did not ignore your comment. In fact, I went further and explained how certain useful instructions are sometimes added later to an architecture (read my previous 2 posts, I think).
And thus you missed the point entirely. New opcodes are only about doing things you already can, in slightly fewer clock cycles. UDI drivers only take a fraction of CPU time. Optimize where it matters. You don't.
Love4Boobies wrote:
Whether you use a bytecode in JIT style,
As mentioned many times before, I believe JIT is a very poor idea for drivers.
Red Herring: As mentioned many times before: JIT vs AOT is an implementation detail and completely irrelevant to the discussion.
UDI 1.01 already provides for identification of drivers which use floating point operations. It clearly allows for region identification of FP-utilizing code. However you take it, if a platform's CPU does not have an FPU available for use, then no matter what bytecode you use, you're not going to get FP operations on that platform, regardless of how awesome your bytecode is. It will then have to do the same thing as equivalent code written in C according to the UDI 1.01 spec: identify its FP-utilizing region in the .udiprops section.
We're not really talking about FPUs in particular. But as long as you've brought it up, 80386s can have 8087, 80287, or 80387 FPUs and there is no sane way for a binary driver to know which it can use. With bytecode you can even go as far as to emulate an FPU in its absence if you so wish. It's not mandatory, but it gives you this opportunity.
Is the 287 even part of the ABI, i.e. do you have to bother? The older FPUs don't even work with 386s. I doubt gcc can even emit 287-compatible code; I have at least never seen an option to do that.
And again, this point was brought up on IRC, and demolished soundly.
Let's not drag private conversations into this.
Again, if a manufacturer cares about older CPUs without FPUs, it can provide the driver in source form, and have you build it from source. This way, you can tell your compiler to use SSE, SSE2, or whatever instruction set when compiling floating point operations. GCC has this ability.
The driver packaging system still does not
Wrong. The compile script doesn't even specify what to use for -o, -I, -L, and -l, which get used when gcc is called by the backend. If you install a source driver you can just feed in -march/-mtune/-mfpmath/-pipe together with all the other options needed for compilation, without the driver caring.
So how many devices do you know about whose drivers will require AES extensions? Please do tell.
I can imagine that they would be useful in a variety of cases. I, however, do not understand why you are trying to be so particular. Even if AES were completely useless, can you guarantee that the most generic instruction set will always work just fine, no matter the architecture or CPU model? I cannot.
That wasn't an answer. Actually, you indirectly proved his point.
They are not unimportant. They are in fact very essential. But a bytecode will not in any way provide added ease for proprietary development. You claim that the use of the current specification will make you "lose out on proprietary drivers". Yet UDI 1.01 is binary portable across implementations of the same architecture.
Did I ever say that the current specification doesn't work for proprietary drivers? No, I said that source code distributions don't work for proprietary drivers. I'm sure you can understand why.
Non sequitur: why does the point mention bytecode and the supposed counterargument mention source code?

Please fix your logic before continuing.

Re: Should I get on the UDI train?

Posted: Mon Jul 26, 2010 1:56 am
by Solar
I find this talk about whether and how to extend the UDI specification to be somewhat irritating.

Are any of us even competent to create such a specification extension? A bytecode that could be compiled to make use of the peculiarities of present and future architectures? Has anything like that been done before?

I'm not even speaking of getting it endorsed "officially", as opposed to forking into a separate project, possibly even incompatibly so, and severely damaging the whole concept of a Uniform Driver Interface...

I thought this would be about writing additional documentation (tutorials on how to write UDI drivers / frameworks / metalanguages, how to get a new metalanguage endorsed - and by whom, a guide to the reference implementation, a page or two about the status quo of UDI and its metalanguages, whatever). Assembling a list of available drivers and where to get them would be nice, too. Not even speaking of writing a couple of drivers ourselves.

Please don't be offended, but I've seen this pattern before: In Pro-POS, my own OS project. We talked the big concepts for about two years, didn't write a line of code, and in the end the whole thing was scrapped. A huge and frustrating waste of time.

I'd advise to start simple. Two paragraphs up is a nice list of simple things that could be done first, easily. It would also prove if there are more than two people willing to put work into this at all (and you'd need more than two people to come up with something like UDI v2, if it is at all needed).

Re: Should I get on the UDI train?

Posted: Mon Jul 26, 2010 2:04 am
by Combuster
To summarize, the consensus is so far:

- Add a new virtual architecture, whose binary distribution consists of bytecode as per an existing standard (mentioned are: LLVM, UEFI, ACPICA), as long as it can be compiled directly from the same source as it would on all other platforms. Emphasis from now on should be on making a choice on which one. (especially, which one satisfies demands best)

- Since the above works as an architecture binding, it doesn't modify the specification, and thus there is no need for UDI 2.0. Which is good since breaking compatibility/writing a new specification is a bad idea.

- We need new metalanguages for: Graphics and USB; these are key to a functional base system.
- We want new metalanguages for, amongst others, Audio i/o and Video input.

- We need actual implementation effort.

Re: Should I get on the UDI train?

Posted: Mon Jul 26, 2010 4:07 am
by Thomas
Solar wrote:
Please don't be offended, but I've seen this pattern before: In Pro-POS, my own OS project. We talked the big concepts for about two years, didn't write a line of code, and in the end the whole thing was scrapped. A huge and frustrating waste of time.

I'd advise to start simple. Two paragraphs up is a nice list of simple things that could be done first, easily. It would also prove if there are more than two people willing to put work into this at all (and you'd need more than two people to come up with something like UDI v2, if it is at all needed).
Solar +1 :) .

--Thomas

Re: Should I get on the UDI train?

Posted: Mon Jul 26, 2010 6:02 am
by Owen
Combuster wrote:To summarize, the consensus is so far:

- Add a new virtual architecture, whose binary distribution consists of bytecode as per an existing standard (mentioned are: LLVM, UEFI, ACPICA), as long as it can be compiled directly from the same source as it would on all other platforms. Emphasis from now on should be on making a choice on which one. (especially, which one satisfies demands best)

- Since the above works as an architecture binding, it doesn't modify the specification, and thus there is no need for UDI 2.0. Which is good since breaking compatibility/writing a new specification is a bad idea.

- We need new metalanguages for: Graphics and USB; these are key to a functional base system.
- We want new metalanguages for, amongst others, Audio i/o and Video input.

- We need actual implementation effort.
Agreed. I notice you haven't joined the UDI list yet; could you do so? I think that's a more appropriate place for discussing metalanguages.

(Sorry if you have - you haven't made it obvious!)

Re: Should I get on the UDI train?

Posted: Mon Jul 26, 2010 10:45 am
by Love4Boobies
Combuster wrote:And thus you missed point entirely. New opcodes are only about doing things you already can in slightly less clock cycles.
Ah, I wasn't aware that you studied all the architectures in the world. I should then stop using SSE since there's no real benefit.
Combuster wrote:
Love4Boobies wrote:
Whether you use a bytecode in JIT style,
As mentioned many times before, I believe JIT is a very poor idea for drivers.
Red Herring: As mentioned many times before: JIT vs AOT is an implementation detail and completely irrelevant to the discussion.
Indeed. But since he mentioned that JIT is not a good idea I wanted to show that there's also an alternative that (usually?) makes more sense. It's perfectly fair to talk about whether there's a good way to implement something or not.
UDI 1.01 already provides for identification of drivers which use floating point operations. It clearly allows for region identification of FP-utilizing code. However you take it, if a platform's CPU does not have an FPU available for use, then no matter what bytecode you use, you're not going to get FP operations on that platform, regardless of how awesome your bytecode is. It will then have to do the same thing as equivalent code written in C according to the UDI 1.01 spec: identify its FP-utilizing region in the .udiprops section.
We're not really talking about FPUs in particular. But as long as you've brought it up, 80386s can have 8087, 80287, or 80387 FPUs and there is no sane way for a binary driver to know which it can use. With bytecode you can even go as far as to emulate an FPU in its absence if you so wish. It's not mandatory, but it gives you this opportunity.
Is the 287 even part of the ABI, i.e. do you have to bother? The older FPUs don't even work with 386s. I doubt gcc can even emit 287-compatible code; I have at least never seen an option to do that.
Well, if GCC cannot do it then we shouldn't bother, right? Maybe in 5 years we should break compatibility again, because new FPU instructions will be put into the ISA and everyone will be using them, or because GCC will stop supporting them.
The driver packaging system still does not
Wrong. The compile script doesn't even specify what to use for -o, -I, -L, and -l, which get used when gcc is called by the backend. If you install a source driver you can just feed in -march/-mtune/-mfpmath/-pipe together with all the other options needed for compilation, without the driver caring.
I'm sure you can understand the difference between the packaging system and a compile script. Your only option is to compile the generic version of the driver from the package, tuned to your needs, but that will break binary portability because another environment will not know that it isn't a generic driver. Either that, or create a non-standard installation procedure for drivers: a hack. I'm trying to fix the problem, not find a quick fix.
So how many devices do you know about whose drivers will require AES extensions? Please do tell.
I can imagine that they would be useful in a variety of cases. I, however, do not understand why you are trying to be so particular. Even if AES were completely useless, can you guarantee that the most generic instruction set will always work just fine, no matter the architecture or CPU model? I cannot.
That wasn't an answer. Actually, you indirectly proved his point.
And how is that? Can you guarantee that in a few years we won't all be using some awesome new instruction and will need to change the ABI to reflect this?
They are not unimportant. They are in fact very essential. But a bytecode will not in any way provide added ease for proprietary development. You claim that the use of the current specification will make you "lose out on proprietary drivers". Yet UDI 1.01 is binary portable across implementations of the same architecture.
Did I ever say that the current specification doesn't work for proprietary drivers? No, I said the source code distributions don't work for proprietary drivers. I'm sure you can understand why.
Non sequitur: why does the point mention bytecode and the supposed counterargument mention source code?

Please fix your logic before continuing.
Please fix your inability to read, my lord. He talked about a claim I never made, so I rectified it by stating the correct claim. When being rude, it's always a good idea to check everything twice.

Re: Should I get on the UDI train?

Posted: Mon Jul 26, 2010 11:24 am
by Combuster
I'm not going to troll back. There's no point in arguing over how you disagree about some murky details on which consensus was established. Also, three people here have stated their opinion that you can't make proper arguments, and one says you don't. Please draw your conclusions from that observation.

For the record, I will go and report all pointless debates past this point, in the hope to give some sense of progress to this thread.





So, LLVM as a virtual architecture: advantages, disadvantages? Same for CIL/UEFI/ACPICA?

Re: Should I get on the UDI train?

Posted: Mon Jul 26, 2010 11:39 am
by Owen
Combuster wrote:I'm not going to troll back. There's no point in arguing over how you disagree about some murky details on which consensus was established. Also, three people here have stated their opinion that you can't make proper arguments, and one says you don't. Please draw your conclusions from that observation.

For the record, I will go and report all pointless debates past this point, in the hope to give some sense of progress to this thread.





So, LLVM as a virtual architecture: advantages, disadvantages? Same for CIL/UEFI/ACPICA?
In the hope of making progress, I submitted a graphics architecture proposal to the mailing list :)

For the mentioned bytecodes:
  • LLVM is not able to represent C code portably. The primary casualty is pointer-sized integers, which will be lost as i64/i32s. Add an "iptr" type and you're pretty much there
  • UEFI is too far gone to build optimized code from. The compiler has already built fixed stack frames and such; you can't feed it to something like the optimizers of LLVM or GCC
  • CIL is too high level (i.e. hard to compile C to)
  • AML is vile and not a C compilation target
We would need to create a "UDI 1.02" in order to lock down some presently architecture-specific stuff to make this possible.

Re: Should I get on the UDI train?

Posted: Mon Jul 26, 2010 3:49 pm
by Love4Boobies
Don't worry about it; it is obvious that I am just as unable to have a civilized conversation as you are. I am glad that you guys are apparently going for a bytecode design, but I decided not to waste any more energy on this because it is obvious I don't have what it takes. Not being sarcastic. I really wish you guys good luck because I think UDI is a good interface :)