
Re: Theoretical: Why the original 386 design was bad

Posted: Sat Jan 22, 2011 8:21 pm
by tom9876543
ring 1 usage - I already mentioned Xen
Xen is a hypervisor. I am confused - why wouldn't Xen use Intel virtualization (VT-x)?

Also Wikipedia mentions that Xen runs on ARM. So if Xen can run on ARM with no CPL1, it surely can run on a theoretical 386 with no CPL1.

Hypervisors should be implemented by adding extra CPU registers, instructions, etc., not by hacking the features visible to the operating system.

Re: Theoretical: Why the original 386 design was bad

Posted: Sat Jan 22, 2011 11:18 pm
by bewing
Your 16-bit suggestion is misaligned 50% of the time, which will crash many CPUs.

Re: Theoretical: Why the original 386 design was bad

Posted: Sun Jan 23, 2011 5:02 am
by fronty
tom9876543 wrote: Xen is a hypervisor. I am confused - why wouldn't Xen use Intel virtualization (VT-x)?

Also Wikipedia mentions that Xen runs on ARM. So if Xen can run on ARM with no CPL1, it surely can run on a theoretical 386 with no CPL1.
Xen can use both VT-x and AMD-V. It doesn't use only them because it predates both, and the developers want to keep supporting older CPUs. They chose to implement Xen that way because, in their opinion, it was the natural and best way to implement a hypervisor, and I don't believe they just started coding without a good deal of planning and thought beforehand. Of course they didn't rewrite the x86 port to resemble the ARM port; the two ports have to be separate anyway, so there really wasn't any reason for a rewrite.

Re: Theoretical: Why the original 386 design was bad

Posted: Sun Jan 23, 2011 7:01 am
by Brendan
Hi,
tom9876543 wrote:
Brendan wrote: That was not Intel's fault - since the beginning the first 32 interrupts were reserved by Intel for exceptions.
Yes you are correct, my bad. IBM is supposed to be the pinnacle of computing excellence but they stuffed that up.
IBM did a lot of good stuff, and a lot of average stuff, and a lot of other stuff. The original PC fits in the "other" category - something IBM slapped together by recycling parts from other systems and gluing them together to get something to market quickly.
tom9876543 wrote:
Brendan wrote: Intel make a CPU and had no say in how that CPU is used. A20 wasn't Intel's problem.
I would disagree with you there. I found the Intel 8086 Users Manual on the web. It clearly says the following:
- offsets wrap around
- the memory address space is limited to 1 megabyte

It does not clearly say what happens when the physical address is 21 bits, but based on the above, you would assume a wrap around.
The manual documents what does happen for that CPU. In 1978 (when that manual was probably written) Intel didn't have a working time machine, and therefore wasn't able to find out, until it was too late, that people would be too stupid to realise a CPU with more memory might not wrap. They probably didn't even know whether there was going to be an 80186 back then anyway.
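For what it's worth, the wrap-around arithmetic being argued about here is trivial to sketch. A purely illustrative C snippet (the function name is mine):

Code:
#include <stdio.h>
#include <stdint.h>

/* 8086 physical address: segment * 16 + offset, truncated to the
 * chip's 20 address lines. */
static uint32_t phys_8086(uint16_t seg, uint16_t off)
{
    return ((uint32_t)seg * 16 + off) & 0xFFFFF;
}

int main(void)
{
    /* FFFF:0010 wraps to 0x00000 on an 8086... */
    printf("8086: FFFF:0010 -> %05X\n", phys_8086(0xFFFF, 0x0010));
    /* ...but reaches 0x100000 on a later CPU with the A20 line enabled. */
    printf("286+: FFFF:0010 -> %06X\n", 0xFFFFu * 16 + 0x0010);
    return 0;
}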
tom9876543 wrote:
Brendan wrote: So you're suggesting that back in 1985, Intel should've used some sort of time travel to see what people would/wouldn't be doing 15 years later? Hindsight is easy. Foresight isn't.
I am suggesting Intel should have had the following philosophy when creating the 386:
Build a 32 bit CPU that is 100% compatible with the 8086 but make the design as elegant and clean as possible. Get rid of 286 compatibility as its "protection" is primitive and convoluted.
They did get rid of most 286 compatibility. They left just enough in so that applications (but not system software) would still work. In hindsight, this was probably a bit too risky, and maybe Intel would be closer to having 100% market share now if they hadn't screwed over the people who had developed software for 80286 protected mode 20 years earlier. Heck, maybe it took Microsoft ages to develop a 32-bit OS because they were worried Intel would break backward compatibility again and leave them with an expensive OS that would run on the 80386 and nothing else.

For the 80386 they made segmentation very robust, so that you could have several pieces of code (at different privilege levels) all relying on each other without going through any intermediate/kernel code. For example, a process (at CPL=3) could use a task gate to cause an immediate task switch to another process (also at CPL=3), and the new process could use a call gate to access a driver (at CPL=2), without any need to switch to/from CPL=0 and without any security problems. Intel couldn't have known that these remarkably powerful features wouldn't be used very much when they first designed it.
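For anyone who hasn't poked at this: here's roughly what the "driver at CPL=2 behind a call gate" arrangement looks like at the descriptor level. The field layout follows the i386 manuals, but the struct and function names are mine, and this only builds the descriptor - installing it in the GDT and the far call through it are omitted:

Code:
#include <stdint.h>

/* 80386 call gate descriptor (one 8-byte GDT/LDT entry). */
struct call_gate {
    uint16_t offset_low;   /* bits 0-15 of the entry point */
    uint16_t selector;     /* code segment selector of the driver (DPL=2) */
    uint8_t  param_count;  /* dwords copied from the caller's stack (low 5 bits) */
    uint8_t  type_attr;    /* present, DPL, type = 32-bit call gate */
    uint16_t offset_high;  /* bits 16-31 of the entry point */
};

/* Build a gate that CPL=3 code may call (gate DPL=3) into a CPL=2
 * driver. A far call through the gate switches stack and privilege
 * level directly, with no CPL=0 code involved. */
static struct call_gate make_driver_gate(uint16_t driver_cs, uint32_t entry)
{
    struct call_gate g = {
        .offset_low  = (uint16_t)(entry & 0xFFFF),
        .selector    = driver_cs,
        .param_count = 0,
        .type_attr   = 0x80 | (3 << 5) | 0x0C, /* P=1, DPL=3, 386 call gate */
        .offset_high = (uint16_t)(entry >> 16),
    };
    return g;
}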

If you want to blame someone for most of these things, then you should blame yourself. You should've built a time machine and travelled back in time (to both 1978 and to 1985) and warned Intel (and IBM?) about the future. You didn't; therefore all of us programmers, and IBM and Intel and lots of other companies should all sue you for failing to invent time travel.


Cheers,

Brendan

Re: Theoretical: Why the original 386 design was bad

Posted: Sun Jan 23, 2011 7:01 am
by Owen
tom9876543 wrote: Also Wikipedia mentions that Xen runs on ARM. So if Xen can run on ARM with no CPL1, it surely can run on a theoretical 386 with no CPL1.

Hypervisors should be implemented by adding extra CPU registers, instructions, etc., not by hacking the features visible to the operating system.
Hypervisors should be made by making the CPU follow the Popek and Goldberg virtualization rules. Unfortunately, neither x86 nor ARM follows them; x86 is indeed very bad at them (e.g. SGDT should not be possible from user mode, privileged flags are stored in CR0, etc.), and ARM is very close to following them, except the behavior of some privileged instructions executed in user mode is UNPREDICTABLE (i.e. won't do anything privileged, but could do anything else and may not trap)
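The SGDT point is easy to demonstrate, by the way. On anything without the modern UMIP feature, this builds and runs as an ordinary user process and leaks the GDT base to unprivileged code - a sensitive instruction that doesn't trap, which is exactly what Popek and Goldberg forbid (illustrative snippet, GCC inline assembly):

Code:
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    /* SGDT stores the GDT base and limit. A hypervisor would want to
     * hide these, yet classic x86 executes this at CPL=3 without
     * trapping. */
    struct __attribute__((packed)) {
        uint16_t  limit;
        uintptr_t base;
    } gdtr;

    __asm__ volatile ("sgdt %0" : "=m" (gdtr));
    printf("GDT base = %p, limit = %u\n", (void *)gdtr.base,
           (unsigned)gdtr.limit);
    return 0;
}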

Obviously the best way of implementing a hypervisor on ARM then would be to use the ARM TrustZone features. Good luck finding a core which supports one though (And good documentation, though the info on ARM's website gives me an overview from which it should be relatively easy to reverse engineer)

CPL1 on x86 is useful as an optimization method.

Oh, and with regards to some of your suggestions:
  • 16-bit OS/2 code cannot be run in v8086 mode. It depends heavily on 286 features.
  • Windows 9X used 16-bit protected mode to run code brought over from Windows 3.X, which inherited it from Windows/286. It also cannot run in v8086 mode.

Re: Theoretical: Why the original 386 design was bad

Posted: Sun Jan 23, 2011 7:31 am
by Kevin
tom9876543 wrote: Xen is a hypervisor. I am confused - why wouldn't Xen use Intel virtualization (VT-x)?
Because there was no VMX when Xen was designed, because there are still enough CPUs that don't support it, and because at least the first versions of it sucked. ;) (No real mode virtualization etc.)
tom9876543 wrote: Also Wikipedia mentions that Xen runs on ARM. So if Xen can run on ARM with no CPL1, it surely can run on a theoretical 386 with no CPL1.
So what? Brainfuck is Turing complete, so you could write all your code in it. Still, you don't want to do that.

You claimed that the features aren't used, and I told you about some systems that use them. Even if those systems could have been written to work without these features, they are examples showing that the features were apparently useful to some widely used software.
tom9876543 wrote: Hypervisors should be implemented by adding extra CPU registers, instructions, etc., not by hacking the features visible to the operating system.
For paravirtualization, being visible to the OS is the whole point of the design. For full virtualization, you only need to add new registers and instructions if you have to work around restrictions in the original design. So this is a point where I would actually have agreed that the i386 design is bad: running kernel code in ring 3 and catching exceptions should have been enough to implement virtualization software.
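A toy illustration of that trap-and-emulate idea, using a hosted process instead of bare metal (x86-64 Linux specific, and the "guest kernel" is a single CLI instruction, so treat it as a sketch of the scheme rather than a real monitor):

Code:
#define _GNU_SOURCE
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <ucontext.h>

static volatile sig_atomic_t virtual_if = 1;   /* the guest's virtual IF */

/* The "monitor": unprivileged code executing CLI raises #GP, which
 * Linux delivers as SIGSEGV; emulate the instruction and resume. */
static void monitor(int sig, siginfo_t *si, void *uc)
{
    ucontext_t *ctx = uc;
    unsigned char *ip = (unsigned char *)ctx->uc_mcontext.gregs[REG_RIP];

    if (*ip == 0xFA) {                         /* 0xFA = CLI */
        virtual_if = 0;                        /* update virtual state... */
        ctx->uc_mcontext.gregs[REG_RIP] += 1;  /* ...and skip the insn */
        return;
    }
    _exit(1);                                  /* anything else: give up */
}

int main(void)
{
    struct sigaction sa = { 0 };
    sa.sa_sigaction = monitor;
    sa.sa_flags = SA_SIGINFO;
    sigemptyset(&sa.sa_mask);
    sigaction(SIGSEGV, &sa, NULL);

    __asm__ volatile ("cli");                  /* the "guest kernel" */
    printf("CLI trapped and emulated, virtual IF = %d\n", (int)virtual_if);
    return 0;
}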

Re: Theoretical: Why the original 386 design was bad

Posted: Mon Jan 24, 2011 4:33 am
by tom9876543
Brendan wrote: The manual documents what does happen for that CPU..... Intel didn't have a working time machine, and therefore wasn't able to find out, until it was too late, that people would be too stupid to realise a CPU with more memory might not wrap.
Well, it is a bit sad that Intel couldn't foresee future processors using more than 1 megabyte of RAM. Moore's law was first stated in 1965, apparently, so Intel should have realised that memory requirements would also increase significantly in the future.
Mistake number 2 was that they then went and designed the 286 and 386 without making them 100% compatible with the 8086.

Brendan wrote: They did get rid of most 286 compatibility. They left just enough in so that applications (but not system software) would still work.
The 386 is virtually 100% compatible with the 286. Why don't you explain exactly which 286 features were removed from the 386? Until you do, you are obviously in la-la land.

Brendan wrote: Intel couldn't have known that these remarkably powerful features wouldn't be used very much when they first designed it.
I am cynical. I would say Intel invented the TSS, CPL0-3, etc. to try to corner the market. Intel-specific features make operating systems less portable, which makes it harder for them to move to other CPU architectures.

Owen wrote:
16-bit OS/2 code cannot be run in v8086 mode. It depends heavily on 286 features.
Windows 9X used 16-bit protected mode to run code brought over from Windows 3.X, which inherited it from Windows/286. It also cannot run in v8086 mode.
I never said 286 operating systems could run on the proposed "pure" 386 CPU. I only think 286 16-bit applications could.

Re: Theoretical: Why the original 386 design was bad

Posted: Mon Jan 24, 2011 7:50 am
by qw
I think the original 6502 design was bad because it was only 8 bits, had way too few registers, did not separate kernel from user mode, and could address only 64 KiB of memory.

</cynical>

Re: Theoretical: Why the original 386 design was bad

Posted: Mon Jan 24, 2011 11:10 am
by Owen
tom9876543 wrote:
Owen wrote:
16-bit OS/2 code cannot be run in v8086 mode. It depends heavily on 286 features.
Windows 9X used 16-bit protected mode to run code brought over from Windows 3.X, which inherited it from Windows/286. It also cannot run in v8086 mode.
I never said 286 operating systems could run on the proposed "pure" 386 CPU. I only think 286 16-bit applications could.
I am referring to the applications...

Re: Theoretical: Why the original 386 design was bad

Posted: Mon Jan 24, 2011 2:15 pm
by tom9876543
Owen wrote:I am referring to the applications...
OK. Can you please provide exact details of which 286 features a 16-bit protected mode OS/2 application used? And the same for a Win16 application?

For example, if you are referring to segment limits: yes, you are correct, v8086 mode doesn't have them. But then why would an application go past its own segment limits? Only a badly written application would do that - and a badly written application running on a "pure" 386 would not affect any other application, since they would all have their own address spaces thanks to paging.
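To make the segment-limit point concrete, here is a rough sketch of the two addressing models being compared (names and types are mine, purely illustrative):

Code:
#include <stdint.h>
#include <stdbool.h>

/* v8086 mode: every segment is real-mode style - the base is the
 * selector * 16 and the limit is implicitly 64 KiB, so a 16-bit
 * offset can never be "past the limit". */
static uint32_t v86_linear(uint16_t seg, uint16_t off)
{
    return (uint32_t)seg * 16 + off;
}

/* 286 protected mode: the selector names a descriptor carrying an
 * arbitrary base and limit, and an access past the limit raises a
 * general protection fault. This per-segment check is what v8086
 * mode cannot reproduce. */
struct descriptor { uint32_t base; uint16_t limit; };

static bool pm286_limit_ok(const struct descriptor *d, uint16_t off)
{
    return off <= d->limit;   /* false would mean #GP */
}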

Re: Theoretical: Why the original 386 design was bad

Posted: Mon Jan 24, 2011 2:45 pm
by JamesM
Hi,
Owen wrote: except the behavior of some privileged instructions executed in user mode is UNPREDICTABLE (i.e. won't do anything privileged, but could do anything else and may not trap)

Obviously the best way of implementing a hypervisor on ARM then would be to use the ARM TrustZone features. Good luck finding a core which supports one though (And good documentation, though the info on ARM's website gives me an overview from which it should be relatively easy to reverse engineer)
The Cortex-A8 and above (A9, A15) all have TrustZone. A8s are in most smartphones.

Just out of curiosity, which instructions are unpredictable in user mode? It sounds like a security flaw, does it not?

Re: Theoretical: Why the original 386 design was bad

Posted: Mon Jan 24, 2011 3:39 pm
by Owen
JamesM wrote:Hi,
Owen wrote: except the behavior of some privileged instructions executed in user mode is UNPREDICTABLE (i.e. won't do anything privileged, but could do anything else and may not trap)

Obviously the best way of implementing a hypervisor on ARM then would be to use the ARM TrustZone features. Good luck finding a core which supports one though (And good documentation, though the info on ARM's website gives me an overview from which it should be relatively easy to reverse engineer)
The Cortex-A8 and above (A9, A15) all have TrustZone. A8's are in most smartphones.
I wasn't aware TrustZone was supported by all those cores. This is interesting... I shall have to fiddle when I get my PandaBoard ;)

(Also, I've just noticed that it's officially called "Security Extensions" and there's a section on it in the ARM ARM... shiny :-))
Just out of curiosity, which instructions are unpredictable in user mode? sounds like a security flaw, does it not?
I can't remember. I remember it being mentioned on the Linux KVM mailing list I think.

In any case, the definition of unpredictable is such that it precludes it being a security issue. From the ARMv7-A/R ARM: "UNPREDICTABLE: Means the behavior cannot be relied upon. UNPREDICTABLE behavior must not represent security holes. UNPREDICTABLE behavior must not halt or hang the processor, or any parts of the system. UNPREDICTABLE behavior must not be documented or promoted as having a defined effect."

Re: Theoretical: Why the original 386 design was bad

Posted: Mon Jan 24, 2011 3:45 pm
by xyzzy
JamesM wrote:Just out of curiosity, which instructions are unpredictable in user mode? sounds like a security flaw, does it not?
LDM/STM with user registers are, and possibly some others. I thought it sounded like a security flaw when I was reading through the ARM ARM; however, I then looked at the definition of unpredictable:
ARM ARM wrote:UNPREDICTABLE behavior must not represent security holes.
Edit: Bah, Owen beat me to it :)

Re: Theoretical: Why the original 386 design was bad

Posted: Mon Jan 24, 2011 4:10 pm
by JamesM
xyzzy wrote: LDM/STM with user registers are
Given we're talking about user mode, do you mean LDM/STM with supervisor registers?

LDM/STM with user regs in user mode is kind of expected ;)

Re: Theoretical: Why the original 386 design was bad

Posted: Mon Jan 24, 2011 4:20 pm
by xyzzy
JamesM wrote:
xyzzy wrote: LDM/STM with user registers are
Given we're talking about user mode, do you mean LDM/STM with supervisor registers?

LDM/STM with user regs in user mode is kind of expected ;)
No, there are supervisor-mode LDM/STM forms that load from/store to specifically the user-mode copies of the registers, and those are what's marked as unpredictable when executed from User/System mode. Sorry, I wasn't clear about what I was referring to there :)
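For reference, this is the instruction form in question - 32-bit ARM, shown as GCC inline assembly, purely illustrative:

Code:
/* The '^' suffix on LDM/STM selects the User-mode copies of the banked
 * registers, which only makes sense from a privileged mode (e.g. saving
 * user state on a context switch). The ARM ARM marks this encoding
 * UNPREDICTABLE when executed in User or System mode: it must not do
 * anything privileged, must not hang the machine, and need not trap -
 * but beyond that, anything goes. */
static void save_user_regs(unsigned int frame[15])
{
    __asm__ volatile ("stm %0, {r0-r14}^" : : "r" (frame) : "memory");
}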