Theoretical: Why the original 386 design was bad
- bontanu
Re: Theoretical: Why the original 386 design was bad
I think that the x86 and 80386 design is very good (it can be improved) and that the ARM / RISC design is very bad.
Ambition is a lame excuse for the ones not brave enough to be lazy; Solar_OS http://www.oby.ro/os/
Re: Theoretical: Why the original 386 design was bad
bontanu wrote: I think that the x86 and 80386 design is very good (it can be improved) and that the ARM / RISC design is very bad.
I think you're wrong.
Re: Theoretical: Why the original 386 design was bad
JamesM wrote: Given we're talking about user mode, do you mean LDM/STM with supervisor registers? LDM/STM with user regs in user mode is kind of expected.
AlexExtreme wrote: No, there are supervisor-mode LDM/STM instructions that load from / store to specifically the user-mode copies of registers, and that are marked as unpredictable when executed from user/system modes. Sorry, wasn't clear what I was referring to there.
Ahh, the LDM r0, {r1,r2,r3}^ with the hat?
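For readers who haven't met the ^ form: here is a minimal sketch of how a privileged-mode handler can save and restore the user-mode banked SP and LR with it. It assumes an ARMv7-A target built as 32-bit ARM code with GCC inline assembly; the function and struct names are illustrative, not from any particular kernel.

```c
#include <stdint.h>

/* Saved user-mode stack pointer and link register. */
struct user_regs {
    uint32_t sp_usr;
    uint32_t lr_usr;
};

/* Runs in a privileged mode (e.g. SVC).  The '^' on a register list
 * that does not contain PC tells STM/LDM to use the user-mode banked
 * copies of SP and LR instead of the current mode's copies. */
static inline void save_user_sp_lr(struct user_regs *r)
{
    __asm__ volatile("stmia %0, {sp, lr}^" : : "r"(r) : "memory");
}

static inline void restore_user_sp_lr(const struct user_regs *r)
{
    __asm__ volatile("ldmia %0, {sp, lr}^" : : "r"(r) : "memory");
}
```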
- AlexExtreme
Re: Theoretical: Why the original 386 design was bad
JamesM wrote: Ahh, the LDM r0, {r1,r2,r3}^ with the hat?
That's the one.
Re: Theoretical: Why the original 386 design was bad
tom wrote: Well it is a bit sad Intel couldn't foresee that future processors would use more than 1 megabyte of RAM. Moore's law was first declared in 1965 apparently, so Intel should have realised memory requirements would also significantly increase in future.
They knew memory requirements would increase. They just didn't know they'd be maintaining x86 compatibility 30 years later.
- tom9876543
Re: Theoretical: Why the original 386 design was bad
So in summary
===========
Intel could have designed a "pure" 386 by eliminating 286 backwards compatibility (making the 386 only compatible with the 8086).
Intel could have got rid of all 286 protections. Paging supports User / Supervisor levels. The 286 had an IO Permission Bitmap, but I suggest it's not required if IN/OUT is restricted to Supervisor Level code.
Getting rid of 286 compatibility would not have been the end of the world for operating system developers, because 286 applications could still run in their own address space under v8086 mode.
Benefits of getting rid of 286 compatibility:
- I counted 18 single-byte opcodes that are rarely used and basically wasted. They should have been reallocated to more useful opcodes (in protected mode).
- How many CPU cycles are wasted doing segment limit checks etc.? Ironically, Intel invented SYSENTER/SYSEXIT because these 286 checks are a total waste of time.
- How many CPU transistors are wasted to support GDT, TSS, Call Gates, Task Gates etc.? Although I admit today 10,000 transistors would be barely noticed.
- The Interrupt Table could have been 4 bytes per interrupt, unlike the convoluted 286-inspired 8-byte design.
- Would have been a chance to eliminate the A20 hack, which was caused by a badly designed 286 (see the A20 sketch just after this list).
Anyway it's all theoretical now.
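As referenced in the last bullet, here is a hedged sketch of what the A20 hack costs a boot loader in practice. This version uses the "fast A20" gate on System Control Port A (port 0x92); the traditional keyboard-controller route is considerably longer. It assumes freestanding x86 C with GCC inline assembly, and the function names are illustrative only.

```c
#include <stdint.h>

static inline uint8_t inb(uint16_t port)
{
    uint8_t v;
    __asm__ volatile("inb %1, %0" : "=a"(v) : "Nd"(port));
    return v;
}

static inline void outb(uint16_t port, uint8_t v)
{
    __asm__ volatile("outb %0, %1" : : "a"(v), "Nd"(port));
}

/* "Fast A20": bit 1 of System Control Port A (0x92) gates address
 * line 20.  Bit 0 is a CPU reset line, so it must be left clear. */
static void enable_a20_fast(void)
{
    uint8_t v = inb(0x92);
    if (!(v & 0x02)) {
        v |= 0x02;      /* enable A20            */
        v &= (uint8_t)~0x01; /* never pulse the reset */
        outb(0x92, v);
    }
}
```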
- Owen
Re: Theoretical: Why the original 386 design was bad
tom9876543 wrote: Intel could have got rid of all 286 protections. Paging supports User / Supervisor levels. The 286 had an IO Permission Bitmap, but I suggest it's not required if IN/OUT is restricted to Supervisor Level code.
As has been established... 286 backwards compatibility was a design requirement.
tom9876543 wrote: Getting rid of 286 compatibility would not have been the end of the world for operating system developers, because 286 applications could still run in their own address space under v8086 mode.
No they couldn't. 286 apps depend upon the specifics of the 286...
tom9876543 wrote: Benefits of getting rid of 286 compatibility:
- I counted 18 single-byte opcodes that are rarely used and basically wasted. They should have been reallocated to more useful opcodes (in protected mode).
Won't argue there. Except that quite a lot of them have been killed (long mode) and a few of the others have been repurposed in other instructions. For example, see the operand size prefix's use in MMX/SSE. Of course, killing the single-byte inc/decs was a much better coup for AMD.
tom9876543 wrote: - How many CPU cycles are wasted doing segment limit checks etc.? Ironically, Intel invented SYSENTER/SYSEXIT because these 286 checks are a total waste of time.
None. Limit checks are cheap and fast, and use little core area. SYSENTER/EXIT/CALL/RET were created because loading GDT entries is a waste of time.
tom9876543 wrote: - How many CPU transistors are wasted to support GDT, TSS, Call Gates, Task Gates etc.? Although I admit today 10,000 transistors would be barely noticed.
Next to none. It's mostly a tiny section of microcode ROM. On CPUs where half their area or more is used for cache, it is truly irrelevant and below the noise floor.
tom9876543 wrote: - The Interrupt Table could have been 4 bytes per interrupt, unlike the convoluted 286-inspired 8-byte design.
Whoo! We save one whole kilobyte of RAM! Major benefit!
tom9876543 wrote: - Would have been a chance to eliminate the A20 hack, which was caused by a badly designed 286.
A20 wasn't important to Intel at the time. It is even less so now. It is trivial.
tom9876543 wrote: Anyway it's all theoretical now.
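To make the SYSENTER point concrete: the kernel programs three MSRs once at boot, and the instruction thereafter enters ring 0 from those cached values, with no IDT/GDT memory reads or gate permission checks on the call path. A minimal sketch assuming 32-bit x86 and GCC inline assembly; kernel_stack_top and sysenter_entry are hypothetical kernel symbols.

```c
#include <stdint.h>

#define IA32_SYSENTER_CS   0x174
#define IA32_SYSENTER_ESP  0x175
#define IA32_SYSENTER_EIP  0x176

static inline void wrmsr(uint32_t msr, uint64_t value)
{
    __asm__ volatile("wrmsr"
                     : : "c"(msr),
                         "a"((uint32_t)value),
                         "d"((uint32_t)(value >> 32)));
}

/* Hypothetical kernel symbols. */
extern uint8_t kernel_stack_top[];
extern void sysenter_entry(void);

/* Program the SYSENTER machinery once at boot.  SYSENTER then derives
 * CS/SS from the CS MSR and loads ESP/EIP from the other two, without
 * reading a gate descriptor or GDT entry on every system call. */
static void setup_sysenter(uint16_t kernel_cs)
{
    wrmsr(IA32_SYSENTER_CS,  kernel_cs);
    wrmsr(IA32_SYSENTER_ESP, (uintptr_t)kernel_stack_top);
    wrmsr(IA32_SYSENTER_EIP, (uintptr_t)sysenter_entry);
}
```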
- Combuster
Re: Theoretical: Why the original 386 design was bad
tom9876543 wrote: - How many CPU cycles are wasted doing segment limit checks etc.? Ironically, Intel invented SYSENTER/SYSEXIT because these 286 checks are a total waste of time.
Segmentation is still used in relatively modern systems, for small address spaces and for 386-compatible no-execute. The fast traps are just for the common case (read: Microsoft) where the system developer can guarantee that the checks and memory accesses truly are a waste of time.
tom9876543 wrote: - The Interrupt Table could have been 4 bytes per interrupt, unlike the convoluted 286-inspired 8-byte design.
286s used an 8-byte IDT as well, and that was not because of the address size - if your system call interface is based on interrupts, you will need to tell which interrupts may be called and which shouldn't.
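The "386-compatible no-execute" remark refers to the trick of giving CS a smaller limit than DS, so memory above the code limit can be read and written but not executed on CPUs without an NX bit (hardening patches such as Exec Shield took roughly this approach). A hedged sketch of such descriptors, assuming a flat 4 GiB data segment and a code segment capped at 1 GiB; the 1 GiB split point is purely illustrative.

```c
#include <stdint.h>

/* One 8-byte 386 segment descriptor. */
struct gdt_desc {
    uint16_t limit_low;   /* limit bits 15:0               */
    uint16_t base_low;    /* base bits 15:0                */
    uint8_t  base_mid;    /* base bits 23:16               */
    uint8_t  access;      /* P, DPL, S, type               */
    uint8_t  gran_limit;  /* G, D/B, 0, AVL, limit 19:16   */
    uint8_t  base_high;   /* base bits 31:24               */
} __attribute__((packed));

/* Build a descriptor with 4 KiB granularity (limit given in pages). */
static struct gdt_desc make_desc(uint32_t base, uint32_t limit_4k,
                                 uint8_t access)
{
    struct gdt_desc d = {
        .limit_low  = limit_4k & 0xFFFF,
        .base_low   = base & 0xFFFF,
        .base_mid   = (base >> 16) & 0xFF,
        .access     = access,
        .gran_limit = 0xC0 | ((limit_4k >> 16) & 0x0F), /* G=1, D=1 */
        .base_high  = (base >> 24) & 0xFF,
    };
    return d;
}

static struct gdt_desc gdt[3];  /* gdt[0] stays the null descriptor */

/* Code limit 1 GiB, data limit 4 GiB: anything mapped above 1 GiB is
 * readable and writable, but jumping there faults -> poor man's NX. */
static void build_gdt(void)
{
    gdt[1] = make_desc(0, (0x40000000u >> 12) - 1, 0x9A); /* ring 0 code */
    gdt[2] = make_desc(0, 0xFFFFF,                 0x92); /* ring 0 data */
}
```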
Re: Theoretical: Why the original 386 design was bad
bontanu wrote: I think that the x86 and 80386 design is very good (it can be improved) and that the ARM / RISC design is very bad.
ARM is not really a RISC design. Maybe from the beginning their design goal was to make a RISC CPU, but with Thumb it has evolved into something that looks like a more modern CPU. RISC isn't really a modern design goal anymore, and the current trend is variable-length instructions again, often with merged instructions. If you look at AVR32 and Xtensa, you clearly see modern design decisions based on the current knowledge in CPU architecture.
ARM is almost the new x86, especially now that LPAE is a copy of x86 PAE.
- tom9876543
Re: Theoretical: Why the original 386 design was bad
Owen wrote: No they couldn't. 286 apps depend upon the specifics of the 286...
You are full of ****.
Go back to the post where I asked:
Can you please provide exact details of what 286 features a 16-bit protected mode OS/2 application used? Also same for a win16 application?
Still waiting for an answer.....
- tom9876543
Re: Theoretical: Why the original 386 design was bad
Combuster wrote: 286s used an 8-byte IDT as well, and that was not because of the address size - if your system call interface is based on interrupts, you will need to tell which interrupts may be called and which shouldn't.
Yes, you do need a way to tell which interrupts can be called by user-level code. That's 1 bit. If the CPU requires ISR routines to be aligned to 16-byte boundaries, there are 4 spare bits in a 4-byte IDT entry.
Last edited by tom9876543 on Wed Jan 26, 2011 2:17 pm, edited 1 time in total.
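A hedged sketch of the packing tom9876543 is describing, in freestanding C: with handlers aligned to 16 bytes, the low four address bits of a 32-bit entry are free for flags such as "callable from user level". This is purely hypothetical, of course; the real 386 IDT gate is 8 bytes. It assumes a 32-bit target, and all names are illustrative.

```c
#include <stdint.h>

/* Hypothetical 4-byte interrupt table entry:
 *   bits 31..4  handler address (handlers aligned to 16 bytes)
 *   bit  0      may be invoked by user-level code via INT n
 *   bits 3..1   spare (e.g. present bit, gate type)              */
#define IDT_USER_CALLABLE 0x1u

static inline uint32_t make_entry(void (*handler)(void), uint32_t flags)
{
    uint32_t addr = (uint32_t)(uintptr_t)handler;
    /* handler must be 16-byte aligned, i.e. (addr & 0xF) == 0 */
    return (addr & ~0xFu) | (flags & 0xFu);
}

static inline void (*entry_handler(uint32_t e))(void)
{
    return (void (*)(void))(uintptr_t)(e & ~0xFu);
}

static inline int entry_user_callable(uint32_t e)
{
    return (int)(e & IDT_USER_CALLABLE);
}
```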
- tom9876543
Re: Theoretical: Why the original 386 design was bad
Owen wrote: Whoo! We save one whole kilobyte of RAM! Major benefit!
Yes, you can laugh about RAM, but then you forget about CPU cache.
What's the size of the CPU cache? Especially on older CPUs like the 486 or Pentium? 32KB? 1KB makes a noticeable difference there.
Overall, I agree the only significant benefit of dropping 286 compatibility would be to allow 18 single-byte opcodes to be reallocated to more useful instructions.
Re: Theoretical: Why the original 386 design was bad
tom9876543 wrote: Overall, I agree the only significant benefit of dropping 286 compatibility would be to allow 18 single-byte opcodes to be reallocated to more useful instructions.
Given that the instruction set is microcoded down to an internal RISC representation, I don't see the benefit of this. It would cause code to be a tiny amount smaller - not exactly a big issue.
Re: Theoretical: Why the original 386 design was bad
IMO, the 386 processor design is one of the best processor designs ever done. The whole protection model is superior to the current "flat" memory models only protected by paging. Keeping compatibility with the 286 protection model and real mode (V86) was done so well that it actually did not break DOS or 16-bit protected mode applications.
In retrospect, they should have done this differently:
* Selectors should have been extended to 32 bits.
* Maybe the TSS feature should have been done differently (or perhaps omitted)
In comparison to the really bad design of the 64-bit extension of x86, the 386 design was a major technological feat. OTOH, Intel was the first to totally break the x86 architecture with Itanium, but AMD too broke the whole design with their extension. It would have been easy to change AMD's spec so that it did not break backwards compatibility with protected mode, and this would not have affected performance, yet AMD chose to break the x86 architecture.
Re: Theoretical: Why the original 386 design was bad
tom9876543 wrote: Can you please provide exact details of what 286 features a 16-bit protected mode OS/2 application used? Also same for a win16 application?
That is an easy one. The 286 processor reinterpreted the use of segment registers from real mode. In real mode, addresses are calculated by shifting the segment four bits left and adding the offset. In 286 protected mode, the segment register (called a selector) loads a 24-bit base (and 16-bit limit) from the GDT/LDT, and this base is added to the offset to form the physical address. Therefore, 286 protected mode applications can address 16MB of physical memory, and thus cannot execute in real mode, which can only address 1MB. The addressing scheme is completely incompatible. Some 16-bit protected mode applications also define "huge segments" by allocating several adjacent selectors, where all but the last have a 64k limit. Many 16-bit protected mode applications also depend on being able to allocate and manipulate selectors.
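The two address calculations described above, written out as a short C sketch so the incompatibility is explicit: real mode derives the address arithmetically from the segment value itself, while 286 protected mode treats the same 16 bits as an index into a descriptor table. The struct and function names are illustrative only.

```c
#include <stdint.h>

/* Real mode: the 16-bit segment value is part of the address itself. */
static uint32_t real_mode_addr(uint16_t seg, uint16_t off)
{
    return ((uint32_t)seg << 4) + off;          /* at most ~1 MiB */
}

/* 286 protected mode: the same 16 bits select a descriptor, and the
 * 24-bit base stored there is what gets added to the offset. */
struct descriptor_286 {
    uint32_t base;    /* 24-bit segment base  */
    uint16_t limit;   /* 16-bit segment limit */
};

static uint32_t prot_mode_addr(const struct descriptor_286 *table,
                               uint16_t selector, uint16_t off)
{
    /* index = selector >> 3; the low TI/RPL bits are ignored here */
    const struct descriptor_286 *d = &table[selector >> 3];
    /* a real CPU would fault if off > d->limit */
    return d->base + off;                        /* up to 16 MiB */
}
```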