Brendan wrote:
8086 was a 16-bit CPU that was "sort of source compatible" with older 8-bit Intel CPUs. The segment registers were a silly hack so that the CPU could use more than 64 KiB (without needing larger registers). It was ugly, but Intel didn't care much - at the time 8086 was just a temporary thing to keep people happy until their iAPX 432 was ready, and then (they hoped) it'd die.
There are still 8086-compatible designs running. That environment is about as bad (or good, whatever) as any flat memory model, since segments there are just a way to increase address space: they have no limit checks and no programmable base, only the hardwired base of sixteen times the selector value.
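To make that concrete, here is a minimal sketch in C (function names are mine) of the real-mode translation: the "base" is nothing but the selector shifted left by four, and the result wraps at 1 MiB on an 8086.

```c
#include <stdint.h>
#include <stdio.h>

/* Real-mode address translation: physical = segment * 16 + offset.
 * No limit check, no descriptor lookup - the "base" is hardwired.
 * The result wraps at 1 MiB on the 8086 (20 address lines). */
static uint32_t real_mode_phys(uint16_t segment, uint16_t offset)
{
    return (((uint32_t)segment << 4) + offset) & 0xFFFFF;
}

int main(void)
{
    /* The classic VGA text buffer: B800:0000 -> 0xB8000. */
    printf("%05X\n", (unsigned)real_mode_phys(0xB800, 0x0000));
    /* FFFF:0010 wraps back around to 00000 on a real 8086. */
    printf("%05X\n", (unsigned)real_mode_phys(0xFFFF, 0x0010));
    return 0;
}
```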
Brendan wrote:
Intel's iAPX 432 was a hideous thing - designed for high-level languages, with built-in support for things like object-oriented programming, garbage collection, etc (it's not like you need a managed environment for these things). The "temporary" 8086 got popular fast, and the iAPX 432 failed.
Yes, but that wasn't Intel's doing. It was people actually using the 8086 that made it popular, and nobody using the other design that killed it.
Brendan wrote:
Eventually 8086's 1 MiB limit got too, um, limiting. Intel wanted to increase that, but they also hadn't quite given up on some of the failed ideas from the failed iAPX 432 chip either. By this time Intel had also learnt the importance of backward compatibility - they needed to make it work so that old software designed for the 8086's segmentation could at least run on a new protected mode OS. They combined the silly hack (from 8086) with failed ideas (from iAPX 432) and "80286 protected mode" was born.
Kind of. 80286 protected mode was ok, but it got used too much.
Brendan wrote:
Next comes 80386. How to make it 32-bit while providing the crucial backward compatibility? They extended the "silly hack combined with failed ideas" so that old software designed for older 80x86 CPUs would still be able to run on a new 32-bit OS. Of course Intel was getting smarter - they also added paging. Most OSs abandoned segmentation (except where it's necessary to execute old software designed for older CPUs - e.g. DOS and 16-bit Windows programs). OS/2 was the only OS that bothered with segmentation for new 32-bit executables, and it paid the price (in terms of additional complexity for a feature nobody bothered to use). It turned out that given the choice between "safety" (from segmentation) and performance (from not having to do the protection checks that segmentation required), every sane programmer chose performance.
Not really. The 386 partly defined a completely new environment. Old real-mode software (which protected mode on the 286 couldn't get back to without a reset) could be run in a new sub-mode, virtual 8086 mode. The primary problem was how they extended the GDT and the descriptors to stay backward compatible with 286 protected mode, which was not at all necessary since that only affected OS kernels, not applications.
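For reference, this is roughly what that backward-compatible layout looks like as a C struct (a sketch, field names are mine): the 286 format occupied the first six bytes and required the last two to be zero, so the 386's extra base and limit bits had to be scattered into those spare bytes.

```c
#include <stdint.h>

/* 80386 segment descriptor, as extended from the 286 format.
 * The 286 used bytes 0-5 and required bytes 6-7 to be zero, so the
 * 386 scattered its extra bits into those spare bytes: the 32-bit
 * base ends up split across three fields and the 20-bit limit
 * across two. (Field names are mine.) */
struct gdt_descriptor {
    uint16_t limit_0_15;        /* limit bits 0..15   (286 field)     */
    uint16_t base_0_15;         /* base bits 0..15    (286 field)     */
    uint8_t  base_16_23;        /* base bits 16..23   (286 field)     */
    uint8_t  access;            /* type, DPL, present (286 field)     */
    uint8_t  limit_16_19_flags; /* limit bits 16..19 + G/D/AVL flags  */
    uint8_t  base_24_31;        /* base bits 24..31   (386 addition)  */
} __attribute__((packed));

/* Reassembling the base shows the cost of the split layout. */
static inline uint32_t descriptor_base(const struct gdt_descriptor *d)
{
    return (uint32_t)d->base_0_15
         | ((uint32_t)d->base_16_23 << 16)
         | ((uint32_t)d->base_24_31 << 24);
}
```

A clean-slate descriptor format would have put the base and limit in contiguous fields; this layout exists purely so 286 descriptors remain valid bit patterns.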
Brendan wrote:
Things ticked along nicely for a while, with a few extensions to paging to support systems with more than 4 GiB of RAM. Eventually both Intel and AMD decided it was time for 64-bit. Intel wanted people to shift to a "not so backward compatible" Itanium (where they could lock out competitors). AMD had other plans.
Itanium was the worst piece of junk Intel ever produced. It was a really good thing that this design died quietly.
Brendan wrote:
AMD continued the old tradition - they provided enough backward compatibility so that old software designed for older (32-bit) CPUs would still be able to run under a 64-bit OS; which meant keeping segmentation for long mode. For 64-bit code (where backward compatibility wasn't important) they did what everyone had been hoping for - they took the old/deprecated "silly hack combined with failed ideas, now with 32-bit extensions" out behind the back shed and ended its suffering.
Not really. If you had snooped around a little more while writing an emulator, you'd have noticed that about the only thing that happens when you switch to long mode is that the segment bases stop being applied, as do the limit checks (except for FS and GS, whose bases are not loaded from descriptors). It's still valid to load selectors in 64-bit mode, and the whole descriptor cache is maintained across mode switches.
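For instance, the FS and GS bases in long mode are programmed through MSRs rather than through descriptor loads. A minimal ring-0 sketch, assuming GCC inline asm (helper names are mine; 0xC0000100 and 0xC0000101 are the architectural FS.base and GS.base MSR numbers):

```c
#include <stdint.h>

/* In long mode the FS and GS bases are not taken from descriptors;
 * they are written directly via MSRs (ring 0 only). */
#define MSR_FS_BASE 0xC0000100u
#define MSR_GS_BASE 0xC0000101u

static inline void wrmsr(uint32_t msr, uint64_t value)
{
    /* WRMSR takes the MSR number in ECX and the value in EDX:EAX. */
    asm volatile("wrmsr"
                 : /* no outputs */
                 : "c"(msr), "a"((uint32_t)value),
                   "d"((uint32_t)(value >> 32)));
}

static inline void set_fs_base(uint64_t base)
{
    wrmsr(MSR_FS_BASE, base); /* loading a selector won't set this */
}
```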
Worse, in AMD's design, compatibility mode clobbers the upper halves of the 64-bit registers on 32-bit register loads, and this cannot be compensated for in compatibility mode, since manipulating 64-bit registers is not possible there. This is contrary to how real mode can still use 32-bit instructions, and how 16-bit register loads do not clobber the upper parts of 32-bit registers. It looks a lot like a bug, and it forces unnecessary 64-bit register pushes and pops in mixed designs.
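The asymmetry is easy to show from 64-bit code, where both operand sizes are available (a sketch using GCC inline asm): a 32-bit register write zero-extends into the full 64-bit register, while a 16-bit write leaves the upper bits untouched.

```c
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint64_t r;

    /* 32-bit write: "mov eax, imm32" zeroes RAX bits 63..32. */
    asm("movq $-1, %%rax\n\t"
        "movl $1, %%eax\n\t"   /* clobbers the upper half */
        "movq %%rax, %0"
        : "=r"(r) : : "rax");
    printf("after 32-bit load: %016llx\n", (unsigned long long)r);
    /* prints 0000000000000001 */

    /* 16-bit write: "mov ax, imm16" preserves the upper bits. */
    asm("movq $-1, %%rax\n\t"
        "movw $1, %%ax\n\t"    /* upper 48 bits survive */
        "movq %%rax, %0"
        : "=r"(r) : : "rax");
    printf("after 16-bit load: %016llx\n", (unsigned long long)r);
    /* prints ffffffffffff0001 */
    return 0;
}
```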
Another problem is that all implementations and uses of 64-bit mode are still essentially 32-bit designs, just with more and wider registers and a larger heap. Also, addressing an arbitrary 64-bit address requires first loading the address into a general register and then using that register as a base, which is fairly slow and not unlike how segment registers must be loaded in the segmented environment.
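At the instruction level it looks like this (a sketch in GCC inline asm; the address is a made-up example and would fault unless that page were mapped): there is no general [imm64] addressing mode, so the pointer takes one instruction to materialize and a second one to use.

```c
#include <stdint.h>

/* Reading an arbitrary 64-bit address: x86-64 has no general [imm64]
 * addressing mode (only the accumulator-only moffs form of MOV), so
 * the address must first be materialized in a register, much like a
 * segment register had to be loaded before use. The address below is
 * a made-up example. */
static inline uint64_t read_absolute(void)
{
    uint64_t value;
    asm("movabsq $0x123456789ABC, %%rbx\n\t" /* 1: load the address  */
        "movq (%%rbx), %0"                   /* 2: use it as a base  */
        : "=r"(value) : : "rbx", "memory");
    return value;
}
```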
Lastly, "64-bit" is a misnomer, as the paging hardware only supports 48-bit linear addresses (which must be in canonical, sign-extended form), and this cannot be extended without breaking the existing paging implementation (much as PAE broke 32-bit paging).
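Concretely, 48-bit linear addressing means bits 63..47 must all be copies of bit 47 (the "canonical" form); a small C check (the function name is mine):

```c
#include <stdbool.h>
#include <stdint.h>

/* With 48-bit linear addresses, bits 63..47 must all equal bit 47
 * ("canonical" form); anything else faults on use. Bits 63..47 are
 * 17 bits, so they must be all zeroes or all ones. */
static bool is_canonical_48(uint64_t addr)
{
    uint64_t upper = addr >> 47;  /* bit 47 and everything above it */
    return upper == 0 || upper == 0x1FFFF;
}

/* 0x00007FFFFFFFFFFF -> true, 0x0000800000000000 -> false,
 * 0xFFFF800000000000 -> true. */
```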