rdos wrote:
> The HMA area exploited with himem.sys is not dependent on unreal mode. It exploited the "feature" that if you load a segment register with FFFF then you could access 64k - 16 bytes above the 1 MB barrier. Some systems let this address wrap around to zero while others didn't, which is also why we have the A20 address line hacks.

HIMEM.SYS was the driver that handled the A20 line, but it was also the driver that implemented the eXtended Memory Specification (XMS), and it specifically used unreal mode to access memory above 1 MiB (including, but not limited to, the 65520-byte HMA). Note: on a 286, HIMEM.SYS accessed extended memory without entering protected mode by using the undocumented LOADALL instruction.
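The FFFF-segment trick the quote describes takes only a few instructions. A minimal sketch (assuming the A20 line has already been enabled, e.g. via the keyboard controller or INT 15h AX=2401h, so addresses don't wrap to zero):

```nasm
; Real mode, A20 enabled (assumption - otherwise this wraps to 00000h)
mov  ax, 0FFFFh
mov  es, ax            ; ES base = 0FFFF0h
mov  al, [es:0010h]    ; reads physical 0FFFF0h + 10h = 100000h,
                       ; the first byte of the HMA
```

With ES = FFFFh, offsets 0010h through FFFFh reach physical 100000h through 10FFEFh, which is exactly the 65520 bytes of the HMA.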
rdos wrote:
> As for switching back to real mode, it was the 286 processor that had a specific way to enter protected mode that couldn't be undone. AFAIK, it's still impossible to get back that way.

The 286 couldn't return to real mode by clearing the PE bit (bit 0 of the MSW on a 286, which became the low word of CR0 on the 386). But IBM developed a hack on the IBM AT that tied the keyboard controller (8042) to the reset line, so the processor could be reset to get it back to real mode. Eventually this became unnecessary, since a triple fault (set the IDTR limit to 0 and use the INT instruction to raise an interrupt) resets the processor as well; this was the method OS/2 used on a 286, and triple faulting was the faster of the two.
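The triple-fault reset can be sketched in a few lines. This is a hedged illustration of the sequence described above (the label name `null_idt` is mine; real code would first save enough state to resume after the reset):

```nasm
; Force a triple fault to reset the CPU (the OS/2-on-286 trick).
null_idt:
    dw 0               ; IDT limit = 0: any vector fetch faults
    dd 0               ; IDT base (the 286 only uses 24 bits of this)

reset_cpu:
    lidt [null_idt]    ; install an empty IDT
    int  3             ; interrupt -> fault -> double fault -> triple
                       ; fault -> processor reset
```

The first interrupt can't be delivered (the IDT limit is 0), the resulting fault can't be delivered either, and the processor shuts down and resets.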
rdos wrote:
> Intel added another possibility in the 386 processor that was possible to use to get back to real mode with. They also added the V86 mode to the 386 processor to be able to emulate real mode, as well as the 32-bit extension and paging. I think the 386 processor was a masterpiece in good design, but unfortunately, software & compilers largely have been unable to use it as it was meant to be used.

Intel originally had no intention of allowing the switch back to real mode on the 386. They took the view that there would be no need for it (just as they did with the 286), especially since they had v8086 mode. Very early steppings (A-step parts) of the 386 didn't allow you to switch back to real mode by zeroing the PE bit (bit 0) of CR0, and on the processors that did allow it, it was considered undefined behaviour. Microsoft and other companies exploited it anyway (DOS's HIMEM.SYS being the main driver to do so). Eventually Intel backed down and embraced switching back to real mode as a feature, fully documenting it, including how the descriptor caches behave in that situation (setting the stage for unreal mode to become, in essence, something reliable). That remains true even today.
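The documented descriptor-cache behaviour is exactly what unreal mode builds on: loading a segment register in real mode updates the cached base but leaves the cached limit alone. A hedged sketch of the classic sequence on a 386+ (assuming A20 is enabled and this runs in 16-bit real mode):

```nasm
gdt:
    dq 0                        ; null descriptor
    dw 0FFFFh                   ; limit 0..15
    dw 0                        ; base 0..15
    db 0                        ; base 16..23
    db 92h                      ; present, ring 0, writable data
    db 0CFh                     ; G=1, limit 16..19 = Fh -> 4 GiB limit
    db 0                        ; base 24..31
gdt_desc:
    dw gdt_desc - gdt - 1
    dd gdt

enter_unreal:
    cli
    lgdt [gdt_desc]
    mov  eax, cr0
    or   al, 1                  ; set PE: enter protected mode
    mov  cr0, eax
    mov  bx, 08h                ; selector of the 4 GiB data descriptor
    mov  ds, bx                 ; DS cache now holds a 4 GiB limit
    and  al, 0FEh               ; clear PE: back to real mode
    mov  cr0, eax
    xor  ax, ax
    mov  ds, ax                 ; real-mode load: base = 0, but the
                                ; 4 GiB cached limit survives
    sti
    ; DS can now take 32-bit offsets anywhere in physical memory:
    mov  ebx, 100000h
    mov  al, [ebx]              ; byte at physical 1 MiB, no #GP
    ret
```

Because interrupts are disabled around the switch and CS is never reloaded with a protected-mode selector, the code keeps executing as 16-bit real-mode code throughout.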
rdos wrote:
> Based on the VBE entry point, I don't think BIOSes rely a lot on unreal mode. If they did, we would not be able to run the BIOS in V86 mode since that mode has fixed 64k limits for segments, and practically every BIOS (except the new i3 that switches to protected mode) does support running the VBE interface in V86 mode.

There is a distinction between what the BIOS does during the startup sequence, before any user code runs, and what it does when you call most BIOS software interrupts or when hardware interrupts occur. Most modern-day BIOSes enter protected mode and/or use unreal mode before the boot sector runs, and they ensure they are no longer in protected mode when the boot sector starts executing. Those BIOSes don't use unreal mode for things like VBE and the video interrupts (lucky you), but nothing would have prevented manufacturers from doing so. I mention this because there were some *rare* non-modern BIOSes (or BIOS extensions) in the 90s that actually did temporarily put the processor back into protected mode (or used unreal mode) for some BIOS software interrupts. As a result, you couldn't even rely on the segment limits being the same afterwards. For well-written DOS programs that used on-demand unreal mode this was less of an issue: on-demand unreal mode meant chaining the interrupt 0Dh (#GP/IRQ5) vector and re-entering unreal mode whenever a #GP had been raised.
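The on-demand scheme can be sketched roughly as follows. This is a hedged illustration, not any particular program's handler; `enter_unreal` is assumed to be a routine like the classic switch sequence that reloads the data-segment caches with 4 GiB limits, and a real handler would first distinguish IRQ5 from a genuine #GP and chain to the previous vector (omitted here):

```nasm
; Hooked into interrupt vector 0Dh (#GP / IRQ5 share it on the PC).
int0d_handler:
    pusha
    call enter_unreal   ; restore 4 GiB limits in the segment caches
    popa
    iret                ; #GP is a fault: the saved CS:IP points at the
                        ; faulting instruction, so it is retried and
                        ; now succeeds
```

So if a BIOS call (or anything else) had quietly reset the cached limits back to 64 KiB, the next big-offset access would fault once, the handler would re-establish unreal mode, and execution would continue transparently.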
In the late 80s, though, the job of letting DOS programs run in protected mode (and, by extension, access memory beyond 1 MiB) fell to VCPI or DPMI hosts. VCPI fell out of favor since it was designed to be called from DOS while already running in v8086 mode, whereas DPMI didn't have that limitation.
As for the 386's design, I didn't think it was all that good. It carries a lot of cruft because Intel decided to retain backwards compatibility. Backwards compatibility is a nice feature, but it also costs a fair amount of silicon to support the older features, and that alone made the processor more complex than it had to be. Hardware task switching was something else that shouldn't have been done, but Intel tossed it into the 386 anyway.