Memory reading problem in real hardware.

MichaelPetch
Member
Posts: 797
Joined: Fri Aug 26, 2016 1:41 pm
Libera.chat IRC: mpetch

Re: Memory reading problem in real hardware.

Post by MichaelPetch »

rdos wrote: The HMA area exploited with himem.sys is not dependent on unreal mode. It exploited the "feature" that if you load a segment register with FFFF then you could access 64k - 16 bytes above the 1 MB barrier. Some systems let this address wrap around to zero while others didn't, which is also why we have the A20 address line hacks.
HIMEM.SYS was the driver that handled A20 support, but it was also the driver that provided the Extended Memory Specification (XMS), and it specifically used unreal mode to access all the memory above 1MiB (including, but not limited to, the 65520-byte HMA). Note: on a 286, HIMEM.SYS accessed extended memory without entering protected mode by using the LOADALL instruction.
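The segment arithmetic behind the HMA can be checked numerically. A minimal sketch (plain real-mode address math, not anything HIMEM.SYS-specific):

```python
def linear(segment, offset):
    """Real-mode address translation: segment * 16 + offset."""
    return (segment << 4) + offset

# With a segment register holding FFFFh, offsets 0010h..FFFFh reach
# just past the 1 MiB mark (assuming the A20 line is enabled).
hma_start = linear(0xFFFF, 0x0010)   # first byte at/above 1 MiB
hma_end   = linear(0xFFFF, 0xFFFF)   # highest reachable byte

assert hma_start == 0x100000              # exactly 1 MiB
assert hma_end   == 0x10FFEF
assert hma_end - hma_start + 1 == 65520   # the 64k - 16 byte HMA

# With A20 forced low (8088-style wraparound), the same addresses
# alias down into low memory instead:
assert hma_start & 0xFFFFF == 0x00000
```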
rdos wrote: As for switching back to real mode, it was the 286 processor that had a specific way to enter protected mode that couldn't be undone. AFAIK, it's still impossible to get back that way.
The 286 couldn't return back to real mode by changing the PE bit in CR0. But IBM developed a hack on the IBM-AT that tied the keyboard chip (8042) to the reset line to reset the processor to get it back to real mode. Eventually this was unneeded since a triple fault (set IDTR limit to 0 and use the INT instruction to cause an interrupt) reset the processor as well and this was the method OS/2 used on a 286. Triple faulting was a faster process.
rdos wrote: Intel added another possibility in the 386 processor that was possible to use to get back to real mode with. They also added the V86 mode to the 386 processor to be able to emulate real mode, as well as the 32-bit extension and paging. I think the 386 processor was a masterpiece of good design, but unfortunately, software & compilers largely have been unable to use it as it was meant to be used.
Intel originally had no intention of allowing the switch back to real mode on the 386. They took the view that there would be no need to (just as they did with the 286), especially since they had v8086 mode. Very early batches of 386 CPUs (Step-A processors) didn't allow you to change back to real mode by zeroing the PE bit (bit 0) of CR0. For the processors that did allow it, it was considered undefined behaviour. Microsoft and other companies exploited it anyway (DOS HIMEM.SYS being the main driver). Eventually Intel backed down and embraced switching back to real mode as a feature, and it was fully documented, including how the descriptor caches would work in that situation (setting the stage for unreal mode to in essence become something reliable). That remains true even today.
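That documented descriptor-cache behaviour is what makes unreal mode work. A toy model (hypothetical names, not real CPU code) of why a protected-mode limit survives the switch back to real mode:

```python
class SegReg:
    """Toy model of an x86 segment register and its hidden descriptor cache."""
    def __init__(self):
        self.selector = 0
        self.base = 0
        self.limit = 0xFFFF  # real-mode default: 64 KiB

    def load_real_mode(self, value):
        # A real-mode segment load rewrites the base (value * 16)
        # but leaves the cached limit untouched.
        self.selector = value
        self.base = value << 4

    def load_protected_mode(self, selector, base, limit):
        # A protected-mode load refreshes the whole cache from the descriptor.
        self.selector, self.base, self.limit = selector, base, limit

ds = SegReg()
ds.load_protected_mode(0x08, 0, 0xFFFFFFFF)  # flat 4 GiB data descriptor
# ...clear PE, return to real mode, reload DS with an ordinary value...
ds.load_real_mode(0x0000)
assert ds.limit == 0xFFFFFFFF  # cached 4 GiB limit persists: "unreal mode"
```

The model captures only the one property under discussion: real-mode segment loads never touch the cached limit, so whatever limit was loaded in protected mode stays in effect.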
rdos wrote: Based on the VBE entrypoint, I don't think BIOSes rely a lot on unreal mode. If they did, we would not be able to run the BIOS in V86 mode since that mode has fixed 64k limits for segments, and practically every BIOS (except the new i3 that switches to protected mode) does support running the VBE interface in V86 mode.
There is what the BIOS does during the startup sequence before user code runs, and then there is what it does when you call most BIOS software interrupts and when hardware interrupts occur. Most modern-day BIOSes enter protected mode and/or use unreal mode before the boot sector is run, and ensure they are not in protected mode when the boot sector starts running. Those BIOSes don't use unreal mode for things like VBE and video interrupts (lucky you), but nothing would have prevented manufacturers from doing so. I mention this because there were some *rare* non-modern BIOSes (or BIOS extensions) in the 90s that did actually temporarily place the processor back into protected mode (or used unreal mode) for some BIOS software interrupts. As a result, you couldn't even rely on the segment limits being the same afterwards. For DOS programs that were well written with on-demand unreal mode this was less of an issue. On-demand unreal mode required chaining to the 0dh (#GP/IRQ5) interrupt to reenter unreal mode if a #GP had been raised.

In the late 80s, though, allowing DOS programs to run in protected mode (and by extension access memory beyond 1MiB) was the job of VCPI or DPMI hosts. VCPI fell out of favor since it was originally designed to be used from DOS while already in v8086 mode, whereas DPMI didn't have that limitation.

As for the 386 design, I don't think it was overly good. It has a lot of cruft because Intel decided to retain backwards compatibility. Backwards compatibility is a nice feature, but it also comes with a fair amount of silicon to support older features. That alone makes the processor more complex than it had to be. Hardware task switching was something else that shouldn't have been done, but Intel tossed it into the 386.
rdos
Member
Posts: 3297
Joined: Wed Oct 01, 2008 1:55 pm

Re: Memory reading problem in real hardware.

Post by rdos »

MichaelPetch wrote: The 286 couldn't return back to real mode by changing the PE bit in CR0. But IBM developed a hack on the IBM-AT that tied the keyboard chip (8042) to the reset line to reset the processor to get it back to real mode. Eventually this was unneeded since a triple fault (set IDTR limit to 0 and use the INT instruction to cause an interrupt) reset the processor as well and this was the method OS/2 used on a 286. Triple faulting was a faster process.
Yes, and I suppose that's why we can use this for booting AP cores. There is another "hack" here, since you need to change a byte in CMOS to indicate that you want a controlled RESET rather than a real one.

However, I believe a triple fault doesn't automatically trigger a RESET. There needs to be hardware logic that decodes the triple fault and triggers a RESET. I suspect this based on one computer board I've used that lacks this logic: it just freezes when it triple faults, though fortunately it has a hardware watchdog instead.
MichaelPetch wrote: In the late 80s though the issue of allowing DOS programs to run in protected mode (and by extension access beyond 1MiB) was the job of VCPI or DPMI hosts. VCPI fell out of favor since it was originally designed to be used from DOS while already in v8086 mode where as DPMI didn't have that limitation.
I remember those. I once had a DPMI server, but the main problem was that all the operating systems were only servers, and so it didn't help those coexist. VCPI had poor protection abilities, while DPMI was well protected.
MichaelPetch wrote: As for the 386 design I didn't think it was overly good. It has a lot of cruft because Intel decided to retain backwards compatibility. Backwards compatibility is a nice feature, but it also comes with a fair amount of silicon to support older features. That alone makes the processor more complex than it had to be. Hardware task switching was something else that shouldn't have been done, but Intel tossed it into the 386.
Hardware task switching worked well in single-processor systems, but doesn't work well in multicore systems. It also should have been designed as a two-step process. However, hardware task switching is still efficient for handling double faults.
MichaelPetch
Member
Posts: 797
Joined: Fri Aug 26, 2016 1:41 pm
Libera.chat IRC: mpetch

Re: Memory reading problem in real hardware.

Post by MichaelPetch »

However, I believe a triple fault doesn't automatically trigger a RESET.
Triple faults cause the 286 and 386 to go into a shutdown cycle and the IBM-PC/AT and IBM XT-286 (and later 386 motherboards) would assert the reset line in this condition. The amazing thing is that when IBM designed their motherboard for their 286 systems they included the logic to assert the reset line if a shutdown cycle was detected but they didn't think to use the shutdown cycle from a triple fault to get out of protected mode to real mode. The 8042 kludge was really unnecessary. By the time IBM OS/2 for the 286 came out engineers knew about the triple fault method to get back to real mode and used that mechanism.
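The "set IDTR limit to 0" step just loads a pseudo-descriptor describing an empty IDT. A sketch of the 6-byte operand that `lidt` expects (the shutdown itself obviously can't be demonstrated in a hosted language):

```python
import struct

# lidt takes a 6-byte pseudo-descriptor: a 16-bit limit
# followed by a 32-bit linear base address, little-endian.
null_idt = struct.pack("<HI", 0, 0)
assert null_idt == b"\x00" * 6

# After `lidt [null_idt]`, any INT references a vector beyond the
# IDTR limit and raises #GP; with no usable IDT, #GP escalates to
# #DF and then to a triple fault, which puts the CPU in shutdown.
```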

To this day the behaviour of a triple fault causing a hardware reset continues although the mechanisms used may be entirely different now. The only reason the 8042 reset kludge still exists is for compatibility purposes, but was never really needed in the first place given how IBM manufactured their motherboards.