Octocontrabass wrote:kerravon wrote:The mainframe AMODE is nothing remotely like the x86.
It's a mode selector that provides binary compatibility, exactly like x86.
I don't consider it to be anything remotely close to "exactly like the x86", where op codes do very different things depending on which mode you have selected. Simple address masking is a different concept.
kerravon wrote:The AMODE just sets what MASKING there is on the address bits - because (what I consider to be) badly written software decided to use the upper bits of the address for storing some data.
Software written to run on your OS can follow your rules. If you say all software that runs on your OS must not rely on AMODE to mask upper address bits, that's fine.
It's more of a hardware thing. Don't rely on address masking to exist. Software that follows the (conceptually simple) rule of "don't trash address bits expecting the hardware to mask them for you" ALSO runs on standard IBM MVS (and z/OS), not just z/PDOS.
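To make the rule concrete, here is a rough C sketch (the names are made up for illustration, not from any real MVS or PDOS header) of what breaking versus following it looks like:

Code:
/* Sketch only: the "don't trash address bits" rule in C terms. */
#include <stdint.h>

/* Breaks the rule: hides a flag in the top bit of the address and
   relies on AMODE 31 masking (i.e. the hardware) to strip it off. */
void *pack_flag(void *p, unsigned flag)
{
    return (void *)((uintptr_t)p | ((uintptr_t)flag << 31));
}

/* Follows the rule: the pointer stays clean and the flag lives in
   its own field, so the code works whether or not addresses are
   masked - on MVS, z/OS or z/PDOS alike. */
struct tagged_ptr {
    void    *ptr;
    unsigned flag;
};

The second form costs a few bytes per pointer, but it removes any dependence on what the addressing mode happens to mask.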
kerravon wrote:And what's wrong with that model?
Software written to run on a different OS (such as Windows) isn't going to follow your rules, so your Windows binary compatibility layer isn't actually compatible with Windows. What's the point of a binary compatibility layer when all software will need to be recompiled to work with it?
CERTAIN software - perhaps even yet to be written - is going to follow the rules. And if it doesn't, it will be considered a bug. If the vendor no longer exists and there is a bug, then I don't expect it to work.
I see a point in having rules. I will follow those rules myself. If no-one else follows them - so be it. But first I actually want some sensible rules to follow. That's the very first step. Which has taken me 37 years already. Once the rules are in place, then I will write and compile my code once.
Note that I don't really care that it has taken 37 years already. I posit that the gap between the 80386 and the x64 is 240 years, not the 24 years or whatever it actually was. I'm not trying to compete in some commercial market. I'm trying to create something that I (at least) consider to be "academically sound".
kerravon wrote:If not - why isn't the IBM/MIPS model the right way to do things for all processors from day 1?
Your question is invalid. IBM z/Architecture and MIPS do not follow the same model.
If you want to insist on your worldview, so be it - why isn't the MIPS model (supposedly alone in the world) the right way to do things for all processors?
kerravon wrote:I think I asked this question before elsewhere but no-one knew the answer - there are some op codes that behave differently between RM16 and PM32. If those instructions are being executed in PM32 in a segment that has been marked (via the D-bit) as 16-bit, do the instructions get the RM16 behavior?
If the segment is marked as 16-bit, that's 16-bit protected mode. There are some instructions that behave differently between 16-bit real mode and 16-bit protected mode, but nothing that should affect an ordinary application.
Ok, cool, thanks. So it's just the default address size.
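For anyone following along, this is roughly what marking a segment as 16-bit looks like at the descriptor level (field layout as per the Intel manuals; the particular base/limit values are just an example, not anything from PDOS):

Code:
/* Sketch: a code segment descriptor with the D bit clear, giving a
   16-bit default operand/address size. Illustrative values only.  */
#include <stdint.h>

/* base = 0, limit = 0xFFFF (64 KiB, byte granularity)
   access = 0x9A: present, DPL 0, code, readable
   flags  = 0x0 : G=0, D/B=0 (16-bit), L=0, AVL=0                   */
static const uint64_t code16_descriptor =
      0xFFFFull               /* limit 15:0                         */
    | (0x00ull << 16)         /* base 15:0                          */
    | (0x00ull << 32)         /* base 23:16                         */
    | (0x9Aull << 40)         /* access byte                        */
    | (0x0ull  << 48)         /* limit 19:16                        */
    | (0x0ull  << 52)         /* flags: D/B is bit 54, left clear   */
    | (0x00ull << 56);        /* base 31:24                         */

Flip that one bit and the same selector describes a 32-bit segment; everything else in the descriptor stays the same.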
kerravon wrote:If not, then it would be a case of - RM16 code can run in PM32 so long as it doesn't use any reassigned op codes.
Intel actually designed the 8086 with this forward compatibility in mind: applications were meant to treat segment registers as opaque tokens so that an unspecified future processor (which was eventually released as the 286) could change how segment registers were interpreted without changing application behavior. Any 8086 software that followed all of the rules will run perfectly fine in 16-bit protected mode.
Of course, the 8086 couldn't enforce good behavior without an external MMU, so there's a lot of software out there that completely disregards the rules, but the rules did exist for anyone who wanted to follow them. (I don't think any IBM-compatible PCs ever included an 8086 MMU.)
That's fine - that's exactly what I want. I only care about well-behaved programs. And actually 8086 to 80286 is exactly one of the things I want to do - basically I want to create a PDOS/286 and I expect my RM16 programs to work.
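To make "well-behaved" concrete, the rule boils down to treating the segment value as an opaque token received from the OS and only ever doing arithmetic on the offset. A hedged sketch in C, using a toy far-pointer type (nothing here is a real compiler's far pointer implementation):

Code:
/* Sketch of the 8086 "opaque segment" rule. farptr and the helper
   names are hypothetical, purely for illustration.                 */
typedef struct {
    unsigned short seg;   /* segment in RM16, selector in PM16 */
    unsigned short off;   /* offset within the 64 KiB segment  */
} farptr;

/* Well behaved: only the offset is computed with; the segment value
   is passed around untouched, so it survives becoming a selector.  */
farptr next_byte(farptr p)
{
    p.off += 1;           /* fine, provided it does not wrap */
    return p;
}

/* Not well behaved: assumes segment * 16 + offset is the physical
   address, which stops being true the moment segments become
   selectors on the 286.                                            */
unsigned long to_linear(farptr p)
{
    return ((unsigned long)p.seg << 4) + p.off;
}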
This is another thing I don't quite have all the rules for. Specifically, I think the MSDOS (actually PDOS/86) executables must not have any function that crosses a 64k boundary. That way the PDOS/286 loader (or PDOS/386 running the applications with the D bit clear, i.e. PM16, so that more selectors are available) can load huge memory model programs and shuffle the data so that the exact same executable can address 512 MiB of memory instead.
So Microemacs 3.6 built for MSDOS (read: PDOS/86) by tools that "follow the rules" can suddenly edit files up to 512 MiB in size instead of 640 KiB.
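The way I imagine that working for huge model data is something like the sketch below, assuming the loader patches in a per-64 KiB segment increment at load time (0x1000 paragraphs in real mode, 8 per consecutive selector in PM16 - roughly the idea behind the old Windows __AHINCR convention). All names here are hypothetical:

Code:
/* Sketch: huge-pointer arithmetic where the "next 64 KiB" step is
   supplied by the loader instead of being hard-coded. Hypothetical. */
typedef struct {
    unsigned short seg;   /* segment (RM16) or selector (PM16) */
    unsigned short off;
} hugeptr;

/* Patched by the loader: 0x1000 under PDOS/86 (real mode paragraphs),
   8 under PDOS/286 or PM16 (consecutive selectors).                 */
extern unsigned short seg_increment;

hugeptr huge_add(hugeptr p, unsigned long bytes)
{
    unsigned long total = p.off + bytes;
    p.seg += (unsigned short)((total >> 16) * seg_increment);
    p.off  = (unsigned short)(total & 0xFFFFu);
    return p;
}

Because the binary never hard-codes 0x1000 or 8, the identical executable walks 640 KiB worth of segments on PDOS/86 and 512 MiB worth of selectors on PDOS/286.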
As opposed to what happened in the real timeline, which is that one day we woke up and that MSDOS executable said "sorry, this program cannot be run in 64-bit mode".
I don't want to go down this V8086 route, which perpetuates the 640k limit. I want my executables to suddenly address massive amounts of storage - with an unchanged binary, and an unkludged one too (i.e. no conditional execution).
I note that Microsoft/IBM had a "Family API". I don't know much about it, but I believe it contains some sort of stub to allow OS/2 1.0 emulation on DOS. I don't want a stub and I don't want conditional execution. I want exactly what you said above - follow Intel's rules and the definition of a selector suddenly changes, and your application doesn't care at all. Instead, your application says "whee - 16 MiB or 512 MiB instead of 640 KiB".
IBM and Microsoft probably have a business model that encourages redevelopment for each new OS.
I instead have an academic interest in seeing if it is possible to follow Intel's rules (which I didn't know even existed until you told me just now). And I'm happy to write my own OS to make it happen. I realize I most likely lost that commercial competition 40 years ago - but I was never really in that commercial competition.