kerravon wrote: You don't accept IBM mainframes as an example?

I don't. IBM mainframes have separate modes for 24-bit, 31-bit, and 64-bit software, the same way x86 has separate modes for 16-bit, 32-bit, and 64-bit software.
MIPS does not have modes.

Octocontrabass wrote:
    kerravon wrote: You don't accept IBM mainframes as an example?

    I don't. IBM mainframes have separate modes for 24-bit, 31-bit, and 64-bit software, the same way x86 has separate modes for 16-bit, 32-bit, and 64-bit software.
    MIPS does not have modes.

Not really. 32-bit protected mode can run real mode, 16-bit protected mode, and 32-bit protected mode code. It was 64-bit mode that broke backwards compatibility by not extending protected mode with 64-bit registers, and dropping segmentation from long mode.

rdos wrote: Not really. 32-bit protected mode can run real mode, 16-bit protected mode, and 32-bit protected mode code.

By switching modes. We're talking about binary compatibility without switching modes.

rdos wrote: Not really. 32-bit protected mode can run real mode, 16-bit protected mode, and 32-bit protected mode code. It was 64-bit mode that broke backwards compatibility by not extending protected mode with 64-bit registers, and dropping segmentation from long mode.

Compatibility mode, though. It did break compatibility with real mode by removing v8086 mode, but user space 16- and 32-bit protected mode software can still run. And the mode switch to compat mode looks exactly the same as the mode switch to 16-bit mode in protected mode, namely by loading the relevant segment into CS. Now, can you please stop arguing against long mode in bad faith? It is getting tiresome.

Octocontrabass wrote:
    kerravon wrote: You don't accept IBM mainframes as an example?

    I don't. IBM mainframes have separate modes for 24-bit, 31-bit, and 64-bit software, the same way x86 has separate modes for 16-bit, 32-bit, and 64-bit software.

No - that's not correct. The mainframe AMODE is nothing remotely like the x86. Absolutely all instructions are available no matter what AMODE you choose. The AMODE just sets what MASKING there is on the address bits - because (what I consider to be) badly written software decided to use the upper bits of the address for storing some data.

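To make the masking idea concrete, here is a minimal C sketch of what an AMODE-style effective-address calculation amounts to. It only illustrates the arithmetic described above - the function name is invented, and real hardware applies the mask as part of address generation rather than in software.

Code:
#include <stdint.h>

/* Illustrative only: an AMODE-style mask applied to an address.
 * AMODE 24 keeps the low 24 bits, AMODE 31 the low 31 bits, and
 * AMODE 64 uses the address unchanged. Software that stored flags in
 * the upper bits "worked" only because those bits were ignored. */
static uint64_t apply_amode(uint64_t addr, int amode)
{
    switch (amode) {
    case 24: return addr & 0x00FFFFFFu;  /* upper bits ignored */
    case 31: return addr & 0x7FFFFFFFu;  /* upper bits ignored */
    default: return addr;                /* AMODE 64: no masking */
    }
}
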
Octocontrabass wrote:
    rdos wrote: Not really. 32-bit protected mode can run real mode, 16-bit protected mode, and 32-bit protected mode code.

    By switching modes. We're talking about binary compatibility without switching modes.

I think he might be right. PM32 allows you to define segments as being 16-bit. I think I asked this question before elsewhere but no-one knew the answer - there are some op codes that behave differently between RM16 and PM32. If those instructions are being executed in PM32 in a segment that has been marked (via the D-bit) as 16-bit, do the instructions get the RM16 behavior?

kerravon wrote: The mainframe AMODE is nothing remotely like the x86.

It's a mode selector that provides binary compatibility, exactly like x86.

kerravon wrote: The AMODE just sets what MASKING there is on the address bits - because (what I consider to be) badly written software decided to use the upper bits of the address for storing some data.

Software written to run on your OS can follow your rules. If you say all software that runs on your OS must not rely on AMODE to mask upper address bits, that's fine.

kerravon wrote: And what's wrong with that model?

Software written to run on a different OS (such as Windows) isn't going to follow your rules, so your Windows binary compatibility layer isn't actually compatible with Windows. What's the point of a binary compatibility layer when all software will need to be recompiled to work with it?

kerravon wrote: If not - why isn't the IBM/MIPS model the right way to do things for all processors from day 1?

Your question is invalid. IBM z/Architecture and MIPS do not follow the same model.

kerravon wrote: I think I asked this question before elsewhere but no-one knew the answer - there are some op codes that behave differently between RM16 and PM32. If those instructions are being executed in PM32 in a segment that has been marked (via the D-bit) as 16-bit, do the instructions get the RM16 behavior?

If the segment is marked as 16-bit, that's 16-bit protected mode. There are some instructions that behave differently between 16-bit real mode and 16-bit protected mode, but nothing that should affect an ordinary application.

kerravon wrote: If not, then it would be a case of - RM16 code can run in PM32 so long as it doesn't use any reassigned op codes.

Intel actually designed the 8086 with this forward compatibility in mind: applications were meant to treat segment registers as opaque tokens so that an unspecified future processor (which was eventually released as the 286) could change how segment registers were interpreted without changing application behavior. Any 8086 software that followed all of the rules will run perfectly fine in 16-bit protected mode.

kerravon wrote: I think he might be right. PM32 allows you to define segments as being 16-bit. I think I asked this question before elsewhere but no-one knew the answer - there are some op codes that behave differently between RM16 and PM32. If those instructions are being executed in PM32 in a segment that has been marked (via the D-bit) as 16-bit, do the instructions get the RM16 behavior?

RM16 is handled with V8086 mode. RM16 uses segment registers differently from protected mode (the base is the register value << 4), and RM16 is entered by setting a flag bit. Protected mode has two different "operation modes" that depend on the descriptor loaded into CS: one is PM16, where the default address and operand size is 16-bit, and the other is PM32, where they are 32-bit. PM16 and RM16 (including real mode) allow the use of 32-bit registers and operands via the address-size and operand-size override prefixes.

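As a rough illustration of how that default size hangs off the descriptor rather than a global CPU mode, here is a sketch of building an x86 code-segment descriptor in C. The function name and flag choices are mine; the bit layout follows the standard GDT encoding, with the D bit (bit 54) selecting 16-bit versus 32-bit defaults.

Code:
#include <stdint.h>

/* Sketch of an x86 code-segment descriptor builder (names are mine).
 * The D bit (bit 54) sets the default operand/address size for code
 * executed through this descriptor: 0 = 16-bit (PM16), 1 = 32-bit (PM32).
 * Layout follows the standard GDT encoding; no error checking. */
static uint64_t make_code_descriptor(uint32_t base, uint32_t limit, int is_32bit)
{
    uint64_t d = 0;
    d |= (uint64_t)(limit & 0xFFFFu);                /* limit bits 15..0  */
    d |= (uint64_t)(base & 0xFFFFFFu) << 16;         /* base bits 23..0   */
    d |= (uint64_t)0x9A << 40;                       /* present, DPL 0, code, readable */
    d |= (uint64_t)((limit >> 16) & 0xFu) << 48;     /* limit bits 19..16 */
    d |= (uint64_t)(is_32bit ? 1 : 0) << 54;         /* D bit: PM16 vs PM32 */
    d |= (uint64_t)(is_32bit ? 1 : 0) << 55;         /* G bit: page granularity for the flat 32-bit case */
    d |= (uint64_t)((base >> 24) & 0xFFu) << 56;     /* base bits 31..24  */
    return d;
}

Loading a selector for one descriptor or the other into CS is what selects PM16 or PM32 for the code running through it.
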
nullplan wrote: Compatibility mode, though. It did break compatibility with real mode by removing v8086 mode, but user space 16- and 32-bit protected mode software can still run. And the mode switch to compat mode looks exactly the same as the mode switch to 16-bit mode in protected mode, namely by loading the relevant segment into CS. Now, can you please stop arguing against long mode in bad faith? It is getting tiresome.

It's severely broken, mostly because compatibility mode trashes the upper 32 bits of registers. It's also broken because 64-bit registers cannot be used in protected mode, like 32-bit registers could be used in real mode and 16-bit protected mode. Also, long mode is essentially a "banked" 32-bit mode since 64-bit addresses cannot be used directly.

devc1 wrote:
    It's 2023 and you guys are still worried about 32 bit mode.
    Use 32 bit registers in 64 bit mode, it's as easy as it is.
    Why do some people have to discuss simple problems for years, pheww.

Long mode lacks essential memory protection mechanisms, and so I will not write a 64-bit OS. You could use the upper 16 bits like a "selector", but popular C compilers cannot handle this properly either, just like they never could handle segmentation properly.

rdos wrote: It's severely broken, mostly because compatibility mode trashes the upper 32 bits of registers.

Other architectures work this way too. Why is it only a problem for x86?

rdos wrote: It's also broken because 64-bit registers cannot be used in protected mode, like 32-bit registers could be used in real mode and 16-bit protected mode.

That's intentional. Allowing 16-bit software to use 32-bit registers is a major design flaw. A 16-bit OS won't save and restore 32-bit registers when it switches tasks, so all running tasks share the same set of registers!

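A hypothetical sketch of why that is a problem, assuming a 16-bit OS whose task-switch code predates the 386 (the structure and names are invented for illustration):

Code:
#include <stdint.h>

/* Hypothetical per-task register save area of a 16-bit OS: it only has
 * room for the 16-bit registers the OS knows about. */
struct task_context16 {
    uint16_t ax, bx, cx, dx, si, di, bp, sp;
    uint16_t ip, cs, ds, es, ss, flags;
};

/* The scheduler saves and restores only the fields above on each task
 * switch. If an application uses 32-bit registers anyway (via operand-size
 * override prefixes), the upper halves of EAX..EDI are never saved, so
 * every task sees whatever the previously running task left in them. */
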
rdos wrote: Also, long mode is essentially a "banked" 32-bit mode since 64-bit addresses cannot be used directly.

You have to put the address in a register. Other architectures work this way too. Why is it only a problem for x86?

rdos wrote: Long mode lacks essential memory protection mechanisms.

Most CPU architectures lack segmentation, so clearly it's not essential.

devc1 wrote:
    It's 2023 and you guys are still worried about 32 bit mode.
    Use 32 bit registers in 64 bit mode, it's as easy as it is.

Code written with such 32-bit overrides will end up doing 16-bit operations if run on an 80386. Not what I wish to do.

Octocontrabass wrote: It's a mode selector that provides binary compatibility, exactly like x86.

I don't consider it to be anything remotely "exactly like the x86", where op codes do very different things depending on which mode you have selected. Simple address masking is a different concept.

Octocontrabass wrote: Software written to run on your OS can follow your rules. If you say all software that runs on your OS must not rely on AMODE to mask upper address bits, that's fine.

It's more of a hardware thing. Don't rely on address masking to exist. Software that follows the (conceptually simple) rule of "don't trash address bits expecting hardware to mask it for you" ALSO runs on standard IBM MVS (and z/OS), not just z/PDOS.

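To make that rule concrete, here is a made-up C example of the two patterns; it is not taken from any real MVS program, and the structure names are invented.

Code:
#include <stdint.h>

/* The pattern kerravon calls badly written: under AMODE 24 the top byte
 * of an address is ignored, so old software stored flag bits there and
 * relied on the hardware masking them off. */
struct node_bad {
    uint32_t next_and_flags;   /* bits 31..24 = flags, bits 23..0 = address */
};

/* The well-behaved pattern: keep the address untouched and store flags
 * separately. This works the same under AMODE 24, 31, and 64, so it runs
 * on MVS, z/OS, and z/PDOS without depending on address masking. */
struct node_good {
    uint32_t next;   /* full, clean address */
    uint8_t  flags;
};
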
Octocontrabass wrote: Software written to run on a different OS (such as Windows) isn't going to follow your rules, so your Windows binary compatibility layer isn't actually compatible with Windows. What's the point of a binary compatibility layer when all software will need to be recompiled to work with it?

CERTAIN software - perhaps even yet to be written - is going to follow the rules. And if it doesn't, it will be considered a bug. If the vendor no longer exists and there is a bug, then I don't expect it to work.

Octocontrabass wrote: Your question is invalid. IBM z/Architecture and MIPS do not follow the same model.

If you want to insist on your worldview, so be it - why isn't the MIPS (alone in the world, supposedly) the right way to do things for all processors?

Octocontrabass wrote: If the segment is marked as 16-bit, that's 16-bit protected mode. There are some instructions that behave differently between 16-bit real mode and 16-bit protected mode, but nothing that should affect an ordinary application.

Ok, cool, thanks. So it's just the address-size default.

Octocontrabass wrote: Intel actually designed the 8086 with this forward compatibility in mind: applications were meant to treat segment registers as opaque tokens so that an unspecified future processor (which was eventually released as the 286) could change how segment registers were interpreted without changing application behavior. Any 8086 software that followed all of the rules will run perfectly fine in 16-bit protected mode. Of course, the 8086 couldn't enforce good behavior without an external MMU, so there's a lot of software out there that completely disregards the rules, but the rules did exist for anyone who wanted to follow them. (I don't think any IBM-compatible PCs ever included an 8086 MMU.)

That's fine - that's exactly what I want. I only care about well-behaved programs. And actually 8086 to 80286 is exactly one of the things I want to do - basically I want to create a PDOS/286 and I expect my RM16 programs to work.

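As a rough illustration of the "opaque token" rule, here is a sketch only: the OS call is hypothetical, and real-mode far-pointer syntax is compiler-specific, so segment values are modelled as plain integers.

Code:
#include <stdint.h>

/* Hypothetical OS service that hands back a segment (or, under PM16, a
 * selector) for a freshly allocated block. */
uint16_t os_alloc_segment(uint16_t paragraphs);

void well_behaved(void)
{
    /* OK: treat the returned value as an opaque token. Load it into a
     * segment register and vary only the 16-bit offset. Such code runs
     * unchanged on an 8086 and in 16-bit protected mode on a 286. */
    uint16_t seg = os_alloc_segment(0x1000);
    (void)seg;
}

void rule_breaking(uint16_t seg)
{
    /* NOT OK: assumes segment * 16 = linear address, or does segment
     * arithmetic. This works on an 8086 but breaks under PM16, where the
     * value is a selector into a descriptor table. */
    uint32_t linear   = (uint32_t)seg << 4;
    uint16_t next_64k = (uint16_t)(seg + 0x1000);
    (void)linear; (void)next_64k;
}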