
Re: Running 64-bit code in 32-bit x86 OS

Posted: Wed Aug 15, 2012 8:01 am
by rdos
bluemoon wrote:
rdos wrote:2. TR register is always reloaded with every thread-switch (per thread SS0 and IO-bitmaps)
No, you don't need to reload TR (i.e. the LTR instruction); you can just modify the contents of the TSS (with MOV) and remap a different page for the I/O bitmap.
I know it isn't needed, but this is one of my design choices. It is also used in a few of the synchronization primitives, since STR is available in ring 3 and thus serves as a fast thread ID.
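For context on why TR carries per-thread state: the hardware 32-bit TSS holds the kernel stack (SS0:ESP0) and the I/O-permission-bitmap base being discussed, and the selector loaded into TR indexes the GDT. A minimal C sketch (field names and the `FIRST_TSS_INDEX` GDT layout are my own illustration, not anything from RDOS; offsets are the architectural ones):

```c
#include <stdint.h>
#include <stddef.h>

/* Hardware 32-bit TSS layout (field names are mine; offsets per the
   architecture). One TSS per thread gives each thread its own SS0/ESP0
   and its own I/O permission bitmap. */
struct tss32 {
    uint16_t prev_task_link, res0;
    uint32_t esp0;               /* offset  4: per-thread kernel stack  */
    uint16_t ss0, res1;          /* offset  8: per-thread kernel SS     */
    uint32_t esp1; uint16_t ss1, res2;
    uint32_t esp2; uint16_t ss2, res3;
    uint32_t cr3, eip, eflags;
    uint32_t eax, ecx, edx, ebx, esp, ebp, esi, edi;
    uint16_t es, res4, cs, res5, ss, res6, ds, res7, fs, res8, gs, res9;
    uint16_t ldt_sel, res10;
    uint16_t trap;               /* bit 0: debug trap on task switch    */
    uint16_t iomap_base;         /* offset 102: I/O bitmap offset       */
};

/* With one TSS per thread, the selector read back by STR doubles as a
   cheap thread ID visible from ring 3. FIRST_TSS_INDEX is a
   hypothetical GDT layout choice for illustration. */
enum { FIRST_TSS_INDEX = 8 };

static inline unsigned thread_id_from_tr(uint16_t tr_selector)
{
    return (tr_selector >> 3) - FIRST_TSS_INDEX;  /* strip RPL/TI bits */
}
```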

Re: Running 64-bit code in 32-bit x86 OS

Posted: Wed Aug 15, 2012 8:51 am
by Cognition
In general it's possible, but not at all advisable. The performance penalty for switching between modes is likely to be very high. If you're going to do this sort of thing, the best approach is to rely on the hardware virtualization extensions, which are designed for exactly this and can transition between operating modes safely and reasonably quickly.

Re: Running 64-bit code in 32-bit x86 OS

Posted: Wed Aug 15, 2012 9:00 am
by Antti
rdos wrote:
Brendan wrote:Being able to switching video modes (after boot) is not essential, and no amount of V86 is going to help for modern UEFI systems anyway.
It does work on all the EFI/UEFI systems I've tested, but I haven't tested Macs and similar.
This kind of design philosophy is strange to me. It would be more than wise to rely only on firmware features that are standardized. As far as UEFI is concerned, there is no guarantee that any BIOS/VBE/etc. functions will exist in the future.

As a matter of fact, I am a little bit disappointed that those legacy functions work at all (currently) if the system is booted with a UEFI operating system loader.

A BIOS compatibility mode is a different thing.

Re: Running 64-bit code in 32-bit x86 OS

Posted: Wed Aug 15, 2012 10:01 am
by Brendan
Hi,
rdos wrote:
Brendan wrote:For long mode; call gates must switch to 64-bit code; interrupts must switch to 64-bit code and instructions like SYSCALL/SYSENTER must switch to 64-bit code. The penalty of having stubs that switch back to 32-bit code is identical in all of these cases. It's a generic penalty (e.g. the penalty of not having a 64-bit kernel) that can't be avoided regardless of what you do (unless you have a 64-bit kernel).
They can be avoided with mode switches. If 32-bit applications run in protected mode and 64-bit applications run in long mode, both types of applications can use the fastest syscalls available on a particular processor without paying for stubs (well, both SYSENTER and SYSCALL do have penalties for stubs, but those are unavoidable and part of the design).
If a 64-bit application running in long mode calls the kernel API and the kernel is running in protected mode, and you have to blow away all TLB entries (including any/all entries marked as "global" specifically to prevent unwanted/unnecessary TLB flushing); then in which way do 64-bit applications use the fastest syscalls available and avoid the penalty of stubs?
rdos wrote:
Brendan wrote:Being able to switching video modes (after boot) is not essential, and no amount of V86 is going to help for modern UEFI systems anyway.
It does work on all the EFI/UEFI systems I've tested, but I haven't tested Macs and similar.
How many UEFI systems did you test it on?

To be clear; virtual8086 mode should work and is supported on UEFI systems. However, there is no reason (other than ugly legacy hacks - see notes below) for UEFI firmware to ensure that the real mode IVT, any real mode BIOS functions, the real mode VBE interface, any other (protected mode) VBE interface or the video card's "ROM shadow" to exist or be usable.

Note 1: We are currently in a type of transition period, as the industry adopts UEFI and abandons BIOS. At this time, it's likely that UEFI firmware on some systems actually does (internally) use the video card's ROM code that was designed for legacy/BIOS; simply because video card manufacturers may not have provided ROMs designed for UEFI in their video cards (yet). Even if the UEFI firmware itself does use VBE internally, this still doesn't mean that VBE is left in working order for anyone else (UEFI applications, UEFI boot loaders or UEFI OSs) to use, especially after "ExitBootServices()" is called. Basically, if VBE works on any UEFI system at all, then it only works due to luck and hacks, and it is not intentional or by design. It would be entirely foolish to rely on this behaviour.

Note 2: You may be able to "cheat" and extract the legacy ROM from the PCI card directly (e.g. by manipulating the video card's BARs in PCI configuration space to map the video card's ROM into the physical address space). This would bypass the problems mentioned in "note 1". Unfortunately this won't work reliably either. For a lot of systems with inbuilt video (especially laptops) the "video ROM" is actually built into the system ROM and isn't part of the video card's PCI device at all. Also, in the longer term, the legacy "VBE" ROM will cease to exist in any form at all (especially for systems with inbuilt video, where there's less need for backward compatibility with the PC BIOS). It would be entirely foolish to rely on this behaviour too.
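For what it's worth, the BAR manipulation in note 2 boils down to programming the type-0 header's expansion ROM BAR at PCI config offset 0x30: bits 31:11 hold the mapping address and bit 0 enables the decode. A hedged sketch of the register encoding only (helper names are mine; actually reading the ROM still requires a PCI config-space driver):

```c
#include <stdint.h>

/* PCI type-0 header: the expansion ROM BAR lives at config offset 0x30.
   Bits 31:11 are the base address, bit 0 is the address-decode enable.
   (Encoding per the PCI spec; helper names are mine.) */
#define PCI_ROM_BAR_OFFSET   0x30u
#define PCI_ROM_ADDR_MASK    0xFFFFF800u
#define PCI_ROM_ENABLE       0x00000001u

/* Value to write to map the ROM at 'phys' and enable decoding. */
static inline uint32_t pci_rom_bar_value(uint32_t phys)
{
    return (phys & PCI_ROM_ADDR_MASK) | PCI_ROM_ENABLE;
}

/* Size probe, as with ordinary BARs: write all 1s, read back, and the
   writable address bits reveal the ROM size. */
static inline uint32_t pci_rom_size_from_probe(uint32_t readback)
{
    return ~(readback & PCI_ROM_ADDR_MASK) + 1u;
}
```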
rdos wrote:
Brendan wrote:Switching between long mode and protected mode means completely destroying TLBs and reloading almost everything (TSS, IDT, all segment registers, etc).
1. CR3 will always be reloaded when switching from a 32-bit application to a 64-bit application (or the reverse), because they will not use the same page tables (different applications), and thus the TLB flush is inevitable
You may be right; if the OS is crap and doesn't bother using the "global pages" feature to avoid unnecessary TLB flushes when CR3 is loaded, then switching CPU modes like this won't make the OS's "unnecessary TLB flushing" worse because the OS is already as bad as it possibly can be.
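For readers following along: the "global pages" feature referred to here is CR4.PGE plus bit 8 in a page-table entry. Entries marked global survive a MOV-to-CR3 flush, which is exactly what kernel mappings want. A sketch of the PTE encoding (bit positions are architectural; the helper name is mine):

```c
#include <stdint.h>

/* 32-bit x86 page-table entry flags (architectural bit positions). */
#define PTE_PRESENT (1u << 0)
#define PTE_WRITE   (1u << 1)
#define PTE_USER    (1u << 2)
#define PTE_GLOBAL  (1u << 8)  /* with CR4.PGE set, this TLB entry
                                  survives a MOV-to-CR3 flush */

/* Hypothetical helper: a kernel mapping that stays warm in the TLB
   across address-space switches. */
static inline uint32_t make_kernel_pte(uint32_t phys_page)
{
    return (phys_page & 0xFFFFF000u) | PTE_PRESENT | PTE_WRITE | PTE_GLOBAL;
}
```

Toggling CR4.PGE is also the documented way to flush even the global entries on the rare occasions the kernel mappings themselves change.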
rdos wrote:2. TR register is always reloaded with every thread-switch (per thread SS0 and IO-bitmaps)
OK - that's not well optimised either; so reloading the TSS during task switches doesn't make it worse.
rdos wrote:3. Segment registers will always be reloaded on thread switches.
OK, so the OS is already very bad at doing task switches (e.g. reloading segment registers during the task switch and not just reloading segment registers when you return to CPL=3); and because the OS is already bad it's hard to make it worse.
rdos wrote:
Brendan wrote:Switching back again is equally expensive. For 64-bit applications; the total cost of this (including TLB misses, etc and not just the switch itself) is going to be several thousand cycles for every system call, IRQ and exception.
IRQs and syscalls will not switch mode. Only the scheduler will switch mode as it switches between a 32-bit and 64-bit process or the reverse.
Sigh.

First you ask about running a 32-bit kernel in long mode. I spend my time writing a (hopefully) useful reply, then I figure out that you're actually thinking of running a 32-bit kernel in protected mode (and only running 64-bit applications in long mode) and that I wasted my time.

Then I spend more of my time writing another (hopefully) useful reply, assuming that you want to run a 32-bit kernel in protected mode (and 64-bit applications in long mode).

Now I'm wondering if you actually want to run a 32-bit kernel in *both* protected mode and in long mode (and not just in protected mode); and I'm wondering if I've wasted my time again.

Of course I'm also starting to wonder if you're a schizophrenic crack addict. ;)
rdos wrote:
Brendan wrote:There are 2 main reasons for applications to use 64-bit. The first reason is that the application needs (or perhaps only benefits from rather than needing) the extra virtual address space. RDOS's 32-bit kernel probably won't be able to handle "greater than 4 GiB" virtual address spaces so he'll probably completely destroy this advantage.
Buffers in syscalls will need to be memmapped into the 32-bit address space. Other than that, 64-bit applications are free to use the entire address space with no penalties. Compared to the cost of syscalls, remapping buffers is a minor overhead.
So, you're planning to add buffer remapping and support for long mode paging to your "32-bit" kernel?
rdos wrote:
Brendan wrote:The other reason for applications to use 64-bit is that the extra registers and the extra width of registers makes code run faster. RDOS will probably also completely destroy the performance advantages too.
Why? The application is free to use as many of the 64-bit registers it wants. The scheduler will need to save/restore additional state for 64-bit threads, but that overhead is required in any design.
It's impossible to save or restore a 64-bit process' state in 32-bit code; as 32-bit code can only access the low 32-bits of *half* the general purpose registers (and half of the 16 MMX registers, etc). To get around that you would have to do the state saving and state loading in 64-bit code. I thought you'd do it in stubs (e.g. saving the 64-bit state before passing control to the 32-bit kernel and restoring the 64-bit state afterwards, before returning to the 64-bit process), but now you're saying you won't need stubs.

If you're making modifications to the memory management specifically to support 64-bit applications, and also making modifications to the scheduler's state saving/loading to support 64-bit applications; do you have a sane reason to bother with the 32-bit kernel at all, and would it be much better in the end to write a 64-bit kernel for 64-bit applications (and then provide a "compatibility layer" in that 64-bit kernel for your crusty old 32-bit processes)?

rdos wrote:
Brendan wrote:There's 3 problems - NMIs (which you're dodgy enough to ignore),
Yes, I ignore them. I even setup NMI as a crash handler. :mrgreen:
Brendan wrote:Machine check exceptions (which you're probably dodgy enough to have never supported anyway),
Exactly :mrgreen:
Brendan wrote:and IRQ latency (e.g. having IRQs disabled for *ages* while you switch CPU modes). I'm guessing you're dodgy enough to ignore the IRQ latency problem too.
There is no more IRQ latency involved in flushing the TLB with a change from PAE to IA-32e (or the reverse) than in flushing it with a CR3 reload; both flush the TLB. It requires a few more instructions to change mode and reload CR0, CR3 and IDTR, but I suspect this time is minor compared to the effects of flushing the TLB.
These are all just more cases of "My OS is so bad already that it's almost impossible to make it worse"...
rdos wrote:
Brendan wrote:CPU designers have a tendency to assume that only new code will use new CPU modes. I doubt AMD expected anyone to want to switch from long mode back to protected mode often. To be honest, I see it as a curiosity with dubious practical applications myself. They intended for new 64-bit kernels (that are capable of supporting old 32-bit and 16-bit applications), not the other way around.
Erm. If they had avoided breaking existing modes, the above would be logical, but this is not the case. I see 32-bit segmented mode as the "super mode" of the processor, with 64-bit, 16-bit and V86 as modes that are better run as sub-modes.
I see my genitalia as a massive 14 inch meat sausage that is capable of breaking concrete. Unfortunately, sometimes reality is different to how I see things.

In the same way, reality is different to how you see things. For example; segmented mode is not a "super mode" - it's an obsolete piece of crud that no sane person has used for about 30 years (since OS/2), that was obsolete when it was first introduced. It only exists in modern CPUs because Intel never removes existing CPU features regardless of how badly everyone in the world (including Intel) wishes those features never existed. The (mostly theoretical) advantages of segmented mode have never justified the practical disadvantages, on any CPU or architecture, for any possible piece of software; and they never will.

The fact that you still think segmented mode is actually good makes me wonder how we (the OSdev community) have failed you; and if there's something we could do differently to educate severely misguided people better in future.


Cheers,

Brendan

Re: Running 64-bit code in 32-bit x86 OS

Posted: Wed Aug 15, 2012 10:04 am
by AJ
=D>

Re: Running 64-bit code in 32-bit x86 OS

Posted: Wed Aug 15, 2012 11:54 am
by Brendan
Hi,
Brendan wrote:In the same way, reality is different to how you see things. For example; segmented mode is not a "super mode" - it's an obsolete piece of crud that no sane person has used for about 30 years (since OS/2), that was obsolete when it was first introduced. It only exists in modern CPUs because Intel never removes existing CPU features regardless of how badly everyone in the world (including Intel) wishes those features never existed. The (mostly theoretical) advantages of segmented mode have never justified the practical disadvantages, on any CPU or architecture, for any possible piece of software; and they never will.
I thought about this, and decided it needed some clarification.

There are 3 basic models:
  • "No protection at all". This is mostly useful where software failures are either impossible (e.g. extremely reliable "trusted" code in an embedded system, possibly involving formal proofs) or the impact of software failures is negligible (e.g. most games machines like Playstation or X-Box, where any data loss and/or disruption caused by "crashed and rebooted" isn't that important).
  • "Protection between separate pieces of software". This is the normal model that most OSs use; where processes can't modify the kernel or other processes, but a process can trash itself. It's a very good compromise between performance and protection (but it *is* a compromise).
  • "Internal protection". In this case a process can't modify the kernel or other processes; but "attempts" have been made to also prevent a process from trashing itself. For high reliability systems it's almost entirely pointless (it doesn't matter much if a process crashes due to a protection violation or crashes due to trashing itself); and for most systems the overhead of extra checking (regardless of whether it's done in hardware or software) isn't justified. The only real reason to use this model is for debugging (catching problems as soon as possible, rather than later when the symptoms may be obfuscated/misleading) where performance doesn't matter anyway. Using the segmented model for this reason (catching problems as soon as possible for debugging) is ineffective because it's only a partial solution - it can't detect lots of problems (like checking if an array of "int" isn't incorrectly used as an array of "float", or that a pointer doesn't point to the wrong thing that happens to be accessible, or that a variable that should contain a value from 0 to 100 isn't being set to 123). The best solutions are managed code and virtual machines, as these techniques are both capable of detecting a much much larger range of possible problems.
Basically the overhead of the segmented model isn't justified for most purposes, and for cases where it is justified the segmented model is inferior.

Also note that the overhead of the segmented model may not come directly from the segment limit checks themselves, but may come from managing data that is used to improve or avoid the overhead of those checks (e.g. segment register loads and GDT/LDT management on 80x86).
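To make that management overhead concrete: every GDT/LDT entry scatters its base and limit across an 8-byte descriptor, so a segmented kernel carries packing code along these lines (the encoding is architectural; the function name is mine):

```c
#include <stdint.h>

/* Pack an x86 segment descriptor. The base and limit are split across
   the 8 bytes for 80286 compatibility, which is part of the cost of
   handing out many segments. (Encoding per the architecture; the
   function name is mine.) */
static uint64_t make_descriptor(uint32_t base, uint32_t limit,
                                uint8_t access, uint8_t flags)
{
    uint64_t d = 0;
    d |= limit & 0xFFFFull;                      /* limit 15:0   */
    d |= (uint64_t)(base & 0xFFFFFFu) << 16;     /* base 23:0    */
    d |= (uint64_t)access << 40;                 /* type, DPL, P */
    d |= (uint64_t)((limit >> 16) & 0xFu) << 48; /* limit 19:16  */
    d |= (uint64_t)(flags & 0xFu) << 52;         /* G, D/B, L    */
    d |= (uint64_t)(base >> 24) << 56;           /* base 31:24   */
    return d;
}
```

The classic flat ring-0 code segment (base 0, limit 0xFFFFF, 4 KiB granularity) comes out as the familiar 0x00CF9A000000FFFF.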

I'd also say that the best possible solution isn't one of the models above, it's a combination (e.g. using managed code or virtual machines for debugging, and either "no protection" or "protection between separate pieces of software" when you're not debugging).


There is a completely separate issue; where segmentation is used instead of paging to protect processes and the kernel from other processes. This isn't what I'm talking about above; and is bad for entirely different reasons (physical address space fragmentation, problems efficiently implementing things like "copy on write", swap space, memory mapped files, etc). It is also possible to use segmentation for both purposes at the same time (protecting a process from itself, and protecting processes and the kernel from other processes) - for this case you can combine both sets of "reasons why it's bad". ;)


Cheers,

Brendan

Re: Running 64-bit code in 32-bit x86 OS

Posted: Wed Aug 15, 2012 1:35 pm
by rdos
Brendan wrote:If a 64-bit application running in long mode calls the kernel API and the kernel is running in protected mode, and you have to blow away all TLB entries (including any/all entries marked as "global" specifically to prevent unwanted/unnecessary TLB flushing); then in which way do 64-bit applications use the fastest syscalls available and avoid the penalty of stubs?
Ever since about my second post in the thread I've claimed that this is not feasible, and that 64-bit applications would run the kernel in long mode while 32-bit applications would run it in protected mode.
Brendan wrote:How many UEFI systems did you test it on?

To be clear; virtual8086 mode should work and is supported on UEFI systems. However, there is no reason (other than ugly legacy hacks - see notes below) for UEFI firmware to ensure that the real mode IVT, any real mode BIOS functions, the real mode VBE interface, any other (protected mode) VBE interface or the video card's "ROM shadow" to exist or be usable.

Note 1: We are currently in a type of transition period, as the industry adopts UEFI and abandons BIOS. At this time, it's likely that UEFI firmware on some systems actually does (internally) use the video card's ROM code that was designed for legacy/BIOS; simply because video card manufacturers may not have provided ROMs designed for UEFI in their video cards (yet). Even if the UEFI firmware itself does use VBE internally, this still doesn't mean that VBE is left in working order for anyone else (UEFI applications, UEFI boot loaders or UEFI OSs) to use, especially after "ExitBootServices()" is called. Basically, if VBE works on any UEFI system at all, then it only works due to luck and hacks, and it is not intentional or by design. It would be entirely foolish to rely on this behaviour.

Note 2: You may be able to "cheat" and extract the legacy ROM from the PCI card directly (e.g. by manipulating the video card's BARs in PCI configuration space to map the video card's ROM into the physical address space). This would bypass the problems mentioned in "note 1". Unfortunately this won't work reliably either. For a lot of systems with inbuilt video (especially laptops) the "video ROM" is actually built into the system ROM and isn't part of the video card's PCI device at all. Also, in the longer term, the legacy "VBE" ROM will cease to exist in any form at all (especially for systems with inbuilt video, where there's less need for backward compatibility with the PC BIOS). It would be entirely foolish to rely on this behaviour too.
Just to make this clear: I don't boot in UEFI mode; I usually use GRUB Legacy to boot. In one case I used GRUB 2 under Fedora Linux to boot RDOS. In all these cases VBE works, and I don't need any PCI snooping.
Brendan wrote:You may be right; if the OS is crap and doesn't bother using the "global pages" feature to avoid unnecessary TLB flushes when CR3 is loaded, then switching CPU modes like this won't make the OS's "unnecessary TLB flushing" worse because the OS is already as bad as it possibly can be.
I do have global page support, but it is currently disabled because it doesn't work properly. OTOH, there is no noticeable difference whether the OS runs with global pages or not. You seem to greatly overestimate the evil of flushing TLBs. It doesn't cost "thousands" of cycles to flush the TLB. It's more like the cost of a syscall.
Brendan wrote:
rdos wrote:2. TR register is always reloaded with every thread-switch (per thread SS0 and IO-bitmaps)
OK - that's not well optimised either; so reloading the TSS during task switches doesn't make it worse.
Wrong. This is well optimized when each task has its own kernel SS selector. You won't save an SS reload, since SS will be reloaded anyway (unless the SYSENTER method is used, but as presented in an older thread, that is only faster on some CPUs).
Brendan wrote: OK, so the OS is already very bad at doing task switches (e.g. reloading segment registers during the task switch and not just reloading segment registers when you return to CPL=3); and because the OS is already bad it's hard to make it worse.
The kernel is not flat, and thus needs to reload segment registers. As simple as that.
Brendan wrote: First you ask about running a 32-bit kernel in long mode. I spend my time writing a (hopefully) useful reply, then I figure out that you're actually thinking of running a 32-bit kernel in protected mode (and only running 64-bit applications in long mode) and that I wasted my time.

Then I spend more of my time writing another (hopefully) useful reply, assuming that you want to run a 32-bit kernel in protected mode (and 64-bit applications in long mode).

Now I'm wondering if you actually want to run a 32-bit kernel in *both* protected mode and in long mode (and not just in protected mode); and I'm wondering if I've wasted my time again.
Maybe if you read more carefully you might avoid wasting your time? :wink:

But, yes, you are right in the last paragraph. I want to run the 32-bit kernel in both protected mode and long mode. This is feasible if the kernel uses PAE paging. There is a need to design new fault handlers and interrupt stubs for 64-bit mode, but other than that, it should work pretty well.
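The reason PAE paging makes this plan feasible is that PAE and IA-32e share the 64-bit entry format and the 9-bit table indices; long mode just adds a PML4 level above the PDPT. A sketch of how a 32-bit linear address splits under PAE (the function name is mine):

```c
#include <stdint.h>

/* Split a 32-bit linear address into PAE paging indices. IA-32e uses
   the same 64-bit entries and the same 9-bit PD/PT indices, adding a
   PML4 level above the PDPT; that structural overlap is what lets one
   set of PAE kernel page tables be mirrored under long mode. */
static void pae_split(uint32_t la, unsigned *pdpt, unsigned *pd, unsigned *pt)
{
    *pdpt = la >> 30;            /* 2 bits:   4 PDPT entries */
    *pd   = (la >> 21) & 0x1FFu; /* 9 bits: 512 PD entries   */
    *pt   = (la >> 12) & 0x1FFu; /* 9 bits: 512 PT entries   */
}
```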
Brendan wrote: It's impossible to save or restore a 64-bit process' state in 32-bit code; as 32-bit code can only access the low 32-bits of *half* the general purpose registers (and half of the 16 MMX registers, etc). To get around that you would have to do the state saving and state loading in 64-bit code. I thought you'd do it in stubs (e.g. saving the 64-bit state before passing control to the 32-bit kernel and restoring the 64-bit state afterwards, before returning to the 64-bit process), but now you're saying you won't need stubs.
Few things are impossible. Saving 64-bit state in the scheduler, which normally runs in legacy mode under long mode, is as simple as jumping to a 64-bit code chunk that does the save. The restore would be done by the switch code that re-enters 64-bit mode.
Brendan wrote: If you're making modifications to the memory management specifically to support 64-bit applications, and also making modifications to the scheduler's state saving/loading to support 64-bit applications; do you have a sane reason to bother with the 32-bit kernel at all, and would it be much better in the end to write a 64-bit kernel for 64-bit applications (and then provide a "compatibility layer" in that 64-bit kernel for your crusty old 32-bit processes)?
Several sane reasons:
1. I don't want to start from scratch
2. I don't want a flat kernel
3. By the time the kernel is finished, x86-64 mode would be obsolete.

Re: Running 64-bit code in 32-bit x86 OS

Posted: Wed Aug 15, 2012 1:48 pm
by rdos
Brendan wrote:I'd also say that the best possible solution isn't one of the models above, it's a combination (e.g. using managed code or virtual machines for debugging, and either "no protection" or "protection between separate pieces of software" when you're not debugging).
The line between "debugging" and "not debugging" is not easy to draw, and managed code and virtual machines don't necessarily have the same bugs as the unprotected code. I also wonder how you would go about using managed code in the kernel. And what if your kernel runs for two days before driver 1 trashes driver 2 and makes your kernel malfunction? I suppose you just shrug and say "things like that happen; next time I'll use my debugger".

The typical solution for flat designs is to move everything out of the kernel and into userland, and use massive amounts of IPC, task-switching and ring switches. That is not exactly an efficient way to minimize TLB shoot-downs. :mrgreen:

Re: Running 64-bit code in 32-bit x86 OS

Posted: Wed Aug 15, 2012 1:49 pm
by bluemoon
Have you considered recompiling the kernel as 64-bit? How much code in your kernel is architecture-dependent?

As for me, I disallow mixing 32-bit and 64-bit, as the combination roughly quadruples the QA effort; instead I run 32-bit apps on a 32-bit kernel and 64-bit apps on a 64-bit kernel.
This makes life much easier, since the architecture differences only account for a very small amount of code, as my kernel is mostly written in C and C++.

Re: Running 64-bit code in 32-bit x86 OS

Posted: Wed Aug 15, 2012 2:00 pm
by Owen
rdos wrote:Several sane reasons:
1. I don't want to start from scratch
Then port
rdos wrote:2. I don't want a flat kernel
Then your kernel will die with IA-32... so, shortly after UEFI becomes ubiquitous, you'll find that every machine has a 64-bit EFI that your kernel can't interact with
rdos wrote:3. By the time the kernel is finished, x86-64 mode would be obsolete.
On what basis? Fully porting your kernel should not take you more than a year. 32-bit protected mode lasted 20 years. 64-bit will last longer (after all, it is a several-million-fold increase in address space size)

And even if AMD64 dies, you'll already have done the important bit: removing the architectural baggage that ties you to IA-32 and that no other architecture has

Re: Running 64-bit code in 32-bit x86 OS

Posted: Wed Aug 15, 2012 2:05 pm
by rdos
bluemoon wrote:Have you considered recompiling the kernel as 64-bit? How much code in your kernel is architecture-dependent?
Everything except for the ACPI, TrueType and ini-file device-drivers, which are coded in C. I will port more of the complex device-drivers to C eventually, but the bulk of the code will remain in assembler.

Actually, if I complete this, I will use GCC as the userland compiler and port libc to RDOS. Porting GCC to 32 bits today is not worth the trouble when I already have a functional environment with OpenWatcom. It would be kind of interesting to be able to run everything from MS-DOS applications, Win32 console applications and native 32-bit OpenWatcom applications to 64-bit POSIX-compliant applications in the same environment with no virtualization.

Re: Running 64-bit code in 32-bit x86 OS

Posted: Wed Aug 15, 2012 2:11 pm
by rdos
Owen wrote:Then your kernel will die with IA-32... so, shortly after UEFI becomes ubiquitous, you'll find that every machine has a 64-bit EFI that your kernel can't interact with
That the boot-loader transfers control in 64-bit mode is nothing that stops me. GRUB already transfers control in 32-bit flat mode, but that doesn't stop me from switching to segmented mode. It's almost as simple to switch to protected mode from long mode as it is to switch from flat to segmented mode. What would make my kernel die is Intel/AMD removing protected mode from their processors, but given that it took over 20 years before they eventually removed V86 mode (and then only in long mode), I don't see that happening anytime soon. And even if protected mode is removed, it will take far longer before legacy mode under long mode is removed, and I could run the kernel in long-mode legacy mode if I have to, with the changes I've mentioned in the thread.

Re: Running 64-bit code in 32-bit x86 OS

Posted: Wed Aug 15, 2012 2:16 pm
by Griwes
They have left VM86 there for so long because the only sane (ha-ha) firmware on x86 was the BIOS running in real mode. Now, with long-mode UEFI, both real mode and protected mode are doomed. They simply don't give any advantage over long mode.

Oh, sorry, I forgot 'bout segmentation. But that's, y'know, obsolete. For a long time.

Re: Running 64-bit code in 32-bit x86 OS

Posted: Wed Aug 15, 2012 2:28 pm
by rdos
Griwes wrote:They have left VM86 there for so long because the only sane (ha-ha) firmware on x86 was the BIOS running in real mode. Now, with long-mode UEFI, both real mode and protected mode are doomed. They simply don't give any advantage over long mode.
I think you are wrong. Even though 16-bit protected mode has hardly been justified since the 286 and could have been removed, it has not been, because there are still 16-bit protected-mode applications that people want to run. As long as many 32-bit applications remain, neither protected mode nor legacy mode under long mode will be removed. As long as Windows doesn't run 32-bit applications in emulators, neither Intel nor AMD can remove legacy mode. Nobody would buy computers that suck at running 32-bit applications.

Re: Running 64-bit code in 32-bit x86 OS

Posted: Wed Aug 15, 2012 3:10 pm
by Griwes
All sane applications are written in a way that allows compiling them for IA-32e instead of the previous targets. Closed-source applications whose developers are neither maintaining them nor releasing 64-bit versions are doomed as well. It's just a matter of time, and I hope it will go faster than with the old crappy stuff that disappeared only after years of painful existence.

Also, it's kind of a vicious cycle:
Software devs: processors and OSes are still supporting 32 bit execs, why should we bother going into 64 bit?
OS devs: software devs are still releasing 32 bit execs, we should keep supporting them.
CPU devs: OSes still use 32 bit submode, we need to keep it!