sj95126 wrote:*confused dog head tilt* 5-level paging has been out for about a year now.
It has? I admittedly did not keep up. I thought Ice Lake server CPUs were delayed. And only those have 5-level. Whatever, the 4-level barrier is still way above my head for the time being, and likely will be for the coming decade.
eekee wrote:Huh? *headtilt* The last time I heard anything like this, the reasonable limit was 3.5GB to allow space for kernel and MMIO. Where do you get the 3/4GB figure from? I agree with the bit about switching mappings, that part makes sense, I'm just confused about the size difference.
Well, I did say "a bit of a problem". You see, when you, like Linux, reserve 1GB of address space for kernel space, and you, like Linux, map all physical memory into kernel space (or try to, anyway), and you, like Linux, try to reserve some space for I/O memory, then, like Linux, you will run into a problem once physical RAM plus I/O space starts to exceed 1GB. I was guessing that at the time, about 256MB of I/O space was needed; I have no clue whether that estimate was conservative or liberal. Anyway, RAM sizes with more than two binary digits set are rarely meaningful, so the largest workable size below 1GB would be 768 MB. And 1GB itself is definitely too large, since however much I/O space is needed, it will be more than zero.
Now, like Linux, you can invent the "high memory" scheme, where you permanently map only a small part of physical RAM ("low memory") and all of I/O space into kernel space, and switch mappings in and out for all remaining memory ("high memory") as needed. And, like Linux, a couple of decades later, when the whole thing has been made obsolete by the introduction of 64-bit systems, you might notice that it left your kernel an unmaintainable mess, and strip it out again. Myself, I won't even bother putting it in. These days, either you are on a high-memory system, in which case it will have a 64-bit CPU, or you are on a 32-bit CPU, in which case it will have little memory. A 64-bit CPU with little memory might happen in certain applications, but 32-bit with lots of memory makes no sense.
Or you go a different route: forego virtual memory entirely (except as memory protection), identity-map everything, and fail to have a problem until physical RAM goes beyond 3.5 GB. A viable option, not compatible with any of the established ABIs out there, but it is what a surprising number of OSes beyond the large consumer OSes went for. The nonstandard ABIs unfortunately require patched versions of popular compilers, or outright bespoke toolchains, with all the problems that brings with it. That is the reason why, at work, I am stuck with a compiler that does not support C95 to this day. Because an innovation from the year the PC CD-ROM drive found widespread adoption is too recent for the toolchain vendor.
Windows used to boot with the 2GB split by default, so they would have been able to use the old system of memory management for longer. But I honestly have no idea where the 3.5 GB figure comes from in higher-half kernels; it makes little difference whether the RAM exceeds the available virtual memory by a little or a lot. I also have no idea where the popular "36 bit" figure for PAE addressing comes from, since the PAE page table format supports physical addresses up to 52 bits. Anyway, with PAE and "high memory" support (essentially virtual bank switching), a 32-bit OS can in theory support up to 2^52 bytes of physical memory (too lazy to look up the correct unit right now). I have no idea at what point the switching costs get so high as to make the whole thing impractical, of course.