Schol-R-LEA wrote:
To the best of my knowledge, the OS developer has only limited options for optimizing the L1, L2 and (when present) L3 caches on the x86 chips, mostly regarding flushing a cache or disabling a cache entirely, but the cache hardware itself will be more effective at controlling its own mapping of segments of memory to cache cells when it can limit the set of memory it needs to map. Page maps, as I understand it, provide some of that information.

OK, let it be pro #1 on behalf of paging.
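For concreteness, the "limited options" mentioned in the quote boil down to a couple of privileged operations; a kernel-only sketch (these are ring-0 instructions, so this cannot run as an ordinary user program, and the full procedure with MTRRs is in the Intel SDM):

Code:
/* Kernel-only sketch: both operations are privileged. */
#include <stdint.h>

#define CR0_CD (1ul << 30)   /* cache disable (no-fill mode) */
#define CR0_NW (1ul << 29)   /* not write-through */

/* Option 1: write back and invalidate all cache levels. */
static inline void cache_flush(void)
{
    __asm__ volatile ("wbinvd" ::: "memory");
}

/* Option 2: stop the caches from being filled at all (per the
 * Intel SDM: set CD, clear NW, then flush). */
static inline void cache_disable(void)
{
    unsigned long cr0;
    __asm__ volatile ("mov %%cr0, %0" : "=r"(cr0));
    cr0 = (cr0 | CR0_CD) & ~CR0_NW;
    __asm__ volatile ("mov %0, %%cr0" :: "r"(cr0));
    cache_flush();
}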
Schol-R-LEA wrote:
Furthermore, using unpaged memory is not simpler from the perspective of the operating system. Most 'flat' memory modes are really only flat from the perspective of a userland application; any multitasking OS still needs to parcel the memory up into multiple process spaces. The paging hardware makes this easier, by allowing non-contiguous memory regions to be mapped to a contiguous virtual space (important when memory is being allocated at runtime), among other things.

In the case of paging, the overall algorithm that you employ to manage memory is split between hardware and software parts. In the case of "no paging" there's the same algorithm, but without any separated part. So the complexity is still the same.
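To make the split concrete: the hardware part is essentially the table walk below, which the MMU performs on every TLB miss; deciding what to map where stays in software either way. This is a user-space simulation of the classic 32-bit two-level layout (entries here are host pointers rather than 4-byte PDEs, so it is an illustration, not kernel code):

Code:
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define PAGE_SIZE 4096u
#define ENTRIES   1024u
#define PRESENT   ((uintptr_t)1)
#define ADDR_MASK (~(uintptr_t)0xFFF)

typedef struct { uintptr_t entry[ENTRIES]; } table_t;

/* The part the MMU does in silicon on every TLB miss: split the
 * virtual address into directory index (10 bits), table index
 * (10 bits) and offset (12 bits), then follow two table levels. */
static void *translate(table_t *dir, uint32_t vaddr)
{
    uint32_t di  = vaddr >> 22;
    uint32_t ti  = (vaddr >> 12) & 0x3FFu;
    uint32_t off = vaddr & 0xFFFu;

    if (!(dir->entry[di] & PRESENT)) return NULL;        /* page fault */
    table_t *pt = (table_t *)(dir->entry[di] & ADDR_MASK);
    if (!(pt->entry[ti] & PRESENT)) return NULL;         /* page fault */
    return (char *)(pt->entry[ti] & ADDR_MASK) + off;
}

static table_t *new_table(void)
{
    table_t *t = aligned_alloc(PAGE_SIZE, sizeof *t);    /* page-aligned */
    memset(t, 0, sizeof *t);
    return t;
}

int main(void)
{
    table_t *dir = new_table(), *pt = new_table();
    void *frame  = aligned_alloc(PAGE_SIZE, PAGE_SIZE);

    /* Software's half of the job: deciding what maps where. Here we
     * map the virtual page at 0x00400000 to our fake physical frame. */
    dir->entry[1] = (uintptr_t)pt | PRESENT;
    pt->entry[0]  = (uintptr_t)frame | PRESENT;

    printf("virtual 0x00400123 -> %p\n", translate(dir, 0x00400123));
    return 0;
}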
But only a small part of the algorithm is implemented in hardware and can be reused by the system developer. As a time saver it's almost nothing compared with the decrease in flexibility you get, so it's hardly viable to consider this part at all. Another difference here is the speed of execution of the hardware part of the algorithm, which should be good. But the size of that part tells us that the overall gain will be just a tiny fraction of all the effort required for memory management. However, let's mark this advantage as item #2 in the pro list for paging.
Schol-R-LEA wrote:
It also reduces the computational burden for the OS itself, again by automatically restricting the space of memory that the system needs to be concerned with at a given moment.

I don't understand how it reduces the computational burden.
Schol-R-LEA wrote:
nor am I convinced that simplicity for the operating system developers is really a wise goal. Providing simplicity for the users and the application developers often requires considerable complexity on the part of the OS implementation.

Simplicity for users won't be given for free. If the OS is too complex, then users will get a piece of trash instead of something manageable. So we simply must pay attention to the complexity of the OS. Generally this is achieved by creating extensible systems: the core should be very simple, and then there are few problems extending it.
Schol-R-LEA wrote:
In any case, it is my (admittedly imperfect) understanding that you cannot enter x86-64 long mode without enabling paging

OK, let's mark it as a required quirk of the x86 architecture.
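That understanding is correct: with EFER.LME set, the CPU activates long mode only at the moment CR0.PG is set, so the mode switch itself is a paging-enable. A boot-stage sketch (ring 0, 32-bit protected mode; pml4_phys is a hypothetical physical address of a prepared PML4, so this is not a runnable user program):

Code:
#include <stdint.h>

#define CR4_PAE  (1u << 5)        /* physical address extension */
#define EFER_MSR 0xC0000080u
#define EFER_LME (1u << 8)        /* long mode enable */
#define CR0_PG   (1u << 31)       /* paging enable */

static void enter_long_mode(uint32_t pml4_phys)
{
    uint32_t v, lo, hi;

    /* 1. Enable PAE - long mode page tables require it. */
    __asm__ volatile ("mov %%cr4, %0" : "=r"(v));
    __asm__ volatile ("mov %0, %%cr4" :: "r"(v | CR4_PAE));

    /* 2. Point CR3 at the PML4. */
    __asm__ volatile ("mov %0, %%cr3" :: "r"(pml4_phys));

    /* 3. Set EFER.LME via RDMSR/WRMSR. */
    __asm__ volatile ("rdmsr" : "=a"(lo), "=d"(hi) : "c"(EFER_MSR));
    __asm__ volatile ("wrmsr" :: "a"(lo | EFER_LME), "d"(hi),
                                 "c"(EFER_MSR));

    /* 4. Setting CR0.PG now activates long mode (compatibility
     *    submode until a far jump loads a 64-bit code segment). */
    __asm__ volatile ("mov %%cr0, %0" : "=r"(v));
    __asm__ volatile ("mov %0, %%cr0" :: "r"(v | CR0_PG));
}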
Schol-R-LEA wrote:
I gather that you intend to write a single high-level garbage collector that would be used unchanged on all hardware, correct?

My goal is the standard top-down approach. I want to have a simple and manageable architecture, while all extensions (including hardware-related optimizations) are implemented as manageable parts with a simple interaction protocol.
Schol-R-LEA wrote:
it is copying because that is how copying collectors work - 'garbage collection' is something of a misnomer here, as it is the live data that is actually collected by a copying collector, with the memory containing any remaining dead data then being freed in bulk. The algorithm works by copying the live data to a reserved area of memory in order to both eliminate garbage and compact the heap; when the copying is done, the reserved area is then marked as the active area, and the previous active area is marked as freed. My idea is simply to use paging to trigger the garbage collector, and to use pages to partition the memory for faster collection cycles.
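As a reference point for what follows, here is my understanding of the quoted scheme in its minimal semispace (Cheney) form - fixed-size objects, illustrative names, nobody's actual design:

Code:
#include <stddef.h>
#include <stdio.h>
#include <stdlib.h>

typedef struct obj {
    struct obj *fields[2];   /* child pointers (NULL = no reference) */
    struct obj *forward;     /* forwarding pointer set during a copy */
} obj;

#define SEMI_OBJS 1024
#define NROOTS    16
static obj *from_space, *to_space;
static size_t alloc_top;               /* bump pointer into from_space */
static obj *roots[NROOTS];

static obj *copy(obj *o)               /* evacuate one live object */
{
    if (o == NULL) return NULL;
    if (o->forward) return o->forward; /* already moved */
    obj *n = &to_space[alloc_top++];
    *n = *o;
    n->forward = NULL;
    o->forward = n;                    /* leave a forwarding pointer */
    return n;
}

static void collect(void)
{
    alloc_top = 0;
    /* 1. Copy everything directly reachable from the roots. */
    for (size_t i = 0; i < NROOTS; i++)
        roots[i] = copy(roots[i]);
    /* 2. Cheney scan: walk the copied objects and fix up their
     *    children, copying each live child the first time it is seen. */
    for (size_t scan = 0; scan < alloc_top; scan++)
        for (int f = 0; f < 2; f++)
            to_space[scan].fields[f] = copy(to_space[scan].fields[f]);
    /* 3. Flip: the old from-space is now free in bulk. */
    obj *t = from_space; from_space = to_space; to_space = t;
}

static obj *gc_alloc(void)
{
    /* Trigger: space exhausted (assumes collection finds garbage). */
    if (alloc_top == SEMI_OBJS) collect();
    obj *n = &from_space[alloc_top++];
    n->fields[0] = n->fields[1] = n->forward = NULL;
    return n;
}

int main(void)
{
    from_space = calloc(SEMI_OBJS, sizeof(obj));
    to_space   = calloc(SEMI_OBJS, sizeof(obj));
    roots[0] = gc_alloc();
    roots[0]->fields[0] = gc_alloc();  /* live child */
    gc_alloc();                        /* garbage, reclaimed below */
    collect();
    printf("live objects after collection: %zu\n", alloc_top);
    return 0;
}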
Let's compare the two algorithms - with paging and without it. It's easy to understand how the "no paging" algorithm works (the explicit exhaustion check in gc_alloc above), but how paging helps to trigger the garbage collector is not clear to me. The basic idea is to find a way of detecting that there are no more free bytes in a page, so how can paging help with that? Or there can be another condition that triggers the "copy to new page" procedure, but again - how is paging related to detecting that condition?
Schol-R-LEA wrote:
The same applies to using page misses to trigger collection - the memory manager has to track when memory is being exhausted within a given arena anyway, and the pager happens to be a convenient mechanism for detecting that.

The memory manager needs information about the free and used parts of memory. If you use paging, how does it help to determine whether a page is free, full, or partially used? Or do you just move the required data structures into the paging structures, and lose a lot of flexibility by being bound to structures that are in no way optimized for garbage collection?
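For what it's worth, the one concrete mechanism I can see behind the quoted idea is a guard page: leave the last page of the arena inaccessible, drop the limit check from the allocator, and let the first allocation that spills onto the guard page fault into the collector. A Linux/POSIX sketch (illustrative names; calling mprotect from a signal handler works in practice on Linux, though POSIX does not guarantee it):

Code:
#include <signal.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#define ARENA_PAGES 16
static char *arena, *top;
static size_t page;

static void on_fault(int sig, siginfo_t *si, void *ctx)
{
    (void)sig; (void)ctx;
    if ((char *)si->si_addr >= arena + (ARENA_PAGES - 1) * page) {
        /* This is where a collection would run; here we just reopen
         * the guard page and let the faulting write resume. */
        mprotect(arena + (ARENA_PAGES - 1) * page, page,
                 PROT_READ | PROT_WRITE);
        write(1, "guard page hit: trigger collection\n", 35);
        return;
    }
    _exit(1);   /* a genuine stray access, not our trigger */
}

static void *bump_alloc(size_t n)
{
    void *p = top;     /* no limit check: the guard page IS the check */
    top += n;
    memset(p, 0, n);   /* first touch of the guard page faults here */
    return p;
}

int main(void)
{
    page = (size_t)sysconf(_SC_PAGESIZE);
    arena = mmap(NULL, ARENA_PAGES * page, PROT_READ | PROT_WRITE,
                 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    top = arena;
    mprotect(arena + (ARENA_PAGES - 1) * page, page, PROT_NONE);

    struct sigaction sa = {0};
    sa.sa_sigaction = on_fault;
    sa.sa_flags = SA_SIGINFO;
    sigaction(SIGSEGV, &sa, NULL);

    for (int i = 0; i < ARENA_PAGES; i++)
        bump_alloc(page);          /* the last one trips the guard */
    return 0;
}

But note that this only removes one compare-and-branch from the allocation fast path; the bookkeeping the quote mentions still has to exist in software.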
Schol-R-LEA wrote:
You can, of course, just not as efficiently or securely (as a hardware mechanism, the paging can, at least in principle, both run faster than the software equivalents, and block uncontrolled access to pages from outside of the processes those pages are assigned to when running userland processes - and unlike segmentation, paging is fine-grained enough and automatic enough to be practical for my intended design).

Uncontrolled access in a managed environment is something unexpected. It can be a bug (as was mentioned before), but it can't be the result of a normal operation. So it's really a debugging matter, and for debugging there's the idea of a debug session with many additional hardware features involved (such as debug registers or even virtualization). That's why I see no point in the security-related paging stuff.
And now we can look at the pros of paging:
1. Probably there are some memory access optimizations available when paging is used.
2. A bit of a speed increase is possible because a small part of the overall algorithm is executed at the hardware level.
However, the cons include such a beast as a big decrease in flexibility.
So we can trade a gain of a few cycles for an over-constrained architecture that makes the system too complex. Which is better? I vote for flexibility.