Hi,
Ameise wrote: I am referring to physical memory management, assigning actual physical memory to applications. For virtual memory, are you referring to things such as malloc, or are you referring to things like swapping and page tables? As for the former, of course the application can handle that as it sees fit. For the latter, that would be handled by this same module.
I am also referring to a single system process to handle this, not physical memory management per-process.
I break "memory management" into 3 layers.
The first (lowest) layer is physical memory management, and includes allocating and freeing individual physical pages (but may also include support for allocating/freeing contiguous physical pages for DMA buffers, managing which areas of the physical address space are used for which memory mapped PCI devices, handling MTRRs, etc).
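To give a rough idea of what this lowest layer looks like, here's a minimal sketch of a physical page allocator using a simple free-frame stack (the names are hypothetical, and real allocators would also worry about DMA zones, NUMA, contiguous allocations, etc.):

#include <stdint.h>
#include <stddef.h>

#define PAGE_SIZE  4096
#define MAX_FRAMES 1048576             /* enough for 4 GiB of 4 KiB pages (sketch only) */

static uint64_t free_frames[MAX_FRAMES];
static size_t   free_top = 0;

/* Called during boot for each usable physical page found in the firmware memory map */
void pmm_free_page(uint64_t phys_addr)
{
    if (free_top < MAX_FRAMES)
        free_frames[free_top++] = phys_addr & ~(uint64_t)(PAGE_SIZE - 1);
}

/* Returns the physical address of a free page, or 0 if none are left */
uint64_t pmm_alloc_page(void)
{
    if (free_top == 0)
        return 0;                      /* out of physical memory */
    return free_frames[--free_top];
}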
The next layer is virtual memory management, and includes allocating/freeing virtual pages and mapping/unmapping other stuff, and uses the physical memory manager as a "back-end". This can include support for swap space and memory mapped files, handling shared memory areas, managing PAT, creating/destroying virtual address spaces, etc.
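As a simplified illustration of how this layer uses the physical memory manager as its back-end, here's a bare-bones x86-64 "map one page" routine; pmm_alloc_page() is the hypothetical allocator from the sketch above, phys_to_virt() and zero_page() stand in for however the kernel reaches physical RAM, and error handling/TLB invalidation are glossed over:

#include <stdint.h>

#define PAGE_PRESENT  0x001
#define PAGE_WRITE    0x002
#define PAGE_USER     0x004

extern uint64_t  pmm_alloc_page(void);        /* physical memory manager (back-end) */
extern uint64_t *phys_to_virt(uint64_t phys); /* kernel's way of accessing physical memory */
extern void      zero_page(uint64_t phys);

/* Walk (and if necessary build) the 4-level paging structures so that
   'virt' maps to 'phys' in the address space rooted at 'pml4_phys'. */
int vmm_map_page(uint64_t pml4_phys, uint64_t virt, uint64_t phys, uint64_t flags)
{
    uint64_t *table = phys_to_virt(pml4_phys);

    /* PML4 -> PDPT -> PD: three intermediate levels to walk or create */
    for (int level = 3; level > 0; level--) {
        int index = (virt >> (12 + 9 * level)) & 0x1FF;

        if (!(table[index] & PAGE_PRESENT)) {
            uint64_t new_table = pmm_alloc_page();   /* back-end call */
            if (new_table == 0)
                return -1;                           /* out of physical memory */
            zero_page(new_table);
            table[index] = new_table | PAGE_PRESENT | PAGE_WRITE | PAGE_USER;
        }
        table = phys_to_virt(table[index] & ~0xFFFULL);
    }

    /* Final level: the page table entry itself */
    table[(virt >> 12) & 0x1FF] = (phys & ~0xFFFULL) | flags | PAGE_PRESENT;
    return 0;
}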
The last (highest) layer is heap management. This is your "malloc()" and "free()" (or new/delete, or garbage collection, or whatever a process wants), and belongs in user-space (such that each process does its own heap management, and different processes can do their own heap management in completely different ways). It uses the virtual memory manager as its "back-end".
So, you're talking about a single system process that does both physical memory management and virtual memory management (and not heap management). In that case, you're looking at a minimum of two task switches per request (potentially including the "TLB trashing" overhead, etc., plus the cost/hassle of mapping the paging structures into the virtual memory manager's address space).
Other disadvantages would include scalability problems - e.g. imagine 32 CPUs all trying to use the same (multi-threaded?) single system process to allocate or free memory at the same time, possibly with expensive user-space re-entrancy locking (e.g. mutexes/futexes rather than spinlocks, because the process can't easily tell the scheduler "disable task switching until I release this lock").
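To make the contention problem concrete, a hypothetical user-space memory server handling requests from many CPUs ends up with something like this on its hot path (a pthread mutex is used precisely because, unlike a kernel spinlock with preemption disabled, the process can't stop itself from being preempted while holding the lock; pop_free_frame() is an assumed internal helper):

#include <pthread.h>
#include <stdint.h>

/* Hypothetical state of the single system process that manages all memory */
static pthread_mutex_t mem_lock = PTHREAD_MUTEX_INITIALIZER;

extern uint64_t pop_free_frame(void);   /* assumed internal free-list helper */

/* Every allocation request from every CPU funnels through this one lock.
   If the lock holder is preempted mid-critical-section, the other 31 CPUs
   can end up sleeping on the futex until it gets scheduled again. */
uint64_t handle_alloc_request(void)
{
    pthread_mutex_lock(&mem_lock);
    uint64_t frame = pop_free_frame();
    pthread_mutex_unlock(&mem_lock);
    return frame;
}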
I also think that "It would also let the kernel be much smaller, and make memory handling within the kernel far simpler." is misguided. It's exactly the same code shifted to a different place, with extra communication/isolation hassles. Basically, the kernel may be smaller and simpler, but the "sum of the parts" is larger and more complex.
The normal advantage of shifting things to user-space is that you can minimise risk (e.g. if it crashes in user-space you don't need to worry about kernel-space or the rest of the OS being trashed), but if your virtual and physical memory management crashes the OS is going to be screwed anyway, so "minimised risk" isn't an advantage in this case.
There are only two advantages I can think of. The first is that it'd be easier to debug (e.g. if it tries to use an uninitialised pointer or something, you've got a better chance of detecting the problem). The other advantage is that it'd be more flexible. For example, you wouldn't need to implement "kernel modules" and/or make the kernel open source to allow other people to change the physical/virtual memory manager.
Of course without knowing more about the OS it's hard to estimate how good/bad these advantages/disadvantages are. For example, for something like a single address space OS where everything runs at CPL=0, you wouldn't have the "TLB trashing" disadvantage (or the "easier debugging" advantage).
Also note that for the "heap management" stuff that sits on top of this, most software pre-allocates a pool of virtual memory and satisfies allocations from that pool; the size of the pool is increased when the heap manager runs out of pre-allocated virtual memory, and decreased when there's "lots" of allocated virtual memory that isn't being used anymore. Usually the size of this "pool of pre-allocated virtual memory" is a compromise between wasting memory and the overhead of increasing/decreasing the size of the pool. Basically, if the virtual memory manager is fast then the size of the pool (for each process) can be small; and if the virtual memory manager has a lot of overhead then people writing user-space code will compensate by increasing the size of their pool (to reduce overhead by reducing the number of times they need to allocate/free virtual memory). The end result is that if the OS's virtual memory management is a lot slower, it might have no effect on performance but increase the amount of RAM allocated/wasted instead.
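As a rough sketch of that "pool of pre-allocated virtual memory" idea (using a hypothetical vmm_alloc() call to stand in for whatever the OS's virtual memory manager interface actually is, and a trivial bump allocator instead of a real heap):

#include <stdint.h>
#include <stddef.h>

#define POOL_CHUNK (2 * 1024 * 1024)   /* grow the pool 2 MiB at a time */

/* Hypothetical virtual memory manager call: maps 'size' bytes of fresh
   virtual memory and returns a pointer to it (NULL on failure). */
extern void *vmm_alloc(size_t size);

static uint8_t *pool_next = NULL;
static size_t   pool_left = 0;

/* Trivial bump allocator on top of the pool; no free() for brevity */
void *my_malloc(size_t size)
{
    size = (size + 15) & ~(size_t)15;           /* 16-byte alignment */

    if (size > pool_left) {
        size_t chunk = (size > POOL_CHUNK) ? size : POOL_CHUNK;
        uint8_t *mem = vmm_alloc(chunk);        /* the expensive call being amortised */
        if (mem == NULL)
            return NULL;
        pool_next = mem;
        pool_left = chunk;
    }

    void *result = pool_next;
    pool_next += size;
    pool_left -= size;
    return result;
}

The only tunable here is POOL_CHUNK: if vmm_alloc() is slow (e.g. because every call means IPC to a memory server), the natural response is to make POOL_CHUNK bigger, which is exactly the "waste RAM to hide overhead" effect described above.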
Cheers,
Brendan