
Virtual allocator and paging design

Posted: Thu Nov 05, 2015 8:01 am
by Avidanborisov
Up till now, my physical and virtual page allocators have both used a bitmap. The space for the bitmaps was allocated before identity mapping the kernel (and the framebuffer video memory) by increasing the "end of kernel" address (and making sure it doesn't collide with a memory hole). So, basically, I had

Code: Select all

end-of-physical-memory / 1024 / 8 + 0x100000000 / 1024 / 8
of static storage for the physical and virtual allocators.

During paging initialization, every mapped page was "excluded" from the virtual allocator bitmap. Once paging was up, everything was in place for a basic kernel heap implementation, i.e.:

Code: Select all

auto phys = PhysicalAllocator::allocate(pages);
auto virt = VirtualAllocator::allocate(pages);
Paging::map(virt, phys, pages, flags);
But then it hit me - why should the virtual allocator be responsible for finding free pages with its own 4 MiB bitmap, when the paging structures (PDs and PTs) already store this information?
The page table entries can be iterated in a similar fashion to bits in a bitmap. In addition, a 'next-free' variable could always store the address of the first unmapped virtual page. This way the virtual allocator just needs to ask for 'n' consecutive unmapped pages, allocate 'n' physical pages (I'm not sure whether it's advisable for physical pages to be consecutive too) and map all virtual pages to the physical ones.
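For illustration, here's a rough sketch (not tested) of what such a search might look like on 32-bit x86; pageDirectory and pageTables stand in for however the paging structures are actually reached (e.g. via recursive mapping):

Code: Select all

#include <cstddef>
#include <cstdint>

constexpr uint32_t PAGE_PRESENT = 0x1;

// Stand-ins for however the kernel actually reaches its paging structures
// (e.g. via the recursive-mapping trick) -- purely illustrative.
extern uint32_t* pageDirectory;       // 1024 PDEs
extern uint32_t* pageTables[1024];    // one PT (1024 PTEs) per present PDE

static uint32_t nextFreeHint = 0;     // index of the first known-unmapped page

// Find 'count' consecutive unmapped 4 KiB pages by walking the paging
// structures instead of a separate bitmap; returns a virtual address or 0.
uintptr_t findFreePages(size_t count) {
    size_t run = 0;
    for (uint32_t page = nextFreeHint; page < 1024 * 1024; page++) {
        uint32_t pd = page / 1024, pt = page % 1024;

        bool mapped = (pageDirectory[pd] & PAGE_PRESENT) &&
                      (pageTables[pd][pt] & PAGE_PRESENT);

        run = mapped ? 0 : run + 1;
        if (run == count) {
            uint32_t first = page - count + 1;
            if (first == nextFreeHint)
                nextFreeHint = page + 1;    // advance the hint past this run
            return uintptr_t(first) * 4096; // virtual address of the run
        }
    }
    return 0;                               // out of virtual address space
}

Note that this treats every not-present entry as free, which only works as long as "not present" always means "unused".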

I'm not too sure about this, especially since I haven't gotten to the point of processes and user-space management, and I have a feeling that once I get there I'll have to redesign this again. It seems like the general advice is to just implement Paging::map and rely on virtual and physical memory allocators to actually keep track of allocated space.

What's a good way to design the relationship between the physical allocator, the virtual allocator, the paging implementation and the kernel heap?

Re: Virtual allocator and paging design

Posted: Thu Nov 05, 2015 8:36 pm
by Brendan
Hi,
Avidanborisov wrote:But then it hit me - why should the virtual allocator be responsible for finding free pages with its own 4 MiB bitmap, when the paging structures (PDs and PTs) already store this information?
Avidanborisov wrote:What's a good way to design the relationship between the physical allocator, the virtual allocator, the paging implementation and the kernel heap?
Assume your VMM supports 3 different area types:
  • memory mapped file
  • usable RAM
  • unusable
Now assume for each area type there can be multiple "page states":
  • present memory mapped file
  • not present memory mapped file
  • present usable RAM
  • not present usable RAM (allocate on write)
  • not present usable RAM (data was sent to swap space)
  • unusable
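One possible way to store these states (just a sketch): on 32-bit x86, bits 9-11 of each page table entry are available to the OS, so six states fit comfortably there, and for a not-present page the rest of the entry is also free (e.g. for a swap-slot number). The getCurrentPageState() used below would look up the entry for a virtual address and decode it along these lines:

Code: Select all

#include <cstdint>

// Hypothetical encoding: keep the page state in the three OS-available
// bits (bits 9-11) of each 32-bit page table entry.
enum PageState : uint32_t {
    PRESENT_MMAP                   = 0,
    NOT_PRESENT_MMAP               = 1,
    PRESENT_RAM                    = 2,
    NOT_PRESENT_RAM_ALLOC_ON_WRITE = 3,
    NOT_PRESENT_RAM_ON_SWAP        = 4,
    UNUSABLE                       = 5,
};

constexpr uint32_t STATE_SHIFT = 9;
constexpr uint32_t STATE_MASK  = 0x7u << STATE_SHIFT;

PageState decodePageState(uint32_t pte) {
    return PageState((pte & STATE_MASK) >> STATE_SHIFT);
}

uint32_t encodePageState(uint32_t pte, PageState state) {
    return (pte & ~STATE_MASK) | (uint32_t(state) << STATE_SHIFT);
}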
If I have a 33 MiB file that's mapped at virtual address 0x12345000, and I've finished using that file and don't need the memory mapping any more but do need more usable RAM, I ask the virtual memory manager to convert the 33 MiB at 0x12345000 to the "usable RAM" area type. Imagine a function like this:

Code: Select all

void VMM_convertPageAreaTypeToUsableRAM(void *virtualAddress, int count) {
    while(count > 0) {
        switch( getCurrentPageState(virtualAddress) ) {
        case PRESENT_MMAP:
            // Just change "available" bits in page table entry used to store page state to "PRESENT_RAM"
            break;
        case NOT_PRESENT_MMAP:
            // Just change "available" bits in page table entry used to store page state to "NOT_PRESENT_RAM_ALLOC_ON_WRITE"
            break;
        case PRESENT_RAM:
        case NOT_PRESENT_RAM_ALLOC_ON_WRITE:
        case NOT_PRESENT_RAM_ON_SWAP:
            // It must already be the area type we want
            break;
        case UNUSABLE:
            // Just change "available" bits in page table entry used to store page state to "NOT_PRESENT_RAM_ALLOC_ON_WRITE"
            break;
        default:
            kernel_panic("Unsupported page state detected in VMM(!)");
            break;
        }
        count--;
        virtualAddress = (char *)virtualAddress + 4096;
    }
}
In this case the VMM never allocates or frees any physical RAM at all.

Then (later); I ask the virtual memory manager to convert 512 MiB at 0x12300000 to the "unusable" type. Imagine a function like this:

Code: Select all

void VMM_convertPageAreaTypeToUnusable(void *virtualAddress, int count) {
    while(count > 0) {
        switch( getCurrentPageState(virtualAddress) ) {
        case PRESENT_MMAP:
        case PRESENT_RAM:
            // Free physical page and change "available" bits in page table entry used to store page state to "UNUSABLE"
            break;
        case NOT_PRESENT_MMAP:
        case NOT_PRESENT_RAM_ALLOC_ON_WRITE:
            // Just change "available" bits in page table entry used to store page state to "UNUSABLE"
            break;
        case NOT_PRESENT_RAM_ON_SWAP:
            // Tell whatever handles swap space to free the disk space it was using for this page
            // Change "available" bits in page table entry used to store page state to "UNUSABLE"
            break;
        case UNUSABLE:
            // It must already be the area type we want
            break;
        default:
            kernel_panic("Unsupported page state detected in VMM(!)");
            break;
        }
        count--;
        virtualAddress = (char *)virtualAddress + 4096;
    }
}
Of course this is "very over-simplified" to make it easier to understand the general idea. In practice you might have more area types (memory mapped IO, shared memory, thread specific storage, "never send to swap" variants, etc) and more page states; plus things like page/cache colouring and NUMA optimisations to improve performance.

Now assume that some processes are using a general purpose heap like malloc() or new(); some use their own custom designed things (e.g. dedicated pools for different object types); some use no memory management at all (e.g. everything allocated at compile time with static arrays); some use a mixture of 2 or more completely different techniques, etc.

It would be wrong/bad for the kernel to force its own assumptions onto processes and prevent processes from doing their own thing.

This means that the kernel's VMM should not allocate virtual space on behalf of processes, and should only provide "convert the area of size X starting at virtual address Y to area type Z" functionality that the process can use however it likes to implement whatever heap (or lack of heap) it feels like.
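To make the split concrete, here's a hedged sketch of how a process-side heap might grow itself under this model; the vmm_convert_area() call and the area-type names are invented for illustration, not a real interface:

Code: Select all

#include <cstddef>
#include <cstdint>

// Hypothetical kernel interface -- names invented for illustration.
enum AreaType { AREA_UNUSABLE, AREA_USABLE_RAM, AREA_MMAP_FILE };
extern int vmm_convert_area(void* virtualAddress, size_t bytes, AreaType type);

// The process decides where its heap lives; the kernel only flips area types.
static uint8_t* heapEnd = reinterpret_cast<uint8_t*>(0x40000000);

void* growHeap(size_t bytes) {
    bytes = (bytes + 4095) & ~size_t(4095);               // round up to whole pages
    if (vmm_convert_area(heapEnd, bytes, AREA_USABLE_RAM) != 0)
        return nullptr;                                   // kernel refused the request
    void* oldEnd = heapEnd;
    heapEnd += bytes;
    return oldEnd;                                        // pages arrive allocate-on-write
}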

For the kernel itself, nothing really changes. The kernel's heap (if it uses one) decides which areas of kernel space it wants to use and how; and the kernel's VMM still only provides the same "convert the area of size X starting at virtual address Y to area type Z" functionality. As far as the VMM is concerned the only real difference is the security checks (the kernel is allowed to change things in kernel space while processes aren't).

Also note that there's no sane reason why the kernel itself can't (with a little care) use the VMM features that reduce physical RAM consumption and/or improve performance. For example, my micro-kernels tend to use "allocate on write" a lot; and (in conjunction with the "compressed RAM as swap space" feature I'm planning) they would benefit from being able to send message queues to swap space.


Cheers,

Brendan

Re: Virtual allocator and paging design

Posted: Fri Nov 06, 2015 6:55 am
by Avidanborisov
Hi Brendan, that's very informative, thanks.

However, staying on the topic of a kernel heap implementation (which is currently where I'm at), what is the recommended way to implement the basic "get me n free virtual pages" operation for it (which is what I called VirtualAllocator in my initial post)?
I can see two different approaches:
  • VirtualAllocator holds its own data structure that keeps track of free/used virtual pages, e.g. a 4 MiB bitmap. When the kernel heap implementation asks VirtualAllocator to "get me n free virtual pages", it:
    1. Searches its bitmap for n free virtual pages
    2. Asks the physical allocator for n free physical pages*
    3. Calls Paging::map(virtual, physical)
    4. Returns the pointer
  • VirtualAllocator doesn't keep track of free/used virtual pages itself, but uses functionality provided by Paging::findFree(), which searches the PDs and PTs for free virtual pages, i.e. a very similar process:
    1. Calls Paging::findFree() to get n free virtual pages
    2. Asks the physical allocator for n free physical pages*
    3. Calls Paging::map(virtual, physical)
    4. Returns the pointer
* The process of getting n free physical pages also raises a question: should the virtual allocator ask once for n consecutive physical pages, or ask for a single physical page for each virtual one (i.e. not consecutive)? If per-page allocation is fine, steps 2-3 collapse into something like the sketch below.
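A rough sketch of the per-page variant (reusing the class names from my first post, and assuming addresses are plain integers):

Code: Select all

// Sketch only: each virtual page gets its own physical frame,
// so the frames need not be contiguous.
uintptr_t allocateKernelPages(size_t pages, uint32_t flags) {
    auto virt = Paging::findFree(pages);              // or a bitmap search
    for (size_t i = 0; i < pages; i++) {
        auto phys = PhysicalAllocator::allocate(1);   // one frame at a time
        Paging::map(virt + i * 4096, phys, 1, flags);
    }
    return virt;
}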

I'm not sure whether searching PDs and PTs for free virtual pages makes sense, especially once I get to user space and processes.

Re: Virtual allocator and paging design

Posted: Fri Nov 06, 2015 9:27 am
by Brendan
Hi,
Avidanborisov wrote:However, staying on the topic of a kernel heap implementation (which is currently where I'm at), what is the recommended way to implement the basic "get me n free virtual pages" operation for it (which is what I called VirtualAllocator in my initial post)? I can see two different approaches: [see the two approaches in the previous post]
The first method would be better for performance, because of cache locality during searches, because you don't need to care if the page table (or page directory) is present before checking if the page is present, and because it's much easier to tell the difference between "not present but in use" and "not in use". The first method would also cost extra physical RAM and extra virtual space though.
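For what it's worth, a minimal sketch of the first method's inner search, just to show why it's cache-friendly and unambiguous: one bit per page, any in-use page (mapped, swapped out, reserved) simply has its bit set, and fully used words can be skipped 32 pages at a time. The sizes and names are made up:

Code: Select all

#include <cstddef>
#include <cstdint>

// One bit per 4 KiB virtual page: set = in use, clear = free.
constexpr size_t TOTAL_PAGES = size_t(1) << 20;   // e.g. a 4 GiB address space
static uint32_t bitmap[TOTAL_PAGES / 32];

// Returns the index of the first run of 'count' clear bits, or SIZE_MAX.
size_t findFreeRun(size_t count) {
    size_t run = 0;
    for (size_t page = 0; page < TOTAL_PAGES; page++) {
        // Skip 32 pages at once when a whole word is marked in-use.
        if (run == 0 && page % 32 == 0 && bitmap[page / 32] == 0xFFFFFFFFu) {
            page += 31;
            continue;
        }
        bool used = bitmap[page / 32] & (1u << (page % 32));
        run = used ? 0 : run + 1;
        if (run == count)
            return page - count + 1;
    }
    return SIZE_MAX;
}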

These aren't the only ways though.

What I'd be tempted to do (if I wanted "heap" - normally I don't) is:
  • During boot/kernel initialisation determine a "max. heap size" by taking total physical memory and dividing by maybe 4 (e.g. if the computer has 512 MiB of RAM then it'd be 128 MiB); then (in case that's too large) limit it to the size of remaining kernel space (e.g. if kernel space is 1024 MiB and the kernel's executable and other things consume 20 MiB, then "max. heap size" would be limited to 1004 MiB). The intent here is to never increase the amount of space used for heap after boot.
  • Make all of that space "allocate on write" so that it doesn't consume (e.g.) 1004 MiB of actual physical RAM
  • Have a "bytes freed since last check" counter, where the "kfree()" function adds the number of bytes freed to it
  • If the OS is running low on physical memory and the "bytes freed since last check" counter is high enough; then:
    • Scan the heap looking for "unallocated" areas (using the metadata that you need for "kmalloc()" anyway)
    • For each "unallocated" area found; find any mapped physical pages and free them (revert the virtual page back to the "allocate on write" state)
    • Reset the "bytes freed since last check" counter back to zero
  • When a page fault occurs (due to the kernel writing to an "allocate on write" page for any reason), allocate a physical page. If there aren't any free physical pages left to allocate then you go into "scavenge or swap mode" (e.g. doing the heap check mentioned here, VFS and file system cache scavenging, sending something to swap space, etc). A rough sketch of this fault path follows below.
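The fault path sketched, mixing the names already used in this thread with a couple of invented placeholders (scavengeOrSwap() and the flag constants):

Code: Select all

// Rough sketch of the "allocate on write" fault path for kernel space.
void handleKernelPageFault(uintptr_t faultAddress) {    // faultAddress read from CR2
    uintptr_t page = faultAddress & ~uintptr_t(0xFFF);

    if (getCurrentPageState((void *)page) != NOT_PRESENT_RAM_ALLOC_ON_WRITE)
        kernel_panic("Unexpected kernel page fault");

    auto frame = PhysicalAllocator::allocate(1);
    if (!frame) {
        scavengeOrSwap();                               // heap check, caches, swap, ...
        frame = PhysicalAllocator::allocate(1);
        if (!frame)
            kernel_panic("Out of physical memory");
    }

    Paging::map(page, frame, 1, PAGE_PRESENT | PAGE_WRITABLE);
    // Also mark the page as PRESENT_RAM in its "available" bits; the
    // faulting instruction is then restarted automatically by the CPU.
}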

Cheers,

Brendan

Re: Virtual allocator and paging design

Posted: Fri Nov 06, 2015 9:53 pm
by Ready4Dis
This is similar to what I do. My virtual allocator just marks pages as able to be used, but not actually in use. On first access, my page fault handler finds a free physical page and maps it in. This makes things very simple: if I ask for more virtual memory, my virtual memory allocator simply sets up more pages as usable (with no physical RAM allocated). My virtual allocator doesn't keep track of much, honestly; it just gives out more pages when requested. My memory allocator needs to keep track of where all the allocations end up, and if it runs out of its pool of memory, it asks the virtual allocator for more pages. I can ask for multiple pages, or a large page, which will then have physical pages loaded as they're used.

I am unsure why you are tracking used/free virtual pages; how many sections do you have? Why not just allocate them linearly, so all you need to know is which page you're currently on? If you do this from the bottom and the top (which is how most memory managers work), you only need to store two values for your virtual allocator. The only exception would be memory returned from somewhere in the middle; for that you can simply give the physical page back to the physical allocator and leave the virtual page available for reuse, pushing the job of managing memory onto the actual memory allocator.
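Not the actual code, just a minimal sketch of what "store two values" boils down to (the limits are made up, and the pages handed out are only backed with physical RAM on first fault):

Code: Select all

#include <cstddef>
#include <cstdint>

// The whole virtual allocator is just two cursors.
static uintptr_t bottomCursor = 0xC0400000;   // grows upward
static uintptr_t topCursor    = 0xFFC00000;   // grows downward

uintptr_t allocBottom(size_t pages) {
    if (bottomCursor + pages * 4096 > topCursor)
        return 0;                             // the two regions would collide
    uintptr_t result = bottomCursor;
    bottomCursor += pages * 4096;
    return result;
}

uintptr_t allocTop(size_t pages) {
    if (topCursor - pages * 4096 < bottomCursor)
        return 0;
    topCursor -= pages * 4096;
    return topCursor;
}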