OSDev.org

The Place to Start for Operating System Developers
 Post subject: Design buddy allocator that supports reserved memory?
Posted: Tue Sep 05, 2023 8:24 am

Joined: Sun Jan 15, 2023 4:02 am
Posts: 3
Suppose I have the following memory map (in [start - end) format):
0x0 - 0x10000
0x10000 - 0x40000
0x50000 - 0x70000

Assume the addresses not listed are reserved memory. As you can see, there is a hole between [0x40000 - 0x50000). If my buddy allocator allocates from this range and I access it, I will immediately get a General Protection fault.

I'm using a buddy allocator to implement physical memory management; that is, I want to implement kmalloc() (rather than vmalloc(), which I can already do to some extent).

kmalloc() allocates physical memory contiguously, so a buddy allocator should be the best-fitting algorithm here. But as far as I have researched, most buddy allocator implementations out there do not account for reserved/disabled memory ranges like this; they expect to be handed one big blob of memory (i.e. I would have to pass in the whole [0x0 - 0x70000) and could not mark ranges like [0x40000 - 0x50000) as invalid), which is infeasible for physical memory management (for example, some regions inside [0x40000 - 0x50000) are reserved for the BIOS and DMA).
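
To make the problem concrete, here is a minimal sketch (the struct is made up for illustration, not tied to any particular bootloader protocol) of how the usable ranges could be represented; the hole [0x40000 - 0x50000) is simply never listed, so the allocator can only ever be seeded from the listed regions:

Code:
#include <stdint.h>
#include <stddef.h>

/* A usable range of physical memory, [start, end). */
struct mem_region {
    uint64_t start;   /* inclusive */
    uint64_t end;     /* exclusive */
};

/* The example memory map above; the reserved hole is simply absent. */
static const struct mem_region usable[] = {
    { 0x00000, 0x10000 },
    { 0x10000, 0x40000 },
    { 0x50000, 0x70000 },
};
static const size_t usable_count = sizeof(usable) / sizeof(usable[0]);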

Also, should vmalloc() be lazily allocating memory instead? Because my current implementation of vmalloc() is to take all the available physical memory pages, carve out a specific contiguous virtual address range, and instruct the page table to map the memory contiguously. I use 0xffffc90000000000, following the Linux kernel, as an example.

Then the algorithm gets all the pages and maps all of them into the virtual address range:
0xffffc90000000000 -> 0x0
0xffffc90000001000 -> 0x1000
0xffffc90000002000 -> 0x2000
0xffffc90000003000 -> 0x3000
...
0xffffc90000040000 -> 0x40000
0xffffc90000041000 -> 0x50000
0xffffc90000042000 -> 0x51000


So, as you can see, 0xffffc90000041000 can skip over 0x41000 (the physical side jumps straight to 0x50000). But not only is this a wrong implementation of vmalloc() (it means kmalloc() will have no physical memory pages left to allocate), it is also pretty inefficient, as I have to walk every 4K page of physical memory, which could be a lot (assume that I will run this on a virtual machine with more than 64GB of memory -- which means I will have to wait a long time for vmalloc() to initialize).
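
For reference, the eager initialization loop looks roughly like this (alloc_phys_page() and map_page() are hypothetical helpers, not real code); on a 64GB machine it has to run roughly 16 million times:

Code:
#include <stdint.h>

#define VMALLOC_BASE 0xffffc90000000000ULL
#define PAGE_SIZE    0x1000ULL

/* Hypothetical helpers: alloc_phys_page() returns the next free physical
 * frame (skipping reserved holes) or (uint64_t)-1 when none are left;
 * map_page() writes the corresponding page-table entry. */
extern uint64_t alloc_phys_page(void);
extern void map_page(uint64_t vaddr, uint64_t paddr, int writable);

static void vmalloc_init_eager(void)
{
    uint64_t vaddr = VMALLOC_BASE;
    uint64_t paddr;

    /* Touches every 4K frame up front -- ~16 million iterations for 64GB. */
    while ((paddr = alloc_phys_page()) != (uint64_t)-1) {
        map_page(vaddr, paddr, /* writable = */ 1);
        vaddr += PAGE_SIZE;
    }
}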

All of these problems surfaced during my kernel development, but I cannot find any definitive answers. I think Linux managed to solve this using memblock, but there is little documentation about it, and I still have no idea how to read through the crappy coding structure of the Linux kernel...
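
For what it's worth, my rough understanding of memblock (from skimming mm/memblock.c, so take the details loosely) is that it keeps one list of all RAM and a second list of reserved regions, and early allocations come from the difference. Something like (the function name here is made up, the memblock calls are real):

Code:
#include <linux/memblock.h>

void early_memmap_init(void)
{
    memblock_add(0x0, 0x40000);        /* usable RAM: [0x0, 0x40000)      */
    memblock_add(0x50000, 0x20000);    /* usable RAM: [0x50000, 0x70000)  */
    memblock_reserve(0x0, 0x1000);     /* e.g. carve out a firmware page  */
    /* Allocations now come from "memory" minus "reserved"; later the
     * buddy/page allocator is seeded only from the free remainder. */
}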


 Post subject: Re: Design buddy allocator that supports reserved memory?
Posted: Thu Sep 07, 2023 4:55 am
Member

Joined: Tue Apr 03, 2018 2:44 am
Posts: 403
stevefan1999 wrote:
Suppose I have the following memory map (in [start - end) format):
0x0 - 0x10000
0x10000 - 0x40000
0x50000 - 0x70000

Assume the addresses not listed are reserved memory. As you can see, there is a hole between [0x40000 - 0x50000). If my buddy allocator allocates from this range and I access it, I will immediately get a General Protection fault.

I'm using a buddy allocator to implement physical memory management; that is, I want to implement kmalloc() (rather than vmalloc(), which I can already do to some extent).

kmalloc() allocates physical memory contiguously, so a buddy allocator should be the best-fitting algorithm here. But as far as I have researched, most buddy allocator implementations out there do not account for reserved/disabled memory ranges like this; they expect to be handed one big blob of memory (i.e. I would have to pass in the whole [0x0 - 0x70000) and could not mark ranges like [0x40000 - 0x50000) as invalid), which is infeasible for physical memory management (for example, some regions inside [0x40000 - 0x50000) are reserved for the BIOS and DMA).


Does your buddy allocator keep track of allocated memory? Or is it incumbent on the client code to track the physical memory that is allocated, and hand it back along with the allocated size?

Because if it is the latter, then at bootstrap the memory that should be free can be handed to your buddy allocator piecemeal, perhaps even page by page, and your buddy allocator's release code can consolidate the freed memory into larger and larger regions, as per the buddy algorithm of merging adjacent regions into a single larger region.

Once you've freed all the physical memory you have available, your buddy allocator will have a consolidated list of physical address ranges it considers free, in a form it can then use to satisfy allocations. It probably won't be a single region, but it will be correct as long as your buddy allocator's free code works correctly.
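
A minimal sketch of that bootstrap, assuming a hypothetical buddy_free_page() entry point whose free path merges adjacent buddies (the names and region struct are made up for illustration):

Code:
#include <stdint.h>
#include <stddef.h>

#define PAGE_SIZE 0x1000ULL

/* A usable range of physical memory, [start, end). */
struct mem_region { uint64_t start, end; };

/* Hypothetical: frees one 4K frame back to the buddy allocator; the
 * allocator's free path coalesces adjacent buddies into larger blocks. */
extern void buddy_free_page(uint64_t paddr);

/* Seed the allocator by "freeing" every usable page. Reserved holes are
 * never freed, so they can never be handed out later. */
static void pmm_bootstrap(const struct mem_region *regions, size_t n)
{
    for (size_t i = 0; i < n; i++) {
        /* round the region start up to a page boundary */
        uint64_t addr = (regions[i].start + PAGE_SIZE - 1) & ~(PAGE_SIZE - 1);
        for (; addr + PAGE_SIZE <= regions[i].end; addr += PAGE_SIZE)
            buddy_free_page(addr);
    }
}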

stevefan1999 wrote:
Also, should vmalloc() be lazily allocating memory instead? Because my current implementation of vmalloc() is to take all the available physical memory pages, carve out a specific contiguous virtual address range, and instruct the page table to map the memory contiguously. I use 0xffffc90000000000, following the Linux kernel, as an example.

Then the algorithm gets all the pages and maps all of them into the virtual address range:
0xffffc90000000000 -> 0x0
0xffffc90000001000 -> 0x1000
0xffffc90000002000 -> 0x2000
0xffffc90000003000 -> 0x3000
...
0xffffc90000040000 -> 0x40000
0xffffc90000041000 -> 0x50000
0xffffc90000042000 -> 0x51000


So, as you can see, 0xffffc90000041000 can skip over 0x41000 (the physical side jumps straight to 0x50000). But not only is this a wrong implementation of vmalloc() (it means kmalloc() will have no physical memory pages left to allocate), it is also pretty inefficient, as I have to walk every 4K page of physical memory, which could be a lot (assume that I will run this on a virtual machine with more than 64GB of memory -- which means I will have to wait a long time for vmalloc() to initialize).

All of these problems surfaced during my kernel development, but I cannot find any definitive answers. I think Linux managed to solve this using memblock, but there is little documentation about it, and I still have no idea how to read through the crappy coding structure of the Linux kernel...


Not sure how others do it, but I allocate a single contiguous VM region for my kernel heap, which is lazily backed by physical memory on demand. The benefit of this is that I don't need to map any heap memory 1:1 like Linux does. My heap lives entirely in virtually mapped pages, which need not be backed by physically contiguous pages. The lazy allocation is done using the page fault handler and an anonymous zero-filled virtual memory region.
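
A rough sketch of that fault path, with made-up names for the heap bounds and the paging helpers (so purely illustrative):

Code:
#include <stdint.h>
#include <stdbool.h>
#include <string.h>

#define PAGE_SIZE 0x1000ULL
#define HEAP_BASE 0xffffc90000000000ULL   /* start of the heap's VM region */
#define HEAP_END  0xffffc98000000000ULL   /* end of the heap's VM region   */

/* Hypothetical helpers: allocate one physical frame (0 on failure),
 * install a page-table mapping, and access a frame via the direct map. */
extern uint64_t alloc_phys_page(void);
extern void map_page(uint64_t vaddr, uint64_t paddr, int writable);
extern void *phys_to_virt(uint64_t paddr);

/* Called from the page fault handler when the faulting address falls in
 * the heap's anonymous region: back the page lazily with a zeroed frame. */
bool heap_demand_fault(uint64_t fault_addr)
{
    if (fault_addr < HEAP_BASE || fault_addr >= HEAP_END)
        return false;                        /* not a heap fault */

    uint64_t paddr = alloc_phys_page();
    if (paddr == 0)
        return false;                        /* out of memory: see below */

    memset(phys_to_virt(paddr), 0, PAGE_SIZE);          /* anonymous => zero-filled */
    map_page(fault_addr & ~(PAGE_SIZE - 1), paddr, 1);  /* map the 4K page writable */
    return true;
}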

It does create the problem, though, that the kernel heap now has to compete for physical memory with every other consumer of physical memory. Extending the heap can now fail and may require memory to be stolen from other consumers, which is typically cache or user-level memory. If your page fault handler itself requires heap allocations, you can deadlock, so you'd need to refactor the page handling code so it doesn't use the heap recursively, or make it able to handle this failure. In my case, when a kernel-heap-backed PMM allocation fails, it triggers a garbage collector to free up physical pages, as well as freeing existing unused heap items.
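
Expressed as code, that policy is roughly an allocate-reclaim-retry step (again, hypothetical names; the reclaim path must not itself allocate from the heap):

Code:
#include <stdint.h>

extern uint64_t alloc_phys_page(void);      /* returns 0 on failure        */
extern void reclaim_physical_pages(void);   /* shrink caches / run the GC  */

static uint64_t alloc_phys_page_or_reclaim(void)
{
    uint64_t paddr = alloc_phys_page();
    if (paddr == 0) {
        reclaim_physical_pages();           /* must not allocate from the heap */
        paddr = alloc_phys_page();          /* second and final attempt        */
    }
    return paddr;                           /* still 0 => genuinely out of memory */
}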

