Is this a viable page frame allocation algorithm?
Posted: Sat Dec 31, 2022 6:38 pm
Hello! I'm currently writing a page frame allocator in Rust for my kernel. So far, it's been going great (after a rewrite). Taking after one of the pages on the wiki, I planned out how my memory was going to be allocated. I came up with a solution that works pretty well and is decently performant, but I'm unsure how it will scale in the future. I haven't heard of this approach before, which probably means I'm Googling for the wrong thing, or it's obviously terrible and I'm missing something. Here is how I've laid things out.
If I'm not mistaken, allocation is very often O(1): it's just looking up an index, doing some pointer arithmetic, and writing a simple allocation header. In the worst case, it's O(n), where n is the number of pages in a memory area.
- At boot, the kernel gets the memory areas, reclaims what can be reclaimed, and passes that data to the function that initializes the page frame (pf) allocator. The initializer calculates the number of page frames per area and, for each area, sets a struct member holding the index of the page at the start of the greatest free page frame extent to 0.
- When malloc is called (or realloc, calloc, etc.), it looks through each memory area (there are only 6 on my bare-metal machine) and picks the first one with free space. It then allocates the requested amount of memory starting from the index of the greatest free page frame extent in that area, and increments the index by the number of pages requested. The index obviously drifts from the true value, but it's close enough until we run out of memory.
- If no memory areas can spare the requested space, a full recalculation to get the actual index is performed. This doesn't have to be performed very often at all, and with some optimization I think I can get it to be reasonably fast.
- Freeing memory is done by consulting an allocation header at the start of the allocated memory, and then doing simple subtraction on the index.
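The per-area bookkeeping in the steps above could be sketched roughly like this. To be clear, this is my own hypothetical reconstruction, not actual code from my kernel: the names (Area, try_alloc), the boolean page map, and the fixed area size are all stand-ins for the real allocation headers and frame numbers.

```rust
const PAGES_PER_AREA: usize = 64; // small size chosen just for the sketch

struct Area {
    used: [bool; PAGES_PER_AREA], // stand-in for real per-page state
    a: usize, // 'A': assumed start of the greatest free page frame extent
}

impl Area {
    fn new() -> Self {
        // At initialization every page is free and 'A' points at index 0.
        Area { used: [false; PAGES_PER_AREA], a: 0 }
    }

    /// O(1) fast path: bump 'A' by n pages if the guess still fits.
    /// Returns the first page index of the allocation, or None if this
    /// area looks full (at which point the caller would recalculate).
    fn try_alloc(&mut self, n: usize) -> Option<usize> {
        if self.a + n > PAGES_PER_AREA {
            return None;
        }
        let start = self.a;
        self.used[start..start + n].fill(true);
        self.a += n;
        Some(start)
    }

    /// Free n pages and do the "simple subtraction" on 'A'. Note that
    /// this only keeps 'A' accurate for LIFO frees; any other order
    /// leaves it stale until a recalculation runs.
    fn free(&mut self, start: usize, n: usize) {
        self.used[start..start + n].fill(false);
        self.a -= n;
    }
}
```

In this shape the fast path never scans anything, which is where the O(1) claim comes from; all the cost is deferred to the rare recalculation.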
Code:
Assume one memory area. The marker 'A' shows where the allocator thinks the largest
consecutive free page range starts.
After initialization:
|-------------------------------------------------------|
|                      Free Memory                      |
|-------------------------------------------------------|
 ^
 A
After some malloc(n)s:
|-------------------------------------------------------|
| Data | Data | Data | Data | Data |    Free Memory     |
|-------------------------------------------------------|
                                    ^
                                    A
After freeing most of the existing allocations:
|-------------------------------------------------------|
| Data |     Free Memory      | Data |   Free Memory    |
|-------------------------------------------------------|
                                        ^
                                        A
Note how the marker is out of date: there is a larger amount of free memory behind it. Imagine
this continues until, eventually, an allocation fails because the marker has reached the end.
This triggers a recalculation to see if the problem can be fixed. The result is this:
|-------------------------------------------------------|
| Data |     Free Memory      | Data |   Free Memory    |
|-------------------------------------------------------|
        ^
        A
More allocations can now be performed.
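The recalculation shown in the diagrams is just one linear pass over the area's page states. A stand-alone sketch (again my own illustration; the `&[bool]` bitmap is a stand-in for whatever per-page bookkeeping the real allocator keeps):

```rust
/// Linear scan for the start of the largest run of free (false) pages.
/// O(n) in the number of pages in the area, but only run when the
/// O(1) bump path fails, so it amortizes well.
fn largest_free_extent(used: &[bool]) -> usize {
    let (mut best_start, mut best_len) = (0usize, 0usize);
    let (mut run_start, mut run_len) = (0usize, 0usize);
    for (p, &in_use) in used.iter().enumerate() {
        if in_use {
            run_len = 0; // a used page ends the current free run
        } else {
            if run_len == 0 {
                run_start = p; // a new free run begins here
            }
            run_len += 1;
            if run_len > best_len {
                best_start = run_start;
                best_len = run_len;
            }
        }
    }
    best_start // the recalculated value of 'A'
}
```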
It seems weird I haven't come across this approach. Is it inherently flawed? The closest thing I could find is the watermark allocator, but this is obviously different and respects page frame boundaries.