Now it's time to implement a low-level physical memory manager. But which algorithm?
A bitmap or a stack is easy and simple, but I don't want to have to modify it later, so I would go for buddy allocation as in Linux. What kind of physical mm are you using? Any pointers to buddy allocation algorithms? (I have already studied the OS FAQ.)
physical mm
Re:physical mm
I'm at a similar place to you, since I'm re-designing my physical mm at the moment. Two things I would consider are:
1) Do you need to store any information about each page (maybe for all pages or just allocated pages), such as owner and a set of flags?
2) What sort of request will be most common (single page vs. group of pages, normal memory vs. I/O memory)?
I have the first question answered for my OS, but I'm still trying to work out the second. Does anyone know if I could get some stats on physical memory allocation from Linux?
Re:physical mm
What is the need to store information about each page if it is not going to be visible to userspace? Couldn't that information be handled by the VMM itself?
How do I store information about pages if I am using a bitmap or a buddy allocator?
I would like to implement a buddy allocator, but what should the smallest block size be, and how large should the largest allocatable block be? Suggestions and any example code would be welcome.
Re:physical mm
That's the second question from my last post, and I'll give you my thoughts so far:
A single page allocation will be common, for example when growing a data segment or stack. For DMA, the maximum single transfer is a 64k (16 page) block, but supporting 16k (4 page) and 32k (8 page) allocations would help. I see no reason to support 8k (2 page) blocks, or block sizes above 64k (but I need to think about this a bit more).
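In buddy terms those sizes would be orders 0, 2, 3 and 4 (4k, 16k, 32k, 64k), with a request simply rounded up to the next supported size. A minimal sketch of that rounding, assuming those are the only block sizes (the function name and tables are just placeholders, not from any real allocator):

Code:
/* Sketch: round a request (in 4k pages) up to the next supported buddy
 * order. Orders 0, 2, 3, 4 = 4k, 16k, 32k, 64k; order 1 (8k) is skipped
 * on purpose. Returns -1 if the request is bigger than 64k. */
int pages_to_order(unsigned pages)
{
    static const unsigned block_pages[] = { 1, 4, 8, 16 };
    static const int      block_order[] = { 0, 2, 3, 4 };

    for (unsigned i = 0; i < 4; i++)
        if (pages <= block_pages[i])
            return block_order[i];
    return -1;   /* nothing above 64k in this scheme */
}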
A buddy allocator would work well; the thing to think about is how to re-combine a set of single pages into a group. I can expand on that if you don't understand what I mean.
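Roughly what I mean by re-combining: when a block of 2^n pages is freed, its buddy is the block you get by flipping bit n of the frame number; if that buddy is also a free block of the same order, merge the two and repeat one order up. A rough, untested sketch of that check (frame_state[] and buddy_free() are made-up names, and a real allocator would also keep per-order free lists so allocation doesn't have to scan):

Code:
/* frame_state[] holds one byte per 4k frame: 0xFF means "not the head of
 * a free block", any other value is the order of the free block starting
 * at that frame. Assumes the whole array is set to 0xFF at boot, before
 * frames are handed to buddy_free(). */
#include <stdint.h>
#include <stddef.h>

#define MAX_ORDER 4                 /* 2^4 pages = 64k, the largest block */
#define NFRAMES   4096              /* e.g. 16 MB of physical memory      */

static uint8_t frame_state[NFRAMES];

void buddy_free(size_t frame, unsigned order)
{
    while (order < MAX_ORDER) {
        size_t buddy = frame ^ ((size_t)1 << order);   /* flip the size bit */

        /* Only merge if the buddy is the head of a free block of the same
         * order; otherwise it is allocated or split into smaller blocks.  */
        if (buddy >= NFRAMES || frame_state[buddy] != order)
            break;

        frame_state[buddy] = 0xFF;  /* buddy is absorbed into the new block */
        if (buddy < frame)
            frame = buddy;          /* merged block starts at the lower one */
        order++;
    }
    frame_state[frame] = (uint8_t)order;   /* record the final free block */
}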
There may not be a need to store information about each page in your OS, it's just something to consider. In my system, every page will have a page_t struct which stores:
- Next pointer, since the allocator works with lists.
- Owner, this is a pointer to the virtual memory region (VMR) which has the page.
- Offset within the owner (a VMR is defined by a virtual start address and a size; offset + owner->base is the virtual address of the page).
- Flags, to be defined as I need them.
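For concreteness, that could look something like this in C (vmr_t is just a stand-in for whatever region type the VMM ends up defining, and the field sizes are guesses):

Code:
#include <stdint.h>

typedef struct vmr vmr_t;   /* virtual memory region: a base address and a size */

typedef struct page {
    struct page *next;    /* the allocator works with linked lists            */
    vmr_t       *owner;   /* VMR that currently holds this page, or NULL      */
    uint32_t     offset;  /* offset into the owner; the page's virtual address
                             is owner->base + offset                          */
    uint32_t     flags;   /* to be defined as needed                          */
} page_t;

/* One page_t per physical 4k frame, indexed by frame number, so the
 * physical address of entry i is simply i << 12. */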