Idea for physical memory management
Posted: Mon Jan 02, 2012 10:11 am
Hello all,
I've got an idea for a physical memory manager, which I will briefly introduce here. I wonder if there are any real downsides to this method...
Basically, the data structure in the information page is a tree of fixed depth and size. The parent/child relationships are fixed and always the same, so they never need to be stored. Each node in the tree is associated with a page index (the actual page's address would probably be calculated as the information page's index, plus one, plus the node's index).
Each node has n child nodes, except for the leaf nodes at the bottom, of course.
Each node contains n+1 bits (where n is the number of child nodes). Bit 0 indicates whether the page this node is associated with is available. Bits 1 through n form a bit mask indicating, for each child of the current node, whether that child or any of its descendants has a free page.
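As a concrete illustration, here is a minimal sketch in C of how one such node byte could be encoded, assuming the example parameters given further down (4 children per node, 1 byte per node). The exact bit assignment is my own choice, not something the scheme requires:

/* One node byte, assuming 4 children per node and 1 byte per node.
 * Bit 0: the page associated with this node is free.
 * Bits 1..4: child j's subtree (the child's own page or any descendant's)
 *            still contains a free page. Leaf nodes keep these bits at 0. */
#define NODE_PAGE_FREE      0x01u
#define NODE_CHILD_FREE(j)  (1u << (1 + (j)))   /* j = 0..3 */

/* A node byte of 0 therefore means: no free page anywhere in this subtree. */
static inline int node_has_free(unsigned char node)
{
    return node != 0;
}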
Finding a page for allocation would be done as follows (a rough C sketch follows the steps):
1. If the first byte in the information page is 0, all of its pages are allocated and we move on to the next information page (we could build a similar second-order information page over the information pages so we don't have to scan them all serially, but I won't cover that for now).
2. Otherwise, set index = 0.
3. If info[index] indicates that its page is available, we're done: allocate that page.
4. If not, find the first child whose subtree has an available page (either the child's own page or one of its descendants'), and set index to the index of that child.
5. Continue with step 3.
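To make the steps concrete, here is a rough sketch of the search in C, reusing the node encoding above. It assumes a heap-style implicit layout (children of node i live at 4*i+1 through 4*i+4), which is just one possible fixed layout:

#define CHILDREN  4   /* children per node, matching the example parameters */

/* Walks down the tree and returns the node index of a free page, or -1 if
 * every page covered by this information page is already allocated. The
 * descent visits at most one node per level, i.e. at most "depth" nodes. */
static int find_free_node(const unsigned char *info)
{
    if (!node_has_free(info[0]))              /* step 1: everything is taken */
        return -1;

    int index = 0;                            /* step 2: start at the root */
    for (;;) {
        if (info[index] & NODE_PAGE_FREE)     /* step 3: this node's page is free */
            return index;
        for (int j = 0; j < CHILDREN; j++) {  /* step 4: descend into the first */
            if (info[index] & NODE_CHILD_FREE(j)) {   /* subtree with a free page */
                index = CHILDREN * index + 1 + j;
                break;
            }
        }
        /* step 5: loop back to step 3; the bookkeeping invariant guarantees
         * that a child bit was set, so the loop always makes progress down. */
    }
}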
Actually allocating the page involves updating the parents of the allocated node: in each parent, clear the bit that says this child's subtree still has free pages. We're done as soon as we reach a node that still has free pages other than the one just allocated (so the worst-case number of steps is the depth of the tree).
Deallocating the page is simply the opposite of allocating it.
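Continuing the same sketch, the upward updates for allocation and deallocation could look roughly like this (again assuming the heap-style indexing):

/* On allocation: clear the node's own "free" bit, then walk up clearing the
 * parent's child bit whenever a whole subtree has run out of free pages.
 * We stop as soon as a node still has free pages left, or at the root. */
static void mark_allocated(unsigned char *info, int index)
{
    info[index] &= ~NODE_PAGE_FREE;
    while (index != 0 && info[index] == 0) {   /* this subtree is now full */
        int parent = (index - 1) / CHILDREN;
        int child  = (index - 1) % CHILDREN;
        info[parent] &= ~NODE_CHILD_FREE(child);
        index = parent;
    }
}

/* Deallocation is the mirror image: set the node's "free" bit and walk up
 * setting the parents' child bits until we hit one that was already set. */
static void mark_free(unsigned char *info, int index)
{
    info[index] |= NODE_PAGE_FREE;
    while (index != 0) {
        int parent = (index - 1) / CHILDREN;
        int child  = (index - 1) % CHILDREN;
        if (info[parent] & NODE_CHILD_FREE(child))
            break;                             /* ancestors already know */
        info[parent] |= NODE_CHILD_FREE(child);
        index = parent;
    }
}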
Allocating a page at a specific address is easy: compute the corresponding node index and test whether the page is available. If it isn't, you can even test its siblings to see whether one of them is available.
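A targeted allocation, assuming the address mapping described earlier (the page immediately after the information page belongs to node 0, the next one to node 1, and so on), might then be as simple as:

/* Returns 0 on success, -1 if the wanted page is already taken
 * (at which point the caller could probe the siblings instead). */
static int alloc_specific(unsigned char *info,
                          unsigned long info_page_index,
                          unsigned long wanted_page_index)
{
    int index = (int)(wanted_page_index - info_page_index - 1);
    if (!(info[index] & NODE_PAGE_FREE))
        return -1;
    mark_allocated(info, index);
    return 0;
}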
All in all, it sounds like a fairly simple and very efficient scheme. The parameters can be tweaked, but for example one could take:
- 4 children per node.
- 1 byte per node. (4 + 1 bits, rounded up to a whole byte for faster access)
- 4096 bytes per page, so 4096 pages per information page.
- Tree depth of 6 (4^6 = 4096)
This would cost one information page for every 4096 pages (roughly 0.02% overhead). Allocating or deallocating a page would have a worst case of 6 memory reads/writes.
My questions:
- Are there any problems with this technique I have to be careful about?
- Is this technique already known? (Unless it has a flaw, I can't imagine it not being known.)
- Would you implement this scheme in an OS? Or would you favor, for instance, stack-based memory management?