stack base page allocator question
OK, I have been reading several tutorials on page management and I think I'm going to implement the stack-based technique. If I understand it correctly, I have a stack big enough to hold the address of each physical page: to allocate, just pop; to free, just push the address.
Obviously there should be a page count that gets incremented/decremented to test for stack over/underflows...
My question is: is there any way to protect against, say, a "double free" error? Suppose I "free" a page multiple times; the allocator will then likely hand out the same page for several requests... which is obviously bad.
Should I have some sort of check for this (which would involve scanning the stack for duplicate values)? If so, doesn't that remove the benefit over a bitmap?
what do you guys think?
proxy
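For concreteness, here is a minimal sketch of the scheme proxy describes: a fixed array of physical page addresses used as a stack, pop to allocate, push to free. The names and the paddr_t type are placeholders, not code from any particular kernel.

Code:

#include <stddef.h>
#include <stdint.h>

typedef uintptr_t paddr_t;          /* physical address of a page frame */

#define MAX_PAGES 1048576           /* enough slots for 4 GiB of 4 KiB pages */

static paddr_t free_stack[MAX_PAGES];
static size_t  free_count;          /* number of entries currently on the stack */

/* Free a page: push its address; returns -1 on stack overflow. */
int page_free(paddr_t page)
{
    if (free_count >= MAX_PAGES)
        return -1;                  /* more frees than page frames exist */
    free_stack[free_count++] = page;
    return 0;
}

/* Allocate a page: pop the topmost address; returns 0 if none are left. */
paddr_t page_alloc(void)
{
    if (free_count == 0)
        return 0;                   /* underflow: out of physical memory */
    return free_stack[--free_count];
}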
Re:stack base page allocator question
proxy wrote: My question is: is there any way to protect against, say, a "double free" error? Suppose I "free" a page multiple times; the allocator will then likely hand out the same page for several requests... which is obviously bad.
You could scan the stack on each free to check whether that page had already been freed. This would only be acceptable as an optional feature to help debug low-level kernel-mode code.
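As a rough illustration of such a debug-only check, reusing the free_stack/free_count names from the sketch above (purely illustrative):

Code:

/* Debug builds only: linear scan of the free stack before pushing a page.
 * Returns -1 if the page is already on the stack, i.e. a double free. */
int page_free_checked(paddr_t page)
{
    for (size_t i = 0; i < free_count; i++) {
        if (free_stack[i] == page)
            return -1;              /* double free detected */
    }
    return page_free(page);
}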
Re:stack base page allocator question
One thing I cannot imagine: how can you allocate 4MiB pages with a stack?
edit::
Also, I think it is faster to check in a bitmap whether a page is used or not; better said, I can calculate the position of the page's bit directly. So I see no real advantage for the stack allocator.
Re:stack base page allocator question
FlashBurn wrote: Also, I think it is faster to check in a bitmap whether a page is used or not; better said, I can calculate the position of the page's bit directly. So I see no real advantage for the stack allocator.
/Stack/
Get a free page: get the top most page off the stack (O(1))
Free a page: dump it on the stack (O(1))
Wastage when memory is in heavy use: minimal (range of bytes)
/Bitmap/
Get a free page: scan through the bitmap (you may optimize, will remain O(n) no matter what you do)
Free a page: toggle the bit (O(1))
Wastage when memory is in heavy use: large (at least size of memory * 0.001, which always wastes 0.1%)
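For comparison, a small sketch of the bitmap variant, illustrating why allocation stays O(n) while freeing is O(1) (names are illustrative only):

Code:

#include <stdint.h>

#define TOTAL_PAGES 1048576                    /* 4 GiB of 4 KiB pages */
static uint8_t page_bitmap[TOTAL_PAGES / 8];   /* one bit per page, 1 = used */

/* Allocate: scan for a clear bit; O(n) in the worst case. Returns -1 if full. */
long bitmap_alloc(void)
{
    for (long i = 0; i < TOTAL_PAGES; i++) {
        if (!(page_bitmap[i / 8] & (1u << (i % 8)))) {
            page_bitmap[i / 8] |= (uint8_t)(1u << (i % 8));
            return i;
        }
    }
    return -1;
}

/* Free: clear the bit; O(1). */
void bitmap_free(long i)
{
    page_bitmap[i / 8] &= (uint8_t)~(1u << (i % 8));
}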
beyond infinity
Re:stack base page allocator question
Well, I am doing it the stack-based way: pages of physical memory are piled onto the stack, popped off from there when needed, and pushed back upon freeing them. It is merely a matter of moving a pointer around and doing the pushing/popping in the right order. Mixing up pages and pushing occasional zeros or illegal pages - I can sing a song about that - ruins the whole thing and it craps out.
I am thinking about keeping a kind of list in the memory driver:
allocated page->process1 .. processn.
next-> points to the next allocated page.
- this in order to keep track of the allocated pages. It is dead awful to have the memory manager/mmdrv pass messages hither and thither just to remove a page directory and page tables - it's an idea worth trying out ...
Upon removing a process, the pages in its memory to which no other processes refer are freed and pushed onto the stack.
let's flesh it out a bit:
Code:
/* prozess_t, adresse_t and uint_t are typedefs from elsewhere in the kernel. */
struct prozess;
typedef struct prozess prozess_t;

typedef struct pageowner {
    prozess_t *pr;                 /* one process holding a reference to the page */
    struct pageowner *next;
} pageowner_t;

typedef struct pageowner_list {
    pageowner_t *head;
    pageowner_t *tail;
} pageowner_list_t;

typedef struct page {
    adresse_t phys_page;           /* physical address of the page frame */
    union {
        pageowner_list_t *pol;     /* owner list when ref_count > 1 */
        prozess_t *owner;          /* the single owner when ref_count == 1 */
    } u;
    uint_t ref_count;
    struct page *next;             /* next allocated page in the list */
} page_t;

typedef struct pagelist {
    page_t *head;
    page_t *tail;
} pagelist_t;
In general it is the usual linked-list thing: in our case many elements, and quite a lot of lookups to do. But in case only one process refers to a page, that process is entered directly into the page structure instead of allocating a list header and putting it into the list. This is what I consider a bit of a speed-up.
I admit, this model is kinda inspired by some Linux memory management stuff.
stay safe and ccw.
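For illustration only, the "free the pages a dying process no longer shares" step might look roughly like this with the structures above; push_free_page() is a hypothetical helper that returns a frame to the page stack, and removing entries from the pagelist is left out:

Code:

extern void push_free_page(adresse_t phys);   /* hypothetical: push a frame onto the page stack */

/* Walk the allocated-page list when a process dies and release the pages
 * that no other process still references. */
void release_process_pages(pagelist_t *pl, prozess_t *dying)
{
    for (page_t *p = pl->head; p != NULL; p = p->next) {
        if (p->ref_count == 1 && p->u.owner == dying) {
            p->ref_count = 0;
            push_free_page(p->phys_page);     /* back onto the free-page stack */
        }
        /* otherwise: remove 'dying' from p->u.pol and decrement ref_count */
    }
}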
... the osdever formerly known as beyond infinity ...
BlueillusionOS iso image
Pype.Clicker
Re:stack base page allocator question
@candy: I don't actually know how your stack is managed, but I quite disagree with your 'memory wastage' results (or I misunderstood them completely).
It sounds like you said a stack wastes less memory than a bitmap... The memory used for accounting memory should be locked to that purpose, no? I wouldn't like an "oops, no physical page available" when trying to free a page (or could I simply use that very page to continue accounting... hmmmmh ahh... that opens new perspectives, but complicates the overall design).
beyond infinity
Re:stack base page allocator question
Stack needs more memory than bitmap.
Calculate a bit: stack: an array of integers, sizeof(integer) * number of pages available. But much easier to use. Maybe the Kung F00 master knows of some respectable technique to optimize/shrink this stuff.
Bitmap: an array of *bits* -> one bit * number of pages. Calculate it down to an array of chars and there you go. Drawback: no popping/pushing, searching is a bit awkward, and so forth.
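To put rough numbers on it: with 4 GiB of RAM and 4 KiB pages there are 1,048,576 page frames. A stack of 32-bit physical addresses therefore needs 1,048,576 * 4 bytes = 4 MiB, while a bitmap with one bit per frame needs 1,048,576 / 8 bytes = 128 KiB.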
... the osdever formerly known as beyond infinity ...
BlueillusionOS iso image
Pype.Clicker
Re:stack base page allocator question
beyond infinity wrote: Calculate a bit: stack: an array of integers, sizeof(integer) * number of pages available. But much easier to use. Maybe the Kung F00 master knows of some respectable technique to optimize/shrink this stuff.
Well, I actually have several options in mind, like having a single page dedicated as a 'chunk' of stack and chaining the chunks, allocating physical pages for them on demand.
Code:
struct stk_chunk {
    struct stk_chunk *next;                        /* previously filled chunk */
    paddr_t data[PG_SIZE / sizeof(paddr_t) - 1];   /* free page addresses in this chunk */
};

struct stk {
    struct stk_chunk *top;   /* chunk currently being pushed to / popped from */
    int ptr;                 /* index of the next free slot in 'top' */
};
What I'd do if I had to refactor my physical memory allocator once again would be a bitmap with a stack-based cache (8K, for instance) of the last operations (a rough sketch follows the list):
- initially the stack is half-filled with pages from the bitmap
- when some process frees pages, they go to the stack
- when some process takes pages, they come from the stack
- when the stack is 'full' (or almost full), pages from the bottom of the stack are committed to the bitmap
- when the stack is 'empty' (or almost empty), the bitmap is scanned for free pages and the stack receives those pages.
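A rough sketch of that hybrid, treating the cache as a small fixed array and reusing the bitmap_alloc()/bitmap_free() routines sketched earlier in the thread (all names and sizes are illustrative):

Code:

#include <stddef.h>
#include <stdint.h>
#include <string.h>

extern long bitmap_alloc(void);    /* the O(n) bitmap routines from above */
extern void bitmap_free(long page);

#define CACHE_SLOTS 2048           /* 8 KiB cache of 32-bit page numbers */

static uint32_t cache[CACHE_SLOTS];
static size_t   cache_top;         /* number of free pages currently cached */

/* Allocate: serve from the cache; refill half of it from the bitmap when empty. */
long hybrid_alloc(void)
{
    if (cache_top == 0) {
        while (cache_top < CACHE_SLOTS / 2) {
            long p = bitmap_alloc();
            if (p < 0)
                break;
            cache[cache_top++] = (uint32_t)p;
        }
        if (cache_top == 0)
            return -1;             /* genuinely out of physical memory */
    }
    return cache[--cache_top];
}

/* Free: push onto the cache; when it fills up, commit the bottom half
 * (the oldest entries) back to the bitmap. */
void hybrid_free(long page)
{
    if (cache_top == CACHE_SLOTS) {
        for (size_t i = 0; i < CACHE_SLOTS / 2; i++)
            bitmap_free((long)cache[i]);
        memmove(cache, cache + CACHE_SLOTS / 2,
                (CACHE_SLOTS / 2) * sizeof(cache[0]));
        cache_top = CACHE_SLOTS / 2;
    }
    cache[cache_top++] = (uint32_t)page;
}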
Re:stack base page allocator question
As mentioned before, you have to scan through the stack to check whether the page which is to be freed is in use or not.
I want the option to allocate a 4MiB page, and if you use the stack you have to allocate 1024 4KiB pages instead. I look up whether there is a free 4MiB page; if not, I fall back to allocating the 1024 4KiB pages, but at least I have the other option. And if there is a free 4MiB page I can use PSE (Page Size Extension).
@pype
A mix of both is a good idea, I may think about it.
Re:stack base page allocator question
well, patience is not only a game with cards ...
listen, flashburn, listen good:
If you use a bitmap approach, you have to scan each byte to find out which page is free.
If you use a stack approach, you have a pile of available (= free) pages and you pop the page off the top of this pile. Vice versa, if you free a page, you just push it on top of this pile.
I have the strong feeling that you've mixed up these two concepts.
stay safe
Re:stack base page allocator question
addendum:
Checking whether a page should be put back into the pool of free pages is another story:
You need to keep a book of references to that page (and of the processes which hold it, to simplify life). Then you can tell whether to free the page or keep it. Look at my rant above; I've laid out some pieces of the concept there.
Pype.Clicker
Re:stack base page allocator question
What seems to concern FlashBurn is: "does the page I'm asked to put back as a free page already exist in my free pool or not?"
With a stack, solving this actually requires testing each stack position against the page being freed (see the snippet at the end of this post).
With a bitmap, there's a single bit that records whether $to_free is already free, so no iteration is required.
And what concerns BI is: "I need a free page, which one should I take?"
With a stack, picking the first one is enough: no iteration needed, while with a map you'll need to scan bits until you find one that tells you 'this page is free', which can take quite long if proper care is not taken ...
Code:
foreach $i (0 .. $stack_top) { $already_freed = 1 if $stack[$i] == $to_free; }
Re:stack base page allocator question
One more thing: do you set up a stack with all free pages, or do you use a bitmap from which you get free pages that you then put onto the stack (like Pype)? Because when you have 4GiB of RAM you have 1,048,576 4KiB pages, and that means 4MiB which you need for your stack.
There is another point which I haven't seen mentioned till now. Imagine that the userland malloc frees a 4KiB page. The kernel then has to look up the physical address of the page anyway, so you don't have to check whether the page is used or not.
Pype.Clicker
Re:stack base page allocator question
FlashBurn wrote: As mentioned before, you have to scan through the stack to check whether the page which is to be freed is in use or not.
It shouldn't be too hard to maintain an array of counters for the 4MB areas (note the absence of the 'i' ;) ), telling how many pages are still free in each 4MB area. If the count is 1024, then a 4MB page can be allocated there; otherwise only 4KB pages can be allocated ;)
Now, removing all the pages that form a 4MB page from a stack once you detected the 4MB page is free is another non-trivial problem.
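Loosely sketched, such a per-4MB counter array could look like this (4 GiB of RAM assumed; the names and hooks are placeholders):

Code:

#include <stdint.h>

#define REGIONS 1024                      /* 4 GiB of RAM / 4 MB per region */

static uint16_t free_in_region[REGIONS];  /* 0..1024 free 4 KB pages per region */

/* Bookkeeping hooks called by the ordinary 4 KB allocator. */
void region_note_alloc(long page) { free_in_region[page / 1024]--; }
void region_note_free(long page)  { free_in_region[page / 1024]++; }

/* A 4 MB page may only be handed out from a region whose 1024 pages are all free. */
int region_has_4mb(long region)
{
    return free_in_region[region] == 1024;
}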
This is just a 'hint' so far, but I think a physical memory allocator could borrow from the 'multi-generation' garbage collectors in Java: 'allocs' and 'frees' are not required to affect the same structures.
For instance, if by 'filtering' certain 'frees' we can keep the allocation to a 'sequential' amount of pages, and write back 'compact' chains of free pages to the main bitmap rather than 'dispersed' pages, the average lookup time in the bitmap could be reduced ...
Re:stack base page allocator question
Pype.Clicker wrote: @candy: I don't actually know how your stack is managed, but I quite disagree with your 'memory wastage' results (or I misunderstood them completely). It sounds like you said a stack wastes less memory than a bitmap... The memory used for accounting memory should be locked to that purpose, no? I wouldn't like an "oops, no physical page available" when trying to free a page (or could I simply use that very page to continue accounting... that opens new perspectives, but complicates the overall design).
Exactly. If applied properly you can get the maximum results with 2 special cases:
Freeing (page numbers before freeing):
- if (current_free_page_number == -1): map this page as the first free page, and put it on itself as the first free page
- if (current_free_page_number & 0x1FE == 0x1FE): map this page as the next page to contain free pages, and add it in the 0x1FF location

Allocating (page numbers before the alloc is satisfied):
- if (current_free_page_number == 0): put the page frame # in memory, unmap the page itself and return it
- if (current_free_page_number & 0x1FF == 0x1FF): unmap this page (it's mapped right after the page its number is on) and return it.
This only doesn't really play nice with zeroed pages.
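Candy's batched scheme above is more elaborate, but the core trick - the bookkeeping lives inside the free pages themselves, so it costs almost nothing when memory is nearly full - can be sketched in a much simplified single-link form. map_frame()/unmap_frame() are hypothetical helpers that make a physical frame temporarily accessible at a kernel virtual address:

Code:

#include <stdint.h>

#define NO_FRAME 0xFFFFFFFFu

extern uint32_t *map_frame(uint32_t frame);   /* hypothetical mapping helpers */
extern void      unmap_frame(uint32_t frame);

static uint32_t free_head = NO_FRAME;         /* frame number of the first free page */

/* Freeing: the freed frame itself stores the previous head of the list. */
void phys_free(uint32_t frame)
{
    uint32_t *p = map_frame(frame);
    p[0] = free_head;
    unmap_frame(frame);
    free_head = frame;
}

/* Allocating: pop the head; the next free frame number was stored inside it. */
uint32_t phys_alloc(void)
{
    if (free_head == NO_FRAME)
        return NO_FRAME;                      /* out of physical memory */
    uint32_t frame = free_head;
    uint32_t *p = map_frame(frame);
    free_head = p[0];
    unmap_frame(frame);
    return frame;
}

As with the scheme above, this writes into freed pages, so it does not play nicely with keeping pages pre-zeroed either.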