Well, I was just going over how I wanted to manage physical memory and I've come up with a really nice scheme that I'd like to share.
Instead of using a bitmap or a stack you can use a linked list. This has several advantages.
1. It is as fast as a stack (to allocate/deallocate you just manipulate pointers)
2. Finding where memory is in the list can be done in constant time. If you have a big array with one entry per page, then the ith page is in the ith array spot. The linked list pointers can just point to other locations in the array.
3. If you use a doubly linked list, removing an element from the middle is easy.
4. It can be easily extended to also track the least recently used page. Whenever you allocate a page (or a page has just been loaded from disk), move that page to the back of the list; then the list will have all the free pages at the start, and the back will be the allocated pages in LRU order. With one extra variable telling you how many free nodes you have at the beginning of the list, you can also tell when you've run out of memory.
5. If you have "reserved" memory (i.e. memory that should not be paged to disk), just don't add it to the end of the list when you allocate it.
The only drawback that I can see is if you want to allocate a contiguous chunk of memory you'd have to search for it.
Physical Memory Management
- Pype.Clicker
- Member
- Posts: 5964
- Joined: Wed Oct 18, 2006 2:31 am
- Location: In a galaxy, far, far away
Re:Physical Memory Management
i'm actually trying to develop something like a "smart bitmap" manager, which would group pages into chunks and keep information about whether a chunk is 'mixed' (i.e. contains both used and unused pages) or 'complete' (either completely free or completely used). In the 'complete' case you can reuse the bitmap's space to store information like the length of a run of consecutive completely free chunks, the start of the next free run, etc., which should help with allocating a range of contiguous physical memory.
I haven't written all the algorithms yet, so i can only hope it'll be O(1) for most things ...
Re:Physical Memory Management
That sounds somewhat like Cottontail memory management to me. How are you going to decide which page should be cached to disk though? Presumably you'll need some other data structure for this. That was actually how I originally came up with the idea. I figured I'd need a linked list to implement the least recently used paging scheme and I saw that it could be easily extended to general memory management.
The only thing that worries me, though, is that I can't allocate contiguous memory and I can't quickly check whether a page is free or used. So I might have a bitmap structure as well that just keeps track of whether a page is free or used. Then when I free a page, if the pages next to it are free, I can insert it beside them in the list. This way I can use the bitmap structure to find chunks of free pages, and since they'll be beside each other in the list, removing them will be easy.
What sort of ideas do you have for virtual memory management? Currently I'm thinking that whenever I set the present bit to 0, I'll change the page address to be an index into an array of cached pages. Then from there I'll get the information on where it is on disk.
Re:Physical Memory Management
i was thinking of using a doubly linked list of buckets. each bucket is a 4kb page; the first two 32-bit entries are prev/next bucket pointers.
every other 32-bit entry in the 4kb page is a page number (offset/4096). the top two bits say whether it's 0=in memory, 1=paged out, 2=locked, 3=mmm.
that gives you 1022 pages per bucket => 1022*4096 = 4088kb per bucket (just under 4mb per bucket).
i'll change my mind again in 10 minutes.
-- Stu --
Re:Physical Memory Management
df wrote: i was thinking of using a double linked list of buckets. each bucket is a 4kb page, the first two 32bit entries are prev/next bucket pointers. each other 32bit entry in the 4kb page is a page number (offset/4096). the top two bits say whether it's 0=in memory, 1=paged out, 2=locked, 3=mmm. that gives you 1022 pages per bucket => 1022*4096=4088kb per bucket (just under 4mb per bucket). i'll change my mind again in 10 minutes.
Do you think you could explain this in a bit more detail? I get the structure that you're using, but I'm not sure what it's supposed to be used for.
How does everyone keep track of the virtual memory they have available? And what sort of algorithms do you use to determine which chunk of memory to allocate?
I have an array where the even elements give the sizes of free chunks and the odd elements give the sizes of allocated chunks. So if the first 5 pages were used, the next 3 were free, and the last 8 were used (in a 16-page memory), the array would be {0, 5, 3, 8} (the leading 0 is there because the first element must describe free pages, and the first page is used).
When allocating virtual memory I use what I call the exact/half/best fit algorithm. It first checks whether there is a chunk of free memory that is exactly the size requested (with perhaps 5% of leeway). Then it checks whether there is a chunk that is twice the size requested (with about 15% leeway, the justification being that more requests of approximately the same size will probably come, so breaking a chunk in half may let some future request take the other half). If neither an exact fit nor a half fit can be found, it takes the best fit.
- Pype.Clicker
Re:Physical Memory Management
thooot wrote: That sounds somewhat like Cottontail memory management to me.
In the sense that there is a two-level bitmapping scheme, yes, it sounds a bit like cottontail, but i'd like to avoid sweeping bitmaps by knowing *in advance* the length of a 'completely used' run of pages...
thooot wrote: How are you going to decide which page should be cached to disk though? Presumably you'll need some other data structure for this.
Yes, in my opinion, the swapper is a completely different beast. I'm getting inspiration from the BSD scheme here. Actually, i have so-called 'memory objects' that know what policy must be applied to select pages that are targets for swap-out. The 'collect' command will then build a list of those pages and give that list to the swapper component.
Actually, each frame may be under the responsibility of any of the 'frame holders' in the system (swapping-out, disk cache, memory objects, frames cleaner, etc.). A 1 in the 'usage bitmap' simply states that 'nobody is responsible for this page and anybody may use it'.