Hey guys ...
I've been pondering userspace MM methods for about 4 days now. Personally, I like the idea of self-contained userspace allocation zones - the app developer just creates the object he wants and allocates as he wishes from it, aware of the constraints he himself set.
However, my current system still has some kinks which I consider serious.
The kink being: when you splice a chunk into two chunks, that's all fine. Splice a spliced chunk - and that's all well and good - except for the fact that 4 bytes isn't being registered by either chunk.
The thought train I'm on right now is leading to some somewhat complicated answers to this - like, if a block has been spliced and is going to be spliced again, scan for an adjacent free chunk and expand that, instead of carving a new chunk and losing those 4 bytes.
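For what it's worth, a common way to stop those 4 bytes from going missing is simpler than scanning for neighbours: never carve off a remainder too small to carry its own header, and fold it into the allocated chunk instead. A minimal sketch in C, assuming a hypothetical 4-byte chunk header with the in-use flag packed into the low size bit (names and layout are mine, not your actual structures):

```c
#include <stdint.h>

/* Hypothetical chunk header: 4 bytes of payload size, in-use flag in bit 0. */
typedef struct chunk {
    uint32_t size;
} chunk;

#define HDR_SIZE  ((uint32_t)sizeof(chunk))
#define MIN_SPLIT (HDR_SIZE + 4)  /* smallest leftover worth its own header */

/* Split free chunk `c` to satisfy a request of `want` bytes.  If the
 * leftover is too small to carry a header, absorb it into the allocated
 * chunk instead of letting it fall between the cracks. */
static chunk *chunk_split(chunk *c, uint32_t want) {
    uint32_t avail = c->size & ~1u;
    if (avail < want) return 0;
    if (avail - want >= MIN_SPLIT) {
        chunk *rest = (chunk *)((uint8_t *)(c + 1) + want);
        rest->size = avail - want - HDR_SIZE;  /* leftover stays accounted */
        c->size = want | 1;                    /* mark allocated */
    } else {
        c->size = avail | 1;  /* leftover absorbed: no bytes go missing */
    }
    return c;
}
```

The invariant to check is that the headers plus payloads of all chunks always sum to the zone size, whichever branch was taken.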
Another idea I had was buddy allocation internal to the object. The developer would create this object and set the minimum block size, the maximum block size, and the object size. And it would be standard buddy allocation within the confines of the object entity.
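A buddy zone like that can be fairly compact. Here's a rough sketch of standard buddy allocation confined to a fixed-size object, assuming a 32-byte minimum block and a 4096-byte object (the names and layout are my own, not a reference implementation; free blocks are linked through their own first bytes, so no external metadata is needed):

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define MIN_ORDER 5   /* minimum block: 32 bytes (assumed) */
#define MAX_ORDER 12  /* object size: 4096 bytes (assumed) */
#define LEVELS (MAX_ORDER - MIN_ORDER + 1)

typedef struct free_block { struct free_block *next; } free_block;

typedef struct {
    uint8_t *base;                 /* start of the object's memory */
    free_block *free_list[LEVELS]; /* one list per power-of-two size */
} buddy_zone;

static void zone_init(buddy_zone *z, void *mem) {
    memset(z->free_list, 0, sizeof z->free_list);
    z->base = mem;
    z->free_list[LEVELS - 1] = mem;  /* one block spanning the object */
    z->free_list[LEVELS - 1]->next = NULL;
}

static int order_for(size_t n) {  /* smallest order whose block fits n */
    int o = MIN_ORDER;
    while (((size_t)1 << o) < n) o++;
    return o;
}

static void *zone_alloc(buddy_zone *z, size_t n) {
    int want = order_for(n);
    if (want > MAX_ORDER) return NULL;
    int o = want;
    while (o <= MAX_ORDER && !z->free_list[o - MIN_ORDER]) o++;
    if (o > MAX_ORDER) return NULL;          /* zone exhausted */
    free_block *b = z->free_list[o - MIN_ORDER];
    z->free_list[o - MIN_ORDER] = b->next;
    while (o > want) {                       /* split down, freeing upper buddy */
        o--;
        free_block *buddy = (free_block *)((uint8_t *)b + ((size_t)1 << o));
        buddy->next = z->free_list[o - MIN_ORDER];
        z->free_list[o - MIN_ORDER] = buddy;
    }
    return b;
}

static void zone_free(buddy_zone *z, void *p, size_t n) {
    int o = order_for(n);
    uint8_t *blk = p;
    while (o < MAX_ORDER) {
        uint8_t *buddy = z->base + ((size_t)(blk - z->base) ^ ((size_t)1 << o));
        free_block **pp = &z->free_list[o - MIN_ORDER];
        while (*pp && (uint8_t *)*pp != buddy) pp = &(*pp)->next;
        if (!*pp) break;              /* buddy in use: stop merging */
        *pp = (*pp)->next;            /* unlink buddy and merge */
        if (buddy < blk) blk = buddy; /* merged block starts lower */
        o++;
    }
    ((free_block *)blk)->next = z->free_list[o - MIN_ORDER];
    z->free_list[o - MIN_ORDER] = (free_block *)blk;
}
```

This variant takes the size back at free time, which keeps blocks header-free; storing the order per block instead costs a header byte but frees the caller from remembering sizes.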
Why am I investigating these ideas? Simply because one problem with standard malloc/free-type implementations bothers me:
I want the heap to grow when needed, and to shrink when possible.
If pages were once allocated to satisfy some heap expansion, but those pages are no longer in use, free them back to the system for use elsewhere.
However, ways to determine whether the heap has 'shrunk' below the 'heap tail pages' kind of confuse me. I figure expansion of the heap is simple:
Keep an address in the malloc system telling you the HIGHEST address possible; if you are trying to allocate from the current 'free point' and it goes over that limit, pages are allocated to satisfy the allocation and the HIGHEST address is updated.
One way I was pondering to detect whether the trailing heap pages were being used or not was to have another variable that tracks the end of the highest allocation; if that ever sank below a trailing page, free that page.
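That two-watermark scheme (highest mapped address plus end-of-highest-allocation) can be sketched like this. `map_pages`/`unmap_trailing` here just count pages as stand-ins for whatever the OS actually provides (mmap/munmap, sbrk, or a kernel page call), and the base address is an arbitrary assumption:

```c
#include <stddef.h>
#include <stdint.h>

#define PAGE_SIZE 4096u

static uintptr_t heap_base = 0x10000000;  /* assumed start address */
static uintptr_t heap_limit;              /* HIGHEST mapped address */
static uintptr_t heap_top;                /* end of highest allocation */
static unsigned  pages_mapped;            /* stand-in for real mappings */

static void map_pages(uintptr_t end) {
    while (heap_limit < end) { heap_limit += PAGE_SIZE; pages_mapped++; }
}

static void unmap_trailing(void) {
    /* Free whole pages that now lie entirely above heap_top. */
    while (heap_limit > heap_base && heap_limit - PAGE_SIZE >= heap_top) {
        heap_limit -= PAGE_SIZE; pages_mapped--;
    }
}

void heap_init(void) { heap_limit = heap_top = heap_base; pages_mapped = 0; }

void *heap_grow(size_t n) {  /* bump-allocate n bytes from the free point */
    uintptr_t p = heap_top;
    if (heap_top + n > heap_limit)
        map_pages(heap_top + n);  /* expand: over the limit, map more pages */
    heap_top += n;
    return (void *)p;
}

void heap_shrink_to(uintptr_t new_top) {  /* highest allocation was freed */
    heap_top = new_top;
    unmap_trailing();  /* heap_top sank below a trailing page: release it */
}
```

The subtle part is only the shrink side: a trailing page is releasable exactly when `heap_top` is at or below that page's start, which the `unmap_trailing` loop checks one page at a time.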
I'm curious how a lot of you manage your userspace memory.
Do you just... keep the pages allocated for the heap as it grows until the process is destroyed? Do you expand and shrink it to suit the time and usage? I'd also be interested in the basic concept of how you shrink/expand, if you do so.
Again, I personally like self-contained entities of memory space - completely customized by the developer for whatever use he needs to meet.
Anywho, I'm sleepy, so I shall hop into bed and hopefully dream up a solution.
~Z
Ah, my memory ain't what it used to be ...
My method is almost fully implemented, and is rather complex. I have a function to allocate a block of memory from the OS (actually, mark a virtual address as usable, and it gets demand loaded), and one to free a virtual page (where the OS will free the physical page associated with it, if one was ever allocated). I then store a list of free memory chunks and a list of used memory chunks. If a full page gets de-allocated, I free it. Each chunk points to the next chunk, so there can be holes of unused memory in between (so a page in the middle of my memory that got freed can be given back to the kernel for other uses, rather than relying on freeing memory at the end of the pool!). It's a bit more complex than what I stated, but that is the gist of it: I can now allocate more memory (doesn't even have to be linear), and the memory manager hands it out as requested; when it is freed, if there is a fully free page, it gets freed back up for other use, no matter where its location was. This helps a lot. For example:
Someone loads a huge array to store temporary information loaded from a file. This information is then processed, and a list is generated to store some of the important calculations (an example would be generating normals for a 3D model); afterwards, the first buffer is freed. If I only deallocated memory when the heap end dropped below a threshold, that buffer would never be freed until the model is unloaded. With my method, you can free that memory and leave the model where it is in memory without an issue, and you won't waste as much memory. Of course, these are special cases, and the amount of information I have to store to keep track of all these chunks is a bit more, and the complexity of finding contiguous free pages is a bit more, so allocation and deallocation speed may be sacrificed. I plan on implementing a few versions and doing some benchmarking and memory allocation dumps to check on things like fragmentation and wasted memory (amount of RAM asked for vs. allocated). I have most of my implementation completed; a bit more testing and tweaking, and I'll start working on implementing Doug Lea's malloc or similar to do some testing against.
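One small piece of that bookkeeping is deciding which pages inside a free chunk are actually reclaimable: only pages that lie *entirely* within the chunk can go back to the kernel, since the partial pages at either end may still back neighbouring chunks. A sketch of that calculation (my own helper, not the poster's actual code; page size assumed 4 KiB):

```c
#include <stdint.h>

#define PAGE_SIZE 4096u

/* Given a free chunk [start, end), compute the range of whole pages that
 * lie entirely inside it.  Returns the page count; *first is the first
 * reclaimable page, *last is one past the last.  Returns 0 (outputs
 * untouched) when the chunk spans no complete page. */
static int pages_in_chunk(uintptr_t start, uintptr_t end,
                          uintptr_t *first, uintptr_t *last) {
    uintptr_t lo = (start + PAGE_SIZE - 1) & ~(uintptr_t)(PAGE_SIZE - 1);
    uintptr_t hi = end & ~(uintptr_t)(PAGE_SIZE - 1);
    if (hi <= lo) return 0;
    *first = lo;
    *last  = hi;
    return (int)((hi - lo) / PAGE_SIZE);
}
```

The chunk stays on the free list either way; only its interior pages are handed back, which is exactly what lets a hole in the middle of the pool shrink the process's footprint.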