The following is for paging.
I've been working on a memory manager for a while now (most of the work so far has been gathering various bits of information on the subject and looking into multitasking, for consideration purposes). Anyway, I've seen mention of the following methods for allocating pages:
* Flat Bitmap (search linearly through an array of DWORDs for a free page)
* Super Bitmap (search linearly through a shorthand array of DWORDs for a free page area, then search that area for a free page)
* Stack (from my understanding, keep a fixed array of free pages, pop a page off the stack to get a free page, and push one back on when you're done with it -- I don't quite understand this entirely [note-a])
* Linked List (keep a linked list of free/used pages and read/write the list as pages are allocated and freed [note-b])
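To make the first method concrete, here's a minimal sketch of a flat bitmap allocator (the page count, names, and layout here are my own assumptions, not anyone's actual implementation):

```c
#include <stdint.h>

#define NUM_PAGES     1024              /* pages tracked (assumption) */
#define BITMAP_DWORDS (NUM_PAGES / 32)  /* one bit per page */

static uint32_t page_bitmap[BITMAP_DWORDS]; /* bit set = page in use */

/* Linear search for a clear bit; returns a page index, or -1 if none free. */
int alloc_page(void)
{
    for (int i = 0; i < BITMAP_DWORDS; i++) {
        if (page_bitmap[i] != 0xFFFFFFFFu) {        /* at least one free bit */
            for (int b = 0; b < 32; b++) {
                if (!(page_bitmap[i] & (1u << b))) {
                    page_bitmap[i] |= (1u << b);    /* mark the page used */
                    return i * 32 + b;
                }
            }
        }
    }
    return -1; /* out of physical pages */
}

void free_page(int page)
{
    page_bitmap[page / 32] &= ~(1u << (page % 32));
}
```

The super bitmap variant just adds a second, smaller array in which each bit summarizes one DWORD (or block of DWORDs) of this one, so most of the linear scan can be skipped.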
Question 1
My idea is to use a combination of a stack and a super bitmap. Is this better than using a super bitmap alone?
[note-a]: What happens if I need more room in the stack? I can't simply reallocate it -- that's the reason I need a memory manager in the first place! And I can't just size its array for every possible free page (that would make for a pretty big kernel).
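For reference, here is the stack method as I understand it -- a minimal sketch with the fixed-size array that causes exactly the growth problem described in [note-a]:

```c
#include <stdint.h>

#define MAX_FREE_PAGES 1024                 /* fixed capacity -- the problem */

static uint32_t free_stack[MAX_FREE_PAGES]; /* physical addresses of free pages */
static int top = 0;                         /* number of entries on the stack */

/* Pop a free page off the stack; returns 0 if the stack is empty. */
uint32_t pop_page(void)
{
    if (top == 0)
        return 0;               /* out of free pages */
    return free_stack[--top];
}

/* Push a page back on when it is freed again; -1 if the stack is full. */
int push_page(uint32_t addr)
{
    if (top == MAX_FREE_PAGES)
        return -1;              /* can't happen if the stack is sized right */
    free_stack[top++] = addr;
    return 0;
}
```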
[note-b]: Linked lists are very memory-sensitive by their nature. The problems I can see arising from using "dynamic memory to manage dynamic memory" are stability, efficiency, and final quality. Stability, because you might think the list code is solid while one bug hides in it that you never knew of; then, when you think you're done and move on to, say, the file system, multitasking, or device management, something goes wrong and you end up debugging the wrong system -- not very good. Efficiency, because you'll have to keep an array of all the list items (unsorted, of course; this is just for bookkeeping). What happens when that array is used up? See [note-a] for what I mean. Final quality, because of the problems with trying to make it efficient: you'll likely lose performance keeping memory from fragmenting, or else carry a larger overhead.
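For what it's worth, one common form of the linked-list method avoids the bookkeeping array entirely by storing each link inside the free page itself: the page is free, so its memory is available to hold the node. A minimal sketch of that idea (names are my own):

```c
#include <stddef.h>

/* Each free page holds the address of the next free page in its first bytes. */
static void *free_head = NULL;

/* Add a page (identified by its mapped address) to the free list. */
void freelist_push(void *page)
{
    *(void **)page = free_head;   /* write the link into the free page itself */
    free_head = page;
}

/* Take a page off the free list; NULL when exhausted. */
void *freelist_pop(void)
{
    void *page = free_head;
    if (page != NULL)
        free_head = *(void **)page;
    return page;
}
```

No separate array is needed: the list costs one head pointer in total, and it can't run out of room, since every node lives in a page that is by definition unused.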
The following is for a malloc/free.
The language I'm working on (Aex.Uni Basic) uses a pool system to keep track of allocated strings (ASCII and Unicode). The pool system has several structures in it:
(Main)
* A pool chunk (an 8-bit unsigned count of free slots, an 8-bit unsigned count of used slots, and an array of 32-bit void pointers for the slots)
* A pool (has various characteristics and manages all the pool chunks)
(Creation)
* A pool descriptor (describes a constructor to be called when memory is allocated, a destructor for when it's freed, and the default size for allocated data -- for strings, 512 characters, i.e. 512 or 1024 bytes depending on ASCII or Unicode)
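The descriptor looks roughly like this (a simplified sketch -- the names and the example string descriptor are illustrative, not the actual Aex.Uni Basic code):

```c
#include <stddef.h>
#include <string.h>

/* Hypothetical layout for the pool descriptor described above. */
typedef struct sPoolDescriptor
{
    void   (*ctor)(void *obj);  /* run when a slot is handed out */
    void   (*dtor)(void *obj);  /* run when a slot is freed */
    size_t   defaultSize;       /* default object size in characters */
} sPoolDescriptor;

static void string_ctor(void *obj) { memset(obj, 0, 512); } /* blank the string */
static void string_dtor(void *obj) { (void)obj; }           /* nothing to release */

/* Example: an ASCII string descriptor -- 512 characters = 512 bytes
   (a Unicode descriptor would cover 1024 bytes for the same 512 characters). */
static const sPoolDescriptor asciiStringDesc = { string_ctor, string_dtor, 512 };
```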
I plan on rewriting the pool system to make it more efficient before I release it to the public for free. In terms of a memory manager, though, I figured, "Hey, a pool system wouldn't be that bad!" and set out designing a pool system that could be used as one. I'm not sure the final result still qualifies as a pool system, but I have this:
Code: Select all
// A single allocation (6 bytes, assuming packed structs)
struct sAllocation
{
u16 size; // How big is the allocation? 0 if not allocated
void* ptr; // Pointer to the allocation
};
// A region of allocations (1538 bytes)
struct sRegion
{
u16 numFree; // How many free allocations are there? 256-numFree gives how many are used. (256 doesn't fit in a u8, so this must be a u16.)
sAllocation allocs[256]; // The allocation slots
};
// A region pool designed to fit within one page (3077 bytes; 1 page = 4096 bytes)
struct sPool_4kb
{
u8 numFree; // How many regions are free?
sRegion regions[2]; // Can fit 512 allocations
};
// A region pool designed to fit within two pages (7691 bytes; 2 pages = 8192 bytes)
struct sPool_8kb
{
u8 numFree; // How many regions are free?
sRegion regions[5]; // Can fit 1280 allocations
};
// A region pool designed to fit within four pages (15381 bytes; 4 pages = 16384 bytes)
struct sPool_16kb
{
u8 numFree; // How many regions are free?
sRegion regions[10]; // Can fit 2560 allocations
};
// A region pool designed to fit within eight pages (32299 bytes; 8 pages = 32768 bytes)
struct sPool_32kb
{
u8 numFree; // How many regions are free?
sRegion regions[21]; // Can fit 5376 allocations
};
// The memory manager designed to fit in one page
struct sMemoryManager_4kb
{
sPool_4kb pool; // The memory manager pool
// Fill in with other attributes
};
// The memory manager designed to fit in two pages
struct sMemoryManager_8kb
{
sPool_8kb pool; // The memory manager pool
// Fill in with other attributes
};
// The memory manager designed to fit in four pages
struct sMemoryManager_16kb
{
sPool_16kb pool; // The memory manager pool
// Fill in with other attributes
};
// The memory manager designed to fit in eight pages
struct sMemoryManager_32kb
{
sPool_32kb pool; // The memory manager pool
// Fill in with other attributes
};
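To show how a slot would be claimed, here's a sketch of recording an allocation in the 4 KB pool (packing assumed for the byte counts; the region's numFree must be able to reach 256, so it's a u16 here):

```c
#include <stdint.h>
#include <stddef.h>

typedef uint8_t  u8;
typedef uint16_t u16;

/* The same layout as above, reduced to what this sketch needs. */
typedef struct { u16 size; void *ptr; } sAllocation;
typedef struct { u16 numFree; sAllocation allocs[256]; } sRegion;
typedef struct { u8 numFree; sRegion regions[2]; } sPool_4kb;

/* Mark every slot free before first use. */
void pool_init(sPool_4kb *pool)
{
    pool->numFree = 2;
    for (int r = 0; r < 2; r++) {
        pool->regions[r].numFree = 256;
        for (int s = 0; s < 256; s++)
            pool->regions[r].allocs[s].size = 0;   /* size 0 = slot free */
    }
}

/* Record an allocation in the first free slot; NULL if all 512 are used. */
sAllocation *pool_record(sPool_4kb *pool, u16 size, void *ptr)
{
    for (int r = 0; r < 2; r++) {
        sRegion *reg = &pool->regions[r];
        if (reg->numFree == 0)
            continue;                       /* region full, skip the scan */
        for (int s = 0; s < 256; s++) {
            if (reg->allocs[s].size == 0) { /* found a free slot */
                reg->allocs[s].size = size;
                reg->allocs[s].ptr  = ptr;
                reg->numFree--;
                return &reg->allocs[s];
            }
        }
    }
    return NULL;
}
```

Freeing is the reverse: set the slot's size back to 0 and increment the region's numFree.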
I'd simply allocate some number of pages per application; I'm still deciding how many. Since device drivers and the kernel will share a memory manager, I'll be giving the kernel 8 pages:
Code: Select all
typedef sMemoryManager_32kb sKernelMemoryManager;
Question 2
Is there a better memory management system I could implement? Is there any way to make this one better (in terms of maximum number of allocations -- speed isn't a concern, as I have a few ideas for improving it)?
I've formatted this post to make it easier to read through (e.g. bolding, italicizing, and underlining various sections rather than leaving a large clump of plain text).
Thanks in advance, your input (Including criticisms and random notes) will help.
Cheers,
-naota