Memory allocation
Posted: Sat Dec 01, 2012 8:41 pm
I've started considering what I should use for a memory allocator in my OS project, and I've come up with a design that I think might work. I doubt that it's as good as more sophisticated designs, especially under multithreaded operation, but it looks like it will be plenty efficient for development.
Each memory block would contain three metadata items: its allocation status, its length, and the length of the previous block. There would probably be some mechanism to pack those values down to limit overhead; I think I can get the memory overhead for a small block down to a single word. The free blocks would be kept as a set of linked lists, each list holding blocks of a single size or of a small range of sizes. The list nodes would be stored inside the free blocks themselves, one node per block. Allocation would be simple: find the list with the smallest block size that will still hold the requested number of bytes, pop a block from it, and, if the block is larger than the requested size by at least two words, split the surplus into a new block that goes back onto the free lists. Freeing a block would involve checking its neighbors to see whether it can be coalesced with either adjacent block, then pushing the (possibly merged) result onto the appropriate free list. If the lists are doubly-linked, coalescing should take constant time, since unlinking a neighbor never needs a list walk.
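To make the bookkeeping concrete, here's a rough C sketch of roughly that scheme: a packed per-block header, power-of-two size classes, allocation with splitting, and coalescing on free. Everything specific here is my own placeholder rather than part of the design above: the names (heap_init, kalloc, kfree, HEAP_SIZE, NUM_LISTS), the power-of-two classes, and the field widths. The header below is two 32-bit words for readability, so squeezing it down to a single word is the packing step mentioned above.

#include <stddef.h>
#include <stdint.h>

#define HEAP_SIZE 4096   /* toy fixed heap; a real kernel would grow it on demand */
#define NUM_LISTS 32     /* one free list per power-of-two size class */

/* Per-block metadata: status, own payload size, previous block's payload size.
 * Eight bytes here; tighter packing is the tuning mentioned above.
 * prev_size == 0 marks the first block in the heap. */
typedef struct header {
    uint32_t used : 1, size : 31;   /* payload bytes, always a multiple of 8 */
    uint32_t prev_size;             /* previous block's payload bytes */
} header_t;

/* Doubly-linked node stored in the payload of every free block. */
typedef struct free_node { struct free_node *prev, *next; } free_node_t;

static _Alignas(8) uint8_t heap[HEAP_SIZE];
static free_node_t *free_lists[NUM_LISTS];

/* Smallest class whose upper bound (1 << idx) covers the given payload size. */
static int size_class(size_t bytes) {
    int idx = 0;
    while ((1u << idx) < bytes && idx < NUM_LISTS - 1) idx++;
    return idx;
}

static header_t *next_block(header_t *h) { return (header_t *)((uint8_t *)(h + 1) + h->size); }
static header_t *prev_block(header_t *h) { return (header_t *)((uint8_t *)h - h->prev_size) - 1; }
static int in_heap(header_t *h)          { return (uint8_t *)h < heap + HEAP_SIZE; }

static void list_push(header_t *h) {
    free_node_t *n = (free_node_t *)(h + 1);        /* node lives in the free payload */
    int idx = size_class(h->size);
    n->prev = NULL;
    n->next = free_lists[idx];
    if (n->next) n->next->prev = n;
    free_lists[idx] = n;
}

static void list_unlink(header_t *h) {              /* O(1) thanks to the back pointer */
    free_node_t *n = (free_node_t *)(h + 1);
    if (n->prev) n->prev->next = n->next;
    else         free_lists[size_class(h->size)] = n->next;
    if (n->next) n->next->prev = n->prev;
}

void heap_init(void) {                              /* seed the heap as one free block */
    header_t *h = (header_t *)heap;
    h->used = 0;
    h->size = HEAP_SIZE - sizeof(header_t);
    h->prev_size = 0;
    list_push(h);
}

void *kalloc(size_t size) {
    if (size < sizeof(free_node_t)) size = sizeof(free_node_t);  /* must fit a node when freed */
    size = (size + 7) & ~(size_t)7;
    for (int idx = size_class(size); idx < NUM_LISTS; idx++) {
        for (free_node_t *n = free_lists[idx]; n; n = n->next) {
            header_t *h = (header_t *)n - 1;
            if (h->size < size) continue;           /* a class holds a small range of sizes */
            list_unlink(h);
            /* Split if the surplus can hold a header plus a minimal free block. */
            if (h->size >= size + sizeof(header_t) + sizeof(free_node_t)) {
                header_t *rest = (header_t *)((uint8_t *)(h + 1) + size);
                rest->used = 0;
                rest->size = h->size - size - sizeof(header_t);
                rest->prev_size = size;
                h->size = size;
                if (in_heap(next_block(rest))) next_block(rest)->prev_size = rest->size;
                list_push(rest);
            }
            h->used = 1;
            return h + 1;
        }
    }
    return NULL;                                    /* nothing big enough */
}

void kfree(void *ptr) {
    header_t *h = (header_t *)ptr - 1;
    h->used = 0;
    header_t *next = next_block(h);
    if (in_heap(next) && !next->used) {             /* absorb the following block */
        list_unlink(next);
        h->size += sizeof(header_t) + next->size;
    }
    if (h->prev_size != 0 && !prev_block(h)->used) {/* merge into the preceding block */
        header_t *prev = prev_block(h);
        list_unlink(prev);
        prev->size += sizeof(header_t) + h->size;
        h = prev;
    }
    next = next_block(h);
    if (in_heap(next)) next->prev_size = h->size;   /* keep the back-link consistent */
    list_push(h);
}

This glosses over alignment guarantees, double-free checks, and growing the heap, but it should show where the constant-time coalescing comes from: the prev_size field finds the left neighbor without a scan, and the doubly-linked lists let either neighbor be unlinked without walking its list.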
I'm unsure how original this method is (probably not very original at all) or whether it would be competitive with other systems, but I can't think of any major flaws in it except for the possibility of heap fragmentation and potential concurrency issues. My OS design would probably emphasize thread-local heaps, so concurrency won't be as great an issue as it would be in other systems, but I don't want a design that leaves threads fighting over locks all the time.
So, what do you think? Is this a decent allocator design, or should I just go with some other design?