Slab allocator design decisions
Posted: Thu May 06, 2010 6:53 am
Hi!
I have been thinking about having a slab allocator in my kernel so that I don't have to use the kernel heap for everything. After some research and thought, the plan I came up with is:
1. Split the memory into different regions:
- 0 - 512 MB - low & kernel areas used for PD and PT allocation; modules will be loaded into this region (position independent), and maybe something else (basically, this is the page and page-range allocator)
- 512 - 768 MB - slab allocator area used for slab caches, which can be constructed at runtime (the slab allocator would use the heap to keep track of caches and slabs)
- 768 - 1024 MB - kernel heap area
- > 1024 MB - user space (split up in its own way, I guess)
2. Set up the heap
3. Create slab caches as needed. Eventually, this could be specified in a configuration file (cache sizes, total RAM available to the allocator, and so on). A rough sketch of the layout and cache API follows below.
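To make the plan concrete, here is a minimal C sketch of the region boundaries and a cache descriptor. All names here (KERNEL_AREA_BASE, kmem_cache, and so on) are hypothetical, and the interface loosely follows Bonwick's slab paper in simplified form (no alignment, constructor, or destructor arguments):

Code:
/* Illustrative only -- region constants and a simplified cache API. */
#include <stddef.h>

/* Region boundaries from step 1 (512 MB = 0x20000000, and so on). */
#define KERNEL_AREA_BASE  0x00000000u  /*   0 - 512 MB: PD/PT + module pages */
#define SLAB_AREA_BASE    0x20000000u  /* 512 - 768 MB: slab caches          */
#define HEAP_AREA_BASE    0x30000000u  /* 768 MB - 1 GB: kernel heap         */
#define USER_BASE         0x40000000u  /* above 1 GB: user space             */

/* Cache bookkeeping lives on the heap; the slabs themselves are carved
 * out of the slab area. */
struct slab;                        /* one page-sized run of equal objects */
struct kmem_cache {
    const char  *name;              /* e.g. "thread_handle"         */
    size_t       obj_size;          /* bytes per object             */
    struct slab *partial;           /* slabs with free objects left */
};

struct kmem_cache *kmem_cache_create(const char *name, size_t obj_size);
void *kmem_cache_alloc(struct kmem_cache *cp);
void  kmem_cache_free(struct kmem_cache *cp, void *obj);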
But some things are still unclear:
1. Do I actually need 256 MB for the slab caches, or could I use 64 MB for instance (or even less, maybe)?
2. Is a 256 MB heap too small? This is one of my main concerns (along with #5).
3. How big should an individual cache be (how many objects would I be able to allocate from it)? For instance, a 4 kB slab of 64-byte objects holds at most 64 of them, so this is really a question of how many slabs a cache should get.
4. Let's say a file handle and a thread handle are the same size. Do I create separate caches and separate allocation functions for the two, or do I use a single cache twice the size? (See the sketch after this list.)
5. Is the kernel area big enough to hold drivers and modules? Or should I split the kernel area into an area for PDs and PTs and a module area (starting at 128 MB, for example) to reduce fragmentation?
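To make question 4 concrete, the two options might look like this, reusing the hypothetical interface from the sketch above (struct file and struct thread are just stand-ins for the real handle types):

Code:
struct file   { int fd;  /* ... */ };   /* placeholder handle types */
struct thread { int tid; /* ... */ };

static struct kmem_cache *file_cache, *thread_cache, *size64_cache;

void caches_init(void)
{
    /* Option A: one cache per type, even though the sizes match.
     * Keeps accounting/debugging separate and leaves room for
     * type-specific constructors later. */
    file_cache   = kmem_cache_create("file_handle",   sizeof(struct file));
    thread_cache = kmem_cache_create("thread_handle", sizeof(struct thread));

    /* Option B: one shared size-class cache ("a single cache twice
     * the size") -- fewer caches to manage, but objects of different
     * types end up mixed within the same slabs. */
    size64_cache = kmem_cache_create("size-64", 64);
}

(From what I've read, Bonwick-style kernels tend to do both: dedicated caches for hot object types plus generic size-class caches behind the general-purpose allocator.)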
Currently (i.e., before this redesign), the heap area starts at 512 MB and is 512 MB - 4 kB in size; everything else is the same.
How do you guys implement your slab allocators and split up your memory? Basically, what I want to know is whether this design is a solid one and what could be done to improve it.
Thanks,
rJah