
Slab vs List of sized portions

Posted: Fri Apr 18, 2014 5:09 pm
by TylerH
From reading Wikipedia, I get the impression that the idea behind slab allocation is caches of blocks of certain sizes, which leads to constant-time allocations in the average case. Is there any benefit lost by abstracting this into a single interface similar to malloc?

In the implementation of malloc, one could have an underlying standard heap allocator plus free lists of blocks of size 4B, 16B, 64B, 256B, etc. Initially, these lists would be empty. Any allocation request would be rounded up to the smallest block size that fits. The lists would be populated as previously alloced blocks are freed. On subsequent allocs, if a block of the matching size is available in its list, it would be removed from the free list and returned to the caller. The underlying allocator is only invoked when a suitable block is not in the free lists. This way, the performance penalty incurred from repeated allocs/frees of the same block size would be avoided.
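Here is a rough sketch of what I mean, just to make the idea concrete. The names (sized_alloc, sized_free) are made up, the backing allocator is plain malloc/free, and the 4B class is widened to sizeof(void *) so a freed block can hold the free-list link in place:

Code:
/* Per-size free lists in front of a general-purpose allocator (sketch). */
#include <stddef.h>
#include <stdlib.h>

#define NUM_CLASSES 4
static const size_t class_sizes[NUM_CLASSES] = { sizeof(void *), 16, 64, 256 };

/* One singly linked free list per size class; a freed block's own
 * memory stores the pointer to the next free block. */
static void *free_lists[NUM_CLASSES];

static int class_for(size_t n)
{
    for (int i = 0; i < NUM_CLASSES; i++)
        if (n <= class_sizes[i])
            return i;
    return -1;                          /* too large: no size class */
}

void *sized_alloc(size_t n)
{
    int c = class_for(n);
    if (c >= 0 && free_lists[c]) {      /* fast path: reuse a cached block */
        void *blk = free_lists[c];
        free_lists[c] = *(void **)blk;  /* unlink it from the free list */
        return blk;
    }
    /* Slow path: fall through to the underlying allocator, rounding
     * the request up to the class size so the block is reusable later. */
    return malloc(c >= 0 ? class_sizes[c] : n);
}

/* The caller passes the same size it allocated with, like a
 * kfree-with-size; a real malloc front end would record it itself. */
void sized_free(void *blk, size_t n)
{
    int c = class_for(n);
    if (c >= 0) {                       /* push onto the matching list */
        *(void **)blk = free_lists[c];
        free_lists[c] = blk;
    } else {
        free(blk);
    }
}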

The line that doesn't make sense is:
"The allocation takes place quickly, because the system builds the objects in advance and readily allocates them from a slab."
The "system builds the objects in advance" sounds like the objects are being constructed before being allocated. I don't see how pre-constructing objects can help. They would be pre-constructed, eventually alloced, used (ie changed), then freed. But after being freed, they would no longer be in the initial state that they will need to be in when alloced again, so they must be constructed again before they can be realloced. The same work is must be done regardless. I don't see how caches result in anything more than average case constant time allocation.