- "Conventional". Tightly linearly indexed according to free chunk size. Coalescing is performed on all adjacent free memory. Used in Windows kernel-space and in the Windows user-space back end (one of two tiers). Linux user-space may use this too, but don't quote me on that. Comes paired with some front end or cache to ameliorate its drawbacks.
- Buddy allocator. Linux uses this for page-sized allocations; no one seems to use it for small allocations.
- Slab/segment-based allocator. This is used in the Windows user-space front end (the LFH, Low Fragmentation Heap) and in the Linux kernel's allocators.
- The biggest drawback of the conventional approach, for me, is that small free gaps end up interspersed between larger busy chunks. The other two allocators solve this in theory by carving small allocations out of bigger blocks, which leaves open the possibility of (eventually) reconstituting the larger block.
- The buddy allocator uses perfectly aligned power-of-two chunks, but unfortunately the chunk headers have to go somewhere, and for small allocations they would probably be stored in-band. That breaks the perfect alignment, though it doesn't make it any worse than the other options. I imagine coalescing can be quite a bit slower, especially compared to the slab/segment variant (the buddy-merge sketch after this list shows the mechanism).
- The slab/segment allocator appears to be everyone's favorite, hence the question. A slab can only be repurposed once every object in it has been freed, but that coalescing is fast. Size classes are usually drawn with broader strokes to encourage it. To me, though, it has one fatal flaw: sporadic durable allocations pin their slabs, causing an excessive amount of external fragmentation under certain usage patterns (which are not inconceivable in practice; the last sketch below puts numbers on it).
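To make the "tightly indexed" part concrete, here is a minimal sketch of how a dlmalloc-style conventional heap might map sizes to exact-fit free-list buckets. The granularity, bucket count, and names are all assumptions for illustration, not any particular implementation:

```c
/* A minimal sketch of tightly size-indexed free lists: one exact-fit
 * bucket per small size class plus a catch-all for everything larger.
 * GRANULARITY and NBUCKETS are made-up parameters. */
#include <stddef.h>
#include <stdio.h>

#define GRANULARITY 16   /* size classes every 16 bytes */
#define NBUCKETS    128  /* exact bins up to ~2 KiB, then a catch-all */

static size_t bucket_index(size_t size)
{
    size_t i = (size + GRANULARITY - 1) / GRANULARITY;
    return i < NBUCKETS ? i : NBUCKETS - 1;
}

int main(void)
{
    /* On free, a chunk would first be coalesced with free physical
     * neighbours, then pushed on the list for its merged size. */
    printf("24 B  -> bucket %zu\n", bucket_index(24));   /* bucket 2    */
    printf("100 B -> bucket %zu\n", bucket_index(100));  /* bucket 7    */
    printf("8 KiB -> bucket %zu\n", bucket_index(8192)); /* catch-all   */
    return 0;
}
```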
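For the buddy allocator, the merge step is just address arithmetic: a block's buddy is its offset XOR-ed with the block size, and merging repeats upward while the buddy is also free. A minimal sketch, with the offsets and orders made up for illustration:

```c
/* Buddy coalescing sketch, assuming a power-of-two heap and offsets
 * relative to the heap base. */
#include <stdint.h>
#include <stdio.h>

/* The buddy of the block at `offset` at a given order (block size 2^order)
 * is offset XOR block size. */
static uintptr_t buddy_of(uintptr_t offset, unsigned order)
{
    return offset ^ ((uintptr_t)1 << order);
}

int main(void)
{
    /* Freeing the 4 KiB block at offset 0x3000: its order-12 buddy is at
     * 0x2000; if that buddy is also free, the two merge into an 8 KiB
     * block at 0x2000, whose order-13 buddy is 0x0000, and so on up. */
    uintptr_t off = 0x3000;
    for (unsigned order = 12; order <= 14; order++) {
        printf("order %u: block 0x%lx, buddy 0x%lx\n",
               order, (unsigned long)off, (unsigned long)buddy_of(off, order));
        off &= ~((uintptr_t)1 << order); /* address of the merged block */
    }
    return 0;
}
```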
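And a toy back-of-the-envelope illustration of the pinning flaw: if one durable object survives in each slab after a burst of allocations, no slab ever becomes empty, so almost none of the memory can be returned to the page allocator. All parameters here are hypothetical:

```c
/* Toy simulation of slab pinning: a burst fills many slabs, then all but
 * one long-lived object per slab is freed. A slab is only reclaimable
 * when completely empty, so every slab stays pinned. */
#include <stdio.h>

#define OBJ_SIZE      64
#define OBJS_PER_SLAB 64          /* one 4 KiB page per slab */

int main(void)
{
    unsigned nslabs = 1024;       /* burst fills 1024 slabs (4 MiB) */

    /* Worst case: one durable object survives in every slab. */
    unsigned long live_bytes = (unsigned long)nslabs * OBJ_SIZE;
    unsigned long held_bytes = (unsigned long)nslabs * OBJS_PER_SLAB * OBJ_SIZE;

    printf("live: %lu KiB\n", live_bytes / 1024);  /* 64 KiB in use     */
    printf("held: %lu KiB\n", held_bytes / 1024);  /* 4096 KiB pinned   */
    return 0;
}
```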
Why do you think the slab allocator was chosen over the buddy allocator in the Linux kernel and for the Windows heap? Was it a performance-based decision, or does it offer improvements with respect to fragmentation (which I fail to see)?