The "worst case" of the stack-based allocator is given when no memory is allocated, all pages are free and pushed on the stack. But if there is plenty of free memory, it really does not matter how much space the stack consumes.turdus wrote:Correct, but you compare the best case (free pages drops below 1024) with the worst case (fragments exceed 512).
turdus wrote:
And the difference is even bigger if it comes to 64 bit. The stack would consume 256G for full RAM (2^48, 256 TB), while fifo can handle all of it in 4M easily. Scalability matters!

Of course, these numbers are correct. But on the more realistic side, large servers have ~64 GB RAM, which can be managed with a 64 MB stack. That's really cheap.
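The arithmetic behind the 64 MB figure, assuming 4 KiB frames and one 32-bit frame number per free frame as in the sketch above:

Code:
#include <stdint.h>

#define FRAME_SIZE   4096ULL                          /* 4 KiB per frame */
#define RAM_BYTES    (64ULL << 30)                    /* 64 GiB server */
#define NUM_FRAMES   (RAM_BYTES / FRAME_SIZE)         /* = 16M frames */
#define STACK_BYTES  (NUM_FRAMES * sizeof(uint32_t))  /* = 64 MiB */

That is 1/1024 of the RAM it manages, about 0.1%.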
turdus wrote:
Do not forget that the free frame stack (as well as fifo) has to be filled in at boot time when almost every frame is free!

Again, that's right. But to use my server example a second time: filling a 64 MB stack takes almost no time. Larger amounts of memory do require more time, but I prefer to spend that time once at boot, instead of having a garbage collector run all the time to move pages around and merge free blocks.
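Filling the stack is one linear pass over the firmware memory map (a sketch building on the push above; the mem_region struct and its field names are made up for illustration, a real map such as the BIOS E820 map looks different):

Code:
#include <stdint.h>
#include <stddef.h>

#define FRAME_SIZE 4096ULL

struct mem_region { uint64_t base, length; };  /* hypothetical map entry */

extern void free_frame(uint32_t frame);        /* the push from the sketch */

void init_frame_stack(const struct mem_region *map, size_t count)
{
    for (size_t i = 0; i < count; i++) {
        uint64_t first = map[i].base / FRAME_SIZE;
        uint64_t last  = (map[i].base + map[i].length) / FRAME_SIZE;
        for (uint64_t f = first; f < last; f++)
            free_frame((uint32_t)f);           /* one push per free frame */
    }
}

For the 64 MB stack above this is a single sequential 64 MB write, done exactly once.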
Imagine you have such a giant server with several TB of RAM, and you allocate memory in blocks of several hundred MB to 1 GB. As time goes by, your memory gets fragmented, and your allocator has to move frames around in order to merge them. It will take ages to move such large amounts of data, and all of it has to happen while the OS is up and running.