Returning free memory to the OS

Both from what I've read and what I've observed, most runtime libraries don't tend to take any action to return large free sections of their heap to the OS. Rather, if a chunk of memory is freed, they hang on to it so that they don't have to grab more address space to fulfill future allocations.
In general, this is a fairly good policy. We don't want to make a system call to give up address space that we'll turn around and want right back before our process's timeslice is over. However, it has some downsides. If the system is forced to start swapping to meet the memory demands of the running processes, and is using a least-recently-used replacement strategy, or something that approximates LRU, it may well be the case that the least recently used page on the system is in the middle of a free chunk of the owning process's heap and thus contains only junk data. If our allocator just holds on to free memory without letting the OS know that it's not currently in use, then the OS will write that page to disk, wasting I/O bandwidth. What's worse, whenever memory in that page is next allocated and used, all that junk data will be read right back into memory before the page can actually be used.
The solution I would propose is this: the kernel ABI should include some data structures that a userspace allocator can use to tell the kernel which pages it has previously allocated are currently unused. When a process starts, the runtime allocator initializes these structures and makes a system call to tell the kernel where it's storing them (or the ABI can define a static address for the structures). When the kernel runs out of free RAM, instead of starting to swap, it first checks whether any running processes have pages marked as free in memory. If so, rather than freeing a page frame by swapping its contents to disk, it simply discards the data and remaps that page to a copy-on-write zero-filled page, which makes the page frame available much more quickly. If the allocator allocates memory in that page again, the kernel just obtains a free page frame and uses it to satisfy the copy-on-write, rather than pulling useless data in from disk.
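Roughly, the userspace side of this could look something like the sketch below. The free_page_map layout and the register_free_page_map() system call are invented purely for illustration; nothing here is an existing API.

Code: Select all
/* Hypothetical userspace side: one bit per heap page, where a set bit means
 * "this page holds no live data - discard it instead of swapping it out". */
#include <stdint.h>
#include <string.h>

#define HEAP_PAGES  65536               /* pages covered by the map (example) */
#define PAGE_SHIFT  12                  /* 4 KiB pages */

struct free_page_map {
    uintptr_t heap_base;                /* first page described by the map */
    uint8_t   bits[HEAP_PAGES / 8];     /* 1 = page contents are junk */
};

static struct free_page_map fpm;

/* Hypothetical system call: tell the kernel where the map lives. */
extern int register_free_page_map(struct free_page_map *map);

void allocator_init(uintptr_t heap_base)
{
    fpm.heap_base = heap_base;
    memset(fpm.bits, 0, sizeof fpm.bits);   /* everything "in use" at first */
    register_free_page_map(&fpm);
}

/* free() calls this for each whole page inside a freed chunk. */
void mark_page_unused(uintptr_t addr)
{
    size_t page = (addr - fpm.heap_base) >> PAGE_SHIFT;
    fpm.bits[page / 8] |= (uint8_t)(1u << (page % 8));
}

/* malloc() calls this before handing out memory that overlaps such a page. */
void mark_page_used(uintptr_t addr)
{
    size_t page = (addr - fpm.heap_base) >> PAGE_SHIFT;
    fpm.bits[page / 8] &= (uint8_t)~(1u << (page % 8));
}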
Re: Returning free memory to the OS
That's an interesting idea... Have you also thought of a way for the kernel to notify the process that the page has been taken? And what does the program do when that happens?
Re: Returning free memory to the OS
linguofreak wrote:
Both from what I've read and what I've observed, most runtime libraries don't tend to take any action to return large free sections of their heap to the OS. Rather, if a chunk of memory is freed, they hang on to it so that they don't have to grab more address space to fulfill future allocations.

This may do the job.
Code: Select all
#include <sys/mman.h>   /* posix_madvise(), POSIX_MADV_DONTNEED */
posix_madvise(addr, len, POSIX_MADV_DONTNEED);
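For example (just a sketch; release_pages_in_chunk() is an invented helper and the chunk bookkeeping is simplified), a free() path could advise away the whole pages inside a large freed chunk:

Code: Select all
#include <stdint.h>
#include <sys/mman.h>   /* posix_madvise(), POSIX_MADV_DONTNEED */
#include <unistd.h>     /* sysconf() */

/* Hint to the kernel that the page-aligned interior of a freed chunk
 * contains nothing worth swapping out. */
static void release_pages_in_chunk(void *chunk, size_t len)
{
    uintptr_t page  = (uintptr_t)sysconf(_SC_PAGESIZE);
    uintptr_t start = ((uintptr_t)chunk + page - 1) & ~(page - 1);
    uintptr_t end   = ((uintptr_t)chunk + len) & ~(page - 1);

    if (end > start)
        posix_madvise((void *)start, end - start, POSIX_MADV_DONTNEED);
}

Keep in mind this is only advice as far as POSIX is concerned; on Linux the non-portable madvise() with MADV_DONTNEED (or MADV_FREE on newer kernels) is what actually gives the discard-instead-of-swap behaviour described in the original post.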
If a trainstation is where trains stop, what is a workstation ?
Re: Returning free memory to the OS
Hi,
SoulofDeity wrote:
That's an interesting idea... Have you also thought of a way for the kernel to notify the process that the page has been taken? And what does the program do when that happens?

If the page is truly unused; then the kernel can use "allocate on write" - essentially, map a single physical page full of zeros everywhere as "read only", and then allocate a new page (in the page fault handler) if/when something writes to the page.
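The fault-handler side of that might look roughly like this (all of the helper names - get_mapped_frame(), alloc_page_frame(), map_page(), and so on - are invented for the sketch):

Code: Select all
#include <stdint.h>
#include <string.h>

#define PAGE_SIZE      4096u
#define PAGE_PRESENT   0x1u
#define PAGE_WRITABLE  0x2u

struct address_space;                         /* opaque, kernel-defined */

/* Kernel helpers assumed to exist (names invented for this sketch). */
extern uintptr_t zero_frame;                  /* the shared all-zero frame */
extern uintptr_t get_mapped_frame(struct address_space *as, uintptr_t vaddr);
extern uintptr_t alloc_page_frame(void);
extern void     *phys_to_virt(uintptr_t frame);
extern void      map_page(struct address_space *as, uintptr_t vaddr,
                          uintptr_t frame, unsigned flags);
extern void      deliver_protection_fault(struct address_space *as, uintptr_t vaddr);

void handle_write_fault(struct address_space *as, uintptr_t vaddr)
{
    uintptr_t page = vaddr & ~(uintptr_t)(PAGE_SIZE - 1);

    if (get_mapped_frame(as, page) == zero_frame) {
        /* Allocate on write: give the page a real frame now. */
        uintptr_t frame = alloc_page_frame();
        memset(phys_to_virt(frame), 0, PAGE_SIZE);   /* keep zero-fill semantics */
        map_page(as, page, frame, PAGE_PRESENT | PAGE_WRITABLE);
    } else {
        deliver_protection_fault(as, vaddr);         /* a real protection error */
    }
}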
Of course for pages that are technically "in use" it won't work. For example, if a process is caching something, or has a scroll-back buffer, or keeps data around so the user can "undo", or has some sort of "lazy" code, or does pre-fetching of anything, etc. In these cases the process has to be notified.
You'd also want some sort of "global page priority" scheme. Is the page that a JVM stopped using 1 ms ago (but will probably want again in 2 ms) more or less important than the user's ability to press "back" on their web browser, or the RAM consumed by the file "foo/bar.txt" in the VFS cache, or that lookup table you generated to speed up CRC32 (which could be re-generated if you need it again), or....
This is all simple enough (e.g. you could just send messages out to whoever wanted them saying "free all pages below priority X" and let them sort it out themselves). However; it's at this point you realise that if you want to do anything right you need to forget POSIX ever existed.
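Something like the following could carry that "free all pages below priority X" request (the message format and both callbacks are invented; the point is only that the decision about what to drop stays in the process):

Code: Select all
#include <stdint.h>

/* Invented message format for a global page priority scheme. The kernel
 * broadcasts this when RAM is tight; each receiver drops whatever
 * reclaimable data (caches, undo buffers, prefetched data, ...) it holds
 * below the given priority. */
struct memory_pressure_msg {
    uint32_t min_priority;      /* free anything you hold below this */
    uint64_t pages_wanted;      /* rough amount of RAM the kernel is after */
};

extern uint64_t drop_reclaimable_data(uint32_t below_priority);  /* app-defined */
extern void     reply_pages_freed(uint64_t pages);               /* hypothetical */

void on_memory_pressure(const struct memory_pressure_msg *msg)
{
    uint64_t freed = drop_reclaimable_data(msg->min_priority);
    reply_pages_freed(freed);
}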
Cheers,
Brendan
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.