bewing wrote:I'm wondering if it's possible to use RL memory compression, rather than paging to hard disk, to free up memory pages, under most realworld circumstances.

Of course it's possible, depending on the load. For example, if everything fits in RAM anyway, even a system with no swap space at all would work. The question then becomes how "good" it is for loads that don't fit in RAM.
To work this out, there are many factors that would affect performance:
1) the "working set" of pages that the software is actually using
2) the "total set" of all pages that the software has allocated
3) how quickly pages can be fetched from swap and stored back into swap
4) the compression ratio
Candy wrote:Nope. Typical memory usage would perform fairly bad under memory compression and the only kind of "compression" that would work with RLE would be zeroes - which are handled by only actually giving the process a page when it writes to it.

With RLE, a page full of 0x33 (or any other repeated non-zero byte) would compress just as well as a page full of zeros. Using only RLE wouldn't be as effective as using a better compression algorithm, though...
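To illustrate the point about non-zero fills, here's a minimal RLE sketch (plain C; the (count, value) pair format is just an assumption for the example, not any particular OS's code). A 4 KiB page of 0x33 compresses to exactly the same size as a page of zeros:

Code:
/* Minimal run-length encoder for a 4 KiB page - illustration only.
 * Assumed output format: pairs of (count, value), count 1..255. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define PAGE_SIZE 4096

/* Returns the number of bytes written to 'out' (worst case 2 * PAGE_SIZE). */
static size_t rle_compress(const uint8_t *page, uint8_t *out)
{
    size_t out_len = 0;
    size_t i = 0;

    while (i < PAGE_SIZE) {
        uint8_t value = page[i];
        size_t run = 1;
        while (i + run < PAGE_SIZE && page[i + run] == value && run < 255)
            run++;
        out[out_len++] = (uint8_t)run;   /* run length */
        out[out_len++] = value;          /* byte value */
        i += run;
    }
    return out_len;
}

int main(void)
{
    uint8_t page[PAGE_SIZE], out[2 * PAGE_SIZE];

    memset(page, 0x33, sizeof(page));    /* all 0x33 compresses as well as all zeros */
    printf("page of 0x33 -> %zu bytes\n", rle_compress(page, out));

    memset(page, 0x00, sizeof(page));
    printf("page of 0x00 -> %zu bytes\n", rle_compress(page, out));
    return 0;
}

With this scheme a page of identical bytes collapses to a few dozen bytes no matter which byte it is, while a page of varied data can actually grow, which is one reason a better algorithm (or at least a "store uncompressed" fallback) would be needed in practice.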
Combuster wrote:You could try huffman encoding. It can do fair compression under most circumstances and it encodes/decodes in linear time. The part of the equation that remains unknown is how well it actually performs, i.e. if the extra CPU time is worth the decrease in disk accesses...

I'm guessing that (if it's implemented well) compression/decompression could be much faster than waiting for hard disk seeks and sector transfers. However, the CPU can often do other work while it's waiting for the hard disk (but can't do other things while it's compressing/decompressing). IMHO this means compression/decompression might be better for I/O bound tasks, while hard disk space might be better for CPU bound tasks.
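As a rough back-of-the-envelope comparison, the trade-off is basically "time spent compressing/decompressing a page" versus "time spent seeking and transferring it". Every constant in this sketch is a made-up placeholder (you'd have to measure real hardware), it only shows the shape of the comparison:

Code:
/* Back-of-the-envelope: "compress in RAM" vs "page out to disk".
 * All constants are assumed placeholders - measure on real hardware. */
#include <stdio.h>

#define PAGE_SIZE            4096.0

/* Assumed costs in microseconds - purely illustrative */
#define DISK_SEEK_US         8000.0   /* average seek + rotational latency */
#define DISK_TRANSFER_US      100.0   /* transferring one 4 KiB page       */
#define COMPRESS_US_PER_KB     20.0   /* CPU cost of compressing 1 KiB     */
#define DECOMPRESS_US_PER_KB   10.0   /* CPU cost of decompressing 1 KiB   */

int main(void)
{
    double disk_us       = DISK_SEEK_US + DISK_TRANSFER_US;
    double compress_us   = COMPRESS_US_PER_KB   * (PAGE_SIZE / 1024.0);
    double decompress_us = DECOMPRESS_US_PER_KB * (PAGE_SIZE / 1024.0);

    printf("page out to disk and back in: ~%.0f us (CPU mostly free for other work)\n",
           2.0 * disk_us);
    printf("compress and decompress page: ~%.0f us (CPU busy the whole time)\n",
           compress_us + decompress_us);
    return 0;
}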
jnc100 wrote:The other problem is that using some of your memory as a compressed store reduces the amount of working memory, thus increasing the likelihood that you need to compress/uncompress data, as well as increasing the number of tlb invalidations/refills.

Yes.
edit: this might be useful
For a system with 1 GB of RAM and a compression ratio of 2:1 you'd be able to have a maximum of 2 GB of (compressed) data, but unless the size of the working set is zero you'll be constantly thrashing (compressing and decompressing all the time). If the working set is 512 MB, then you'd be able to have a total of 1.5 GB of data (512 MB uncompressed, plus 1 GB of data compressed into the remaining 512 MB of RAM) without causing thrashing.
For the same system using hard disk space, the total amount of data you can have is 1 GB plus the amount of hard disk space you're using (typically much more than the 2 GB you'd get with compression), and the size of the working set could always be up to 1 GB without thrashing.
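To make the arithmetic above concrete, here's a small sketch (C, illustrative names and formula only; the 4 GB of swap in the disk case is an arbitrary example) that plugs in the same numbers. The model is naive: the working set stays uncompressed, the rest of RAM holds compressed pages, and the disk-backed system is limited only by swap space:

Code:
/* Naive capacity model: how much data fits without thrashing?
 * Numbers are from the example above: 1 GB RAM, 2:1 ratio, 512 MB working set. */
#include <stdio.h>

/* With compression: working set stays uncompressed, the rest of RAM holds
 * compressed pages, so capacity = working_set + (ram - working_set) * ratio. */
static double compressed_capacity_mb(double ram_mb, double working_set_mb, double ratio)
{
    if (working_set_mb >= ram_mb)
        return ram_mb;                 /* no room left for compressed pages */
    return working_set_mb + (ram_mb - working_set_mb) * ratio;
}

/* With disk swap: capacity is RAM plus however much swap space you give it. */
static double disk_capacity_mb(double ram_mb, double swap_mb)
{
    return ram_mb + swap_mb;
}

int main(void)
{
    double ram = 1024.0, working_set = 512.0, ratio = 2.0, swap = 4096.0;

    printf("compression: %.0f MB total without thrashing\n",
           compressed_capacity_mb(ram, working_set, ratio));   /* 1536 MB = 1.5 GB */
    printf("disk swap  : %.0f MB total (working set can grow to all %.0f MB of RAM)\n",
           disk_capacity_mb(ram, swap), ram);
    return 0;
}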
IMHO if I had to choose between compression and hard drive space, then I'd have to choose hard drive space, as it allows for larger working sets and larger total amounts of data (even though I think compression would improve performance in some cases).
However, why do we have to choose one or the other? I can imagine a system that uses both methods to get better characteristics than either method would have by itself....
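A hybrid could look something like the sketch below (toy C, everything in it - names, the fake compressor, the in-memory "disk" - is hypothetical): when a page needs to be evicted, try compressing it into a RAM pool first, and only fall back to writing it out to disk when the pool is full or the page doesn't compress well.

Code:
/* Toy sketch of a hybrid pager: try a compressed RAM pool first, fall back
 * to "disk". Shows only the decision flow, not a real kernel eviction path. */
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

#define PAGE_SIZE 4096
#define POOL_SIZE (64 * 1024)          /* 64 KiB compressed pool for the demo */

static unsigned char pool[POOL_SIZE];
static size_t pool_used = 0;

/* Stand-in for a real compressor: pretend pages of one repeated byte
 * compress to 8 bytes and everything else is incompressible. */
static size_t fake_compress(const unsigned char *page)
{
    for (size_t i = 1; i < PAGE_SIZE; i++)
        if (page[i] != page[0])
            return 0;                   /* "didn't compress well" */
    return 8;
}

static bool swap_write_page(const unsigned char *page)
{
    (void)page;                         /* a real kernel would queue disk I/O here */
    printf("  -> wrote page to disk\n");
    return true;
}

/* Evict one page: prefer the compressed RAM pool, fall back to disk. */
static bool evict_page(const unsigned char *page)
{
    size_t n = fake_compress(page);

    if (n > 0 && pool_used + n <= POOL_SIZE) {
        memcpy(pool + pool_used, page, n);  /* pretend this is the compressed form */
        pool_used += n;
        printf("  -> kept compressed in RAM (%zu bytes)\n", n);
        return true;
    }
    return swap_write_page(page);       /* pool full or poor ratio: use disk */
}

int main(void)
{
    unsigned char compressible[PAGE_SIZE], random_ish[PAGE_SIZE];

    memset(compressible, 0x33, sizeof(compressible));
    for (size_t i = 0; i < PAGE_SIZE; i++)
        random_ish[i] = (unsigned char)(i * 251u);

    printf("evicting a compressible page:\n");
    evict_page(compressible);
    printf("evicting an incompressible page:\n");
    evict_page(random_ish);
    return 0;
}

The 64 KiB pool and the all-one-byte test in fake_compress are arbitrary stand-ins; a real implementation would need a real compressor and a policy for deciding when compression is "good enough" to keep a page in RAM instead of sending it to disk.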
Cheers,
Brendan