Re: Automatic memory management
Posted: Sun Dec 20, 2015 10:44 pm
Hi,
onlyonemac wrote:
Brendan wrote:
The problem is when there's 10 GiB of RAM being wasted and all the disk drives and all the CPUs are idle; where the OS does absolutely nothing to improve "future performance". It should be prefetching. It should also be checking file systems for errors (to avoid the "You can't use your entire computer for 2 frigging hours because all of the file systems haven't been checked for 123 days" idiocy when it is rebooted), and should probably also be optimising disk layout (e.g. de-fragmenting, data de-duplication, compressing files that haven't been used for ages, etc).

File system checks aren't needed if you use a journalling filesystem. I can only assume that your system is mis-configured; since ext4, the startup filesystem checks have been disabled by default.

My main file systems are using ext4. Directly from "/etc/fstab":

Code:
# <fs> <mountpoint> <type> <opts> <dump/pass>
/dev/md0 /boot ext2 noatime 0 2
/dev/md1 / ext4 noatime 0 1
/dev/md2 /home ext4 noatime 0 2
/dev/sdc2 /backup ext3 noatime 0 2
/dev/sdc3 /backup2 ext4 noatime 0 2
/dev/cdrom /mnt/cdrom auto noauto,user 0 0

Note that sometimes (not always) the file system check that happens during boot does find and fix problems in "/" or "/home". Because the computer isn't rebooted often (mostly only when there's a power failure that takes too long for the UPS to cope with), I worry a little about the computer running for 100+ days (between reboots) with nothing checking for problems at all.
onlyonemac wrote:
Also de-fragmentation is not necessary because of techniques in the filesystem (such as delayed allocation) to reduce the effects of fragmentation.

No optimisation of any kind is ever "necessary" (it's always just desirable/beneficial).
onlyonemac wrote:
The only thing I've observed a Linux system doing when idle is swapping RAM to disk if the amount of free RAM is below the threshold, or swapping it back to RAM if there's enough free RAM. Other than that my Linux systems always keep their hard drives idle when not in use, and frankly to me that's a good thing because it means that the machine's ready to spring into action as soon as I need it to, rather than slowing my use of it down because it's still trying to do a whole pile of other disk activity.

I'd rather have a system that's ready to spring into action quickly (without disk IO) than a system that's ready to lurch into action slowly (because of disk IO that could've/should've been avoided by pre-fetching). When the hard drives are not in use (e.g. everything that should be prefetched has been prefetched, etc) the hard drives will be idle.
onlyonemac wrote:
On the other hand, if I walk up to a Windows system that's been sitting idle for the last half an hour, it's always doing so much disk activity that for the first five minutes after sitting down at it I pretty much can't do anything.

That's most likely because of power management (e.g. the OS saving everything to disk so it can enter "sleep mode" and shut down most of the computer; and needing to turn memory and CPUs back on and load RAM contents back from disk when it wakes up). It has nothing at all to do with prefetching or swap in the first place.
onlyonemac wrote:
I'd far rather have my hard drive immediately available to load all the files for the web browser than to have it "prefetching" the email client that actually I'm not going to use because I get my email on my phone. It doesn't matter how quickly it stops doing any background activity when I sit down, it's still going to take a while to realise that I'm needing the hard drive's full bandwidth now, and when we're talking about matters of seconds, any length of time is too long for something that maybe only speeds things up once in a while.

a) If you think having to load files from disk because the OS is crap and didn't prefetch them is "better" then you're a deluded fool.
b) If you think a tiny (~5 ms) delay for the absolute worst possible case (OS busy prefetching the wrong thing at the time and has to stop the prefetch first) is noticeable when the OS has to load an application (and its DLLs, resources, etc) from disk (which will cost several orders of magnitude more than that pathetically tiny delay) then you're a deluded fool.
c) If you think that (for the average case, not some rare pathological case) the completely unnoticeable and irrelevant disadvantage (from b) is more important than the massive advantage (from a) then you're a deluded fool.
onlyonemac wrote:
Brendan wrote:
Of course this means searching for things to free (which is slower than being told what to free when); so essentially it sacrifices performance for the sake of programmer laziness.

It's not just programmer laziness; memory leaks due to forgetting to free pointers are pretty much non-existent (as are segfaults due to attempting to free a pointer twice, or other undefined behaviour when a pointer is mistakenly freed when it is still needed or without being marked as freed). These are all real issues, a pain to debug and even more of a pain to the end user.

Programmers that aren't being lazy use tools (e.g. valgrind) to detect these problems, and then fix them. It's only programmers that are being lazy (that don't avoid, find or fix these problems) that benefit from garbage collection.
Note that for some software (prototypes/experiments/research, tools that are used once and discarded, etc) where development time is far more important than efficiency; you want programmers to be lazy. Being lazy (if/where appropriate) is a good thing, not a bad thing.
Cheers,
Brendan