Vista's SuperFetch technology?
Posted: Thu Mar 01, 2007 2:43 am
by muisei
Please read this article
http://www.codinghorror.com/blog/archives/000688.html and tell me what you think of this memory management approach.
Re: Vista's SuperFetch technology?
Posted: Thu Mar 01, 2007 6:20 am
by Brendan
Hi,
The article is correct about free RAM being wasted RAM.
As for SuperFetch, it depends on how well the OS can predict the future. For an office user or server where the same applications and data are used all day, every day, it'd be a huge bonus. For a home user who is unpredictable there's no way SuperFetch can know what will be needed, and it'd probably be a waste of time (or, more literally, a waste of disk head seeks and CPU cycles). For an unpredictable laptop user you'd want to disable SuperFetch to save battery power (let the disk drive sleep for once).
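To make the "predicting the future" part concrete, the general idea can be as simple as keeping per-hour access counts and preloading whatever was historically popular at this time of day. A rough sketch in C - this is not SuperFetch's actual algorithm, and the names (usage_slot, record_access, best_candidate) are invented for illustration:
Code:
/* Hypothetical history-based prefetch prediction - NOT SuperFetch's real
 * algorithm. Count how often each file is opened in each hour-of-day
 * bucket, and let an idle task ask which file is historically the most
 * popular one right now. */
#include <string.h>
#include <time.h>

#define MAX_TRACKED 256
#define HOURS       24

struct usage_slot {
    char     path[260];
    unsigned hits[HOURS];      /* accesses seen in each hour-of-day bucket */
};

static struct usage_slot table[MAX_TRACKED];
static int tracked = 0;

/* Called whenever a file is opened: bump the counter for the current hour. */
void record_access(const char *path)
{
    time_t now = time(NULL);
    int hour = localtime(&now)->tm_hour;

    for (int i = 0; i < tracked; i++) {
        if (strcmp(table[i].path, path) == 0) {
            table[i].hits[hour]++;
            return;
        }
    }
    if (tracked < MAX_TRACKED) {
        strncpy(table[tracked].path, path, sizeof table[tracked].path - 1);
        table[tracked].hits[hour]++;
        tracked++;
    }
}

/* Called from an idle task: the best guess at what to preload next is the
 * file that was historically most popular at this hour of the day. */
const char *best_candidate(void)
{
    time_t now = time(NULL);
    int hour = localtime(&now)->tm_hour;
    int best = -1;
    unsigned best_hits = 0;

    for (int i = 0; i < tracked; i++) {
        if (table[i].hits[hour] > best_hits) {
            best_hits = table[i].hits[hour];
            best = i;
        }
    }
    return best >= 0 ? table[best].path : NULL;
}
Whether anything like this wins or loses depends entirely on how repetitive the user's behaviour is, which is the whole point above.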
Of course people will compare (relatively bloated) Vista to (relatively less bloated) XP on the same hardware and say all the caching doesn't help. To some degree I think Microsoft are trying to fix performance problems caused by having an overcomplicated/bloated mess, with overcomplicated/bloated caching mechanisms.
I'm also wondering if Microsoft have implemented things well - from the "Task Manager" screenshot on the web page it looks like they've got 92 MB of the kernel in swap space while 1277 MB of RAM is being used for caching. You'd think that it'd make sense to keep the kernel in RAM (or strip that 92 MB out of the kernel if it isn't needed).
Cheers,
Brendan
Posted: Thu Mar 01, 2007 7:34 am
by Solar
I could picture it slowing down the boot process (already one hell of a wait with all the tools that have to be loaded by your average Windows), or making for sluggish behaviour when the usage profile changes ("enough of watching DVDs, I'll play a round of Halo 3 now. Why am I dead?"). When something unexpected happens, you can't just throw stuff into memory - you have to de-allocate some first. And then there's the amount of work and bookkeeping involved in playing smarty and predicting the usage profile.
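To spell out the "de-allocate first" part in entirely hypothetical C - the names (page, alloc_page, the states) are made up and nothing here is Windows-specific:
Code:
/* Sketch of why aggressive prefetching puts work on the critical path:
 * if speculatively loaded pages occupy the "free" memory, a demand
 * allocation first has to evict one of them before it can proceed. */
#include <stddef.h>

enum page_state { PAGE_FREE, PAGE_PREFETCHED, PAGE_IN_USE };

struct page {
    enum page_state state;
    /* ... frame number, backing file, dirty flag, etc. ... */
};

#define NUM_PAGES 1024
static struct page pages[NUM_PAGES];

static struct page *find_page(enum page_state s)
{
    for (size_t i = 0; i < NUM_PAGES; i++)
        if (pages[i].state == s)
            return &pages[i];
    return NULL;
}

/* Demand allocation: cheap if a truly free page exists; otherwise the
 * caller (who is actually waiting) pays for evicting a prefetched page
 * whose speculative load was, in hindsight, wasted I/O. */
struct page *alloc_page(void)
{
    struct page *p = find_page(PAGE_FREE);
    if (p == NULL) {
        p = find_page(PAGE_PREFETCHED);
        if (p == NULL)
            return NULL;           /* genuinely out of memory */
        /* drop the prefetched contents here (ideally clean, so no write-back) */
    }
    p->state = PAGE_IN_USE;
    return p;
}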
All this can be solved by smart engineering.
And that is where I start getting afraid, since this is Microsoft we're talking about.
Posted: Thu Mar 01, 2007 11:17 am
by Tyler
Yeah, I am no fan of Microsoft's new line of marketing features. If it starts with "Super" or "Ready", be afraid. It means more Microsoft-specific hardware (a boot-time flash chip and a specialist swap-file flash chip) and lots more system slowdown. I personally have implemented an always-on optimizer for my system that runs at super low priority and performs a different task on each preempt: either file system optimisation above and beyond that of the indexer, some level of pre-request memory loading, etc. But Microsoft's idea is a farce in its infancy. They have already admitted having no idea where this is going at all.
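For the curious, the general shape of such a thing looks roughly like this - a generic sketch rather than my actual implementation, with the maintenance functions left as placeholders:
Code:
/* Generic sketch of an always-on, idle-priority maintenance thread that
 * rotates through a different task on each pass. Uses only the ordinary
 * Win32 thread-priority API; the task bodies are placeholders. */
#include <windows.h>

static void defragment_a_little(void)  { /* ... */ }
static void preload_a_little(void)     { /* ... */ }
static void trim_caches_a_little(void) { /* ... */ }

static DWORD WINAPI optimizer_thread(LPVOID arg)
{
    void (*tasks[])(void) = {
        defragment_a_little, preload_a_little, trim_caches_a_little
    };
    unsigned next = 0;
    (void)arg;

    /* Idle priority: the scheduler only runs this when nothing else wants
       the CPU, so it should stay out of the user's way. */
    SetThreadPriority(GetCurrentThread(), THREAD_PRIORITY_IDLE);

    for (;;) {
        tasks[next]();                               /* a different job each pass */
        next = (next + 1) % (sizeof tasks / sizeof tasks[0]);
        Sleep(1000);                                 /* don't spin when there's no work */
    }
}

int main(void)
{
    HANDLE h = CreateThread(NULL, 0, optimizer_thread, NULL, 0, NULL);
    if (h != NULL)
        WaitForSingleObject(h, INFINITE);            /* demo: just let it run */
    return 0;
}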
Re: Vista's SuperFetch technology?
Posted: Fri Mar 02, 2007 12:43 pm
by Candy
All solutions that get people excited are solutions that don't involve solving the actual problem. (rule)
Either you solve the problem (say, removing all unneeded dependencies and accelerating the actual disk access), but that doesn't sell. Or you make something that fixes it in some cases, that sounds flashy and that appears to work most of the time - and that does sell. Guess what Microsoft does?
Why cache files that aren't being used? What if I want to get some work done, or just want to get something else done? Why not just solve the problem - bloated executables, loading sh*t you don't need - and bash the application developers who can't be bothered?
Posted: Sat Mar 03, 2007 7:54 am
by Kenny
I agree with the above about this new gimmick not actually solving any problems.
The point about a sudden drastic change of usage causing lower than expected performance is a good one, especially on, for example, a gaming machine where each game will try to use as much memory as it can, and predicting which game will be played next and when is impossible. Will it be the case that, when the user returns to the game's menu and the game unloads the current level and sits idle, Windows jumps in and thrashes the disk to preload your second-favourite game?
Also, someone above mentioned that as your machine starts up, Windows is going to fill all available memory with its idea of what is going to be used next. Even if this is a low-priority task, it's still going to cause a lot of disk contention and reduced performance. Logically extending this, the more memory you have in your machine, the more time Windows is going to take to preload applications before it returns to any kind of idle state. Doesn't that go against all rational thought - that increasing the system RAM would reduce performance?
Anyway, my two pence, but if the endless march of technology forces me to install Vista on any machine, this is one "feature" that will be turned off right away.
Posted: Sat Mar 03, 2007 10:01 am
by Brendan
Hi,
Kenny wrote: Also, someone above mentioned that as your machine starts up, Windows is going to fill all available memory with its idea of what is going to be used next. Even if this is a low-priority task, it's still going to cause a lot of disk contention and reduced performance. Logically extending this, the more memory you have in your machine, the more time Windows is going to take to preload applications before it returns to any kind of idle state. Doesn't that go against all rational thought - that increasing the system RAM would reduce performance?
Theoretically, if the disk hardware is using bus mastering, and if I/O requests are prioritised, and if an "in progress" request can be cancelled/postponed quickly when a higher priority request is received, and if the disk drivers are smart enough to know about all dependencies (e.g. a high priority request for disk drive A will cause a low priority request on disk drive B to be cancelled because the disk controller is used by both drives), and if it doesn't cause problems with seek times, and if caching never uses too much RAM (and never causes other pages to be sent to swap as a high priority operation), and if the file system overhead (working out which sectors belong to which file) is insignificant, then there shouldn't be any significant/noticeable overhead caused by this preloading.
That's a large number of ifs though, and I'm not sure if some of it is actually possible. For example, if the disk drive is told to do a multi-sector read of 12 MB, can that read be cancelled part way through, or would a new high priority request need to wait until the entire 12 MB low priority transfer is complete?
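One common way around that last problem (and I don't know whether Vista's drivers do anything like this) is to never issue the whole 12 MB as a single command in the first place: the driver splits low-priority transfers into small chunks and re-checks its queue between chunks, so a high-priority request only ever waits for one chunk. A rough sketch with invented names (io_request, submit_chunk, etc.) and no real hardware access:
Code:
/* Sketch: keep low-priority prefetch I/O preemptible by splitting large
 * transfers into bounded chunks and re-sorting the queue between chunks.
 * Everything here is illustrative; submit_chunk() stands in for actually
 * programming the disk controller. */
#include <stdio.h>
#include <stdint.h>

#define CHUNK_SECTORS 128                      /* 64 KB at 512 bytes/sector */

struct io_request {
    uint64_t lba;                              /* next sector to transfer */
    uint32_t sectors;                          /* sectors still to go */
    int      priority;                         /* higher = more urgent */
    struct io_request *next;
};

static struct io_request *queue_head;          /* sorted, highest priority first */

static void queue_push(struct io_request *r)
{
    struct io_request **p = &queue_head;
    while (*p && (*p)->priority >= r->priority)
        p = &(*p)->next;
    r->next = *p;
    *p = r;
}

static struct io_request *queue_pop(void)
{
    struct io_request *r = queue_head;
    if (r)
        queue_head = r->next;
    return r;
}

static void submit_chunk(int prio, uint64_t lba, uint32_t sectors)
{
    printf("priority %d: read %u sectors at %llu\n",
           prio, (unsigned)sectors, (unsigned long long)lba);
}

/* A 12 MB low-priority read becomes 192 separate 64 KB commands; any
 * high-priority request that arrives in the meantime sorts ahead of the
 * requeued remainder and goes next. */
void disk_service_loop(void)
{
    struct io_request *r;
    while ((r = queue_pop()) != NULL) {
        uint32_t n = r->sectors < CHUNK_SECTORS ? r->sectors : CHUNK_SECTORS;
        submit_chunk(r->priority, r->lba, n);
        r->lba     += n;
        r->sectors -= n;
        if (r->sectors > 0)
            queue_push(r);                     /* requeue the remainder */
    }
}

int main(void)
{
    struct io_request prefetch = { 0, 24576, 0, NULL };      /* 12 MB, low priority */
    struct io_request urgent   = { 1000000, 8, 10, NULL };   /* small, high priority */
    queue_push(&prefetch);
    queue_push(&urgent);    /* in practice this would arrive mid-transfer */
    disk_service_loop();
    return 0;
}
The chunk size is the trade-off: smaller chunks mean lower worst-case latency for the high-priority request, but more per-command overhead and more seeks.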
Cheers,
Brendan
Posted: Sun Mar 11, 2007 11:59 pm
by AndrewAPrice
A crappy web browser: you accidentally click a link to a 4 GB DVD image, then click cancel straight away. The web browser in the background continues to download the entire image, and then deletes it immediately afterwards.