Microsoft's Task Manager... sigh
- kenneth_phough
- Member
- Posts: 106
- Joined: Sun Sep 18, 2005 11:00 pm
- Location: Williamstown, MA; Worcester, MA; Yokohama, Japan
- Contact:
Microsoft's Task Manager... sigh
I am running a processor-intensive computation, so I set the program to the highest priority and changed my system settings to favor performance, but the program didn't seem to be running. I opened Task Manager and found that System Idle Process is taking 98% of the CPU, and from time to time my program would get a percent, or maybe two. How can I change this and somehow make System Idle Process's priority lower? (Well, that doesn't quite make sense, because it should only run when there is nothing else to do.) I can't seem to find tools that would change this, but personally I don't want one anyway; I like to understand how it happens instead of finding a tool that does things with a flick of a wand. Would anyone know a link that explains these things, or know how I can fix this problem? Microsoft's help wasn't useful, and MSDN was a terrible idea. (I used MSDN because I also wanted to write a tool that would do this for me.)
Thanks in advance,
Kenneth
- Brynet-Inc
- Member
- Posts: 2426
- Joined: Tue Oct 17, 2006 9:29 pm
- Libera.chat IRC: brynet
- Location: Canada
- Contact:
It seems logical to assume that this "Idle Process" is not actually using 98% of the CPU. It's probably stating that the system is 98% idle.
- Colonel Kernel
- Member
- Posts: 1437
- Joined: Tue Oct 17, 2006 6:06 pm
- Location: Vancouver, BC, Canada
- Contact:
I think your program is not as CPU-intensive as you think it is. Are you calling Sleep(), Yield(), or anything similar?
Top three reasons why my OS project died:
- Too much overtime at work
- Got married
- My brain got stuck in an infinite loop while trying to design the memory manager
- Colonel Kernel
- Member
- Posts: 1437
- Joined: Tue Oct 17, 2006 6:06 pm
- Location: Vancouver, BC, Canada
- Contact:
Brynet-Inc wrote: It seems logical to assume that this "Idle Process" is not actually using 98% of the CPU. It's probably stating that the system is 98% idle.
Yes. There is an "idle thread", but it does very little. It zeroes out pages that need zeroing, and then halts the processor. It's part of the kernel, so there isn't really an "Idle Process" per se. Task Manager has a lot of misleading terminology.
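Roughly speaking, the idle thread amounts to something like this (just a sketch; zero_one_free_page() is a hypothetical memory-manager helper, and the hlt assumes x86 with interrupts enabled):
Code: Select all
int zero_one_free_page(void);           /* hypothetical MM helper */

/* The idle thread: zero a free page if any need it, otherwise halt
 * the CPU until the next interrupt arrives. */
void idle_thread(void)
{
    for (;;) {
        if (!zero_one_free_page())
            __asm__ volatile ("hlt");   /* sleep until an interrupt */
    }
}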
Top three reasons why my OS project died:
- Too much overtime at work
- Got married
- My brain got stuck in an infinite loop while trying to design the memory manager
- kenneth_phough
- Member
- Posts: 106
- Joined: Sun Sep 18, 2005 11:00 pm
- Location: Williamstown, MA; Worcester, MA; Yokohama, Japan
- Contact:
No, I don't have those... I bet it's more I/O-intensive. It uses the CD/DVD drive, reads the data, and then copies it onto the hard drive, whether it's a DVD or a CD. It's just that I thought it would be CPU-intensive, because if it finds an error it has to deal with it, and especially because a fair number of errors have been printed to the log and it's been running for approximately 3 hours.
Yours,
Kenneth
- Colonel Kernel
- Member
- Posts: 1437
- Joined: Tue Oct 17, 2006 6:06 pm
- Location: Vancouver, BC, Canada
- Contact:
kenneth_phough wrote: No, I don't have those... I bet it's more I/O-intensive. It uses the CD/DVD drive, reads the data, and then copies it onto the hard drive, whether it's a DVD or a CD. It's just that I thought it would be CPU-intensive, because if it finds an error it has to deal with it, and especially because a fair number of errors have been printed to the log and it's been running for approximately 3 hours.
So, it's I/O-bound, not CPU-bound. Still, 98% idle time means something is very wrong. How much data are you reading at once?
Top three reasons why my OS project died:
- Too much overtime at work
- Got married
- My brain got stuck in an infinite loop while trying to design the memory manager
This does make some sense: your task doesn't use any CPU during I/O; it sleeps until the device finishes, and it should only take a few instructions to set up the next transfer. Low CPU usage is good, because it means less code is spent on the transfer; it means your code is efficient, not broken.
Just my 2%
- kenneth_phough
- Member
- Posts: 106
- Joined: Sun Sep 18, 2005 11:00 pm
- Location: Williamstown, MA; Worcester, MA; Yokohama, Japan
- Contact:
The program should be reading from the drive at full speed, but it's only reading at 1x or 2x. That worries me, especially the fact that the CD is being read slowly enough that it doesn't need to be buffered at all. I would expect the CPU to be used to process the data and rewrite it onto the hard drive, and if it encounters an error the CPU should definitely be used. I'm not sure of the exact transfer rate from CD to memory, but it was definitely not fast.
Also, if I open Internet Explorer the CPU kicks in, but the average CPU usage still does NOT exceed 2% (at most 3%)... Whenever I open Task Manager there are 42 processes in total, and the only ones that seem to get any CPU are System Idle Process, explorer.exe, taskmgr.exe, my program, and Internet Explorer. Part of the reason I started paying attention to my tasks is that I am at the stage of designing the scheduler for my OS, and I was looking at how it's done on other systems. (Windows is not a good example, but it teaches me what not to do.) Also, from time to time I need certain programs to get more of the CPU on Windows than others. I am actually starting to get worried about my computer.
Yours,
Kenneth
- Colonel Kernel
- Member
- Posts: 1437
- Joined: Tue Oct 17, 2006 6:06 pm
- Location: Vancouver, BC, Canada
- Contact:
You didn't answer my question... Your program is repeatedly reading a chunk of data, processing it, then writing it again, right? How big is that chunk of data? 4KB? 4MB? Something else...? The reason I ask is that you may not be making the best use of your CD drive if you're reading too much or too little at once.
Also, to keep things going, you should be using asynchronous I/O and processing data as it gets returned to you. Right now, you're probably doing a blocking read, processing the chunk, then doing a blocking write. This is not going to give you the throughput you want. You should be aiming to keep both the CD drive and hard drive as busy as possible, since they are much slower than the CPU. The limiting factor should be the size of the buffer in memory -- the bigger it is, the higher your throughput should be, up to the maximum transfer speed of your CD drive (which is a lot slower than your hard drive).
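As a sketch of what I mean (POSIX AIO flavoured; on Windows the equivalent would be overlapped I/O with ReadFile and an OVERLAPPED structure; the 1 MB chunk size and the function name are illustrative):
Code: Select all
#include <aio.h>
#include <errno.h>
#include <string.h>
#include <unistd.h>

#define CHUNK (1024 * 1024)            /* illustrative 1 MB transfers */

/* Copy fd_in to fd_out, keeping the next read in flight while the
 * previous chunk is processed and written. */
int copy_async(int fd_in, int fd_out)
{
    static char buf[2][CHUNK];         /* double buffer */
    struct aiocb cb;
    const struct aiocb *list[1] = { &cb };
    off_t offset = 0;
    int cur = 0;

    memset(&cb, 0, sizeof cb);         /* start the first read */
    cb.aio_fildes = fd_in;
    cb.aio_buf    = buf[cur];
    cb.aio_nbytes = CHUNK;
    cb.aio_offset = offset;
    if (aio_read(&cb) == -1)
        return -1;

    for (;;) {
        while (aio_error(&cb) == EINPROGRESS)
            aio_suspend(list, 1, NULL);    /* block until it's done */
        ssize_t n = aio_return(&cb);
        if (n <= 0)                        /* EOF or error */
            return (int)n;
        offset += n;

        int done = cur;                    /* buffer that just filled */
        cur = 1 - cur;
        memset(&cb, 0, sizeof cb);         /* queue the next read... */
        cb.aio_fildes = fd_in;
        cb.aio_buf    = buf[cur];
        cb.aio_nbytes = CHUNK;
        cb.aio_offset = offset;
        if (aio_read(&cb) == -1)
            return -1;

        /* ...while the CPU processes and writes the chunk we have. */
        if (write(fd_out, buf[done], (size_t)n) != n)
            return -1;
    }
}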
Frankly, I think the Windows scheduler has nothing to do with the problem you're having. From an academic standpoint, it's a pretty typical priority-based scheduler. If you want to look at another one that's a little different, read up on QNX' scheduler.
Top three reasons why my OS project died:
- Too much overtime at work
- Got married
- My brain got stuck in an infinite loop while trying to design the memory manager
- kenneth_phough
- Member
- Posts: 106
- Joined: Sun Sep 18, 2005 11:00 pm
- Location: Williamstown, MA; Worcester, MA; Yokohama, Japan
- Contact:
Sorry, I misunderstood your question. I read in 4 KB chunks, in blocks. Every block I read is then (or should be) rewritten to the hard drive as a disk image. So I guess that if I am reading in blocks there is no point in having a buffer, because (obviously) the CPU is considerably faster than the I/O. I will definitely look into asynchronous I/O, thanks!
Are there OSs that use multi-level feedback queue scheduling? That is something I am thinking of implementing for my OS, but I am also thinking of sticking with round-robin scheduling.
Yours,
Kenneth
Hi,
kenneth_phough wrote: Sorry, I misunderstood your question. I read in 4 KB chunks, in blocks. Every block I read is then (or should be) rewritten to the hard drive as a disk image.
Data on CDs is written in a large spiral. If you read 4 KB, then wait, then read 4 KB, then wait, then read 4 KB, etc., it's possible that the CD-ROM's read head will be past the beginning of the next sector, which would make the CD-ROM do a seek before each 4 KB read. If Windows is doing read-ahead then this might not be the case...
Try reading/writing 1 MB at a time and see if your code works faster. If it does make a difference, try 512 KB and 2 MB, and experiment until you find the best transfer size.
BTW, I've tried to use the POSIX functions for asynchronous I/O in some portable code I was experimenting with. It worked very well on Linux (compiled with GCC), but asynchronous I/O wasn't supported on Windows (compiled with MinGW). I'm not sure if the problem was in the libraries I was using with MinGW or somewhere else...
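If you want a quick way to experiment, something like this would do (just a sketch: Linux-flavoured, with a hypothetical /dev/cdrom path and a 32 MB sample per size; on Windows you'd open the raw device differently):
Code: Select all
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <unistd.h>

/* Time sequential reads at several transfer sizes and report MB/s. */
int main(void)
{
    size_t sizes[] = { 512 * 1024, 1024 * 1024, 2 * 1024 * 1024 };
    for (int i = 0; i < 3; i++) {
        int fd = open("/dev/cdrom", O_RDONLY);   /* hypothetical path */
        if (fd < 0) { perror("open"); return 1; }
        char *buf = malloc(sizes[i]);
        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        ssize_t n, total = 0;
        while (total < 32 * 1024 * 1024 &&
               (n = read(fd, buf, sizes[i])) > 0)
            total += n;
        clock_gettime(CLOCK_MONOTONIC, &t1);
        double secs = (t1.tv_sec - t0.tv_sec)
                    + (t1.tv_nsec - t0.tv_nsec) / 1e9;
        printf("%7zu byte reads: %.2f MB/s\n",
               sizes[i], total / (1024.0 * 1024.0) / secs);
        free(buf);
        close(fd);
    }
    return 0;
}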
Cheers,
Brendan
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.
- Colonel Kernel
- Member
- Posts: 1437
- Joined: Tue Oct 17, 2006 6:06 pm
- Location: Vancouver, BC, Canada
- Contact:
kenneth_phough wrote: Are there OSs that use multi-level feedback queue scheduling?
Funny you should ask, as I'm fairly certain that's what Windows uses.
Top three reasons why my OS project died:
- Too much overtime at work
- Got married
- My brain got stuck in an infinite loop while trying to design the memory manager
Hi,
Colonel Kernel wrote: Funny you should ask, as I'm fairly certain that's what Windows uses.
IMHO the problem with multi-level feedback queue scheduling is that it relies on incorrect assumptions. It assumes that an interactive task will handle an event in less than one quantum (i.e. interactive tasks block instead of being preempted). For modern systems (with GUIs instead of CLIs) this doesn't make much sense.
A quantum is typically around 10 ms of CPU time, and it can take an interactive process hundreds of milliseconds to render a frame full of fonts, menus, icons, etc., especially if there's a decent amount of processing involved in determining what goes where (try browsing the Intel manuals with "xpdf" on Linux for an example). This means your interactive tasks get pushed down to lower and lower priorities, which makes them seem even less responsive next to the other tasks.
On the other side of things, you've got things that should be happening in the background but aren't. An example here is compiling a large set of source files. There are two problems: first, there are lots of processes (make, GCC for each source file, AS for each source file, and LD), which all start out at the highest priority. Secondly, they're constantly reading and writing data to/from files in between processing. This can make the processes stay at high priorities.
If you combine these problems (e.g. consider trying to browse the Intel manuals with "xpdf" while compiling the Linux kernel in the background), you end up with the complete opposite of what you want: "xpdf" running at a lower priority while the compiler runs as several high-priority processes.
Why make assumptions to begin with? Why not have a "this process is interactive" flag and use that to determine whether a process is interactive? You could even make the GUI set the flag when the application opens a window (or even have two flags: one for "is interactive" and another for "has focus").
The problem with this approach is that an interactive task can rely on other tasks to do part of the interactive processing. For example, what if an application uses a font renderer that's implemented as a separate task? In that case the font renderer wouldn't have its "is interactive" flag set, and you'd get the same problems with the scheduler.
IMHO the only correct way is to force programmers to specify what priority their tasks are, and to penalise them if priorities aren't set correctly (e.g. give users easily understood information on what priority things are and how much CPU time is being used where, so they can figure out when something isn't right and complain).
This also means implementing a scheduler that does no "behind the scenes" priority adjustment based on faulty assumptions and guesswork (and it's just plain easier to write a scheduler that doesn't bother with dynamic priority adjustment).
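The feedback rule I'm objecting to boils down to something like this (a sketch; the names are illustrative):
Code: Select all
#define LEVELS 8                /* 0 = highest priority */

struct task {
    int level;
    /* ... */
};

/* The guesswork: a task that used its whole quantum is assumed to be
 * CPU-bound and demoted; a task that blocked early is assumed to be
 * interactive and promoted. A renderer that legitimately needs many
 * quanta per frame gets demoted all the same. */
void on_quantum_expired(struct task *t)
{
    if (t->level < LEVELS - 1)
        t->level++;
}

void on_task_blocked(struct task *t)
{
    if (t->level > 0)
        t->level--;
}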
Cheers,
Brendan
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.
- kenneth_phough
- Member
- Posts: 106
- Joined: Sun Sep 18, 2005 11:00 pm
- Location: Williamstown, MA; Worcester, MA; Yokohama, Japan
- Contact:
Thanks, I'll do that! For some reason I thought it had to be read in 4-bit-aligned chunks, so I decided on 4 KB. I guess that's due to too much OS dev... dunno.
About multi-level feedback queue (MFQ) scheduling: I was wondering if it would be realistic to assign processes to categories. For example, font rendering would go under a user interface (UI) category, and any graphical user interface (GUI) related process would also go under the UI category. So every process control block (PCB) would have a UI / background process (BP) / other flag, which would determine whether the process is a UI process, a BP, or some other kind of process. With this, I thought of incorporating your focus flag idea, but only for UI processes.
Now here is what "should" solve the problem... I think... (I've only thought about it for an hour during one of my free periods at school, so I could be terribly wrong). If a UI process with the focus flag set becomes ready, it preempts the currently running process (which cannot be a UI process with the focus flag set) and gets the CPU until it finishes. This should solve the lagging problem. However, that's not all of it. If the UI process with the focus flag set forks another process, the CPU is given to that child process, ignoring whatever is next on the ready queue. Once the child process is done, it gives the CPU back to its parent, which continues executing until it finishes. If there are multiple child processes, I think First-Come, First-Served (FCFS) scheduling is fine within the child process queue (I guess there should be another queue for that).
Otherwise, CPU allocation alternates between UI and BP processes, so the queue will always look like this (excluding UI processes with the focus flag set):
Code: Select all
READY QUEUE:
( p0 | p1 | p2 | p3 | p4 | p5 | p6 | p7 | p8 )
-------------------------------------------------
| UI | BP | UI | BP | UI | BP | UI | BP | UI |
-------------------------------------------------
And as said before, if a UI process with focus enters the above queue, it preempts the currently running process and gets the CPU to itself. I thought that by alternating BPs with UIs we can get fair processing for both, let's say, xpdf and gcc. Also, I forgot to say, UI processes cannot be preempted by anything other than a UI process with the focus flag.
So, returning to the above queue diagram: the first UI process, p0, executes and does not give the CPU to p1 until it terminates. Then p1 executes, but will be preempted after a set time quantum; let's say, just for this example, 4, which would be 40 ms. If p1 terminates in time, p2 executes until it finishes; otherwise p1 is preempted and put at the end of the next-level queue, and p2 starts executing until it finishes. The problem here is keeping the queue in a UI-BP sequence. That I haven't figured out yet and am still thinking about.
All other processes, and BPs that took too long, are queued in the next-level queue, which may use a primitive priority-based scheduling algorithm or simply round-robin (I haven't decided, and haven't thought much about the next level).
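A rough sketch of the dispatch rule I'm imagining (the types and names are only illustrative, and this skips the part I haven't figured out, keeping the queue in UI-BP order):
Code: Select all
enum kind { UI, BP };

struct proc {
    struct proc *next;
    enum kind kind;
    int focused;                 /* focus flag; only meaningful for UI */
};

struct proc *focused_ui;         /* runs immediately whenever non-NULL */
struct proc *ready_head;         /* the alternating UI | BP | ... queue */

/* Pick the next process: a focused UI process preempts everything;
 * otherwise take the queue head. A UI process then runs to completion,
 * while a BP is preempted when its quantum expires. */
struct proc *dispatch(void)
{
    if (focused_ui)
        return focused_ui;
    struct proc *p = ready_head;
    if (p) {
        ready_head = p->next;
        p->next = NULL;
    }
    return p;
}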
Yours,
Kenneth
Hi,
kenneth_phough wrote: About multi-level feedback queue (MFQ) scheduling: I was wondering if it would be realistic to assign processes to categories. For example, font rendering would go under a user interface (UI) category, and any GUI-related process would also go under the UI category. So every process control block (PCB) would have a UI / background process (BP) / other flag, which would determine what kind of process it is. With this, I thought of incorporating your focus flag idea, but only for UI processes.
Some tasks spend a lot of time doing nothing (blocked, waiting for something to happen) and then need to do something quickly (e.g. use 100% of the CPU) when whatever they're waiting for happens. A simple example is an application like a calculator or word processor, which spends ages waiting for the user to press a key or click the mouse. Another example would be a web server waiting for a request for a web page, which needs to send the requested page as soon as it can. That case is mostly the same: there's still a user waiting for the results (even though they're probably on a different computer), and the scheduling considerations are basically the same.
This gives a "spends a lot of time waiting but needs to respond fast" scheduling category for tasks that need the CPU in short bursts.
Background processes are typically the opposite - they don't need to respond to anything quickly, and can use huge amounts of CPU time until they complete (if they complete).
kenneth_phough wrote: Otherwise, CPU allocation alternates between UI and BP processes, so the queue will always look like this (excluding UI processes with the focus flag set):
Code: Select all
READY QUEUE:
( p0 | p1 | p2 | p3 | p4 | p5 | p6 | p7 | p8 )
-------------------------------------------------
| UI | BP | UI | BP | UI | BP | UI | BP | UI |
-------------------------------------------------
And as said before, if a UI process with focus enters the above queue, it preempts the currently running process and gets the CPU to itself. I thought that by alternating BPs with UIs we can get fair processing for both, let's say, xpdf and gcc.
Here's where it doesn't make sense to me. If the CPU could be running a UI process, why would it waste precious time running a BP process?
You don't want fair processing for BPs and UIs - you want the UIs to run whenever they can and the BPs to get any CPU time that's left over.
How about using a multi-level feedback queue without the feedback part? For example, have 4 queues where threads never change between queues (unless the user shifts them, or they ask to be shifted). That way the GUI and task manager could run in the highest-priority queue, normal applications in the second queue, the third queue could be for general processing, and the last queue for background threads. The scheduler would always run tasks from the highest-priority queue it can, so that background threads never slow down general processing threads, general processing threads never slow down application threads, and application threads never slow down the GUI or task manager. The core of it is sketched below.
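Something like this, roughly (a sketch; the types and queue names are illustrative):
Code: Select all
enum { Q_GUI, Q_APP, Q_GENERAL, Q_BACKGROUND, NUM_QUEUES };

struct thread {
    struct thread *next;
    int queue;          /* fixed at creation unless explicitly changed */
};

static struct thread *ready[NUM_QUEUES];   /* head of each ready queue */

/* Always run from the highest-priority non-empty queue; round-robin
 * within a queue by re-appending a preempted thread at its tail. */
struct thread *pick_next(void)
{
    for (int q = Q_GUI; q < NUM_QUEUES; q++) {
        if (ready[q]) {
            struct thread *t = ready[q];
            ready[q] = t->next;
            t->next = NULL;
            return t;
        }
    }
    return NULL;        /* nothing ready: run the idle thread */
}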
Cheers,
Brendan
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.