Re: an idea about Protected Control Transferring
Posted: Thu Oct 23, 2014 7:32 am
Hi,
mallard wrote:
> Brendan wrote:
> > Except now you've got an additional "process scheduler", and processes continually diddling with their process priority and telling the process scheduler when the process' priority changes.
> There's rarely a need to change a process' priority in any system. User-level scheduling doesn't have much of an effect on that. In fact, there's rarely a need to change a thread's priority either. It's usually clear what it should be when the thread is started.

For it to work on single-CPU systems, the process' highest-priority "ready to run" thread would determine the process' current priority. It would be constantly changing as threads block and unblock (otherwise "do the most important thing first" fails). For this case you can optimise a little (e.g. if there are 2 high-priority threads that are ready to run and one blocks, then the process' priority remains the same), but that's largely ignorable because high-priority threads tend to do a small amount of work, so the chance of having 2 or more high-priority threads "ready to run" at the same time is almost zero.
For multi-CPU systems, the "process priority" hack no longer works at all.
Let's not forget that almost all the time, a thread blocks because it has to wait for IO, and a thread unblocks because the IO it was waiting for occurred. This means that kernel code (either device drivers for monolithic systems or IPC for micro-kernels) is always involved; so switching to user space for scheduling, instead of doing it in the kernel while you're already in the kernel to begin with, is completely pointless and saves nothing.
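To see why this gets messy, here's a rough sketch of what the "process priority" hack forces user space to do; sys_set_process_priority() is a made-up name standing in for whatever "tell the kernel my priority" call the design would need, and the rest is only illustrative:

    #include <limits.h>
    #include <stdio.h>

    #define MAX_THREADS 16

    struct uthread {
        int priority;   /* higher value = more important */
        int ready;      /* nonzero = "ready to run" */
    };

    static struct uthread threads[MAX_THREADS];

    /* Stand-in for a hypothetical "tell the kernel this process' priority"
       syscall; the name and the call itself are invented for illustration. */
    static void sys_set_process_priority(int priority)
    {
        printf("kernel told: process priority is now %d\n", priority);
    }

    static int highest_ready_priority(void)
    {
        int best = INT_MIN;
        for (int i = 0; i < MAX_THREADS; i++) {
            if (threads[i].ready && threads[i].priority > best)
                best = threads[i].priority;
        }
        return best;
    }

    /* Must run every time any thread blocks or unblocks - which is the
       constant re-prioritisation (and syscall traffic) described above. */
    static void on_thread_state_change(void)
    {
        sys_set_process_priority(highest_ready_priority());
    }

    int main(void)
    {
        threads[0] = (struct uthread){ .priority = 5, .ready = 1 };
        threads[1] = (struct uthread){ .priority = 9, .ready = 1 };
        on_thread_state_change();   /* process priority becomes 9 */
        threads[1].ready = 0;       /* high priority thread blocks... */
        on_thread_state_change();   /* ...and the priority changes again */
        return 0;
    }

Notice that the notification has to happen on almost every block/unblock, so you're back in the kernel anyway.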
mallard wrote:
> As with just about everything in programming, there's a trade-off. You gain simplicity in the kernel, since you have one "kernel thread" per process, and probably improve the speed of (kernel) context switches (since there are fewer threads to consider).

No, you've still got a scheduler in the kernel. There's little difference between a "process scheduler" and a "thread scheduler" (it's about the same amount of work).
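The "same amount of work" part is easy to see from a toy sketch (the structures here are illustrative, not any real kernel's):

    #include <stdio.h>

    /* An entity the kernel can schedule; whether it represents a process
       or a thread makes no difference to the algorithm below. */
    struct schedulable {
        int priority;
        int runnable;
    };

    /* Pick the highest-priority runnable entity (-1 if everything is idle).
       The loop is identical whether 'items' are processes or threads. */
    static int pick_next(const struct schedulable *items, int n)
    {
        int best = -1;
        for (int i = 0; i < n; i++) {
            if (items[i].runnable &&
                (best < 0 || items[i].priority > items[best].priority))
                best = i;
        }
        return best;
    }

    int main(void)
    {
        struct schedulable tasks[] = {
            { .priority = 3, .runnable = 1 },
            { .priority = 7, .runnable = 0 },
            { .priority = 5, .runnable = 1 },
        };
        printf("next: %d\n", pick_next(tasks, 3));  /* prints "next: 2" */
        return 0;
    }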
mallard wrote:
> You also gain flexibility, since userspace processes are free to decide their own scheduling algorithms without affecting anything else.

Processes gain the flexibility to lie to the kernel (e.g. tell the kernel their highest-priority thread is "extremely high priority" when it's not) so that their threads can hog CPU time. Yay.
mallard wrote:
> It can also be faster, as no system call is needed to switch between threads in the same process.

If no system call is involved (whether it's related to IO or caused by priority changes), then there's no point bothering with different threads - just have one thread instead.
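For reference, this is what a syscall-free switch looks like in practice - a minimal Linux/glibc example using POSIX <ucontext.h>, where the switch itself just saves and restores registers and a stack pointer in user space (which is exactly why it only buys you anything if the threads never need the kernel anyway):

    #include <stdio.h>
    #include <ucontext.h>

    static ucontext_t main_ctx, worker_ctx;
    static char worker_stack[64 * 1024];

    static void worker(void)
    {
        printf("worker: running\n");
        /* returning resumes main_ctx via uc_link - no kernel involvement
           in the switch itself, just register/stack juggling */
    }

    int main(void)
    {
        getcontext(&worker_ctx);                     /* initialise context  */
        worker_ctx.uc_stack.ss_sp   = worker_stack;  /* give it a stack     */
        worker_ctx.uc_stack.ss_size = sizeof worker_stack;
        worker_ctx.uc_link = &main_ctx;              /* where to go on return */
        makecontext(&worker_ctx, worker, 0);

        swapcontext(&main_ctx, &worker_ctx);         /* user-space switch   */
        printf("main: back\n");
        return 0;
    }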
mallard wrote:
> Note that you can have user-level threading even if the OS supports traditional multi-threading. Various programming languages/environments/libraries do so.

Lots of people do lots of silly things for lots of wrong reasons.
mallard wrote:
> Brendan wrote:
> > It's a nice example of an additional "something" that makes user-level scheduling less efficient and even more pointless.
> I know you have (IMHO overly) strong opinions on just about everything related to OS development, but really, most aspects of design are simply a matter of subjective opinion and personal preference. Nobody's forcing you to adopt a particular design. As long as the design is workable, why not just live and let live?

If your sister/daughter/spouse/whatever wanted to try smoking crack and didn't know that it's a bad idea, would you say nothing and laugh while they become a crack addict, or would you try to explain that crack has some disadvantages, so that they can make an informed decision?
Cheers,
Brendan