carbonBased wrote:From a priority-based standpoint, however, if a thread with high priority is waiting on a queue, and that queue receives a message, it should be woken and executed as quickly as possible.
Yep, if a message unblocks a thread, the scheduler should update its tables and switch to the highest-priority task available. Whether that really means the kernel jumps straight to the just-reactivated task depends on the priority this task was assigned by its creator. The decision is thus transparent and allows the task's parent to indirectly decide what happens in such situations.
A kernel that tries to reduce the time between two communicating tasks implicitly enforces a policy that is not even visible to the user: it always prefers tasks involved in IPC, although other tasks on the same priority level may also be waiting to run. This behaviour might be reasonable in most cases, but I'm sure there are always situations where a different policy would provide a much more efficient solution (e.g. there's no hurry delivering a message that is only an "ACK: Please stand-by.."). If the policy is part of the scheduler, however, you can't alter it at run-time, even if you know that it's not optimal for your setup.
An abstraction should thus always expose such decisions to its users. The ideal is a well-defined and yet simple mechanism that bases its decisions on the parameters provided (-> separation of mechanism and policy).
I second the idea that a priority-based system should be implemented. Personally, I think all schedulers should support some form of priority-based scheduling, even something basic like a few priority levels.
At least in an interactive system they're more or less indispensable for keeping system latency down. Priorities can however be implemented in very different ways..
Real-time, in my opinion, generally implies embedded.
For hard real-time systems this is probably true, but soft real-time could also make sense on a normal desktop system. As multimedia applications (audio, video, games) process their data at a constant rate, they might also benefit from a time-aware scheduler. This of course doesn't mean that a traditional round-robin system couldn't do the job: I just believe that the results should always be better with a more specialized algorithm..
The O(1) is probably the more relevant portion of my quote above, meaning that you'll want a scheduling system that scales well with many threads. It would be advantageous if the scheduler took N cycles every time, regardless of the number of threads currently running.
If it's only about the switch itself, even my (still imaginary) scheduler should execute in a constant amount of time: it just advances to the next node on the "active" list to get the task's TCB, and thus its registers as well as the context pointer. The main work has to be done when a task blocks/unblocks or a process gets started or killed: the scheduler then edits the active list, incorporating all kinds of data that mostly has to be obtained from lists or trees. Making this portion execute in O(1) is much less trivial, and I actually have some doubts whether it's really worth all the trouble, given that the number of threads is normally not that high. Even on my bloated KDE installation I still only have 67 right now - real-time systems probably tend to have even fewer.
If the scheduler takes more and more time as threads are added to the system, then it becomes more difficult to obtain accurate timing at the application level (which might be controlling a very time-sensitive device).
Well, I guess that these devices would provide something similar to double-buffering to avoid such problems. Otherwise even something as unpredictable as TLB faults or random IRQs could delay the timing.
mrkaktus wrote:And I need to tell that, to this moment I have not got priorities in my task scheduling algorithm. I was thinking about it, but I don't know how to decide which task should have its level upgraded or degraded (what decides on it).
The higher the priority of a task, the smaller its latency will be. A very simple solution would be to give a priority of 0 (highest) to all drivers and lower priorities to all regular applications. An interactive task could for example run on level 1, while a batch application that spends most of its time calculating could be on level 2. With such a setup, interactive tasks (like the shell) may interrupt all batch tasks, and drivers always get served immediately. It's of course possible to extend the design to something more sophisticated..
One thing you should keep an eye on is that your low-priority tasks don't starve eventually. If there are too many high-priority tasks, it may happen that the low-priority ones hardly ever get a chance to run. It might thus be necessary to rebalance the system every now and then: tasks that have been waiting for too long could be assigned a higher priority level, and/or the priority of frequently running tasks could be reduced over time.
regards,
gaf