Hi,
mblock wrote:
i implemented priority-based scheduling with static priorities... but how can i prevent the "starvation" of "low-priority tasks" if a "high-priority task" is in an endless loop.... the solution is easy: i must modify the priorities
but how?
i could use aging... but i don't know whether this is sooo good... what do you think?
IMHO ageing isn't a good solution. Tasks that use a constant amount of CPU time still end up with a priority of 255 or 0; it just takes longer before they get there. Also consider a task used to create video for a 3D game - when nothing is moving the video doesn't change, the task does nothing, and the task's priority climbs to 255. When the user starts moving around in the game the task suddenly starts using 98% of CPU time. Eventually the scheduler reduces the task's priority back to 0, but other tasks that need high priority may end up with overflowed buffers and similar problems first.
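Just so we're talking about the same thing, this is roughly what I mean by ageing - only a sketch, with the struct and field names invented for the example, and assuming bigger numbers mean higher priority:

Code:
/* Ageing sketch - "struct task", "current" and "ran_last_tick" are invented
   names for this example; 255 = highest priority. */

#define MAX_PRIORITY 255

struct task {
    unsigned char current;    /* dynamic priority the scheduler actually uses */
    int ran_last_tick;        /* did this task use the CPU during the last tick? */
};

/* Called once per timer tick for every ready task. */
void ageTask(struct task *t)
{
    if (t->ran_last_tick) {
        if (t->current > 0) t->current--;              /* CPU users drift towards 0 */
    } else {
        if (t->current < MAX_PRIORITY) t->current++;   /* waiters drift towards 255 */
    }
}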
The problem is caused by the scheduler always running the highest priority task. This type of scheduler I call "co-operative". Changing tasks' priorities makes the problem more complex but doesn't fix it. What you need is a scheduler that doesn't always run the highest priority task.
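In other words, something like this (again just a sketch - the array and field names are invented for the example):

Code:
/* "Always run the highest priority ready task" */

struct task {
    unsigned char priority;   /* 255 = most important */
    int ready;
};

struct task *pickNext(struct task *tasks, int ntasks)
{
    struct task *best = 0;

    for (int i = 0; i < ntasks; i++) {
        if (tasks[i].ready && (best == 0 || tasks[i].priority > best->priority))
            best = &tasks[i];
    }
    /* If "best" is stuck in an endless loop it wins every time, and every
       lower priority task starves. */
    return best;
}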
If you had a round robin scheduler where the task's priority determines how long the task runs, then a task in an endless loop would slow everything else down, but all tasks would still get CPU time. This type of scheduler I call "variable time slice". The problem with this is that high priority tasks may not get a time slice for ages.
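A rough sketch of that, assuming a circular list of ready tasks (setTimer() and switchTo() are assumed helpers, not from any real kernel):

Code:
/* Round robin with a priority-sized slice. */

struct task {
    unsigned char priority;   /* higher number = longer slice */
    struct task *next;        /* circular list of ready tasks */
};

void setTimer(unsigned ms);       /* assumed: scheduler IRQ fires after "ms" */
void switchTo(struct task *t);    /* assumed: does the actual context switch */

void schedule(struct task **current)
{
    struct task *t = (*current)->next;    /* everyone gets a turn... */
    unsigned slice = 1 + t->priority;     /* ...but not the same length of turn */

    *current = t;
    setTimer(slice);
    switchTo(t);
}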
Instead the scheduler could give every task 10 ms and use the task's priority to determine how often it gets a time slice. A high priority task would get 255 time slices for every 1 a low priority task gets (if both are busy). This I call "variable frequency". The problem is that for tasks that use a lot of CPU time you end up with many more task switches than you need.
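A rough sketch of that idea (the names are invented for the example; the policy 1 pseudocode further down does a similar job with its count field):

Code:
/* Fixed 10 ms slices; priority sets how often a task is picked. */

struct task {
    unsigned char priority;   /* 255 = gets picked most often */
    unsigned char count;      /* slices left in the current round */
    struct task *next;        /* circular list of ready tasks */
};

struct task *pickNext(struct task **cursor)
{
    struct task *start = (*cursor)->next;
    struct task *t = start;

    /* Find the next task that still has slices left in this round. */
    do {
        if (t->count > 0) {
            t->count--;
            *cursor = t;
            return t;             /* runs for a fixed 10 ms */
        }
        t = t->next;
    } while (t != start);

    /* Everyone used their slices - refill the counts and start a new round. */
    do {
        t->count = (t->priority > 0) ? t->priority : 1;
        t = t->next;
    } while (t != start);

    return pickNext(cursor);
}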
Co-operative has very good response times and doesn't cause excessive task switches, but behaves badly with CPU hogs - perfect for IO-bound tasks and user interfaces. Variable time slice has bad response times and doesn't cause excessive task switches, and handles CPU hogs acceptably when good response times aren't needed - good for tasks that use lots of CPU time and don't need to be responsive (compilers, idle tasks). Variable frequency is somewhere in the middle - reasonable response times and reasonable with CPU hogs, but not so good when it comes to task switch overhead.
Combine several different types of scheduling!
This brings me to my scheduler.
It has 4 different "scheduling policies" in addition to task priorities.
The first policy (policy 0) is for extremely high priority tasks that need very quick response times but don't use much CPU time (e.g. IRQ handlers). This policy is co-operative - if a task uses too much time it's terminated.
The second policy is variable frequency, and is used for high priority tasks that need good response times but not much CPU time (e.g. user interfaces).
The third policy is variable time slice and is used for normal priority threads that don't need good response time but use CPU time (e.g. compilers).
The fourth policy (policy 3) is also variable time slice and is used for idle threads.
Code:
findNextTask {
    if(readyPolicy0tasks != 0) findNextPolicy0task;
    else if(readyPolicy1tasks != 0) findNextPolicy1task;
    else if(readyPolicy2tasks != 0) findNextPolicy2task;
    else findNextPolicy3task;
}

findNextPolicy0task {
    //Note: tasks are sorted in order of priority when put on the stack
    task = task from top of policy 0 stack
    switchTasks(task, policy0qTime)
}

findNextPolicy1task {
    task = nextPolicy1task
    lowestCount = 255
    bestTask = -1
    //First pass: decrement every count, and run the first task whose count reaches 0
    do {
        task.count--
        if(task.count == 0) {
            task.count = task.priority
            switchTasks(task, 10)
            return
        }
        if(task.count < lowestCount) lowestCount = task.count
        task = task + 1
    } while(task != nextPolicy1task)
    //Second pass: nothing reached 0, so subtract the lowest count from everyone
    //and run the first task whose count becomes 0
    do {
        task.count = task.count - lowestCount
        if( (task.count == 0) && (bestTask == -1) ) bestTask = task
        task = task + 1
    } while(task != nextPolicy1task)
    bestTask.count = bestTask.priority
    switchTasks(bestTask, 10)
}

findNextPolicy2task {
    //Note: tasks are put on the end of the queue
    task = task from start of policy 2 queue
    switchTasks(task, 255 - task.priority)
}

findNextPolicy3task {
    //Note: tasks are put on the end of the queue
    task = task from start of policy 3 queue
    switchTasks(task, 255 - task.priority)
}
I've used this design in all of my kernels for the last 4 years and I've never seen a scheduler that behaves better.
Cheers,
Brendan