Hi all,
I'm not really up to the scheduler construction phase yet (I've done interrupts and am sorting out memory management), i.e. I'm in the very early stages of learning OS development, but I was wondering about schedulers. I put together a little test app of a scheduler and it seems to work fine (in terms of time slice sharing), though it's somewhat limited. The idea is simple:
Keep a single list of all tasks, and for each task store an associated priority (0-6 inclusive at present) and an offset value. Associated with the list there are two arrays of seven values (one per priority level): one representing a mapping (which I'll explain) and one holding the next offset to assign. There is also a value called counter, which is initially 1.
The mapping maps the priorities (0-6) onto prime numbers and 1. Since I want high priority to mean lots of CPU usage, I map as follows:
0->13, 1->11, 2->7... 6->1
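In C that mapping is just a lookup table indexed by priority, something like this (I've filled in the priorities I abbreviated above with the remaining descending primes, so treat the middle entries as illustrative):

    /* priority -> divisor; a task of priority p gets a timeslice once every
       priority_map[p] passes over the task list, so a higher priority number
       (smaller divisor) means more CPU time */
    static const unsigned int priority_map[7] = { 13, 11, 7, 5, 3, 2, 1 };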
My list in the current scheme must contain at least one task (in my case an idle task, with priority 0).
When a task is added, it's placed onto the list and assigned the offset from the array element matching its priority (each element starts at 0); that array element is then incremented.
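As a rough sketch of the add path (the struct layout and the list_append() helper are just placeholders, not my actual code):

    /* next offset to hand out, per priority level */
    static unsigned int next_offset[7];

    struct task {
        unsigned int priority;   /* 0-6 */
        unsigned int offset;     /* assigned when the task is added */
        struct task *next;       /* circular list of all tasks */
        /* ... saved context, stack, etc. ... */
    };

    void add_task(struct task *t)
    {
        t->offset = next_offset[t->priority]++;
        list_append(t);          /* put it on the task list, however that's kept */
    }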
When the scheduler is called, it continues from its current point in the list (initially the start of the list) until it finds a task for which (counter + task.offset) % priority_map[task.priority] == 0. This is done in a loop that steps through the tasks and wraps back to the start when it hits the end of the list; each time it wraps around, counter is incremented. When such a task is found, we switch to it.
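The core loop then looks roughly like this (again a sketch building on the declarations above; list_head and switch_to() stand in for whatever the real list handling and context switch look like):

    static struct task *list_head;   /* first task in the circular list */
    static struct task *current;     /* where the previous search left off */
    static unsigned long counter = 1;

    void schedule(void)
    {
        for (;;) {
            current = current->next;          /* step to the next task */
            if (current == list_head)         /* wrapped around: one full pass done */
                counter++;
            if ((counter + current->offset) % priority_map[current->priority] == 0)
                break;                        /* this task gets the timeslice */
        }
        switch_to(current);                   /* context switch, not shown here */
    }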
Logically it works as follows: for each call to the scheduler, the next appropriate task gets the timeslice. On each pass over the task list, any given task may get 0 or 1 timeslices. Each task is set to get a timeslice every priority_map[priority] passes over the task list.
The reason for using prime numbers as the divisors is that they are pairwise coprime, so tasks of different priorities should be distributed fairly uniformly over the passes of the task list. The reason for the offset is to spread tasks of equal priority more uniformly across passes. These two points are intended to make the cost of calling the scheduler less variable and to give each task time on the CPU at as predictable/regular an interval as possible.
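To give a concrete example: two tasks whose priority maps to the divisor 2 but with offsets 0 and 1 run on alternating passes instead of both landing on the same pass, and a divisor-2 task and a divisor-3 task only coincide every sixth pass (lcm(2,3) = 6), so the number of tasks picked per pass stays fairly even.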
It doesn't factor in various other issues like tasks waiting for events, etc., but that shouldn't affect the overall design.
My question is this: does this sound like a reasonable design for a scheduler, given that it's extremely simple (the scheduler core is 4 lines of code)? Is it equivalent to a well-known type of scheduler, and if so, which one? Do you think this is a ridiculous approach to take, or does it sound OK in your opinion?
Thanks in advance,
Dan
PS: When I get to the point of actually putting a scheduler into the kernel, I'll probably give this a try anyway and see how it works. I was just looking for any critique/advice.