priority scheduling

Questions about which tools to use, bugs, the best way to implement a function, etc. should go here. Don't forget to see if your question is answered in the wiki first! When in doubt, post here.
mblock

priority scheduling

Post by mblock »

hi,

this is my first post here, so -> hello :)

i want to replace my round-robin algorithm with priority scheduling... but i can't find any information about it :(

can you tell me where i can find information about priority scheduling?

thx
mblock
proxy

Re:priority scheduling

Post by proxy »

generally a simplistic way to do it is like this:

have several queues (one for each level; i've seen lots of variations, but let's say 8 for now).

to pick the next task, start with the highest-priority queue; if it has no runnable task, move on to the next queue (if none of them has a runnable task, choose the idle thread).

now the tricky part is coming up with a heuristic for setting a thread's priority; you can have it adjusted at runtime, set by the application, or both.

one way i have been considering: if the process has been preempted (namely it didn't block, it was forced to yield due to the timer interrupt), it likely had more work to do. if this happens a given number of times, it gets bumped up in priority. the mentality being: if it is continually being interrupted, then it is doing a lot of busy work.

here's the problem: starvation... how can i guarantee this won't starve threads that do a lot of IO (and thus block a lot) when someone has a program doing a lot of number crunching?

well, i also have a point where too much non-blocking will count against your priority (kinda being punished for being a cpu hog).

this is just my initial idea, it's hard to tell if it is good for general performance until i have a GUI where it'll really show.
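
A minimal C sketch of the multi-level queue picker and the "bump up when preempted" heuristic described above. The names (run_queue, pick_next, the 8 levels, the threshold of 4 preemptions) are illustrative assumptions, not taken from any particular kernel:

Code: Select all

#include <stddef.h>

#define NUM_LEVELS 8                      /* one FIFO queue per priority level */

struct task {
    struct task *next;
    int priority;                         /* 0..NUM_LEVELS-1, higher runs first */
    int preempt_count;                    /* consecutive timer preemptions */
};

struct task *run_queue[NUM_LEVELS];       /* head of each level's FIFO */
struct task *idle_task;

struct task *pick_next(void)
{
    /* start with the highest-priority queue and work downwards */
    for (int level = NUM_LEVELS - 1; level >= 0; level--) {
        if (run_queue[level] != NULL) {
            struct task *t = run_queue[level];
            run_queue[level] = t->next;   /* pop from the head of the FIFO */
            return t;
        }
    }
    return idle_task;                     /* nothing runnable anywhere */
}

/* the heuristic above: a task preempted often enough gets bumped up a level */
void on_timer_preempt(struct task *t)
{
    if (++t->preempt_count >= 4 && t->priority < NUM_LEVELS - 1) {
        t->priority++;
        t->preempt_count = 0;
    }
}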

proxy
Slasher

Re:priority scheduling

Post by Slasher »

Proxy, your idea is good but backwards. A task that keeps getting forced out of execution (i.e. it does not block) should be moved to a *lower* priority queue, as it will interfere with short interactive tasks that only need a quick burst of CPU time before blocking or finishing their work.
Now, if you are trying to code a system that favours interactive tasks, all new tasks start in the highest queue. If a task keeps getting preempted by the scheduler, it should be moved to the next lower queue.
This is what I've done with my scheduler, and I've seen a huge performance increase. I was using a priority-based round-robin scheduler, which works but could not reliably handle a task stuck in an infinite loop, i.e. as soon as I added such a task for testing purposes, all the other tasks really suffered.
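
The demotion rule described here is the classic multilevel feedback queue behaviour. A hedged sketch with hypothetical names; the promotion-on-block rule at the end is an extra assumption (only demotion is described above) to keep IO-bound tasks near the top:

Code: Select all

#define LOWEST_LEVEL   0
#define HIGHEST_LEVEL  7                       /* new tasks start here */

struct mlfq_task { int level; };

void on_quantum_expired(struct mlfq_task *t)   /* forced out by the timer */
{
    if (t->level > LOWEST_LEVEL)
        t->level--;                            /* CPU hog: demote one level */
}

void on_task_blocked(struct mlfq_task *t)      /* gave up the CPU voluntarily */
{
    if (t->level < HIGHEST_LEVEL)
        t->level++;                            /* interactive: promote back up (assumption) */
}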
Solar

Re:priority scheduling

Post by Solar »

You're both right. ;D

The question is: What do you want to achieve?

If your system is aimed at crunching numbers (e.g. in scientific research), you probably wouldn't want your number-cruncher tasks interrupted just because some other thread wants to do some I/O.

If your system has lots of I/O to handle and only a little computational work (a webserver, for example), you'll want your I/O bursts handled ASAP, and accept the cost of preempting / task switching.

What both of you are describing is feedback scheduling - where priorities get modified based upon feedback from previous runs of the task. Note that this is actually one step beyond priority scheduling. (And here comes my favourite example - AmigaOS scheduling knew priorities, but didn't modify them.)

@ mblock:

Scheduling algorithms are a tricky subject, and our Wiki only scratches the surface. You might want to read up on the subject in the relevant literature, Tanenbaum being the most-quoted author in the English language namespace. ;-)
Every good solution is obvious once you've found it.
distantvoices

Re:priority scheduling

Post by distantvoices »

You wouldn't even need *extra* queues for feedback priority round robin scheduling:

You'd need to keep the queue sorted (e.g. in ascending order), with the processes/threads grouped together by priority. Any preempted process/thread is appended at the bottom of the group it belongs to.

Fetching the next runnable task would then be an O(1) operation, since you just pick the one at the head of the queue.

If a task hogs the CPU too much, I for one would give it a penalty of, say, (actual priority - 10), while an IO-bound task should get (actual priority + 10), both clamped to the valid priority range.
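
A small C sketch of the single sorted run queue described above, under the assumption that higher priority values run first; all names (run_queue, fetch_next, requeue) are hypothetical:

Code: Select all

#include <stddef.h>

struct task {
    struct task *next;
    int priority;                 /* higher value = runs earlier */
};

struct task *run_queue;           /* kept sorted, head = next to run: O(1) fetch */

struct task *fetch_next(void)
{
    struct task *t = run_queue;
    if (t != NULL)
        run_queue = t->next;
    return t;                     /* NULL means "run the idle task" */
}

/* re-insert a preempted task at the bottom of its own priority group
 * (an O(n) walk through the sorted list) */
void requeue(struct task *t)
{
    struct task **link = &run_queue;
    while (*link != NULL && (*link)->priority >= t->priority)
        link = &(*link)->next;
    t->next = *link;
    *link = t;
}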

@Solar: while you are perfectly correct with your considerations, I don't think it is necessary for the number crunchers to get the topmost priority. It *should* always be possible to interact with the system regardless of how many number crunchers are working in the background. In my opinion, *responsiveness* needs to be taken into consideration. Look at your own holy grail: AmigaOS - responsive as hell regardless of the number-crunching workload spinning on the CPU.
... the osdever formerly known as beyond infinity ...
BlueillusionOS iso image
Solar

Re:priority scheduling

Post by Solar »

beyond infinity wrote:
Look at your own holy grail: AmigaOS - responsive as hell regardless of workload spinning around the cpu in form of Number Crunchers.
The reason is simple. Every task was set to priority 0 by default. The Workbench GUI (including the event handling) ran at priority 5, so GUI events preempted whatever task was running at the moment.

Note that there was very little demand for changing priorities on AmigaOS. Screensavers were set to -127 (the lowest priority; -128 was "never runs"), and "free CPU cycle eaters" like RC5 or Seti@Home were set to -1 or -2. Everyone else used 0, and was fine with it.

PS: Two things bogged down even this fine system: Really *heavy* event load - with the kernel preempting so often that most time is spent in the scheduler - and setting tasks to priority 5 or higher - which effectively froze the GUI. ;-)

PPS: Note that disk access did *not* slow down the system, since the Amiga had a dedicated chip for doing the I/O...
Every good solution is obvious once you've found it.
Solar

Re:priority scheduling

Post by Solar »

mblock: Congrats for opening thread #6000, by the way. ;-)
Every good solution is obvious once you've found it.
Pype.Clicker

Re:priority scheduling

Post by Pype.Clicker »

If priorities are static, one must ensure that the job-control processes have top priority no matter what the other processes do (or a misbehaved, heavy program could hang the machine, and believe me, even scientists do not want this :P)

I'd rather advocate k FIFO queues instead of one large sorted queue. The reason is that inserting a new task requires O(N) operations in a sorted queue and only O(1) in the k-FIFO approach.

if k is relatively small compared to N, then even sweeping the k queues until a non-empty one is found is O(1), theoretically speaking. btw, it could be done even quicker if the non-empty queues are additionally organized as a linked list.

Code: Select all

class PriorityScheduler {
    // sketch: Bitset, Queue, Task and the bit helpers are pseudocode
    static const int k = 8;          // number of priority levels
    Bitset notempty;                 // bit i set => queues[i] holds runnable tasks
    Queue  queues[k];                // queues[i].next links to the next non-empty level
    int    curr_priority;            // current highest non-empty level

    void insert(Task task) {
        int priority = task.priority();
        queues[priority].insert(task);
        if (!notempty[priority]) {
            // link this newly non-empty level into the chain of non-empty queues
            int previous = find_lowest_bit_set_in(notempty & mask_of_lower_priorities(priority));
            queues[priority].next = queues[previous].next;
            queues[previous].next = priority;
            notempty[priority] = true;
        }
    }

    Task get() {
        int priority = curr_priority;
        Task t = queues[priority].get();
        if (queues[priority].isempty()) {
            // unlink the now-empty level and advance to the next non-empty one
            notempty[priority] = false;
            curr_priority = queues[priority].next;
        }
        return t;
    }
};
mblock

Re:priority scheduling

Post by mblock »

hi again

i implemented priority-based scheduling with static priorities... but how can i prevent the "starvation" of "low-priority tasks" if a "high-priority task" is in an endless loop... the solution is easy: i must modify the priorities :)
but how? :)

i tried the following:
all tasks have a "quantum" of 100ms... every new task starts with a priority of 128 (the range is 0-255)... if a task uses all of these 100ms its priority is decreased by one, and if a task uses less than 10ms of its time its priority is increased...
so after a few seconds all 'io-tasks' have a priority of 255 and all 'cpu-tasks' have a priority of 0... but i think that this is not the goal of my scheduling :)
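
A rough C sketch of the adjustment rule just described; the struct and function names are hypothetical:

Code: Select all

#define QUANTUM_MS 100

struct task { int priority; };         /* 0..255, higher runs first */

void adjust_priority(struct task *t, int ms_used)
{
    if (ms_used >= QUANTUM_MS && t->priority > 0)
        t->priority--;                 /* burned the whole quantum: looks CPU-bound */
    else if (ms_used < 10 && t->priority < 255)
        t->priority++;                 /* used hardly any time: looks IO-bound */
}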

what else can i do to prevent this starvation?

i could use aging... but i don't know whether this is sooo good... what do you think?

thx :)
mblock
ps: sorry for any mistakes :) my english is not 'perfect' ;D
Brendan

Re:priority scheduling

Post by Brendan »

Hi,
mblock wrote: i implemented priority-based scheduling with static priorities... but how can i prevent the "starvation" of "low-priority tasks" if a "high-priority task" is in an endless loop... the solution is easy: i must modify the priorities :)
but how? :)

i could use aging... but i don't know whether this is sooo good... what do you think?
IMHO ageing is worse. Tasks that use a constant amount of CPU time still end up with a priority of 255 or 0; it just takes longer before they get there. Also consider a task that creates the video for a 3D game - when nothing is moving the video doesn't change, the task does nothing and the task's priority climbs to 255. When the user starts moving around in the game, the task starts using 98% of the CPU time. Eventually the scheduler reduces the task's priority back to 0, but in the meantime other tasks that need high priority may end up with overflowed buffers and the like.

The problem is caused by the scheduler always running the highest-priority task. This type of scheduler I call "co-operative". Changing tasks' priorities makes the problem more complex but doesn't fix it. What you need is a scheduler that doesn't always run the highest-priority task :)

If you had a round-robin scheduler where the task's priority determines how long the task runs, then a task in an endless loop would slow everything else down, but all tasks would still get CPU time. This type of scheduler I call "variable time slice". The problem with this is that high-priority tasks may not get a time slice for ages.

Instead the scheduler could give every task 10 ms and use the task's priority to determine how often it gets a time slice. A high-priority task would get 255 time slices while a low-priority task gets 1 (if both are busy). This I call "variable frequency". The problem is, for tasks that use a lot of CPU time you end up with many more task switches than you need.

Co-operative has very good response times, doesn't cause excessive task switches and behaves badly for CPU hogs - perfect for IO-tasks and user interfaces. Variable time slice has bad response time, doesn't cause excessive task switches and is acceptable for CPU hogs when good response times aren't needed - good for tasks that use lots of CPU time and don't need to be responsive (compilers, idle tasks). Variable frequency is somewhere in the middle - reasonable response times and reasonable for CPU hogs, but not good for overhead.

Combine several different types of scheduling!

This brings me to my scheduler :) It has 4 different "scheduling policies" in addition to task priorities.

The first policy (policy 0) is for extremely high priority tasks that need very quick response times but don't use much CPU time (e.g. IRQ handlers). This policy is co-operative - if a task uses too much time it's terminated.

The second policy is variable frequency, and is used for high priority tasks that need good response times but not much CPU time (e.g. user interfaces).

The third policy is variable time slice and is used for normal priority threads that don't need good response time but use CPU time (e.g. compilers).

The fourth policy (policy 3) is also variable time slice and is used for idle threads.

Code: Select all

findNextTask {
   if(readyPolicy0tasks != 0) findNextPolicy0task;
   else if(readyPolicy1tasks != 0) findNextPolicy1task;
   else if(readyPolicy2tasks != 0) findNextPolicy2task;
   else findNextPolicy3task;
}

findNextPolicy0task {
   //Note: tasks sorted in order of priority when put in stack
   task = task from top of policy 0 stack
   switchTasks(task, policy0qTime)
}

findNextPolicy1task {
   task = nextPolicy1task
   lowestCount = 255
   bestTask = -1
   do {
      task.count--
      if(task.count == 0) {
         task.count = task.priority
         switchTasks(task, 10)
         return
      }
      if(task.count < lowestCount) lowestCount = task.count
      task = task + 1
   } while task != nextPolicy1task

   do {
      task.count = task.count - lowestCount
      if( (task.count == 0) && (bestTask == -1) ) bestTask = task
      task = task + 1
   } while task != nextPolicy1task

   bestTask.count = bestTask.priority
   switchTasks(bestTask, 10)
}

findNextPolicy2task {
   //Note: tasks put on end of queue
   task = task from start of policy 2 queue
   switchTasks(task, 255-task.priority)
}

findNextPolicy3task {
   //Note: tasks put on end of queue
   task = task from start of policy 3 queue
   switchTasks(task, 255-task.priority)
}
I've used this design in all of my kernels for the last 4 years and I've never seen a scheduler that behaves better.


Cheers,

Brendan
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.
Pype.Clicker

Re:priority scheduling

Post by Pype.Clicker »

just in case, you might wish to check the Dictionary of Algorithms and Data Structures ... i guess they certainly have something about priority scheduling :)
How to prevent starvation?
That's a matter of access control, not of scheduling.
- limit the maximum cpu holding time for your high-priority tasks (e.g. each may keep the CPU for a maximum of 10ms)
- check whether MCHT * nb_hprio_tasks < threshold when adding a new task. Refuse the task if the total amount of CPU time for high-priority tasks would get too large (say, above 25% or so) - see the sketch after this list.
- enforce the MCHT: since IRQs will still be received, you can check how long the high-priority task has gone without yielding and doom it to the lower class if it fails to observe the rules.
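
A hedged C sketch of the admission check and MCHT enforcement described above; MCHT_MS, PERIOD_MS, CPU_BUDGET_PERCENT and the task fields are illustrative assumptions:

Code: Select all

#include <stdbool.h>

#define MCHT_MS             10    /* max CPU holding time per high-priority task */
#define PERIOD_MS          100    /* accounting period */
#define CPU_BUDGET_PERCENT  25    /* share of the period reserved for high-prio tasks */

struct task { int ms_since_yield; bool high_priority; };

static int nb_hprio_tasks;

/* admission check: refuse a new high-priority task if the worst-case
 * high-priority load (MCHT * number of tasks) would exceed the budget */
bool admit_hprio_task(void)
{
    int worst_case_ms = MCHT_MS * (nb_hprio_tasks + 1);
    if (worst_case_ms * 100 > PERIOD_MS * CPU_BUDGET_PERCENT)
        return false;
    nb_hprio_tasks++;
    return true;
}

/* enforcement, called from the timer IRQ: a high-priority task that has not
 * yielded within its MCHT is doomed to the lower class */
void timer_tick(struct task *current, int tick_ms)
{
    if (!current->high_priority)
        return;
    current->ms_since_yield += tick_ms;
    if (current->ms_since_yield > MCHT_MS) {
        current->high_priority = false;
        nb_hprio_tasks--;
    }
}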