http://josh.trancesoftware.com/linux/
Contains great explanations.
Could help some of us.
INeo
Documentation of Linux 2.6.8.1 Scheduler
Re:Documentation of Linux 2.6.8.1 Scheduler
imho this fits under QuickLinks at the top...
Re:Documentation of Linux 2.6.8.1 Scheduler
Candy wrote: imho this fits under QuickLinks at the top...
Added.
Re:Documentation of Linux 2.6.8.1 Scheduler
Fairly neat stuff is explained there.
What I don't really get is why the heck they use only two queues for the run queues: one for currently runnable tasks, one for those that have expired their time slice. Wouldn't it suffice to add a task to the tail of the run queue when its time slice expires - or have I got some detail the wrong way?
... the osdever formerly known as beyond infinity ...
BlueillusionOS iso image
Re:Documentation of Linux 2.6.8.1 Scheduler
I added a note in the QT thread on my "favourite" sidenote - licensing. (The doc is FDL and thus "safe" for non-GPL'ers like me, whereas the Linux source code isn't.)
@ BI:
Check chapter 5.3.1 - the active vs. expired queues, if I get it right, are used so that threads giving up their timeslice get preferred treatment over those who use it up. (I might be wrong, this is after a quick sifting-through only.)
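To illustrate what I mean (very much simplified - a single FIFO per array, no priorities, and every name here is made up, so this is not the kernel's code):
Code:
#include <stdbool.h>
#include <stddef.h>

typedef struct task { struct task *next; } task_t;
typedef struct { task_t *head, *tail; } fifo_t;

static fifo_t arrays[2];
static fifo_t *active  = &arrays[0];
static fifo_t *expired = &arrays[1];

static void push(fifo_t *q, task_t *t)
{
    t->next = NULL;
    if (q->tail) q->tail->next = t; else q->head = t;
    q->tail = t;
}

/* the running task stops: where it goes depends on *why* it stopped */
void requeue(task_t *t, bool used_whole_slice)
{
    push(used_whole_slice ? expired : active, t);
}

task_t *pick_next(void)
{
    if (active->head == NULL) {      /* active array drained: swap the two arrays */
        fifo_t *tmp = active;
        active = expired;
        expired = tmp;
    }
    task_t *t = active->head;        /* may still be NULL if nothing is runnable */
    if (t) { active->head = t->next; if (!active->head) active->tail = NULL; }
    return t;
}
So a task that blocks or yields before its slice runs out keeps competing in the active array, while one that burns the whole slice only comes back once the arrays have been swapped - which is the "preferred treatment" part.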
Every good solution is obvious once you've found it.
Re:Documentation of Linux 2.6.8.1 Scheduler
hm ....
Let's iterate over it:
Say we have 8 priorities.
We have two run queues, each of which has (1) a bitmap that tells in which slots of the queue there are runnable tasks, and (2) an array of linked-list queues.
Let's call this:
Code:
typedef unsigned char uchar_t;      /* the "simplified!!!" bitmap type */

typedef struct tcb tcb_t;           /* task control block */
struct tcb {
    tcb_t *next;                    /* link assumed here so the queues can be walked */
    /* ... the rest of the TCB ... */
};

typedef struct tcb_queue{
    tcb_t *head;
    tcb_t *tail;
}tcb_queue_t;

typedef struct runqueue{
    uchar_t bitmap;                 /* simplified!!! one bit per priority level */
    tcb_queue_t queues[8];          /* one FIFO queue per priority */
}runqueue_t;

//then we have
runqueue_t queues[2];
runqueue_t *current = &queues[0];
runqueue_t *expired = &queues[1];
If I understand correctly, they first look up the index of the first bit set in current->bitmap and select the tcb from the queue at that index.
That's the runnable task. It uses up its time slice and gets preempted.
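If I read the document right, the lookup step boils down to something like this (built on the structs above; that bit 0 corresponds to priority 0 and that lower numbers mean higher priority is my assumption):
Code:
#include <strings.h>   /* ffs(): 1-based index of the lowest set bit, 0 if none set */
#include <stddef.h>

tcb_t *pick_next(runqueue_t *rq)
{
    int idx = ffs(rq->bitmap);           /* bit 0 == priority 0 == highest here */
    if (idx == 0)
        return NULL;                     /* no runnable task in this array */
    return rq->queues[idx - 1].head;     /* head of the highest-priority non-empty queue */
}
Called as pick_next(current); if it returns NULL, the expired array gets its turn (see further down).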
Take the task out of the current runqueue; if that queue is now empty, clear its bit. If the task is to be scheduled again, put it into the corresponding expired queue and set the bit in expired->bitmap if that queue was previously empty.
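The bookkeeping for that could look roughly like this (same simplified structs as above, and it assumes the task being expired sits at the head of its active queue):
Code:
void expire_task(tcb_t *t, int prio)
{
    tcb_queue_t *q = &current->queues[prio];

    /* unlink t from its active queue (assumed to be the head) */
    q->head = t->next;
    if (q->head == NULL) {
        q->tail = NULL;
        current->bitmap &= (uchar_t)~(1u << prio);   /* queue ran dry: clear its bit */
    }

    /* append it to the matching expired queue */
    t->next = NULL;
    if (expired->queues[prio].tail != NULL)
        expired->queues[prio].tail->next = t;
    else
        expired->queues[prio].head = t;
    expired->queues[prio].tail = t;

    expired->bitmap |= (uchar_t)(1u << prio);        /* mark that queue as non-empty */
}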
In the course of this, recalculate the task's new timeslice and possibly give it a bonus or penalty (is it a CPU hog, or does it yield nicely and request user input / data from other hardware often?).
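No idea what formula they actually use for that, so the following is only meant to illustrate the bonus/penalty idea - the constants and the per-task sleep/run counters are completely made up:
Code:
enum { BASE_SLICE = 20, MAX_BONUS = 5 };    /* made-up numbers, in timer ticks */

/* sleep_ticks / run_ticks would be counters kept in the TCB */
int new_timeslice(unsigned sleep_ticks, unsigned run_ticks)
{
    int slice = BASE_SLICE;
    if (sleep_ticks > run_ticks)
        slice += MAX_BONUS;      /* yields nicely / waits for input or I/O a lot */
    else if (run_ticks > 2 * sleep_ticks)
        slice -= MAX_BONUS;      /* burns its whole slice every time: a CPU hog */
    return slice;
}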
If there are no more tasks in current (the bitmap is empty, all queue heads are NULL), exchange current with expired and the game starts from the beginning: fetch the highest-priority task and so forth.
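The nice bit being that the exchange is just a pointer swap, nothing gets copied around:
Code:
if (current->bitmap == 0) {      /* active array completely drained */
    runqueue_t *tmp = current;
    current = expired;           /* the expired tasks become runnable again */
    expired = tmp;               /* the now-empty array collects new expirees */
}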
Hm. Not sure if I'm understanding this correctly. Does this permit driver tasks or services to preempt ordinary user processes/tasks, so that no user task gets the CPU as long as system-crucial tasks/processes require it? I'm speaking from a microkernel point of view.
... the osdever formerly known as beyond infinity ...
BlueillusionOS iso image