Timer And Task Switch

arabasso
Member
Posts: 27
Joined: Thu Jul 10, 2008 9:41 am

Timer And Task Switch

Post by arabasso »

I was using the PIT, and now I've started using the APIC timer. So far no problems, but an interesting question came up.

My kernel supports threads, as well as sleep and mutexes (initially all threads have the same priority). When there is a call to sleep, or to mutex_lock on a mutex that is already held, the kernel blocks the current thread and reschedules to pick another thread that is ready to run.

The problem is that a timer IRQ can arrive soon after a task switch caused by sleep or mutex_lock, forcing another task switch. But what about that thread? What happens to its timeslice?

Code:

  Execution
      |
+----------+ Thread 1
|          |
|          |
|  Time    |
|          |
|          |
+----------+ Thread 2
|          |
|          |
|  Time    |
+----------+ -> [mutex_lock / sleep] -> [reschedule] -> Thread 3
|  Time    |
+----------+ Thread 4
      .
      .
      .
In the case above, thread 2 was interrupted (which is fine), but thread 3 does not get a full timeslice. What should I do? Reset the timer counter on every task switch, or simply ignore it?
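
For reference, the "reset the counter" option would look roughly like this in my reschedule path (just a sketch; timer_set_count(), pick_next_ready_thread() and switch_to() are placeholders for my kernel's own functions):

Code:

#define TIMESLICE_TICKS 20

typedef struct thread thread_t;

/* Placeholders for functions my kernel already has. */
extern thread_t *pick_next_ready_thread(void);
extern void      switch_to(thread_t *next);
extern void      timer_set_count(unsigned ticks);  /* reload the PIT / APIC timer count */

/* The "reset the timer" option: restart the countdown on every reschedule,
   so whichever thread runs next always starts with a full timeslice.       */
void reschedule(void)
{
    thread_t *next = pick_next_ready_thread();

    timer_set_count(TIMESLICE_TICKS);   /* fresh timeslice for the next thread */
    switch_to(next);
}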
Matthew
Member
Posts: 48
Joined: Wed Jul 01, 2009 11:47 am

Re: Timer And Task Switch

Post by Matthew »

Generally, a good idea is to use the Local APIC timer in "one-shot" mode and set the counter every time you need it. Find out the number of ticks that will pass from now until the next important "event" (however you define it) and set the Local APIC timer countdown to that.

Unfortunately, there is no specified relationship between Local APIC ticks and "real time"; the timer generally runs at the frequency of the front-side bus. You can establish the number of Local APIC ticks per second by measuring it between two PIT IRQs during boot.
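
As a rough sketch, the calibration and the per-event arming might look like this (assuming the default xAPIC MMIO base at 0xFEE00000 and the standard register offsets from the Intel manuals; pit_wait_one_interval() is a placeholder for however you wait out a PIT interval of known length during boot):

Code:

#include <stdint.h>

/* Standard local APIC register offsets (from the Intel manuals),
   assuming the default xAPIC MMIO base at 0xFEE00000.             */
#define LAPIC_LVT_TIMER      0x320
#define LAPIC_TIMER_INITCNT  0x380
#define LAPIC_TIMER_CURCNT   0x390
#define LAPIC_TIMER_DIV      0x3E0

#define LAPIC_TIMER_ONESHOT  0x00000000   /* timer mode bits clear = one-shot */
#define TIMER_VECTOR         0x40         /* whichever vector you use */

static volatile uint32_t *const lapic = (volatile uint32_t *)0xFEE00000;

static void     lapic_write(uint32_t reg, uint32_t val) { lapic[reg / 4] = val; }
static uint32_t lapic_read(uint32_t reg)                { return lapic[reg / 4]; }

/* Placeholder: block until a PIT interval of known length (say 10 ms)
   has elapsed - however your boot code does that.                     */
extern void pit_wait_one_interval(void);

uint32_t lapic_ticks_per_interval;

void calibrate_lapic_timer(void)
{
    lapic_write(LAPIC_TIMER_DIV, 0x3);              /* divide by 16          */
    lapic_write(LAPIC_TIMER_INITCNT, 0xFFFFFFFF);   /* start counting down   */

    pit_wait_one_interval();                        /* known real-time delay */

    lapic_ticks_per_interval = 0xFFFFFFFF - lapic_read(LAPIC_TIMER_CURCNT);
    lapic_write(LAPIC_TIMER_INITCNT, 0);            /* a count of 0 stops it */
}

/* Arm one interrupt "ticks" LAPIC ticks from now (one-shot mode). */
void lapic_arm_oneshot(uint32_t ticks)
{
    lapic_write(LAPIC_LVT_TIMER, LAPIC_TIMER_ONESHOT | TIMER_VECTOR);
    lapic_write(LAPIC_TIMER_INITCNT, ticks);
}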
Brendan
Member
Posts: 8561
Joined: Sat Jan 15, 2005 12:00 am
Location: At his keyboard!

Re: Timer And Task Switch

Post by Brendan »

Hi,

There are two cases to consider here (and I think arabasso has only noticed the first so far).

The first case is a task blocking and leaving very little time before the timer IRQ occurs, which causes the next task to get a very short amount of CPU time. Matthew is right - using "one shot" mode fixes this problem completely.

The second case is something occurring that causes a previously blocked task to pre-empt the currently running task. For example, the scheduler might do a task switch to task A, and immediately after the task switch an IRQ from the disk controller might cause task B (which was blocked waiting for disk IO) to pre-empt task A. In this case, even when you're using the timer in "one shot" mode, task A can get an extremely small amount of CPU time.

A few OSs don't have this problem because they don't support pre-emption (and they feel "sluggish" because it takes longer for processes to respond to keypresses, mouse movement, network packets, etc). Some OSs ignore the problem and hope that it works out OK on average. Some OSs take it into account. For a simple example, when a task is pre-empted you could read the remaining count from the timer, calculate how much time the pre-empted task is still owed, and then give the task extra time later to catch up what it missed, ensuring that all tasks end up getting their fair share of CPU time despite pre-emption.
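
A very rough sketch of that book-keeping might look like this (the LAPIC helpers, the register offset, the task structure and its "owed_ticks" field are all placeholders for whatever your kernel actually uses):

Code:

#include <stdint.h>

/* Placeholders for whatever your kernel actually provides. */
#define LAPIC_TIMER_CURCNT 0x390                    /* current-count register   */
#define TIMESLICE_TICKS    100000                   /* a normal slice, in ticks */
extern uint32_t lapic_read(uint32_t reg);           /* read a LAPIC register    */
extern void     lapic_arm_oneshot(uint32_t ticks);  /* arm a one-shot countdown */

typedef struct task {
    uint32_t owed_ticks;    /* CPU time this task is still owed */
    /* ... */
} task_t;

/* Called when "task" is pre-empted before its one-shot countdown expired:
   read how many ticks were left unused and remember them.                 */
void account_preemption(task_t *task)
{
    task->owed_ticks += lapic_read(LAPIC_TIMER_CURCNT);
}

/* Called when "task" is given the CPU: its next slice is the normal length
   plus whatever it is still owed from earlier pre-emptions.                */
void start_timeslice(task_t *task)
{
    lapic_arm_oneshot(TIMESLICE_TICKS + task->owed_ticks);
    task->owed_ticks = 0;
}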


Cheers,

Brendan
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.
arabasso
Member
Posts: 27
Joined: Thu Jul 10, 2008 9:41 am

Re: Timer And Task Switch

Post by arabasso »

Hi, thanks for the answers.

Brendan, regarding the second case, I'm using semaphores to unblock threads that were blocked for IO (in my kernel, threads unblocked by semaphores have higher priority and are more likely to be executed).

As for threads getting a very short amount of CPU time, I don't know whether that really counts as a problem; I would just like to distribute CPU time between threads more fairly.

Since we're talking about timers and task switches, there is another thing I would like to ask about: priorities. Several kernel projects use a time-based scheme, e.g.:

Code:

// Thread initialization

THREAD.time = 20;

...

// Thread scheduler

CURRENT_THREAD.time--;

if (CURRENT_THREAD.time <= 0)
{
    // ... TASK_SWITCH ...
}
This seems to work, but something about it doesn't feel efficient. On each timer IRQ, the CPU pushes the registers onto the stack and jumps to the timer handler. If it is not yet time for a task switch, the CPU pops the registers off the stack and returns to the point where it stopped.

In this case, a lot of time is wasted pushing and popping registers and calling the timer handler unnecessarily (after all, if it isn't time for a task switch yet, all that CPU time was wasted).
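
I suppose that with the one-shot approach suggested above this overhead disappears, because the timer can be armed for the whole timeslice at once and the handler only runs when a switch is really due. Something like this (only a sketch, with placeholder names):

Code:

#include <stdint.h>

#define TIMESLICE_MS 20

/* Placeholders for things my kernel would provide. */
extern uint32_t ticks_per_ms;                       /* from timer calibration   */
extern void     lapic_arm_oneshot(uint32_t ticks);  /* arm a one-shot countdown */
extern void     task_switch(void);

void timer_irq_handler(void)
{
    /* Every timer IRQ now means "timeslice over": arm a fresh slice for
       whoever runs next, then switch. No per-tick decrement, and no
       wasted push/pop on intermediate ticks. (EOI omitted for brevity.) */
    lapic_arm_oneshot(TIMESLICE_MS * ticks_per_ms);
    task_switch();
}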

I thought about implementing, in my kernel, a priority scheme based on the NUMBER OF TIMES a thread gets to run rather than on an amount of TIME. For example, with three threads (T1 = 1, T2 = 1, T3 = 2), the thread with priority 2 would run twice as often as a thread with priority 1:

Code:

+----------+
| Thread 3 |
+----------+
| Thread 1 |
+----------+
| Thread 3 |
+----------+
| Thread 2 |
+----------+
| Thread 3 |
+----------+
| Thread 1 |
+----------+
     .
     .
     .
This scheme has a serious problem though: if there is only one thread at each priority level, they all get the same timeslice. It's just an initial idea... I would like suggestions about this or about another priority scheme.
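
In code, what I have in mind is roughly this (only a sketch, with placeholder names; each thread gets "weight" runs per round of the ready list, so within a round the runs are not interleaved exactly as in the diagram above, but the ratio comes out the same):

Code:

/* Placeholder names; each thread gets "weight" runs per round. */
typedef struct thread {
    int weight;               /* T1 = 1, T2 = 1, T3 = 2, ...         */
    int runs_left;            /* runs remaining in the current round */
    struct thread *next;      /* circular ready list                 */
} thread_t;

extern thread_t *ready_list;  /* some thread in the circular ready list */
extern void switch_to(thread_t *t);

void schedule(void)
{
    thread_t *t = ready_list;

    /* Find a thread that still has runs left in this round. */
    while (t->runs_left == 0)
    {
        t = t->next;
        if (t == ready_list)
        {
            /* Everyone has used up their runs: start a new round. */
            thread_t *i = ready_list;
            do { i->runs_left = i->weight; i = i->next; } while (i != ready_list);
        }
    }

    t->runs_left--;
    ready_list = t->next;     /* continue the round-robin after this thread */
    switch_to(t);
}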

Bye