Implementing Delay

Questions about which tools to use, bugs, the best way to implement a function, etc. should go here. Don't forget to check whether your question is answered in the wiki first! When in doubt, post here.
Chandra
Member
Posts: 487
Joined: Sat Jul 17, 2010 12:45 am

Implementing Delay

Post by Chandra »

I have implemented delay in my kernel using the PIT, but this doesn't seem like a good idea because it is going to affect the multitasking system. So I was wondering whether someone has a clean way of implementing a delay (up to 5 seconds would be fine). If anyone has code, or at least a link, please post it here. Thanks in advance.
Programming is not about using a language to solve a problem, it's about using logic to find a solution !
JamesM
Member
Posts: 2935
Joined: Tue Jul 10, 2007 5:27 am
Location: York, United Kingdom
Contact:

Re: Implementing Delay

Post by JamesM »

Hi,

You've got it right the first time - the way to implement delays is to use the PIT or the APIC timer.
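To make the PIT approach concrete, here is a minimal sketch of how the channel 0 reload value is usually derived from a desired tick rate. The base frequency of 1193182 Hz is the standard PIT input clock; the function name `pit_divisor` is just illustrative, and the actual port writes to 0x43/0x40 that arm the timer are hardware-specific and not shown.

```c
#include <stdint.h>

#define PIT_BASE_HZ 1193182u  /* standard PIT input clock frequency */

/* Compute the 16-bit reload value for PIT channel 0 so that it fires
 * approximately `hz` times per second. The hardware treats a written
 * value of 0 as 65536, which is the slowest possible rate (~18.2 Hz). */
static uint16_t pit_divisor(uint32_t hz)
{
    uint32_t div = PIT_BASE_HZ / hz;
    if (div > 65535)
        div = 65536;          /* clamp; truncates to 0 below */
    return (uint16_t)div;
}
```

With a 100 Hz tick, for example, the reload value comes out to 11931, giving a tick period of about 10.00069 ms - close to, but not exactly, 10 ms, which is why long delays measured in PIT ticks drift slightly.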

Why would using the PIT impact your multitasking subsystem?

James
xyzzy
Member
Posts: 391
Joined: Wed Jul 25, 2007 8:45 am
Libera.chat IRC: aejsmith
Location: London, UK
Contact:

Re: Implementing Delay

Post by xyzzy »

Using the PIT for delays shouldn't interfere with your multitasking system, as long as you don't have the two systems programming the PIT differently or something. You need a central system for using the timer. For example, my kernel has a timer system that allows the scheduler and delay functions to be notified when a time period passes. After each timer tick, the handler checks if any registered timers have expired, and if so, calls their handler functions.
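The central timer system described above might be sketched like this, under assumed names (`timer_register`, `timer_tick`, a fixed-size timer table): the single PIT IRQ handler calls `timer_tick()`, and both the scheduler and delay functions register callbacks instead of touching the PIT themselves.

```c
#include <stddef.h>
#include <stdint.h>

#define MAX_TIMERS 16

struct ktimer {
    uint64_t expires;             /* tick count at which to fire */
    void (*handler)(void *data);  /* called when the timer expires */
    void *data;
    int active;
};

static struct ktimer timers[MAX_TIMERS];
static uint64_t ticks;

/* Register a one-shot timer that fires `delay` ticks from now.
 * Returns 0 on success, -1 if the table is full. */
static int timer_register(uint64_t delay, void (*handler)(void *), void *data)
{
    for (size_t i = 0; i < MAX_TIMERS; i++) {
        if (!timers[i].active) {
            timers[i].expires = ticks + delay;
            timers[i].handler = handler;
            timers[i].data = data;
            timers[i].active = 1;
            return 0;
        }
    }
    return -1;
}

/* Called from the PIT IRQ handler: advance time and fire expired timers. */
static void timer_tick(void)
{
    ticks++;
    for (size_t i = 0; i < MAX_TIMERS; i++) {
        if (timers[i].active && ticks >= timers[i].expires) {
            timers[i].active = 0;
            timers[i].handler(timers[i].data);
        }
    }
}

/* Example handler: sets a flag so a waiter can see the delay expired. */
static int demo_fired;
static void demo_handler(void *data) { (void)data; demo_fired = 1; }
```

A `sleep()` built on this would register a handler that wakes the sleeping task; the scheduler's preemption logic registers its own periodic timer through the same interface, so the two never fight over the hardware.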

Alternatively, you could implement delays with a spin loop, using RDTSC to figure out when the time period has passed. However, spinning is quite inefficient for long delays.
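A spin-loop delay along those lines might look like the following sketch (x86-specific, using inline assembly for RDTSC). Note that the caller must already know the TSC frequency - e.g. by calibrating it against the PIT at boot - to convert a wall-clock delay into a cycle count; that calibration step is not shown.

```c
#include <stdint.h>

/* Read the CPU's time-stamp counter via the x86 RDTSC instruction. */
static inline uint64_t rdtsc(void)
{
    uint32_t lo, hi;
    __asm__ volatile("rdtsc" : "=a"(lo), "=d"(hi));
    return ((uint64_t)hi << 32) | lo;
}

/* Busy-wait until at least `cycles` TSC ticks have elapsed. */
static void spin_delay(uint64_t cycles)
{
    uint64_t start = rdtsc();
    while (rdtsc() - start < cycles)
        __asm__ volatile("pause");  /* tell the CPU we are spin-waiting */
}
```

As noted above, this burns a full CPU for the whole delay, so it is only appropriate for very short waits (microseconds), not a 5-second sleep.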
skyking
Member
Posts: 174
Joined: Sun Jan 06, 2008 8:41 am

Re: Implementing Delay

Post by skyking »

I assume you ran into the problem that you already use the timer for preemption. In that case the trick is to put the calling process to sleep and leave it there until the desired number of ticks has elapsed.

Another way around it is to write a virtual timer layer that lets you set an arbitrary number of timers. By keeping a list of future timeouts, you can reprogram the PIT to trigger at the next timeout in the list whenever either the PIT fires or a timeout is added that is earlier than the currently armed one. You can then use the other PIT channel as a reference so that you don't accumulate drift when you reprogram the first one.
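The virtual-timer idea above can be sketched as follows. The list of absolute deadlines is kept sorted, and `program_oneshot()` is a hypothetical stand-in for actually writing a one-shot reload value to the PIT; the drift-compensation via the second channel is omitted.

```c
#include <stdint.h>
#include <stddef.h>

#define MAX_TIMEOUTS 16

static uint64_t pending[MAX_TIMEOUTS]; /* absolute deadlines, ascending */
static size_t npending;
static uint64_t programmed;            /* deadline currently armed */

/* Stand-in for arming the hardware one-shot timer. */
static void program_oneshot(uint64_t deadline)
{
    programmed = deadline; /* real code: compute and write a PIT count */
}

/* Insert a deadline in sorted order; rearm if it is now the earliest. */
static void timeout_add(uint64_t deadline)
{
    size_t i = npending++;
    while (i > 0 && pending[i - 1] > deadline) {
        pending[i] = pending[i - 1];
        i--;
    }
    pending[i] = deadline;
    if (i == 0)
        program_oneshot(deadline); /* new earliest: reprogram hardware */
}

/* Called when the one-shot fires: drop the expired head, arm the next. */
static void timeout_fire(void)
{
    if (npending == 0)
        return;
    for (size_t i = 1; i < npending; i++)
        pending[i - 1] = pending[i];
    npending--;
    if (npending > 0)
        program_oneshot(pending[0]);
}
```

The key property is that the hardware is always armed for exactly the earliest pending deadline, so adding a hundred software timers still costs only one hardware timer.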
Chandra
Member
Posts: 487
Joined: Sat Jul 17, 2010 12:45 am

Re: Implementing Delay

Post by Chandra »

Thanks everyone for your posts.
JamesM wrote: Why would using the PIT impact your multitasking subsystem?
I was just checking whether it could really affect the multitasking system. It's always better to be sure before implementing anything. Actually, I have a question in mind: can a significant delay interval (at most 5 seconds) be updated when the OS switches between processes? To make things clear: if one of the processes introduces 1 second of delay, how am I supposed to update that process's remaining delay the next time the OS switches to it?
Sorry for asking so many questions, but I wanted to clarify things before proceeding to write a fairly neat OS. Thanks all once again.
Programming is not about using a language to solve a problem, it's about using logic to find a solution !
Brendan
Member
Posts: 8561
Joined: Sat Jan 15, 2005 12:00 am
Location: At his keyboard!
Contact:

Re: Implementing Delay

Post by Brendan »

Hi,
Chandra wrote: Can a significant delay interval (at most 5 seconds) be updated when the OS switches between processes? To make things clear: if one of the processes introduces 1 second of delay, how am I supposed to update that process's remaining delay the next time the OS switches to it?
You don't want to do something like "while( current_time < start_time + delay_time) { /* wait */ }" where lots of CPU time is wasted for no reason. Instead you want to do something like "sleep(delay_time);" where the CPU executes other processes (or tries to save power/heat if there's nothing else to run) until the delay expires.

To do something like "sleep(delay_time);", the kernel would calculate what time the task should wake up, then put the task onto some sort of queue, and change the task's state to "sleeping" and remove the task from whatever data structure the scheduler uses to decide which task to give CPU time to next. For the PIT (or any other timer), when the IRQ occurs you update the current time, then check for any sleeping tasks that should be woken up. If any sleeping tasks are meant to wake up you remove them from the "sleep queue", change their state to "running" and put them back into whatever data structure the scheduler uses to decide which task to give CPU time to next.
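The sleep/wake flow described above can be sketched like this. The names (`task_sleep`, `timer_irq`, the `struct task` fields) are illustrative; a real kernel would also remove the sleeping task from the run queue and perform an actual context switch, which is omitted here.

```c
#include <stdint.h>
#include <stddef.h>

enum task_state { TASK_RUNNING, TASK_SLEEPING };

struct task {
    enum task_state state;
    uint64_t wake_at;   /* tick count at which to wake, if sleeping */
};

#define NTASKS 4
static struct task tasks[NTASKS];
static uint64_t current_tick;

/* sleep(delay_ticks): record the wake-up time and mark the task
 * unrunnable. The scheduler then simply never picks sleeping tasks. */
static void task_sleep(struct task *t, uint64_t delay_ticks)
{
    t->wake_at = current_tick + delay_ticks;
    t->state = TASK_SLEEPING;
}

/* PIT IRQ handler: advance the current time, then wake every sleeping
 * task whose deadline has passed by making it runnable again. */
static void timer_irq(void)
{
    current_tick++;
    for (size_t i = 0; i < NTASKS; i++) {
        if (tasks[i].state == TASK_SLEEPING &&
            current_tick >= tasks[i].wake_at)
            tasks[i].state = TASK_RUNNING;
    }
}
```

This also answers the question about context switches: nothing needs to be "updated" per switch, because the wake-up time is stored as an absolute tick count, not a countdown that has to be decremented while the process runs.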

That's the basic idea anyway - there's plenty of implementation details I skipped (like keeping the "sleep queue" sorted) and a lot of different ways to optimise it, like using timers with better precision (e.g. HPET) if available, using multiple "sleep queues" if/when there's multiple timers available, using "buckets" to minimise the overhead of queue sorting, etc.


Cheers,

Brendan
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.