
Process burst time?

Posted: Fri Mar 06, 2009 9:47 am
by deostroll
Hi,
When we learn about os theory I came across a term called "process burst time". It is usually reported in milli-seconds. As far as my understanding of the subject goes to explain the various scheduling algorithms we need such a deliberation. But does the os actually know about it?

Re: Process burst time?

Posted: Fri Mar 06, 2009 11:44 am
by 01000101
I'm not familiar with that exact phrase, but it sounds like you're talking about the time a process gets as its time slice.

The longer the burst (the time allotted for that particular slice), the more work gets done on that process; the shorter the burst, the fairer the scheduler becomes, but this also comes with more frequent task-switch and ISR overhead.

but I could be completely wrong, and you're talking about something completely different. :D

Re: Process burst time?

Posted: Fri Mar 06, 2009 11:59 am
by yemista
I think what you mean is how long (usually in terms of timer interrupts) it takes before the scheduler decides to swap it out. How is the burst time stored? If it is what I am thinking of, it's part of every process: when the timer interrupt occurs, the burst time is decremented, and when it reaches 0 the scheduler swaps the process out. Is this what you mean?
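
Roughly something like this sketch, if it helps (the field and function names are just made up, not from any real kernel):

Code:
/* Hypothetical per-process bookkeeping: the timer ISR decrements the
   remaining ticks, and when they hit zero the scheduler swaps the
   process out. */
struct process {
    int remaining_ticks;          /* ticks left before being swapped out */
    struct process *next;         /* next process on the ready queue     */
};

static struct process *current;   /* process currently running on the CPU */

void schedule(void);               /* picks another process to run */

void timer_interrupt_handler(void)
{
    if (current && --current->remaining_ticks <= 0)
        schedule();
}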

Re: Process burst time?

Posted: Fri Mar 06, 2009 8:46 pm
by deostroll
01000101 wrote:...the time a process gets as its time-slice...
Perhaps...
But in the textbook it is given as P1 - 2 ms, P2 - 9 ms, P3 - 3 ms. I assumed that P1, P2, P3 are processes which take 2, 9, and 3 ms to complete.

But I am currently learning scheduling algorithms, and I don't understand how you construed those as time slices. My impression is that the activity of scheduling is to allocate time slices.

PS: May need some mentoring here...

Re: Process burst time?

Posted: Fri Mar 06, 2009 8:59 pm
by yemista
As far as I know it is. It depends on what you want to do. The simplest scheduling algorithm swaps processes out in order and gives them all equal time slices, but some might have higher priority, or there might be other concerns, such as whether the process is blocked or waiting. What the book might mean is that those are the time slices they are given by the scheduler.
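
A bare-bones "take them in order, equal slices" round robin could be sketched like this (the names are invented, it's only an illustration):

Code:
/* Illustration of the simplest case: a circular list of tasks, taken
   in order, each given the same fixed time slice. */
#define DEFAULT_SLICE_TICKS 10

struct task {
    struct task *next;            /* circular list of runnable tasks   */
    int remaining_ticks;          /* refilled every time the task runs */
};

struct task *pick_next(struct task *current_task)
{
    struct task *next = current_task->next;      /* just the next one in order */
    next->remaining_ticks = DEFAULT_SLICE_TICKS; /* equal slice for everyone   */
    return next;
}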

Re: Process burst time?

Posted: Fri Mar 06, 2009 9:19 pm
by Brendan
Hi,

Some processes wait for something to happen, then do a burst of processing to handle whatever happened, then go back to waiting. The burst time is how much time the process takes to handle whatever happened.

This could be used to optimize scheduling. For example, if a process has a burst time of 5 ms (it always takes 5 ms for the process to handle an event) and the scheduler only gives the process 4 ms time slices, then the process will need 2 time slices to handle each event, and you'll get twice as many task switches and a lot more latency (the time between an event occurring and the handling of that event being completed will be a lot more than necessary).
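
To put rough numbers on that (just back-of-the-envelope arithmetic, nothing OS-specific):

Code:
#include <stdio.h>

int main(void)
{
    int burst_ms = 5;   /* time the process needs to handle one event */
    int slice_ms = 4;   /* time slice the scheduler gives it          */

    /* slices needed per event = ceiling(burst / slice) */
    int slices = (burst_ms + slice_ms - 1) / slice_ms;

    printf("time slices per event: %d\n", slices);        /* 2 in this case */
    printf("extra task switches per event: %d\n", slices - 1);
    return 0;
}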
deostroll wrote:My impression is that the activity of scheduling is to allocate time slices.
The activity of scheduling is to decide which tasks (or processes or threads or whatever) run on which CPU/s at which time. Some schedulers don't have time slices - for example, consider a co-operative scheduler that always runs the highest priority task (where the "time slice" is always effectively infinite).
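As an illustration only (invented names, no time slices anywhere), such a co-operative scheduler might boil down to:

Code:
/* Tasks kept in an array sorted by descending priority; the scheduler
   just runs the first ready one and never preempts it - it only switches
   when that task blocks or yields. */
enum task_state { TASK_READY, TASK_BLOCKED };

struct task {
    enum task_state state;
    int priority;                 /* array assumed sorted, highest first */
};

#define MAX_TASKS 32
static struct task tasks[MAX_TASKS];

struct task *pick_highest_priority_ready(void)
{
    for (int i = 0; i < MAX_TASKS; i++) {
        if (tasks[i].state == TASK_READY)
            return &tasks[i];     /* runs until it blocks or yields */
    }
    return 0;                     /* nothing ready: idle */
}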


Cheers,

Brendan

Re: Process burst time?

Posted: Sat Mar 07, 2009 1:18 am
by deostroll
Brendan wrote:The burst time is how much time the process takes to handle whatever happened.
If I say that the burst time is the time for a set of instructions to execute (which could be part, or a subset, of process P1), would I be correct? Further, the instructions complete within that specific time. They are in a sense atomic; they cannot be stopped mid-way. For example, consider the instruction
mov a b
This operation has to complete, right? So does the scheduler sense this and then allocate the time slices?

Re: Process burst time?

Posted: Sat Mar 07, 2009 8:53 am
by Combuster
You're mixing up the atomicity property with the inseparability property.

inc [counter] is not atomic (under the wrong circumstances you don't even have read/write atomicity). It reads from memory, increments the value, then writes the result back. If something else changes the counter in the meantime, the result is not guaranteed to be correct.

inc [counter] is inseparable (as all x86 instructions are). If an exception happens, then either it has been executed or it has not. That is because each exception or interrupt points to an instruction; if only part of the instruction had been executed, how would you know what is left to be done?
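
If it helps, here's the same lost-update problem written out in C (the counter and thread names are purely for illustration); run it and the final count will usually be less than expected:

Code:
#include <pthread.h>
#include <stdio.h>

/* counter++ is roughly "load, increment, store", so two threads can both
   load the same old value and one increment gets lost.
   Compile with -pthread. */
static long counter = 0;          /* deliberately unprotected */

static void *worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < 1000000; i++)
        counter++;                /* not atomic */
    return NULL;
}

int main(void)
{
    pthread_t a, b;
    pthread_create(&a, NULL, worker, NULL);
    pthread_create(&b, NULL, worker, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    printf("counter = %ld (expected 2000000)\n", counter);
    return 0;
}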

And neither has anything to do with burst time, which Brendan nicely explained above.

Also, how can the scheduler find out for itself how many instructions a process needs to handle an event? The answer depends on the halting problem, making both unsolvable. It also means that you have to come up with a different approach.

Re: Process burst time?

Posted: Sun Mar 08, 2009 8:52 pm
by deostroll
Only processes that use the CPU are scheduled, right? I've heard of terms like the ready queue, I/O queue, etc. I don't understand what falls into each queue and when, and does the scheduler use these queues as input for its decisions?

Re: Process burst time?

Posted: Sun Mar 08, 2009 10:15 pm
by Brendan
Hi,
deostroll wrote:Only processes that use the CPU are scheduled, right? I've heard of terms like the ready queue, I/O queue, etc. I don't understand what falls into each queue and when, and does the scheduler use these queues as input for its decisions?
Imagine a scheduler with a FIFO buffer (the ready queue). You create a new task and put it on the ready queue; sooner or later the scheduler gets this task off the ready queue and gives it CPU time. If the task uses all of its time slice, the scheduler puts it back on the ready queue and finds another task to run. If the task doesn't use all of its time slice and blocks (e.g. it calls "sleep()", needs to wait for another task/IPC or needs to wait for a device/device driver), then it's not put back on the ready queue until whatever it's waiting for occurs.
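
As a very bare-bones sketch (names invented, error handling omitted), the ready queue part might look like:

Code:
/* FIFO ready queue: runnable tasks go on the tail, the scheduler takes
   from the head. A blocked task simply isn't put back until whatever it
   waits for happens. */
struct task {
    struct task *next;
};

static struct task *ready_head, *ready_tail;

void ready_queue_put(struct task *t)        /* task became runnable */
{
    t->next = 0;
    if (ready_tail)
        ready_tail->next = t;
    else
        ready_head = t;
    ready_tail = t;
}

struct task *ready_queue_get(void)          /* scheduler wants the next task */
{
    struct task *t = ready_head;
    if (t) {
        ready_head = t->next;
        if (!ready_head)
            ready_tail = 0;
    }
    return t;                               /* 0 means nothing to run */
}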

Now imagine if 10 tasks ask the same disk driver to read or write data at the same time. To manage this the device driver might have a FIFO buffer and do the requests one at a time. This FIFO buffer is an I/O queue - a queue of requests for I/O.

In both cases (the scheduler's ready queue and each device driver's I/O queue) it can be more complex. Instead of using a FIFO buffer you might use one FIFO buffer for each priority level, where only the highest priority tasks are given CPU time (and lower priority tasks need to wait until there aren't any higher priority tasks that are ready to run), or where only the highest priority I/O requests are performed (and lower priority I/O requests aren't performed until there aren't any higher priority I/O requests). This allows you to have idle tasks that don't get any CPU time until there's nothing better to do. It also allows you to make sure requests for swap space are done before normal file I/O, and things like defragmenting the file system and checking for bad blocks are only done when there's no other disk I/O.
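
A sketch of the "one FIFO per priority level" idea (the same shape works for a driver's I/O request queues; everything here is made up for illustration):

Code:
/* Scan from the highest priority down and take the first task found;
   lower priorities only get CPU time when every higher priority queue
   is empty. */
#define NUM_PRIORITIES 8

struct task  { struct task *next; };
struct queue { struct task *head, *tail; };

static struct queue run_queues[NUM_PRIORITIES];

static struct task *queue_get(struct queue *q)
{
    struct task *t = q->head;
    if (t) {
        q->head = t->next;
        if (!q->head)
            q->tail = 0;
    }
    return t;
}

struct task *pick_next_task(void)
{
    for (int p = NUM_PRIORITIES - 1; p >= 0; p--) {
        struct task *t = queue_get(&run_queues[p]);
        if (t)
            return t;
    }
    return 0;       /* nothing ready at any priority: run the idle task */
}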

Of course in this case, if you're running a lower priority task (because no higher priority tasks were ready to run) and a higher priority task becomes ready to run, you can immediately put the currently running task back on a ready queue and start running the higher priority task. In the same way, if your disk driver is doing a lower priority I/O request and a higher priority I/O request arrives, then you could abort the lower priority request and start doing the higher priority request instead. This will improve response times (higher priority things get attention quicker), but it makes lower priority things slower (e.g. a low priority I/O request might be half-completed and aborted several times before it's actually completed).


Cheers,

Brendan