Process burst time?

Discussions on more advanced topics such as monolithic vs micro-kernels, transactional memory models, and paging vs segmentation should go here. Use this forum to expand and improve the wiki!
deostroll
Posts: 4
Joined: Fri Mar 06, 2009 9:37 am

Process burst time?

Post by deostroll »

Hi,
While learning OS theory I came across a term called "process burst time". It is usually reported in milliseconds. As far as my understanding of the subject goes, we need such a notion to explain the various scheduling algorithms. But does the OS actually know about it?
01000101
Member
Posts: 1599
Joined: Fri Jun 22, 2007 12:47 pm

Re: Process burst time?

Post by 01000101 »

I'm not familiar with that exact phrase, but it sounds like you're talking about the time a process gets as its time-slice.

The longer the burst (the time allotted for that particular slice), the more work gets done on that process; the shorter the burst, the fairer the scheduler becomes, but this also comes with more frequent task-switch and ISR overhead.

But I could be completely wrong, and you're talking about something completely different. :D
yemista
Member
Posts: 299
Joined: Fri Dec 26, 2008 12:31 pm
Location: Boston

Re: Process burst time?

Post by yemista »

I think what you mean is how long (usually measured in timer interrupts) it takes before the scheduler decides to swap a process out. How is the burst time stored? If it is what I am thinking of, it's part of every process: each timer interrupt decrements the remaining burst time, and when it reaches 0 the scheduler swaps the process out. Is this what you mean?
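
In code, that bookkeeping might look something like this (a minimal sketch, not any particular kernel's API; task_t, current and schedule() are made-up names):

/* Minimal sketch of per-tick time-slice accounting.
 * task_t, current and schedule() are hypothetical names. */
typedef struct task {
    int ticks_left;            /* time-slice remaining, in timer ticks */
    /* ... registers, state, queue links ... */
} task_t;

static task_t *current;        /* the task running on this CPU */

extern void schedule(void);    /* hypothetical: picks the next task */

/* Called from the timer interrupt handler on every tick. */
void timer_tick(void)
{
    if (current->ticks_left > 0)
        current->ticks_left--;
    if (current->ticks_left == 0)
        schedule();            /* slice used up: swap the task out */
}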
deostroll
Posts: 4
Joined: Fri Mar 06, 2009 9:37 am

Re: Process burst time?

Post by deostroll »

01000101 wrote:...the time a process gets as its time-slice...
Perhaps...
But in the textbook it is given as P1 - 2 ms, P2 - 9 ms, P3 - 3 ms. I assumed that P1, P2, and P3 are processes which take 2, 9, and 3 ms to complete.

But I am currently learning scheduling algorithms, and I don't understand how you construed them as time slices. My impression is that the activity of scheduling is to allocate time slices.

PS: May need some mentoring here...
yemista
Member
Posts: 299
Joined: Fri Dec 26, 2008 12:31 pm
Location: Boston

Re: Process burst time?

Post by yemista »

As far as I know it is. It depends on what you want to do. The simplest scheduling algorithm swaps tasks out in order and gives them all equal time slices, but some tasks might have higher priority, or there might be other concerns, such as whether a process is blocked or waiting. What the book might mean is that those are the time slices the scheduler gives them.
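
That simplest version is easy to sketch (again hypothetical names, not a real kernel's API; runnable tasks sit on a circular list and every task gets the same slice):

/* Round-robin sketch: each task gets the same fixed slice.
 * All names here are hypothetical. */
#define SLICE_TICKS 5               /* equal time slice for every task */

typedef struct task {
    struct task *next;              /* circular list of runnable tasks */
    int ticks_left;
} task_t;

static task_t *current;

extern void switch_to(task_t *t);   /* hypothetical context switch */

void schedule(void)
{
    current = current->next;        /* just take the next task in order */
    current->ticks_left = SLICE_TICKS;
    switch_to(current);
}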
Brendan
Member
Posts: 8561
Joined: Sat Jan 15, 2005 12:00 am
Location: At his keyboard!

Re: Process burst time?

Post by Brendan »

Hi,

Some processes wait for something to happen, then do a burst of processing to handle whatever happened, then go back to waiting. The burst time is how much time the process takes to handle whatever happened.

This could be used to optimize scheduling. For example, if a process has a burst time of 5 ms (it always takes 5 ms for the process to handle an event) and the scheduler only gives the process 4 ms time slices, then the process will need 2 time slices to handle each event, and you'll get twice as many task switches and a lot more latency (the time between an event occurring and the event handling being completed will be a lot longer than necessary).
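
To put rough numbers on that example (back-of-the-envelope arithmetic only):

    slices needed per event = ceiling(burst time / time slice)
    5 ms burst, 4 ms slices: ceiling(5/4) = 2 slices, 2 task switches per event
    5 ms burst, 5 ms slices: ceiling(5/5) = 1 slice,  1 task switch per event

With 4 ms slices the handling only finishes during the second slice, so the latency is at least 5 ms plus however long the task waits in the ready queue between its two slices.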
deostroll wrote:My impression is that the activity of scheduling is to allocate time slices.
The activity of scheduling is to decide which tasks (or processes or threads or whatever) run on which CPU/s at which time. Some schedulers don't have time slices - for example, consider a co-operative scheduler that always runs the highest priority task (where the "time slice" is always effectively infinite).
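
A sketch of that co-operative case (hypothetical names throughout; nothing here is a real kernel's API):

/* Co-operative scheduler sketch: no timer interrupt, no time slice.
 * task_t, current, pick_highest_ready() and switch_to() are all
 * hypothetical names. */
typedef struct task task_t;

static task_t *current;

extern task_t *pick_highest_ready(void); /* highest priority ready task */
extern void switch_to(task_t *t);        /* context switch */

void yield(void)    /* the running task calls this voluntarily */
{
    task_t *next = pick_highest_ready();
    if (next != current)
        switch_to(next);    /* next runs until it yields or blocks */
}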


Cheers,

Brendan
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.
deostroll
Posts: 4
Joined: Fri Mar 06, 2009 9:37 am

Re: Process burst time?

Post by deostroll »

Brendan wrote:The burst time is how much time the process takes to handle whatever happened.
If I say that the burst time is the time it takes for a set of instructions to execute (which could be a subset of process P1), would I be correct? Further, the instructions complete within that specific time. They are in a sense atomic; they cannot be stopped mid-way. For example, consider the instruction
mov a b
This operation has to complete, right? So does the scheduler sense this and then allocate the time slices accordingly?
Combuster
Member
Posts: 9301
Joined: Wed Oct 18, 2006 3:45 am
Libera.chat IRC: [com]buster
Location: On the balcony, where I can actually keep 1½m distance

Re: Process burst time?

Post by Combuster »

You're mixing up the atomicity property with the inseparability property.

inc [counter] is not atomic (under the wrong circumstances you don't even have read/write atomicity). It reads from memory, increments the value, then writes the result back. If something else changes the counter in the meantime, the result is not guaranteed to be correct.
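
You can watch that race from plain C (a minimal demonstration, assuming a POSIX system with pthreads, compiled with -pthread; on most compilers the unlocked counter++ becomes exactly the read/increment/write sequence described above):

#include <pthread.h>
#include <stdio.h>

#define ITERS 1000000L

static volatile long counter = 0;    /* shared, deliberately unprotected */

static void *worker(void *arg)
{
    (void)arg;
    for (long i = 0; i < ITERS; i++)
        counter++;                   /* read, increment, write back */
    return NULL;
}

int main(void)
{
    pthread_t a, b;
    pthread_create(&a, NULL, worker, NULL);
    pthread_create(&b, NULL, worker, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    /* Usually prints less than 2000000: whenever both threads read
     * the same old value, one increment is lost. */
    printf("counter = %ld (expected %ld)\n", counter, 2 * ITERS);
    return 0;
}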

inc [counter] is inseparable (and all x86 instructions are). If an exception happens, then either the instruction has been executed completely or it has not been executed at all. That is because each exception or interrupt points to an instruction boundary: if only part of an instruction had been executed, how would you know what was left to be done?

And neither has anything to do with burst time, which Brendan nicely explained above.

Also, how can the scheduler find out for itself how many instructions a process needs to handle an event? Answering that in general amounts to solving the halting problem, which is unsolvable, so you have to settle for a different approach.
"Certainly avoid yourself. He is a newbie and might not realize it. You'll hate his code deeply a few years down the road." - Sortie
[ My OS ] [ VDisk/SFS ]
deostroll
Posts: 4
Joined: Fri Mar 06, 2009 9:37 am

Re: Process burst time?

Post by deostroll »

Only processes that utilize the CPU are scheduled, right? I've heard of terms like the ready queue, I/O queue, etc. I don't understand when and what things fall into each queue, and whether the scheduler uses these queues as input for its execution.
Brendan
Member
Posts: 8561
Joined: Sat Jan 15, 2005 12:00 am
Location: At his keyboard!

Re: Process burst time?

Post by Brendan »

Hi,
deostroll wrote:Only processes that utilize the CPU are scheduled, right? I've heard of terms like the ready queue, I/O queue, etc. I don't understand when and what things fall into each queue, and whether the scheduler uses these queues as input for its execution.
Imagine a scheduler with a FIFO buffer (the ready queue). You create a new task and put it on the ready queue; sooner or later the scheduler takes the task off the ready queue and gives it CPU time. If the task uses all of its time slice, the scheduler puts it back on the ready queue and finds another task to run. If the task doesn't use all of its time slice and blocks (e.g. it calls "sleep()", needs to wait for another task/IPC, or needs to wait for a device/device driver), then it's not put back on the ready queue until whatever it's waiting for occurs.
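
A bare-bones version of that ready queue (a sketch with made-up names; switch_to() stands in for the actual context switch):

/* FIFO ready queue sketch. All names are hypothetical. */
typedef struct task {
    struct task *next;          /* link in the ready queue */
} task_t;

static task_t *head, *tail;     /* the ready queue itself */

extern void switch_to(task_t *t);

void ready_push(task_t *t)      /* task becomes ready to run */
{
    t->next = NULL;
    if (tail) tail->next = t; else head = t;
    tail = t;
}

task_t *ready_pop(void)         /* scheduler takes the next task */
{
    task_t *t = head;
    if (t) {
        head = t->next;
        if (!head)
            tail = NULL;
    }
    return t;
}

void slice_expired(task_t *t)   /* still runnable: back of the queue */
{
    ready_push(t);
    switch_to(ready_pop());
}

void task_blocked(void)         /* waiting for sleep/IPC/device: do NOT
                                   re-queue; ready_push() happens later,
                                   when whatever it waits for occurs */
{
    switch_to(ready_pop());
}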

Now imagine if 10 tasks ask the same disk driver to read or write data at the same time. To manage this the device driver might have a FIFO buffer and do the requests one at a time. This FIFO buffer is an I/O queue - a queue of requests for I/O.

In both cases (the scheduler's ready queue and each device driver's I/O queue) it can be more complex. Instead of using a FIFO buffer you might use one FIFO buffer for each priority level, where only the highest priority tasks are given CPU time (and lower priority tasks need to wait until there aren't any higher priority tasks that are ready to run), or where only the highest priority I/O requests are performed (and lower priority I/O requests aren't performed until there aren't any higher priority I/O requests). This allows you to have idle tasks that don't get any CPU time until there's nothing better to do. It also allows you to make sure requests for swap space are done before normal file I/O, and things like defragmenting the file system and checking for bad blocks are only done when there's no other disk I/O.
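
The per-priority-level variant just multiplies the queues (continuing the hypothetical names from the sketch above; pop_from() dequeues from one level's FIFO):

/* One FIFO per priority level, 0 = highest. task_t as above;
 * pop_from() and idle_task are hypothetical names. */
#define LEVELS 4

static task_t *level_head[LEVELS], *level_tail[LEVELS];

extern task_t *pop_from(int level);   /* dequeue from one level's FIFO */
extern task_t *idle_task;             /* runs when nothing else is ready */

task_t *pick_next(void)
{
    /* Scan from the highest priority down; a lower level only runs
     * when every higher level's queue is empty. */
    for (int level = 0; level < LEVELS; level++)
        if (level_head[level])
            return pop_from(level);
    return idle_task;
}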

Of course in this case, if you're running a lower priority task (because no higher priority tasks were ready to run) and a higher priority task becomes ready to run, you can immediately put the currently running task back on a ready queue and start running the higher priority task. In the same way, if your disk driver is doing a lower priority I/O request and a higher priority I/O request arrives, then you could abort the lower priority request and start doing the higher priority request instead. This will improve response times (higher priority things get attention quicker) but it makes lower priority things slower (e.g. a low priority I/O request might be half-completed and aborted several times before it's actually completed).


Cheers,

Brendan
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.