
multitasking theory

Posted: Fri Jul 26, 2002 9:11 am
by kernel
Just wondering, what's the basic theory behind multitasking? Do you load different programs into different parts of memory, then execute certain parts of each program one after another?

Re:multitasking theory

Posted: Fri Jul 26, 2002 2:01 pm
by Schol-R-LEA
kernel wrote: Just wondering, what's the basic theory behind multitasking? Do you load different programs into different parts of memory, then execute certain parts of each program one after another?
That more or less is the essence of it... but it rarely is that simple. There are a number of issues that come up whenever you have multiple programs running interleaved on a single processor, and others that come up with multiple processors.

What you describe fits the idea of 'co-programming', which is the simplest form of concurrency. It is characterized by the fact that there is no scheduler - the individual programs switch directly from one to another, at points hardcoded into the programs. This approach has long since been dropped (as of around 1965 or so, in fact...) in favor of more flexible approaches.

The introduction of a scheduler, even a cooperative one, changes everything. The purpose of the scheduler, as the name implies, is to decide which task will run at a given time, as well as to handle scheduling requests (wait and sleep system calls) that affect when the programs can run. The scheduler is always loaded into the system; it is one of the key elements of any multitasking kernel, although not all multitasking schedulers are part of a kernel per se.

It should be possible for a scheduler to switch between tasks without the individual programs necessarily being aware of it; from the application programmer's point of view, it should be possible to write programs as if they were running alone. At the same time, however, it should be possible for the different tasks to interact with each other (for example, to synchronize their actions with each other), or to cede control explicitly to the scheduler. Two common cases where a task may do this are the sleep call, which tells the scheduler "don't run this program again until t microseconds have elapsed"; and the wait call, which tells the scheduler "don't run this program again until message x has been sent to it by another task". How each of these is handled, if at all, is dependent on the scheduler.
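
To make the sleep and wait cases concrete, here is a minimal sketch in C of what those two entry points might look like. Everything here - the task structure, the queue helpers, schedule() - is invented for illustration, not taken from any real kernel.

Code: Select all

/* Hypothetical sleep/wait entry points; all names are invented. */

struct task {
    struct task *next;
    unsigned long wake_time;     /* when a sleeping task may run again */
    int awaited_signal;          /* what a waiting task is waiting for */
};

extern struct task *current;     /* the task that called us */
extern unsigned long ticks;      /* system timer, counted in ticks */
extern void enqueue(struct task **queue, struct task *t);
extern void schedule(void);      /* pick and switch to another task */

struct task *sleep_queue;
struct task *wait_queue;

void sys_sleep(unsigned long duration)
{
    current->wake_time = ticks + duration;
    enqueue(&sleep_queue, current);  /* don't run it again until then */
    schedule();                      /* hand the CPU to someone else */
}

void sys_wait(int signal)
{
    current->awaited_signal = signal;
    enqueue(&wait_queue, current);   /* park it until the signal arrives */
    schedule();
}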

A cooperative scheduler is one which does not interrupt a program at arbitrary times while it is running; it depends on the task to occasionally stop itself, or else to make a system call which it has to wait for, or for a hardware interrupt to intervene, before another task can be scheduled. This can result in a single program monopolizing the system, which can slow or even stop other programs. For this reason, virtually all multitasking OSes today use 'pre-emptive multitasking', in which the scheduler periodically stops the running task and schedules a different one to run (if there are any). The time slice - the length of time (measured in milliseconds, clock cycles, or timer ticks, whichever) between scheduler interrupts - is crucial: too short a time slice, and the system spends far too much time running the scheduler and the whole system slows; too long, and the programs end up running in a jerky and inconsistent fashion.
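
To illustrate pre-emption, here is a sketch of what the timer side might look like in C; TIME_SLICE, ticks_left, and schedule() are made-up names, and the hardware-specific parts (acknowledging the timer chip, the interrupt entry/exit code) are elided.

Code: Select all

/* A sketch of pre-emption driven by the timer interrupt. */

#define TIME_SLICE 10                /* ticks per slice; tuning this is
                                        exactly the trade-off described
                                        above */

extern void schedule(void);          /* pick and switch to another task */

static volatile unsigned int ticks_left = TIME_SLICE;

void timer_interrupt(void)
{
    /* acknowledge the timer hardware here (omitted) */
    if (--ticks_left == 0) {
        ticks_left = TIME_SLICE;     /* start a fresh slice... */
        schedule();                  /* ...and pre-empt the current task */
    }
}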

This is only the beginning, however. See the next posting about scheduling algorithms for a bit more.

Comments and corrections are welcome.

Re:multitasking theory

Posted: Fri Jul 26, 2002 2:14 pm
by Schol-R-LEA
Of considerable importance also is the scheduling algorithm - the decision process by which the scheduler chooses which tasks to run when. The simplest approach is what is called 'round-robin' scheduling: the pending tasks are put into a ring, and each one in turn is run for a time slice. While this is naively fair (it ensures that every process gets a certain amount of time), it fails to take into account the different priorities of the tasks - some tasks may be more important to the system, and may need more running time than others, or require immediate processing without waiting for the scheduler to come around to them. However, if higher-priority tasks are always favored, low-priority tasks will be starved out, getting no running time. How the scheduler balances the need to give suitable service to high-priority tasks while ensuring fairness to lower-priority ones is the sort of issue that has spawned many a flame war over the years.

Most schedulers keep track of the tasks with a list, called a priority queue. To give an example in pseudocode of how a scheduler may run, from the time of a scheduler call or timer interrupt:

Code: Select all

scheduler ()
   if the time has expired on any of the sleeping tasks
      remove those tasks from the sleeping queue;
      insert them into the priority queue;
   end if;

   if a signal has arrived for a task that was waiting on it
      remove that task from the waiting queue;
      insert it into the priority queue;
   end if;

   if there are pending tasks in the priority queue
      suspend the current task;
      insert the current task into the priority queue;
      make the task at the head of the queue the current task;
      remove it from the priority queue;
   end if;

   switch context to the current task;
end scheduler;
This is obviously heavily simplified, but it gives you some idea of how it works. In the case of a round-robin scheduler, all tasks would always be added to the tail of the queue; other scheduling algorithms may be more complex, inserting tasks into the queue according to their priority or whatever other criteria they choose.
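
To make the difference concrete, here is a sketch in C of the two insertion policies; the task structure and the singly-linked ready queue are invented for illustration.

Code: Select all

/* Two queue-insertion policies; all names are invented. */

struct task {
    struct task *next;
    int priority;                    /* lower number = more important */
};

/* Round robin: everyone joins at the tail, so everyone gets a turn. */
void enqueue_round_robin(struct task **head, struct task *t)
{
    struct task **p = head;
    while (*p)
        p = &(*p)->next;
    t->next = NULL;
    *p = t;
}

/* Priority: insert ahead of the first less-important task, so the head
   of the queue is always the most important pending task. */
void enqueue_priority(struct task **head, struct task *t)
{
    struct task **p = head;
    while (*p && (*p)->priority <= t->priority)
        p = &(*p)->next;
    t->next = *p;
    *p = t;
}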

You will often see the terms 'thread' and 'process' in discussions of multitasking. While the terms are not used all that consistently in the literature, generally speaking, processes are tasks that run as full programs, with their own protected memory areas, and are scheduled by the OS, while threads are usually (but not always) tasks run within a process, in shared memory (that is, the different threads are not protected from each other), working as part of a larger application program, and may or may not be managed by the OS.
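
If you have a Unix-ish system handy, POSIX threads are a quick way to see the shared-memory point for yourself: the two threads below update the same global variable, which two separate processes could not do without arranging to share memory explicitly. (This only demonstrates the concept; it is not how you would build threads inside your own kernel.)

Code: Select all

/* Two threads sharing one global; compile with -pthread. */
#include <pthread.h>
#include <stdio.h>

static int counter;                  /* visible to both threads */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);   /* unprotected from each other, so
                                        they must synchronize by hand */
        counter++;
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void)
{
    pthread_t a, b;
    pthread_create(&a, NULL, worker, NULL);
    pthread_create(&b, NULL, worker, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    printf("counter = %d\n", counter);  /* 200000, thanks to the lock */
    return 0;
}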

I have only scratched the surface here, and my presentation is, to say the least, a bit unusual. Real multitasking systems have to address issues such as task synchronization and interprocess communication, preventing starvation and deadlock, and resource management, among other things. Any decent book on OS design should cover these issues in more detail than I can.

You'll note that I have not addressed the issue of context switching (the process of switching from one task to another while saving the state of the old task), which you'd expect to be the real heart of the issue. This is because it is a whole issue in itself, and depends on things like the memory model you are using. Intel CPUs have special hardware support for multitasking, which would take a whole book to explain completely ::) , so it will have to wait for now.

CCAW, as usual.

Re:multitasking theory

Posted: Fri Jul 26, 2002 8:50 pm
by kernel
damn, you should write a book ;D
"Writing Complete OS's For Dummies"

lol

My head hurts now.
I've been making Direct3D games, trying to write an OS, and attempting to create a forum in PHP. I think I should get some sleep.

thanx 8)

Re:multitasking theory

Posted: Fri Jul 26, 2002 8:56 pm
by kernel
one more question though: is this at all close to how the scheduler would run 2 programs?

//program 1 stored at memory locations a to d
//program 2 stored at memory locations e to h

run code from a to b; //Run part of program 1

run code from e to f; //run code from program 2

run code from c to d; //Run more from program 1

run code from g to h; //run more from program 2

or are separate instructions loaded into different parts of memory?

Re:multitasking theory

Posted: Fri Jul 26, 2002 11:51 pm
by Schol-R-LEA
It is sort of like that, yes, but the way you are showing it obscures the issue a bit. It would be more typical for execution to jump around within the code of each program as the programs run, for example.

Also, it really isn't relevant where the two tasks are relative to each other in memory, unless there is no memory protection and one happens to stomp all over the other in some way (wild pointer references, stack underflow or overflow, running past the end of the code, etc.). Program 1 doesn't need to know where program 2 is, or in fact that it exists at all, and vice versa. The two threads of execution could even be running in different parts of the same code, with different data and stacks, and the programs should never be affected by it (this is in fact how DLLs work). Only the scheduler and the memory manager need to know those details.

What you are really asking about is context switching, which, as I said, is a fairly complex issue in and of itself. It looks like it will have to be addressed, though, or else it won't be very clear what's happening.

The key issue is that the scheduler has to be able to set aside all of a task's state - the information about where the program is in memory, where in its execution it is, and so forth - in a way that will allow it to restart the program as if it had never been stopped. It saves this in a data structure or object which represents that task to the scheduler. This task record is what the scheduler really manipulates when it is setting priorities and so forth.
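
For illustration, a task record might look something like this in C; every field name here is invented, and a real system would keep considerably more state (FPU registers, open files, memory-management details, and so on).

Code: Select all

/* A hypothetical task record; field names are invented. */
struct task_record {
    void *stack_pointer;             /* where the saved context lives */
    void *code_base;                 /* where the program sits in memory */
    void *data_base;
    int priority;
    enum { RUNNING, READY, SLEEPING, WAITING } state;
    struct task_record *next;        /* link for whichever queue it is on */
};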

Now, you may ask how the state gets saved in the first place, and how the scheduler recovers it for its use. The answer, in the most basic case, is the same way that it is when calling a function, or when an interrupt is processed. In fact, that is exactly what the scheduler invocation looks like to the programs. Either the program calls the scheduler with a wait or sleep, or else the scheduler runs after X many clock interrupts (the time slice) or following some other interrupt call.

In the x86 TSS system it's a good deal more complex, of course, but the basic idea is that the most important piece of state - the instruction pointer - is already saved on the stack. The next step is to push all of the registers onto the stack. The final, and most crucial, step is to store the final value of the stack pointer in the task record, along with other necessary details such as where the code and data are in memory (in point of fact the task record may not actually need these, and if it does, they should already be stored in it; but you get the idea). This is important because the scheduler now switches from the program's stack to its own, and it has to know where to recover the task's context from later. Assuming that there is no swapping or virtual memory, each task's memory space (code, data, stack, etc.) should remain unchanged in physical memory while the system runs other tasks.

To run a pending process, the scheduler reverses the process: it sets the stack pointer (and stack segment, if necessary) of the task, then restores each of its registers in turn, and finally performs an interrupt return (or call gate return, or whatever) to the program, which goes on its merry way as if time had never stopped for it.
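
You can actually watch this save-and-restore round trip from user space with the POSIX ucontext calls: swapcontext() saves the current registers and stack pointer into one context record and restores another's, which is the same dance a kernel performs on its own task records. A toy example, assuming a Unix-ish system:

Code: Select all

/* Context switching in miniature, via POSIX ucontext. */
#include <stdio.h>
#include <ucontext.h>

static ucontext_t main_ctx, task_ctx;
static char task_stack[16384];          /* each task needs its own stack */

static void task(void)
{
    printf("task: running\n");
    swapcontext(&task_ctx, &main_ctx);  /* save self, resume 'scheduler' */
    printf("task: resumed as if never stopped\n");
}

int main(void)
{
    getcontext(&task_ctx);
    task_ctx.uc_stack.ss_sp = task_stack;
    task_ctx.uc_stack.ss_size = sizeof task_stack;
    task_ctx.uc_link = &main_ctx;       /* where to go when task() returns */
    makecontext(&task_ctx, task, 0);

    swapcontext(&main_ctx, &task_ctx);  /* first 'context switch' in */
    printf("main: task yielded, switching back\n");
    swapcontext(&main_ctx, &task_ctx);  /* restore it; it carries on */
    printf("main: done\n");
    return 0;
}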

I've tried to describe this in a fairly vague and abstract way, for the good reason that I don't myself know all of the details, and in any case those details will vary based on several important design decisions. I may very well have some parts quite wrong, as well, I have to admit. But this is really the core of it: to stop and restart a given program at will, without interfering with its actual thread of execution.

I hope that this is at least vaguely clear. :-\ I really would want more time and space to make this a lot clearer, but this is about the best I can do for now. As before, CCAW.

Re:multitasking theory

Posted: Sat Jul 27, 2002 12:49 am
by Schol-R-LEA
I'd really strongly suggest looking at some books on the subject, if only because they can cover details I simply don't have space or time to (yet, anyway). Any of Andrew Tanenbaum's books are good, with Operating Systems: Design and Implementation (aka the Minix book) probably being the most useful. Operating Systems by Stallings is also a good reference for theory. Raphael Finkel's Operating Systems Vade Mecum, while rather dated, is available for free online, which makes it fairly well known. For more on these books, see:

Andrew Tanenbaum's home page. While you can't get to it directly, there is also a comprehensive list of his books on this site.

Raphael Finkel's home page, with a mirror of OS Vade Mecum on it

Operating System Concepts by Abraham Silberschatz

Operating Systems by William Stallings (page also includes an extensive list of links, some of them very useful)

Operating Systems Design - the Xinu Approach by Douglas Comer (somewhat dated, but very clear and concise)

The TUNES List of Operating Systems on the Web links to a variety of information sources, but much of it is now long out of date. The Triple Fault Club and the OS Developer's Page have similar, more recent information.

The Operating Systems Resource Center is a bit short on theory, but is an excellent resource for hard details on things like task descriptors.

Does anyone else have a book or site they recommend?

As for writing a book of my own... in a sense, that's what the LoIS Project is, if I ever get anywhere with it. The problem of course is that I'm still learning much of this myself, and I've been seriously slacking on it as well.

Re:multitasking theory

Posted: Wed Jul 31, 2002 10:10 am
by srg
hmm, Intel 386 and better in p-mode have protection and hardware-based task switching of this kind, as you said, but this is mainly to do with segments. Most operating systems make one segment that covers the whole address range; does this mean that they have to do all the parts of a task switch themselves? Surely the hardware solution would be faster?

Steven Graham