A few questions about Processes and Threads...
Posted: Sun Oct 21, 2007 6:15 pm
by EliteLegend
I know it's kinda silly, but most material I've seen considers only one side: a thread executes, finishes its task, and might be returned to a thread pool. I was wondering what happens if the process dies while a thread is still executing... I somehow think the thread would continue execution, but I'm backing off from that because a thread shares the data and code of its process... If the process dies, there shouldn't be any data or code left, and so logically speaking (if there's logic, that is) there shouldn't be a thread... I seem to be going round in circles... What happens in reality?
And another doubt was about multithreading. It seemed OK... but then there's a distinction even between threads: user level threads and kernel level threads... and that's where I stopped feeling comfortable... As an analogy, I can think of a web browser having threads at the user level; it might be serving the user loading a page while also loading an image in the background. But then how should I think of kernel level threads?
Posted: Sun Oct 21, 2007 8:26 pm
by frank
I believe that in most (not always the best word to use) operating systems, when a process dies the operating system kills all of the threads that were a part of that process.
On the issue of user threads vs. kernel threads: I would consider it a user thread if it ran at PL3 (the least privileged level of the CPU) and a kernel thread if it ran at PL0 (the most privileged level). Of course, that is just the way I see it; another person may see it differently.
Posted: Sun Oct 21, 2007 8:39 pm
by EliteLegend
Thank you for the clarification... So does it mean that a thread cannot run without the process that created it? (I wonder where the need for such a thing arises...)
Thanks for differentiating the user level and kernel level threads... Could you please give me a practical example of a kernel level thread so that I can fix it into my head properly? I am kind of new to these things so I'm learning by example.... Sorry for the trouble...
Posted: Sun Oct 21, 2007 9:10 pm
by bewing
It seems like you need a bit of clarification. There are OSes that are single-threaded -- like original UNIX. On those OSes, a thread IS a process. They are the same thing.
For OSes that support multithreading, you have to be very specific about what it means for a process to "die". A process will generally run, along with all its subthreads, in its own virtual address space. When the process is created, the virtual address space for it is "allocated" (in a sense). So you need to know when the process is complete, in order to deallocate the address space. The process can either complete normally, or it can be forcibly terminated. If it is forcibly terminated, then clearly all the subthreads must be killed. If it is attempting to terminate normally, then it may need to wait for some subthreads to complete, and it may need to forcibly terminate others. But the virtual address space that the process and all its subthreads are running in cannot be deallocated until all the subthreads of the process have stopped running.
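A minimal sketch of "normal" termination, where the main thread waits for its subthreads before the process (and its address space) goes away. This uses POSIX threads purely as an illustration; the names `subthread` and `run_and_wait` are made up for the example:

```c
#include <pthread.h>
#include <stdatomic.h>

static atomic_int finished;           /* how many subthreads completed */

static void *subthread(void *arg) {
    (void)arg;
    atomic_fetch_add(&finished, 1);   /* the thread's "task" */
    return NULL;
}

/* Spawn n subthreads and wait for all of them before returning.
   Returns the number of subthreads that completed. */
int run_and_wait(int n) {
    pthread_t tids[16];
    if (n > 16) n = 16;
    atomic_store(&finished, 0);
    for (int i = 0; i < n; i++)
        pthread_create(&tids[i], NULL, subthread, NULL);
    for (int i = 0; i < n; i++)
        pthread_join(tids[i], NULL);  /* don't tear down until all are done */
    return atomic_load(&finished);
}
```

If the process instead calls exit() (or is forcibly killed) while subthreads are still running, the kernel tears the subthreads down along with the address space, which is the forced-termination case described above.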
Some examples of possible system/kernel threads:
the scheduler thread
the virtual memory manager process
the virtual filesystem manager process, and all of its subthreads
a thread that watches the keyboard status memory registers, and sets the LEDs whenever the status registers change
a thread to reinitialize the mouse, if it is unplugged and replugged
-- often, system threads are processes that need a lot of "security clearance", in order to deal with hardware interfaces, or system resources.
Posted: Sun Oct 21, 2007 11:49 pm
by Colonel Kernel
About the distinction between "user threads" and "kernel threads" -- these two terms are sometimes used to mean different things in different contexts. One meaning I know of refers to where thread scheduling and switching is implemented -- in user mode, or in the kernel.
Most OSes support "kernel threads", meaning that the kernel creates, schedules, and switches between threads pre-emptively. "User threads" were more popular back in the days of the old single-kernel-thread-per-process UNIX (I have also heard them referred to as "green threads" in this context, versus "native threads"). They are created, switched, and scheduled within a single process co-operatively. The main differences between a user thread and kernel thread are:
- Switching between user threads is usually a lot faster than switching between kernel threads, because it does not incur the cost of a privilege transition (system call or interrupt).
- Scheduling algorithms in OSes that support kernel threads usually consider all threads in the system at once, regardless of which process they belong to. User threads are by their nature switched only within the process that created them, so they are in effect sharing whatever CPU time is allocated to that process.
- When a thread makes a blocking call, such as reading data from a file, the kernel suspends the kernel thread that made the call. The big difference is that all user threads multiplexed on that kernel thread will also be suspended, while other kernel threads in the process may still be scheduled to run.
- If a process has many kernel threads, it is possible for more than one of them to be running concurrently on multiple CPU cores. For user threads, this is generally not the case (except for user threads that map to different kernel threads). This is probably the most important reason why this kind of "user threading" is mostly irrelevant today.
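To make the user-thread idea concrete, here is a tiny cooperative switch between two user threads inside one process, sketched with the (now mostly historical) ucontext API. The kernel is not involved in the swapcontext() calls themselves, which is why such switches are cheap; all names here are illustrative:

```c
#include <ucontext.h>

static ucontext_t main_ctx, t1_ctx, t2_ctx;
static char stack1[65536], stack2[65536]; /* each user thread gets its own stack */
static int trace[8];
static int steps;

static void thread1(void) {
    trace[steps++] = 1;
    swapcontext(&t1_ctx, &t2_ctx);   /* cooperatively yield to thread 2 */
    trace[steps++] = 3;
    swapcontext(&t1_ctx, &main_ctx); /* and hand control back */
}

static void thread2(void) {
    trace[steps++] = 2;
    swapcontext(&t2_ctx, &t1_ctx);   /* yield back to thread 1 */
}

/* Run the two user threads to completion of the demo; returns the
   number of traced steps. */
int run_user_threads(void) {
    steps = 0;

    getcontext(&t1_ctx);
    t1_ctx.uc_stack.ss_sp = stack1;
    t1_ctx.uc_stack.ss_size = sizeof stack1;
    t1_ctx.uc_link = &main_ctx;
    makecontext(&t1_ctx, thread1, 0);

    getcontext(&t2_ctx);
    t2_ctx.uc_stack.ss_sp = stack2;
    t2_ctx.uc_stack.ss_size = sizeof stack2;
    t2_ctx.uc_link = &main_ctx;
    makecontext(&t2_ctx, thread2, 0);

    swapcontext(&main_ctx, &t1_ctx); /* start thread 1 */
    return steps;
}
```

Every switch here is an ordinary function call plus a register save/restore in user mode, with no privilege transition; that is the speed advantage mentioned above, and also why a single blocking syscall stalls all of these "threads" at once.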
Posted: Mon Oct 22, 2007 12:08 am
by speal
I believe there are two issues regarding the kernel/user threads. I'm not sure which the original post referred to.
The first is the thread implementation, which was addressed above. User-level thread implementations build threading on top of systems with single-threaded processes. Kernel-level thread implementations allow multiple threads in a process that don't interfere with each other on blocking system calls. You can read all about the advantages and disadvantages of each here and in the wiki.
The second issue is the concept of thread permission level. Some threads may run at the user level (3), while some may run at kernel level (0). When these are used is greatly dependent on the system's implementation. It would be possible for a kernel to handle system calls by creating a special thread that performs the requested operations and runs at the kernel level. This would allow you to schedule the execution of a system call independently from the calling thread (useful for asynchronous I/O, and probably other things). I couldn't tell you where this is used, but I can imagine a few cases where it could come in handy.
Posted: Mon Oct 22, 2007 1:11 am
by Colonel Kernel
speal wrote:User-level thread implementations build threading on top of systems with single-threaded processes.
Although that is how it started out, it is not the only way to do it. On some older versions of Solaris, a process could have many kernel threads, and each kernel thread could co-operatively run several user threads. The user threads could even migrate between kernel threads in some systems.
Posted: Mon Oct 22, 2007 1:53 am
by JamesM
Colonel Kernel wrote:This is probably the most important reason why this kind of "user threading" is mostly irrelevant today.
Please correct me if I'm wrong, but I believe the pthreads library invisibly maintains a many-to-many mapping of kernel and user threads to get the best of both worlds (small switching time and no hanging on syscalls). I believe this is still the case.
Posted: Mon Oct 22, 2007 6:55 am
by mystran
JamesM wrote:This is probably the most important reason why this kind of "user threading" is mostly irrelevant today.
Please correct me if I'm wrong, but I believe the pthreads library invisibly maintains a many-to-many mapping of kernel and user threads to get the best of both worlds (small switching time and no hanging on syscalls). I believe this is still the case.
Depends on the system. On operating systems where kernel threads aren't all that costly and the operating system does a decent job of scheduling on its own, many-to-many mappings aren't all that common, because they are kinda complex..
AFAIK at least Linux pthreads implementations are quite happy with a 1-to-1 mapping between user and kernel level threads.
Posted: Mon Oct 22, 2007 10:50 am
by Colonel Kernel
The pthreads implementation on Solaris 10 is also 1-to-1 AFAIK.
Posted: Mon Oct 22, 2007 12:43 pm
by EliteLegend
Thank you so much for taking the effort to type all that... I think I'm clear on threads now... I've encountered a thread_yield call that allows a thread to voluntarily give up the CPU to let another thread run. When there are no clock interrupts to enforce timesharing, it makes sense to be polite to other threads... but why would a thread ever give up its CPU knowing that it might never get its turn back? Is it because it might be waiting for some sort of I/O operation, which is slow compared to CPU processing, and thus, in order to save precious CPU time, it just yields?
One other thing I was wondering: there are semaphores for processes... As I see it, there is no reason to enforce mutual exclusion between threads, because they come under the same process and are expected to cooperate, not compete (correct me if I'm wrong please). But does it make sense to achieve synchronization in the extreme case using some sort of a kernel semaphore on the threads created by the kernel? For example, at a really low level, if I consider the situation where my design doesn't want the user to input anything from a mouse while he is typing on a keyboard, can the problem be solved using some kernel semaphores?
Posted: Mon Oct 22, 2007 7:44 pm
by Brendan
Hi,
EliteLegend wrote:Thank you so much for taking the effort to type all that... I think I'm clear on threads now... I've encountered a thread_yield call that allows a thread to voluntarily give up the CPU to let another thread run. When there are no clock interrupts to enforce timesharing, it makes sense to be polite to other threads... but why would a thread ever give up its CPU knowing that it might never get its turn back? Is it because it might be waiting for some sort of I/O operation, which is slow compared to CPU processing, and thus, in order to save precious CPU time, it just yields?
Some systems use "CPU affinity", where threads can set a bitmask that determines which CPUs the OS lets them run on. When a thread changes its CPU affinity it can find itself running on the wrong CPU (e.g. if a thread is running on CPU A and changes its own CPU affinity so that it can only run on CPU B). In this case the thread will want to do "yield()" after setting its CPU affinity, to make sure it's running on one of the selected CPUs.
Apart from that, depending on how the scheduler works, a thread might use "yield()" to reduce its CPU usage. For example, if a thread is polling something, then it might yield within the polling loop. For a prioritized co-operative scheduler ("always run the highest priority thread that can run"), yield isn't useful for this purpose (a task would end its time slice, but the scheduler would find it's still the highest priority thread that can run and give it a new time slice), but for something like a round robin scheduler it gives all other threads more CPU time.
Note: some programmers will use "sleep(0)" instead of "yield()", which effectively does the same thing.
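The polling-loop case can be sketched with POSIX sched_yield(); the names `ready`, `producer`, and `poll_with_yield` are made up for this example:

```c
#include <pthread.h>
#include <sched.h>
#include <stdatomic.h>

static atomic_int ready;

/* Another thread that eventually produces the thing we are waiting for. */
static void *producer(void *arg) {
    (void)arg;
    atomic_store(&ready, 1);
    return NULL;
}

/* Poll for the flag, but yield the CPU on every miss instead of
   spinning hot -- on a round robin scheduler this hands the rest of
   our time slice to other runnable threads. */
int poll_with_yield(void) {
    pthread_t p;
    atomic_store(&ready, 0);
    pthread_create(&p, NULL, producer, NULL);
    while (!atomic_load(&ready))
        sched_yield();
    pthread_join(p, NULL);
    return 1;
}
```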
EliteLegend wrote:One other thing I was wondering: there are semaphores for processes... As I see it, there is no reason to enforce mutual exclusion between threads, because they come under the same process and are expected to cooperate, not compete (correct me if I'm wrong please)
You're correct. Often programmers use multiple threads to make sure their process can run faster on multiple CPUs (e.g. a process with 4 threads running on a computer with 4 CPUs, where all 4 threads can be running at the same time).
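A sketch of that cooperating-without-competing pattern: 4 threads sum disjoint slices of a range, each writing only its own result slot, so no mutual exclusion is needed at all. The names `sum_part` and `parallel_sum` are illustrative:

```c
#include <pthread.h>
#include <stdint.h>

#define NTHREADS 4

static long partial[NTHREADS];      /* each thread writes only its own slot */

static void *sum_part(void *arg) {
    long idx = (long)(intptr_t)arg;
    long s = 0;
    for (long i = idx; i < 1000; i += NTHREADS)
        s += i;                     /* a disjoint slice of the work */
    partial[idx] = s;
    return NULL;
}

/* Sum 0..999 across 4 threads; no locks needed, because the threads
   never touch each other's data. */
long parallel_sum(void) {
    pthread_t t[NTHREADS];
    long total = 0;
    for (long i = 0; i < NTHREADS; i++)
        pthread_create(&t[i], NULL, sum_part, (void *)(intptr_t)i);
    for (long i = 0; i < NTHREADS; i++) {
        pthread_join(t[i], NULL);
        total += partial[i];
    }
    return total;
}
```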
EliteLegend wrote:But does it make sense to achieve synchronization in the extreme case using some sort of a kernel semaphore on the threads created by the kernel?
My advice would be to assume kernel threads never need semaphores, then write all the kernel's code and see if this assumption is wrong. If you think the assumption is wrong for a specific case, then post a description of the situation and someone will probably find a better way to do the same thing without semaphores.
EliteLegend wrote:For example, at a really low level, if I consider the situation where my design doesn't want the user to input anything from a mouse while he is typing on a keyboard, can the problem be solved using some kernel semaphores?
The word "while" is a problem here. If I pressed the "A" key yesterday, click a mouse button today, and then press the "B" key tomorrow, then will I input something from the mouse while I'm typing "AB"?
Cheers,
Brendan
Posted: Wed Oct 24, 2007 9:47 am
by Pype.Clicker
Brendan wrote:
EliteLegend wrote:One other thing I was wondering: there are semaphores for processes... As I see it, there is no reason to enforce mutual exclusion between threads, because they come under the same process and are expected to cooperate, not compete (correct me if I'm wrong please)
You're correct. Often programmers use multiple threads to make sure their process can run faster on multiple CPUs (e.g. a process with 4 threads running on a computer with 4 CPUs, where all 4 threads can be running at the same time).
/me puzzled here. Even if multiple threads have been spawned within a single process, that does not mean they couldn't need some mutual exclusion. Think of all of them running on different CPUs and suddenly wanting to write an entry in the (shared) log, for instance... or many worker threads trying to acquire a job from a queue. If you don't prevent all the others from accessing the queue/log state while one is manipulating it, you're quickly in trouble.
Or have I missed something in the discussion's context? ...
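The worker-queue case can be sketched with a plain pthread mutex; all names here (`worker`, `run_pool`, the fixed-size queue) are made up for the example:

```c
#include <pthread.h>

static int queue[100];
static int q_len;
static pthread_mutex_t q_lock = PTHREAD_MUTEX_INITIALIZER;

/* Each worker appends 25 items. Without the mutex, two workers on
   different CPUs could read the same q_len and overwrite each
   other's slot. */
static void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < 25; i++) {
        pthread_mutex_lock(&q_lock);
        queue[q_len++] = i;          /* the critical section */
        pthread_mutex_unlock(&q_lock);
    }
    return NULL;
}

/* Run 4 workers; returns the final queue length. */
int run_pool(void) {
    pthread_t t[4];
    q_len = 0;
    for (int i = 0; i < 4; i++)
        pthread_create(&t[i], NULL, worker, NULL);
    for (int i = 0; i < 4; i++)
        pthread_join(t[i], NULL);
    return q_len;
}
```

If the lock/unlock pair is removed, the result is only reliably 100 by luck; the interleaved increments of q_len are exactly the race being described.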
Posted: Wed Oct 24, 2007 10:40 am
by EliteLegend
I guess I should've mentioned the whole thing... When I assumed that there needn't be any mutual exclusion between threads created by the same process, I implied (in the background) that the programmer has to take care of all of that in his own code, because he has total control over his threads... But if the programmer doesn't address this, then I guess mutual exclusion is required in both cases... Please correct me if my implication is wrong...
Posted: Thu Oct 25, 2007 2:07 am
by Pype.Clicker
EliteLegend wrote:I implied that the programmer has to take care of all of that with his code because he has total control over his threads...
You may have complete control over what your threads are doing, but you typically have no control over them while they are running on another CPU. Efficiently implementing "suspend this thread and notify thread X that it should resume me when the resource is released" is non-trivial (at best) in a userland-only implementation; i.e. "locking" becomes a syscall, and "resume Y" as well.
The best you could achieve is busy-looping on a spinlock, expecting that the thread holding the resource can do what it has to do quickly enough that the spinning doesn't become a performance killer -- a valid assumption on a multi-CPU system, but completely inappropriate on single-core systems.
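For completeness, a spinlock of the kind described is just a busy loop on an atomic test-and-set. A minimal sketch using C11 atomics (the names are illustrative):

```c
#include <pthread.h>
#include <stdatomic.h>

static atomic_flag spin = ATOMIC_FLAG_INIT;
static long counter;

static void spin_lock(void) {
    /* Busy-loop until the flag was previously clear. This burns CPU,
       which only pays off if the holder is brief AND is actually
       running on another core -- on a single core we'd just spin
       until the scheduler preempts us. */
    while (atomic_flag_test_and_set_explicit(&spin, memory_order_acquire))
        ;
}

static void spin_unlock(void) {
    atomic_flag_clear_explicit(&spin, memory_order_release);
}

static void *bump(void *arg) {
    (void)arg;
    for (int i = 0; i < 10000; i++) {
        spin_lock();
        counter++;                  /* protected increment */
        spin_unlock();
    }
    return NULL;
}

/* Two threads, 10000 increments each; returns the final counter. */
long spinlock_demo(void) {
    pthread_t a, b;
    counter = 0;
    pthread_create(&a, NULL, bump, NULL);
    pthread_create(&b, NULL, bump, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    return counter;
}
```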