about multithreading

Discussions on more advanced topics such as monolithic vs micro-kernels, transactional memory models, and paging vs segmentation should go here. Use this forum to expand and improve the wiki!
jtlb
Member
Posts: 29
Joined: Sat May 12, 2007 8:24 am

about multithreading

Post by jtlb »

Today's applications make more and more use of multiple threads to take advantage of the multicore revolution. The problem, in my opinion, is that in order to switch between threads, one has to enter supervisor mode, reload the context, and return to user mode. That adds extra overhead.

So, what about letting each application deal with its own threads and do the context switches itself (possibly via a dedicated library)? We could of course allow multiple "instances" of the same application to run simultaneously, but one per core would be the maximum.

What do you think of this?
Love4Boobies
Member
Posts: 2111
Joined: Fri Mar 07, 2008 5:36 pm
Location: Bucharest, Romania

Re: about multithreading

Post by Love4Boobies »

You are basically talking about user threads instead of kernel threads. That has problems - if one thread makes a blocking call, all threads in the process get blocked, since the kernel has no knowledge of the user-level threads. A more exotic approach is scheduler activations, a technique for multiplexing user threads (or fibres) on top of kernel threads. This method isn't without its problems, though - most operating systems that used it (such as FreeBSD) have gone back to kernel threads. Google around...
"Computers in the future may weigh no more than 1.5 tons.", Popular Mechanics (1949)
[ Project UDI ]
jtlb
Member
Posts: 29
Joined: Sat May 12, 2007 8:24 am

Re: about multithreading

Post by jtlb »

Thanks for your answer,

Actually, this idea is part of a bigger one: calls to the kernel are slow, so we should make as few of them as possible. One way of achieving this is to make calls asynchronous, via a message bus for example.

Let's try a simple example:
An application may require the system to read data from a file. Since this needs a call to the kernel, it will be slow. There are two options. The first is to put the request in a queue that resides in, and is managed from, that application's own address space. Once that is done, the application can 1) notify the kernel if the queue is full, the request is urgent, or there are no more threads to run, or 2) otherwise transfer control to another thread.
When the kernel next gets control (on a timer interrupt, for example), it will process the queue and dispatch the messages.
Finally, when the kernel has finished executing a request, it will put a message in the application's queue to tell it that it can release the waiting thread.

This is my idea
dr_evil
Member
Posts: 34
Joined: Mon Dec 20, 2004 12:00 am

Re: about multithreading

Post by dr_evil »

Love4Boobies wrote:most operating systems that used it (such as FreeBSD) have gone back to kernel threads. Google around...
The guys from Microsoft are going in the other direction:
http://channel9.msdn.com/shows/Going+De ... duler-UMS/

The plan is to add a user-mode scheduler to Windows 7 or the version after that.
jal
Member
Posts: 1385
Joined: Wed Oct 31, 2007 9:09 am

Re: about multithreading

Post by jal »

jtlb wrote:an application may require the system to read data from a file. Since it needs a call to the kernel it will be slow.
Reading data from a hard disk (or a solid-state disk, or even a RAM disk) will always be much slower than the context-switching overhead.


JAL
abachler
Member
Posts: 33
Joined: Thu Jan 15, 2009 2:21 pm

Re: about multithreading

Post by abachler »

I think the OP is talking about preemptive vs non-preemptive task switching. The problem with non-preemptive switching is that a poorly written program can deadlock the entire operating system. Most modern operating systems implement preemptive multitasking, which forcibly takes control of the processor away from a thread so that other threads also run; an errant thread cannot deadlock the system.

Most blocking system calls will cause a task switch anyway: the calling thread is suspended until the I/O request completes and is only restarted when the data is available. This can degrade application performance, which is why Windows includes asynchronous I/O calls in its API, where the system call returns control to the calling thread immediately after scheduling the I/O request. The kernel still preempts the thread eventually, just not on this call, which lets the application continue processing other data while the I/O request is serviced. For example, a request to the hard drive using DMA can complete without interaction from the kernel once the command has been sent to the drive; there is no reason for the kernel to sit and wait for completion when it could execute another thread.

The synchronous versions of the API calls let Windows execute a different thread, effectively relinquishing the remainder of the calling thread's time slice. The asynchronous versions return control to the calling thread, at least until the end of its time slice. This can improve overall system throughput, because locality of data (i.e. cache relevance) can be assumed to be higher for the current thread than for a different thread that may not have run for several milliseconds.