I have decided to make a microkernel and have been thinking about how the different 'servers' will interact with each other. The problem is: how do I prevent a deadlock from occurring when one server needs something from another, but the other server needs something from the first?
For example: A 'calls' B.
B runs but then 'calls' A.
A does nothing because it is waiting for B.
A deadlock results.
Does anyone have any good links for semaphores/mutexes?
Deadlocks
Re: Deadlocks
The easiest way of dealing with deadlocks is to make sure they never happen. Avoid the possibility of a circular path like that. If you need to have modules A and B that are mutually dependent, split one of them into two threads that don't have those dependencies. In your specific case, making one of them use an asynchronous rather than synchronous call might work.
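For instance, B could answer A with a non-blocking send instead of a blocking call. This is only a sketch - `send_async()`, the server ID and the message type are made-up names, not any particular kernel's API:
Code: Select all
#define SERVER_A              1     /* hypothetical ID for server A */
#define NEED_SOMETHING_FROM_A 42    /* hypothetical message type */

/* Hypothetical non-blocking send: queues the message and returns immediately */
int send_async(int destination, int message_type);

void B_do_work(void) {
    /* ... B's processing ... */

    /* Don't call A synchronously here - A is still blocked waiting for B.
       Post a message instead and carry on; A handles it after its own
       call to B has returned. */
    send_async(SERVER_A, NEED_SOMETHING_FROM_A);
}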
Re: Deadlocks
Hi,
Jules wrote: The easiest way of dealing with deadlocks is to make sure they never happen. Avoid the possibility of a circular path like that. If you need to have modules A and B that are mutually dependent, split one of them into two threads that don't have those dependencies. In your specific case, making one of them use an asynchronous rather than synchronous call might work.
This is correct - either use asynchronous IPC, or use synchronous IPC but write the code so it doesn't call anything else. For example:
- A 'calls' B.
B stores something on a work queue
B returns to A
B gets CPU time
B looks at its work queue
B calls A
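Roughly like this, as a sketch in C - the fixed-size queue, the `OK` status and `call_A()` are made-up names, and the queue-full check is left out:
Code: Select all
#define QUEUE_SIZE 64
#define OK 0

void call_A(int request);                     /* hypothetical synchronous call to A */

/* Hypothetical work queue inside server B */
static int work_queue[QUEUE_SIZE];
static int queue_head = 0, queue_tail = 0;

/* Called synchronously by A; only records the request and returns immediately */
int B_handle_call_from_A(int request) {
    work_queue[queue_tail] = request;         /* queue-full check omitted */
    queue_tail = (queue_tail + 1) % QUEUE_SIZE;
    return OK;                                /* A unblocks here */
}

/* Run later, when B gets CPU time and is not inside a call from A */
void B_process_work_queue(void) {
    while (queue_head != queue_tail) {
        int request = work_queue[queue_head];
        queue_head = (queue_head + 1) % QUEUE_SIZE;
        call_A(request);                      /* safe now - A is no longer waiting for B */
    }
}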
If possible, I'd also recommend adding code to detect deadlocks. For example, if A synchronously calls B and B is waiting for a synchronous reply, then you could check if B is waiting for a reply from A, or if B is waiting for C which is waiting for A, etc. (i.e. follow the chain of who is waiting for whom to see if anything in that chain is waiting for A). Once you've detected the deadlock you can display an error (including a list of all modules involved).
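A rough sketch of that check - the `waiting_for[]` table and integer server IDs are made up, and the real bookkeeping depends on how your kernel tracks blocked senders:
Code: Select all
#define MAX_SERVERS 32
#define NOT_WAITING (-1)

/* Hypothetical table: waiting_for[x] is the server that server x is blocked on */
static int waiting_for[MAX_SERVERS];

/* Returns 1 if letting 'caller' block on 'callee' would close a cycle */
int would_deadlock(int caller, int callee) {
    int current = callee;

    /* Follow the chain of waiters; it can be at most MAX_SERVERS long */
    for (int hops = 0; hops < MAX_SERVERS && current != NOT_WAITING; hops++) {
        if (current == caller) {
            return 1;               /* the chain leads back to the caller: deadlock */
        }
        current = waiting_for[current];
    }
    return 0;                       /* the chain ends at a server that isn't blocked */
}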
I just use asynchronous IPC for everything. This solves the deadlock problems easily, but makes other things a bit harder because you have no way of knowing what will occur when.
For example:
- A sends FOO to B
A waits to receive a message
B receives FOO from A and starts working on it
B sends BAR to A
B waits to receive a message
A receives BAR from B and starts working on it
A sends BAR_REPLY to B
A waits to receive a message
B receives BAR_REPLY from A and continues working on FOO
B sends FOO_REPLY to A
B waits to receive a message
A receives FOO_REPLY from B
Code: Select all
void main(void) {
    message_t message;    /* message structure - type assumed */

    for(;;) {
        getMessage(&message);
        switch(message.type) {
            case FOO:
                handle_FOO();
                break;
            case BAR_REPLY:
                handle_BAR_REPLY();
                break;
        }
    }
}

void handle_FOO(void) {
    /* Do some processing for FOO */
    sendMessage(BAR);
    store_details_of_FOO_somewhere();
}

void handle_BAR_REPLY(void) {
    try_to_find_details_of_FOO();
    if( details_of_FOO_found ) {
        /* Do some more processing for FOO */
        sendMessage(FOO_REPLY);
    }
}
As you can see, it's not as easy as synchronous IPC because the processing for FOO is split into 2 different parts, which can make following the code hard (especially for larger things). For synchronous IPC the IPC itself could be hidden within header files so that it looks like a normal function call, which makes it much easier to work with in comparison. For example:
Code: Select all
void main(void) {
    message_t message;    /* message structure - type assumed */

    for(;;) {
        getMessage(&message);
        switch(message.type) {
            case FOO:
                handle_FOO();
                break;
        }
    }
}

void handle_FOO(void) {
    /* Do some processing for FOO */
    getResultFromBAR();
    /* Do some more processing for FOO */
    sendMessage(FOO_REPLY);
}
I tend to prefer asynchronous IPC because it can easily handle many requests without unnecessary delays. For example, if 4 different applications send "FOO" to the code above, the asynchronous version can work on the next "FOO" while it waits for "BAR_REPLY" (i.e. multiple requests can be in progress at the same time). For the synchronous version, all subsequent requests must wait for the current request to complete - i.e. if an application takes ages to send "BAR_REPLY" then the code above stalls, and all other applications that are waiting for the code above also stall.
To prevent these unnecessary delays in a synchronous system, programmers tend to use more threads (e.g. one thread per request, or a pool of worker threads with work queues), which complicates things again - you end up needing re-entrancy protection, and possible deadlocks between threads belonging to the same process.
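The usual shape of that worker-pool idea looks something like the sketch below - generic pthreads code, not part of the example above; `handle_request()` and the fixed-size queue are made up, and the shared queue is exactly the kind of thing that needs the locking mentioned:
Code: Select all
#include <pthread.h>

#define NUM_WORKERS 4
#define QUEUE_SIZE  64

/* Hypothetical queue of pending requests, shared by all worker threads */
static int request_queue[QUEUE_SIZE];
static int head = 0, tail = 0;
static pthread_mutex_t queue_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  queue_not_empty = PTHREAD_COND_INITIALIZER;

void handle_request(int request);            /* hypothetical request handler */

/* Producer side: called for each incoming synchronous request */
void enqueue_request(int request) {
    pthread_mutex_lock(&queue_lock);
    request_queue[tail] = request;           /* queue-full check omitted */
    tail = (tail + 1) % QUEUE_SIZE;
    pthread_cond_signal(&queue_not_empty);
    pthread_mutex_unlock(&queue_lock);
}

/* Each worker blocks on the queue, so one slow request only stalls one worker */
static void *worker(void *arg) {
    (void)arg;
    for (;;) {
        pthread_mutex_lock(&queue_lock);
        while (head == tail) {
            pthread_cond_wait(&queue_not_empty, &queue_lock);
        }
        int request = request_queue[head];
        head = (head + 1) % QUEUE_SIZE;
        pthread_mutex_unlock(&queue_lock);
        handle_request(request);             /* needs its own re-entrancy protection */
    }
    return NULL;
}

void start_workers(void) {
    for (int i = 0; i < NUM_WORKERS; i++) {
        pthread_t tid;
        pthread_create(&tid, NULL, worker, NULL);
    }
}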
I guess it's a compromise - either the complexities of inter-process deadlock avoidance (and intra-process deadlock avoidance where necessary), or the complexities of asynchronous IPC.
Cheers,
Brendan
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.