synchronous ipc
I'm just about to design and write my IPC code. I like the idea of synchronous message passing because it solves some problems for me, but it also creates some.
Will there be problems, or things I'm making more complicated, with that (apart from the problem below)?
One problem created by synchronous message passing is that I want to use this message passing for calling my drivers. So when an IRQ occurs, I go through a list and send an "irq" message to the drivers. The problem here is that the currently running thread would block until all the drivers have received the message. At the moment I can't imagine a good way to solve this. This would be a good reason for asynchronous message passing, but I want to try to go the synchronous way.
Another thing: which is faster for multi-CPU systems, asynchronous or synchronous?
Re: synchronous ipc
Hi,
FlashBurn wrote: One problem created by synchronous message passing is that I want to use this message passing for calling my drivers. So when an IRQ occurs, I go through a list and send an "irq" message to the drivers. The problem here is that the currently running thread would block until all the drivers have received the message. At the moment I can't imagine a good way to solve this.

It's worse than that. When a process sends a message to another process, the sender has to wait until the receiver is ready to receive it. If a device driver is running (and not waiting for a message) and is interrupted by an IRQ handler that sends a message to that driver, then the IRQ handler won't be able to send the message until the driver reaches the "waiting for message" state, and the driver can't reach that state until the IRQ handler returns. Basically it's a deadlock (A waits for B while B waits for A, and the computer locks up).
You could disable IRQs when a device driver is running, but that doesn't fix the problem. For example: a GUI thread is running and a keyboard IRQ occurs; the IRQ handler sends a message to the keyboard driver and the scheduler switches to the keyboard driver (and disables IRQs); then the keyboard driver tries to send a "key pressed" message to the GUI thread, but the GUI thread isn't waiting for a message, so the keyboard driver can't send it. Instead the keyboard driver blocks (and the scheduler runs other threads until the keyboard driver can send its message to the GUI thread). Now another keyboard IRQ occurs and the keyboard driver isn't in the "waiting for a message" state (it's still waiting to send the previous message), so you're screwed again.
The only way I can think of to fix this problem (while still using synchronous messages) is to spawn a new device driver thread each time an IRQ is sent to a device driver. That would have fairly high overhead though - it's probably better to use a callback or something (and make sure the IRQ handling code in device drivers is re-entrant) instead of using synchronous messages.
Of course this is only the beginning of the problem. For everything you do with synchronous messages you'll need to be careful of deadlocks; and you'll probably end up with a strict hierarchy of tasks, where tasks at the bottom can't send messages to tasks higher up without risking deadlock (and can only reply to messages from tasks higher up).
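That send/receive rendezvous deadlock is easy to reproduce in miniature. Below is a toy model in Python (ordinary threads and queues standing in for kernel tasks; `SyncPort` and both thread bodies are invented for illustration, not anyone's actual kernel code): `send` blocks until the peer has taken the message, so two tasks that each send before receiving lock up exactly as described.

```python
import queue
import threading

class SyncPort:
    """Rendezvous port: send() blocks until the receiver has taken
    the message (the synchronous semantics discussed in this thread)."""
    def __init__(self):
        self._q = queue.Queue()

    def send(self, msg):
        ack = threading.Event()
        self._q.put((msg, ack))
        ack.wait()                        # block until recv() takes it

    def recv(self):
        msg, ack = self._q.get()
        ack.set()
        return msg

a, b = SyncPort(), SyncPort()

# Each task sends first and would only then receive: the classic
# "A waits for B while B waits for A".
t1 = threading.Thread(target=lambda: (b.send("irq"), a.recv()), daemon=True)
t2 = threading.Thread(target=lambda: (a.send("key"), b.recv()), daemon=True)
t1.start(); t2.start()
t1.join(timeout=0.2); t2.join(timeout=0.2)
print("deadlocked:", t1.is_alive() and t2.is_alive())
```

Both threads are still stuck in `send` after the timeout: neither can reach its `recv`, so neither `send` can complete.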
FlashBurn wrote: Another thing: which is faster for multi-CPU systems, asynchronous or synchronous?

The advantage of synchronous messaging is that it's easier to write code that uses it, because the messaging behaves like an inter-process function call, and function calls are easy to use. The disadvantage of asynchronous messaging is that it's harder to write code for, because the messaging behaves more like networking/sockets. Synchronous messaging can also be done with less overhead (no message queues to manage).
However, for synchronous messaging; when you send a message your thread blocks and the other thread starts running, and when the other thread sends a reply back it blocks and your thread starts running again. This makes it hard to have more than one thread running at a time (it doesn't scale very well, and performance on multi-CPU will probably suck because of it). For asynchronous messaging; the sender and receiver can both be running on different CPUs and both might never block. Basically, asynchronous messaging scales easily on multi-CPU (and is even used for large clusters of computers because it scales so well); and as an added bonus it avoids all the deadlock hassles.
What this means is that for single-CPU (e.g. embedded systems and desktop machines from last century) synchronous messaging is a really good idea - you get the "easier to write code" advantage and scalability doesn't matter. For servers and modern desktop machines (and some embedded devices now, and for distributed systems) synchronous messaging is probably a bad idea, and it'll get progressively worse as "CPUs per computer" increases.
Cheers,
Brendan
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.
Re: synchronous ipc
Brendan wrote: You could disable IRQs when a device driver is running, but that doesn't fix the problem. For example: a GUI thread is running and a keyboard IRQ occurs; the IRQ handler sends a message to the keyboard driver and the scheduler switches to the keyboard driver (and disables IRQs); then the keyboard driver tries to send a "key pressed" message to the GUI thread, but the GUI thread isn't waiting for a message, so the keyboard driver can't send it.

This is a problem which won't occur the way I write the drivers (or if the drivers use my IPC design), because a driver should use only one thread for the IRQ: it gets the data out of the device and puts it in some buffer, and another thread then works on the data and sends it to the server processes. Also, in my OS you will only use messages for setting up my shared-memory message system (and maybe for things like RPC with only a small number of parameters).
The thing I like about synchronous messages is that I can easily create something like a "call" (RPC), and I think this is what messages are used for: you send a request to a server (and wait for the reply), and the only work the server does is waiting for a request, working on it, and sending a reply. If I used asynchronous messages, I would need a way to do the send-and-wait-for-reply thing anyway, and the thread would still block.
So maybe I should have a kernel thread whose only purpose is to make such "RPC" calls to the IRQ threads of the drivers, and then maybe one kernel thread per IRQ?!
Re: synchronous ipc
FlashBurn wrote: Another thing: which is faster for multi-CPU systems, asynchronous or synchronous?

As Brendan said above, asynchronous is much more scalable, to the point that it might even be faster on a 2-core system (depending on implementation details).
Another advantage is that it's fairly easy to implement synchronous messages on top of asynchronous messages, whereas the other way around (doing it properly, that is, not just presenting an appropriate interface) is impossible.
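As a rough sketch of that layering (Python queues standing in for the kernel's asynchronous mailboxes; `call`, `async_send` and the server loop are invented names, not any real kernel's API): a synchronous call is just an asynchronous send immediately followed by a blocking receive on a private reply port.

```python
import queue
import threading

# Asynchronous primitives: one mailbox per endpoint; send never blocks.
mailbox = {"server": queue.Queue()}

def async_send(dest, msg):
    mailbox[dest].put(msg)               # returns immediately

def recv(port):
    return port.get()                    # blocks until a message arrives

def call(dest, payload):
    """The synchronous layer: send, then block on a fresh reply port."""
    reply_port = queue.Queue()
    async_send(dest, (payload, reply_port))
    return recv(reply_port)

def server():
    while True:
        payload, reply_port = recv(mailbox["server"])
        reply_port.put(payload.upper())  # do the work, then reply

threading.Thread(target=server, daemon=True).start()
print(call("server", "ping"))            # behaves like a function call
```

From the caller's point of view `call` looks exactly like a blocking RPC, yet nothing in the kernel-side primitives ever forces the sender to block.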
Re: synchronous ipc
So maybe a mix of both would be good. I will see if I can make my functions so that when the sender wants to wait for a reply it blocks, and so that I don't copy the message to extra memory but instead put the thread into the message queue of the port.
I'll see if I can come up with a system like that.
But what do you think of IRQs just sending messages to the drivers, which then do the work and send messages back if it was their device that caused the IRQ? A related question: is it better to mask the IRQ and send the EOI immediately, unmasking once a driver says it was its device, or to hold the EOI until a driver says it was its device?
So thanks for the input!
Re: synchronous ipc
FlashBurn wrote: So maybe a mix of both would be good. I will try to make my functions so that if the sender wants to wait for a reply it will block.

As I said, you can implement synchronous messages as a simple layer on top of asynchronous ones. (Notice that 'send' doesn't need two versions, because you can just follow it with a synchronous receive in order to block, though one syscall that does both might be more efficient, since it saves two mode switches.)

FlashBurn wrote: But what do you think of IRQs just sending messages to drivers, which then do the work and send messages back if it was their device that caused the IRQ?

That's a good idea, but the device driver has to be able to set up the IRQ handler, because it needs to gather device-specific data to send with the message (the data can't be retrieved later, when another event may have happened and changed it).
Re: synchronous ipc
Selenic wrote: As I said, you can implement synchronous messages as a simple layer on top of asynchronous ones (notice that 'send' doesn't need two versions, because you can just follow it with a synchronous receive in order to block, though one syscall to do both might be more efficient).

I just finished my message system. I have a function where you can do a send and then a receive within one syscall. The problem with your variant is: what happens if a thread sends a message and wants to wait for the reply, but another thread sends it a message first, and the waiting thread mistakes that for the reply? I solved it so that you can say you want to wait for a reply to the message you just sent, and you can also say you want to send a message and receive within one syscall, either waiting until a message arrives or not.
The only point I don't like at the moment is that you need a port to send a message, because the receiver needs a port to send its reply to.
I also made it so you can specify whether the owner of a port is a thread or a task, whether only the owner (or a given thread or task) can read the port, and whether anyone, a specific thread, or a specific task can write to it. I hope to achieve some security that way.
Selenic wrote: That's a good idea, but the device driver has to be able to set up the IRQ handler, because it needs to gather device-specific data to send with the message.

I just decided not to do it the way I wanted, but the way you describe it. So I will have some code which runs in kernel space (the IRQ handler) that checks whether a device just raised the IRQ and then sends the data over IPC to the driver (not as a message, but via shared memory). If I did it the way I first thought, I would also need the scheduler in user space, and that is something I don't like because it produces too many headaches.
Re: synchronous ipc
FlashBurn wrote: The problem with your variant is what happens if the thread sends a message and wants to wait for the reply, but another thread sends a message and now the waiting thread thinks this is the reply.

Surely that's a problem with the sendrec command too, because there's no way of identifying what's a reply to what? Anything which can do the reply identification on one side of the kernel/user-space divide can do it on the other, too.
Look at how the internet protocol stack works: at the physical layer, it's "Put these bits on the wire", at the lowest software level, it's "send this data here" and then extra levels add further structure.
FlashBurn wrote: The only point which I don't like at the moment is that you need to have a port to send a message, because I need a port if the receiver wants to make a reply.

The only other way I could see of doing this would be "this is a reply to that", which provides no way of initiating communication between arbitrary processes.
Re: synchronous ipc
I narrowed the reply problem down so that the waiting thread is only woken up if a message comes from the port the thread sent the request to. This can't guarantee that the waiting thread will get the right reply, but it is almost perfect. The only purpose of this is that a client wants to ask a server for something, and the server only works on the jobs it gets via requests, so it is impossible for a client to get a reply to the wrong request.
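That wake-only-for-the-right-port rule can be sketched as follows (a guess at the mechanism in Python, not FlashBurn's actual code; `Port` and `recv_from` are invented names): every message is tagged with the port it came from, and a thread waiting for a reply skips anything that arrived from some other port.

```python
import queue

class Port:
    """Each message carries its source port; a thread waiting for a
    reply only accepts messages from the port it sent its request to."""
    def __init__(self, name):
        self.name = name
        self._q = queue.Queue()

    def send(self, dest, msg):
        dest._q.put((self, msg))          # tag the message with its source

    def recv_from(self, source):
        while True:
            src, msg = self._q.get()
            if src is source:
                return msg
            self._q.put((src, msg))       # not the awaited reply: requeue

client, server, other = Port("client"), Port("server"), Port("other")
other.send(client, "unrelated")           # arrives before the real reply
server.send(client, "the reply")
print(client.recv_from(server))           # skips the unrelated message
```

The requeue keeps unrelated messages for a later plain receive; as noted above, this filters by sender port only, so it matches the right *port*, not a specific request.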
- Owen
Re: synchronous ipc
Brendan, you say that spawning a thread for each IRQ is inefficient, and I agree with you there.
But there is a much better alternative: the driver reserves a thread for each interrupt source. That thread listens only on that interrupt's virtual message port, and the kernel masks the interrupt while the driver is not waiting for it.
To ensure that the driver completes quickly, you could allow drivers to create real-time (i.e. non-preemptible) threads.
As for scalability: if your server spawns a thread per core, then it should scale equally well either way (particularly if you have the kernel manage event ports for your equivalent of the receive call).
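Owen's scheme might look roughly like this (a Python simulation with invented names; note that real interrupt controllers latch a masked IRQ as pending rather than dropping it, which this toy model doesn't do):

```python
import queue
import threading
import time

class IrqPort:
    """Toy model of the scheme above: the IRQ line stays masked except
    while the driver's reserved thread is parked in wait_for_irq().
    (Real hardware latches a masked IRQ as pending; this model just
    drops it, which is a simplification.)"""
    def __init__(self):
        self._pending = queue.Queue(maxsize=1)
        self.masked = True                # masked until a driver waits

    def wait_for_irq(self):
        self.masked = False               # unmask: driver is listening now
        data = self._pending.get()        # park until the IRQ fires
        self.masked = True                # kernel re-masks on delivery
        return data

    def raise_irq(self, data):
        if not self.masked:               # masked IRQs are not delivered
            self._pending.put(data)

handled = []
port = IrqPort()

def driver_thread():
    for _ in range(2):                    # one reserved thread per source
        handled.append(port.wait_for_irq())   # handle, then wait again

t = threading.Thread(target=driver_thread, daemon=True)
t.start()
time.sleep(0.1)                           # let the driver reach its receive
port.raise_irq("irq#1")
time.sleep(0.1)
port.raise_irq("irq#2")
t.join(timeout=1)
print(handled)
```

Because the line is masked whenever the driver thread is busy, the driver can never be interrupted by a message it isn't ready to receive, which is exactly what defuses the deadlock Brendan described.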
- AndrewAPrice
Re: synchronous ipc
I apologize for the necroposting, it's been a while since I've been on these forums.
I actually really like synchronous IPC as it simplifies messaging. You can treat calling another process like a function call (as Brendan said), which means calls can also return messages (for example, DoIPC(VFS_PID, VFS_GetFileSize, path); returns an integer). Synchronous IPC is also easy to implement and can be extremely fast (no need to store the message in the kernel).
However, I don't like having synchronous IPC as the only option supported by a kernel. If an essential service tries sending a message to a user program that locks up, then the service could hang too (unless you implement timeouts, in which case there will still be a performance issue).
You can help hide this problem (but not eliminate it) by building an asynchronous API on top of synchronous IPC. You could do this by making it part of your OS's standard that each program must have a messaging thread separate from the main thread, whose sole purpose is to receive messages and push them onto a stack of things to be done. This is a bad idea, though, since a user program can still potentially slow down a system service if its messaging thread either locks up or does too much processing.
I would much rather do the opposite: implement just asynchronous IPC in your kernel (to keep the kernel small, unless you're taking the monolithic approach) and let a user library provide a synchronous interface for you.
I'm not actually supporting messages at all in the traditional sense in my kernel. Instead I'm passing around interfaces, which can be called either asynchronously (only supporting 'void' as the return type, returning instantly) or synchronously, which spawns a thread in the receiver (using a thread pool to minimize overhead) at the interface's entry point. And you can even pass references to types/arrays efficiently as parameters, since I'm using global garbage collection (and I'm crazy).
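A user-space approximation of that interface model (Python's thread pool standing in for the kernel's; `Interface`, `sync_call` and the toy `Vfs` are all invented for illustration): an asynchronous call returns immediately with no result, while a synchronous call runs in a receiver-side pool thread and hands back the return value.

```python
from concurrent.futures import ThreadPoolExecutor

class Interface:
    """Calls on an interface run on the receiver's thread pool,
    mimicking 'spawn a thread in the receiver' with pooled threads."""
    _pool = ThreadPoolExecutor(max_workers=4)

    def async_call(self, fn, *args):
        self._pool.submit(fn, *args)             # 'void': returns instantly

    def sync_call(self, fn, *args):
        return self._pool.submit(fn, *args).result()   # block for result

class Vfs(Interface):
    def get_file_size(self, path):
        return {"/boot/kernel": 123456}.get(path, 0)   # toy lookup table

vfs = Vfs()
print(vfs.sync_call(vfs.get_file_size, "/boot/kernel"))
vfs.async_call(print, "logged asynchronously")         # fire and forget
```

The pool bounds the overhead of "a thread per call", which is the same trade-off Owen raised against spawning a fresh thread for every IRQ.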
My OS is Perception.
- Owen
Re: synchronous ipc
L4 has a simple solution to the "hang a server" problem: the server can specify that the send is non-blocking. If the receiver is not waiting at that exact moment, the message simply doesn't get sent. A process may need a dedicated message-handling thread for unsolicited messages, but for "calls" it can do a send followed by a receive in a single syscall.
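That non-blocking send can be modelled like this (a sketch with invented names; in classic L4 it is expressed as a send timeout of zero): the send succeeds only if a receiver is already parked in its receive, so a misbehaving client can never hang the server.

```python
import queue
import threading
import time

class Endpoint:
    """Toy model of a non-blocking send: delivery succeeds only
    if a receiver is already parked in recv()."""
    def __init__(self):
        self._q = queue.Queue()
        self._waiters = threading.Semaphore(0)   # count of parked receivers

    def recv(self):
        self._waiters.release()                  # advertise: I'm waiting
        return self._q.get()

    def try_send(self, msg):
        # Claim a parked receiver without blocking; if there is none,
        # report failure instead of hanging the sender.
        if self._waiters.acquire(blocking=False):
            self._q.put(msg)
            return True
        return False

ep = Endpoint()
print(ep.try_send("update"))      # no receiver yet: send is refused

got = []
t = threading.Thread(target=lambda: got.append(ep.recv()))
t.start()
time.sleep(0.1)                   # let the receiver park itself
print(ep.try_send("update"))      # now the send is accepted
t.join()
print(got)
```

The server stays in control either way: it learns immediately whether delivery happened and can drop or retry on its own terms instead of blocking on the client.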