Toy idea: IPC as interrupts
Many microkernels implement hardware interrupts as messages, so after reading the words "L4.Sec models interrupts as messages from 'hardware' threads", a weird idea crossed my mind. In a way, it's not even so different from those Coyotos FCRBs.
So what if, instead of modelling interrupts as messages, one modelled messages as interrupts? I only got this idea about two minutes ago, but it seems to allow asynchronous transfers, with little global policy and no kernel buffering... and, well, I wonder why nobody has mentioned this...
The idea goes like this:
Every thread must have a receive buffer defined. It must be long enough to contain the longest message the thread is willing to receive (otherwise we truncate), and it must be accessible to the thread, but that's pretty much it.
When a thread gets a message, the message is copied to that message area, and the thread is interrupted (thrown into something like an interrupt/signal handler). This handler can check whether the message makes sense, discard it if not, and set a new buffer if required. Setting a new buffer and returning from the interrupt combine nicely into a single syscall, after which the thread is unblocked if it was waiting for a message.
If the process's malloc is interrupt-safe, then a new buffer can be allocated using normal malloc, which is nice. And if one combines this with fast userspace semaphores for queue management in userspace, one can avoid a syscall for receives that need not block. A simple server could even serve the whole request directly from the handler, while others could serve some requests directly and queue the rest.
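To make that concrete, here's a rough sketch of the receive side in C. Everything here is made up (set_recv_buffer, msg_return, the queue helpers are hypothetical names); it only pins down the scheme, it's not a real API.

Code:

/* Rough sketch of the receive side. All kernel calls and helpers here
 * are made-up names; they only illustrate the scheme. */

#define MAX_MSG 256

struct msg { unsigned len; char data[MAX_MSG]; };

/* Hypothetical kernel interface. */
void set_recv_buffer(struct msg *buf, void (*handler)(struct msg *));
void msg_return(struct msg *new_buf); /* set new buffer + return from interrupt */

/* Hypothetical interrupt-safe userspace helpers. */
struct msg *alloc_buffer(void);            /* interrupt-safe malloc wrapper */
int message_makes_sense(const struct msg *m);
void enqueue(struct msg *m);               /* guarded by a fast userspace semaphore */
struct msg *dequeue(void);
void serve(struct msg *m);

/* Runs in "message interrupt" context whenever a message arrives. */
void on_message(struct msg *m)
{
    if (message_makes_sense(m)) {
        enqueue(m);                  /* or serve simple requests right here */
        msg_return(alloc_buffer());  /* one syscall: new buffer + return    */
    } else {
        msg_return(m);               /* discard: just reuse the same buffer */
    }
}

int main(void)
{
    set_recv_buffer(alloc_buffer(), on_message);
    for (;;)
        serve(dequeue());            /* normal execution drains the queue */
}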
Typical-case server invocation cost (at least with a higher-priority server) would look like this:
1. client syscalls to "send"
2. message is copied to the receiver's buffer
3. receive thread is interrupted and scheduled
4. receiver either serves the message (simple case) or queues it (sets a new buffer and returns from the handler)
5. in the non-simple case, the receiver is then scheduled for normal execution
With some additional complexity, one could even avoid the interruption when the receiver is already waiting for a message, only doing the extra work when queueing is actually required.
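Kernel-side, that fast path might look something like this (again purely hypothetical names, just to pin down the idea):

Code:

/* Hypothetical kernel-side delivery with a fast path for blocked receivers. */
void deliver(struct thread *recv, struct msg *m)
{
    copy_to_recv_buffer(recv, m);        /* message lands in the buffer either way */
    if (recv->state == BLOCKED_IN_RECEIVE)
        unblock(recv);                   /* fast path: no interruption needed */
    else
        raise_msg_interrupt(recv);       /* kick the handler so it can queue it */
}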
Now queueing policy moves out of the kernel (should we queue? should we prioritize? how much to queue?), the transfer is atomic even if we use sender blocking with a timeout, it should be possible to avoid multiple copies in most cases... and so on.
I'd like to emphasize that while the sender would be blocked, and it might time out, by the time the receiver is interrupted the message has already been transferred, so the whole thing is atomic.
Of course, by default this stuff would be handled by the runtime library. Oh, and now the same mechanism can support things like POSIX signals, which are typically hard to do with IPC.
I'm not sure if I'm going to implement this any time soon, but it does seem like an interesting idea.
Thoughts?
Re:Toy idea: IPC as interrupts
mystran wrote: I'd like to emphasize that while the sender would be blocked, and it might time out, by the time the receiver is interrupted the message has already been transferred, so the whole thing is atomic.

So that means the sender is blocked until the receiver has finished its 'message interrupt' handler? Now, if the interrupt handler were executed on the sender's timeslice as well, you would have a good method to avoid DoS attacks: the sender has to pay CPU time for checking whether the message was legal or not, and cannot really starve other clients by flooding the server with nonsense messages. Did I see that right?

Sounds very interesting....

cheers Joe
Re:Toy idea: IPC as interrupts
Hi,

mystran wrote: The idea goes like this: [...] When a thread gets a message, the message is copied to that message area, and the thread is interrupted (thrown into something like an interrupt/signal handler). This handler can check whether the message makes sense, discard it if not, and set a new buffer if required. Setting a new buffer and returning from the interrupt combine nicely into a single syscall, after which the thread is unblocked if it was waiting for a message.

If a high priority thread running in address space A sends a message to a low priority thread in address space B, how does the data get into the receiver's address space, and how many task switches are needed to run the low priority thread's interrupt handler? If the low priority thread's interrupt handler goes into an infinite loop, does it effectively lock up the high priority thread?

Cheers,
Brendan
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.
Re:Toy idea: IPC as interrupts
Joe wrote: So that means the sender is blocked until the receiver has finished its 'message interrupt' handler? [...] Did I see that right?

Well, no. I specifically thought of first transferring the message, then unblocking the sender, then interrupting the receiver.
Also, I'm not sure if I made it clear enough, but my idea is that messaging is asynchronous, so that between a request and a reply both server and client (or even equal peers) can continue normally.
As it is, I'm not terribly concerned about a sender DoS'ing the receiver. Starving other clients isn't that big a problem, since there are basically three cases:

1. Attacker's priority is higher than the other clients' or the server's: it can starve them anyway, just by running an infinite loop.
2. Attacker's priority is lower than the server's, and lower than or equal to the other clients': no DoS possibility, the server will always pre-empt the attacker.
3. Attacker's priority is equal to or lower than the other clients', and equal to the server's: blocking send means I can queue senders, so if somebody tries to flood messages, the FIFO queue will take care of letting the other clients get their messages through.
The remaining problem, then, is an equal-priority thread that can flood messages to a server fast enough that all of the server's time goes into servicing interrupts.
So I guess the right thing to do here (and possibly a good idea anyway) is to always yield to the receiver on send. After this initial yield there is no need for the sender to pay any more, I think.
It is still theoretically possible for a large enough group of senders to time their sends so that each sends at the very end of its time quantum, but I see this as a relatively theoretical threat. The solution would be to give the receiver not only the current time quantum but also the next one. This way, for each message it receives, it would have at least one full quantum to handle it.
In any case, blocking the sender until the interrupt is processed is specifically what I don't want to do. I see a receiver blocking the sender as a much bigger threat than the other way around: I want to avoid the DoS attack where a client sends a request to a server and the server replies, only to get stuck because the client will never finish its receive properly.
I think the server losing a time quantum here is fine (and this only applies if client and server run at the same priority), since in a sense it's "returning" the quantum it got from the sender when it received the request.
This does introduce a limited kind of realtime constraint into the message interrupt handler, but I think that's unavoidable if DoS attacks in both directions (sender->receiver and receiver->sender) are to be avoided while still keeping the atomicity of the transfer.
Pype.Clicker
Re:Toy idea: IPC as interrupts
mystran wrote: So what if, instead of modelling interrupts as messages, one modelled messages as interrupts? [...] Every thread must have a receive buffer defined. [...] When a thread gets a message, the message is copied to that message area, and the thread is interrupted (thrown into something like an interrupt/signal handler).

I once tried to have this in Clicker... Actually, I first tried to have those "interrupting messages" working and then reverted to a more traditional approach (see the wiclicker).
The main trouble with "messages that interrupt threads" is that it makes replies very complex. Kicking a server is easy, but making sure the server received the message, and blocking the thread until that very message has been replied to... really nasty, trust me, partly because you need communication between the interrupt handler and the "basic thread" to make sure you wake it up for the right reason and that the expected results are available.
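Roughly the kind of bookkeeping I mean, as a made-up sketch (none of this is actual Clicker code; all names are invented):

Code:

/* Made-up sketch of the handler/"basic thread" hand-off. */
struct msg { unsigned call_id; /* ... payload ... */ };

struct pending_call {
    unsigned     call_id;    /* which reply the blocked thread expects   */
    volatile int done;       /* set by the handler, tested by the thread */
    struct msg  *reply;
};

/* Message-interrupt context: decide why we are waking the thread up. */
void on_message(struct pending_call *pc, struct msg *m)
{
    if (m->call_id == pc->call_id) {   /* the reply we are blocked on */
        pc->reply = m;
        pc->done  = 1;
        unblock_basic_thread();        /* hypothetical wake-up call   */
    } else {
        stash_for_later(m);            /* unrelated message: queue it */
    }
}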
Re:Toy idea: IPC as interrupts
Pype.Clicker wrote: The main trouble with "messages that interrupt threads" is that it makes replies very complex. [...]

I think I know what you are talking about, but I don't intend to support replies other than providing one-time capabilities for sending them. So if a reply is just a regular message, sent like any other message, then there should be no issue with this, right?
The key for the whole scheme to make sense is that any receiver, be it a server waiting for a request or a client waiting for a reply, is allowed to execute normally while it waits.
So the original plan was to only ever block in two situations: when a thread voluntarily wants to block until it gets the next message, or when a message can't be sent immediately because the receiver hasn't set a new buffer yet and the sender is willing to block.
I don't see either of these being very hard to implement.
Anyway, I thought about this, and it occurred to me that if a DMA-like, interrupt-controlled receive method is used, why not use a similar method for sends too? Then it's no longer necessary to block the thread for sending either. Of course, this only makes sense on SMP, because on a uniprocessor the receiver won't progress while the pending sender is running..
Oh, and you have to remember that since I'm using capabilities with protected payloads, it's more or less safe to put the callback into the reply capability. And yes, I know that an asynchronous callback API is slightly more tedious to use from a language like C, but I am more than willing to ignore that.
But well, I guess this whole deal is complicated enough that I must write a proper document about it or something.
Pype.Clicker
Re:Toy idea: IPC as interrupts
mystran wrote: Anyway, I thought about this, and it occurred to me that if a DMA-like, interrupt-controlled receive method is used, why not use a similar method for sends too? Then it's no longer necessary to block the thread for sending either.

That's typically a problem I underestimated as well. If you just have a call such as "send_stuff(what, who)" and the program can then continue (leaving the kernel responsible for copying "what" into "who"'s buffers whenever that operation becomes possible), then the current process cannot tell how long the data in "what" must remain untouched.
Now, if for some reason I feel like sending some stuff that lives on the stack, the receiver cannot be sure that the data DMA'd by the kernel are actually the data that were supposed to be sent...
Re:Toy idea: IPC as interrupts
Actually, that shouldn't be an issue, since you can still do something like this:
Code:
status_t foobar() {
    char data[1234];
    snprintf(data, 1234, "all %s base belong to %s", "your", "us");
    SendMessage(hConsole, MSG_console_print, data, 1234);
    // Now the message is pending for send..
    do_whatever_else_that_can_be_done_while_sending();
    if (CancelSendIfPending()) {
        // message was still pending; we could retry, but..
        return error;
    } else {
        // the message wasn't pending anymore
        return success;
    }
}
A more useful case would be something like this:

Code:
status_t foobar() {
    char data[1234];
    snprintf(data, 1234, "all %s base belong to %s", "your", "us");
    SendMessage(hConsole, MSG_console_print, data, 1234);
    // Now the message is pending for send..
    while (SendPending() && !user_cancelled)
        usleep(100000); // poll every 0.1 seconds
    if (CancelSendIfPending()) {
        // message was still pending; we could retry, but..
        return error;
    } else {
        // the message wasn't pending anymore
        return success;
    }
}
Yes, you need some method to actually check whether the message is still pending, but other than that there's no issue.

Of course, the above assumes we only allow one send to be pending...
Pype.Clicker
Re:Toy idea: IPC as interrupts
That sounds like a bit much to type, to me... and especially tricky and bug-prone, no? Likely you'd want the compiler to know about this mechanism and do the "wait until all messages using a local buffer are delivered before we exit the code block" for you...
But again, this could be über-confusing for anyone other than Eleanore, right? ...
Re:Toy idea: IPC as interrupts
Well, you can hide that behind a library with a synchronous API without an extra copy, or behind a library with an asynchronous API at the cost of one extra copy.
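For instance, the synchronous wrapper could be as simple as this sketch (reusing the made-up SendMessage/SendPending calls from the earlier snippets; handle_t and WaitSendCompletion are equally hypothetical):

Code:

// Sketch of a synchronous send built on the asynchronous primitive.
// No extra copy: the caller's buffer stays valid until delivery completes.
status_t SendMessageSync(handle_t dst, int tag, const void *data, size_t len) {
    status_t s = SendMessage(dst, tag, data, len);
    if (s != success)
        return s;
    while (SendPending())
        WaitSendCompletion();   // hypothetical: block until the send finishes
    return success;
}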
On the receiver side you don't even need any extra copies, whatever type of API you want to present to the actual application programmer.
Now, there are explicit design goals I've consciously set:

1) If "mortals" can't use the native API without a library wrapper, that's fine as long as the library doesn't introduce too much performance overhead.

2) I'm really designing for languages where first-class functions and tail calls can be taken for granted. Think of something like Standard ML, OCaml, Scheme, Haskell, and... well, Ruby does have call/cc.

So technically my target languages don't even let the user mess with the stack.
Oh, and if you think that was confusing: I was just yesterday thinking about an extension that would let you change the buffer while you're waiting, without losing your position in the wait queue, so you could initially attempt the send from the stack and then, upon exiting the code block, make a copy into the heap (we didn't get the fast case anyway). Such strange functionality would actually be almost trivial to support with the kernel model I'm thinking of.
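Usage could look something like this (SwapSendBuffer being the hypothetical extension call; the rest reuses the made-up names from the earlier snippets, plus the usual stdio/stdlib/string headers):

Code:

status_t send_maybe_from_stack() {
    char data[1234];
    snprintf(data, 1234, "all %s base belong to %s", "your", "us");
    SendMessage(hConsole, MSG_console_print, data, 1234);
    // ... about to exit the code block: the stack buffer is about to die.
    if (SendPending()) {
        // We lost the fast case anyway, so move the payload to the heap
        // without losing our position in the receiver's wait queue.
        char *heap = malloc(1234);
        memcpy(heap, data, 1234);
        SwapSendBuffer(heap);   // hypothetical extension call
        // heap must be freed once the send completes (or is cancelled).
    }
    return success;   // safe: nothing pending points at the stack anymore
}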
In any case, doing any of this stuff with my current kernel wouldn't make much sense without a lot of restructuring, and I don't feel like it right now, so I guess I'm not going to prototype this any time soon.