Posted: Fri Nov 09, 2007 9:08 pm
Ok, now that I'm totally bored, at work, waiting for some 700GB of data to move itself around, having watched enough videos (both work-safe and not), and with all my friends sleeping so I can't even chat in messenger, I guess it's a good moment to consider describing the IPC design I had in mind..
This will be a long and dry message..
Where shall we begin? Let's begin at the application level, say a web browser, as those are complicated enough to have plugins and all that crap that tends to make or break good APIs. Oh, and there's a server at the other end that needs to serve tons of pages all the time, and it has a fat pipe but the clients don't, and then there's the user who thinks having several pages open in a browser in different tabs is a nice thing.. and we'd like a design that works with this mess without causing the poor little developers to scream "**** THIS ****! I'm going to start tomorrow in a new job as a garbage person." (you can't call them men these days, even if they are almost all men anyway...)
For such an application (starting with the client side), ultimately the interesting part is handling events. Those little weenies (or brownies? no wait, brownies get sent by the server to the client in order to have kinky sessions) tell the browser all kinds of interesting things like.. mm.. that the user has returned from the pub and is smashing the keyboard again, or that the system has lost what was supposed to be on screen because the screensaver has shaved all the graphics memory.. or that the other thread thinks we should react to the little picture that finally completed downloading.
It should also be able to send the weenies to the flash-o-advertisement, so that the flash-o-advertisement can run some Commando-Action scripts in order to make the drunk buy something he doesn't need. And therefore the browser has arranged for the flash-o-advertisement to have a window that it can do whatever it wants with, and has agreed to have a message loop that sends stuff to the flash-o-advertisement's window.
Meanwhile (not in Sweden) the server is totally busy, and just because some browser can't fit more data in its cute little hole (or was I gonna say pipe?) right now, the server has no time to wait, so it wants to go give other browsers something to consume, and wants the local networking subsystem to notify it when the cute little hole (uum.. pipe) gets a little less full, so we can repeat.
So in order to satisfy the weenie-sending dreams of the programs running on the system, we want to have this asynchronous weenie-sending system. But because we don't want to keep those things in the sacred kernel any longer than we have to (and definitely not agree to queue unlimited amounts of them), we'll need some arrangements. Back in the days when computers were really slow, things like printers, or even the users smashing the keyboards (totally smashed themselves by this time of day), were considered fast, and hence there was not much time for the computer to do anything other than interact with those devices when it was time to interact. But then computers got faster, and people got less patient, and they thought it's stupid to have the computer waiting for a device when the user is waiting for the computer. So they had computers start doing other stuff, and had the device interrupt the computer. So now they could use the computer, until they got interrupted by the device also known as the girlfriend (or wife, depending on whether you were foolish enough to accept the 'upgrade').
So, we wanna do something similar: have the kernel interrupt the process when a message arrives. The process can then queue it someplace safe to wait for processing, if its wife wants it to eat first. Or whatever.
So basically, here comes the blueprint: have the kernel call into the process kinda like a signal handler when a message arrives. Actually, exactly like a signal handler, since you can put the message into some pre-designated box first. The process can then put the message into its own queues, which it can malloc if it wants, but that's not the kernel's business. Because it was an interrupt, the process then goes on to do whatever it was doing, and maybe eventually processes the message. Kinda like my boss. Except the event probably spends less time waiting in the queue than my mail does in my boss's mind. It probably also has less chance of getting forgotten.
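To make that a bit more concrete, here's roughly what the receiving side could look like, as a plain C sketch. Every name in it (ipc_msg, ipc_notify_handler and friends) is made up for illustration, and the actual kernel upcall is hand-waved; the point is just "the handler moves the message into the process's own malloc'd queue and returns".

    #include <stdlib.h>
    #include <string.h>

    #define IPC_MSG_MAX 256

    /* The pre-designated box the kernel copies an incoming message into.
     * The process registers its address once, at startup. */
    struct ipc_msg {
        unsigned sender;
        unsigned len;
        char     data[IPC_MSG_MAX];
    };

    static struct ipc_msg inbox;                /* kernel writes here */

    /* The process's own queue: plain malloc'd list nodes,
     * none of the kernel's business. */
    struct queued_msg {
        struct queued_msg *next;
        struct ipc_msg     msg;
    };

    static struct queued_msg *queue_head, *queue_tail;

    /* Called by the kernel like a signal handler when a message has landed
     * in 'inbox'. It just moves the message somewhere safe and returns;
     * the event loop gets to it whenever it gets to it. */
    void ipc_notify_handler(void)
    {
        struct queued_msg *node = malloc(sizeof *node);
        if (!node)
            return;                             /* out of memory: drop it */
        memcpy(&node->msg, &inbox, sizeof inbox);
        node->next = NULL;
        if (queue_tail)
            queue_tail->next = node;
        else
            queue_head = node;
        queue_tail = node;
    }

(Calling malloc from an interrupt-style handler is of course only ok if the allocator is written to tolerate that; preallocated nodes work just as well.)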
Anyway... mm... yeah.. signal handlers, telling "you've got new mail" so the process can interrupt whatever it was doing in order to move the message someplace safe, so its limited inbox quota doesn't get exhausted, the kernel need not figure out how to malloc into the process address space, and life is simple. We can still use a larger inbox area, and a few pointers in the kernel, so that we can deliver a few more messages before we start losing them, in case we can't interrupt the process for some reason.
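And the "larger inbox" part, the way I picture it: a small ring of slots shared with the kernel, a head index the kernel advances as it delivers, and a tail index the handler drains up to. Layout and names are, again, just invented for the sketch:

    /* (same struct ipc_msg as in the previous sketch) */
    struct ipc_msg { unsigned sender, len; char data[256]; };

    #define INBOX_SLOTS 16      /* small and fixed: when it's full, sends fail
                                   (or block) instead of queueing in the kernel */

    struct ipc_inbox {
        volatile unsigned head;             /* advanced by the kernel   */
        volatile unsigned tail;             /* advanced by the process  */
        struct ipc_msg    slot[INBOX_SLOTS];
    };

    static struct ipc_inbox ring;           /* address registered with the
                                               kernel at startup */

    /* What "deliver" means is up to the process; typically the
     * malloc-and-link queueing from the previous sketch. */
    static void deliver(const struct ipc_msg *m) { (void)m; }

    /* Drain everything that arrived since the last notification, so a few
     * messages survive even if we couldn't be interrupted for a while. */
    void ipc_notify_handler(void)
    {
        while (ring.tail != ring.head) {
            deliver(&ring.slot[ring.tail % INBOX_SLOTS]);
            ring.tail++;
        }
    }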
Anyway.. now comes the fun part: there is no kernel buffering needed, as the kernel already knows the pre-designated address in the receiving process. And if the receiving process's queue allocation is written to be compatible with its malloc (or has its own allocation scheme), we don't even need to move the messages in the receiving process. I'll leave it as an exercise to implement, but it's really not that complex.
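One way to read that, with the same invented types as above: let the inbox slots themselves be queue nodes, so the handler only relinks pointers and never copies a byte. How the kernel picks the next free slot, and how it learns about released ones, is hand-waved here.

    /* (same struct ipc_msg as in the earlier sketches) */
    struct ipc_msg { unsigned sender, len; char data[256]; };

    /* An inbox slot that doubles as a queue node: the kernel writes the
     * payload straight into 'msg', and 'next' is for the process's lists. */
    struct slot {
        struct slot   *next;
        struct ipc_msg msg;
    };

    #define NSLOTS 16
    static struct slot  pool[NSLOTS];       /* registered with the kernel */
    static struct slot *free_list;          /* slots the kernel may fill next */
    static struct slot *queue_head, *queue_tail;

    /* Handler: the kernel tells us (somehow, hand-waved) which slot it just
     * filled; we only move a pointer, the payload stays where it was written. */
    void ipc_notify_handler(struct slot *filled)
    {
        filled->next = NULL;
        if (queue_tail) queue_tail->next = filled; else queue_head = filled;
        queue_tail = filled;
    }

    /* The event loop calls this when it's done with a message, returning
     * the slot to the free list the kernel delivers into. */
    void ipc_release(struct slot *s)
    {
        s->next = free_list;
        free_list = s;
    }

    void ipc_init_slots(void)
    {
        for (int i = 0; i < NSLOTS; i++)
            ipc_release(&pool[i]);          /* everything starts out free */
    }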
So what do we have... a single-copy asynchronous IPC mechanism? Ok, what about synchronous calls? Well, because the message gets passed to a signal handler before it goes into the real queue that the event loop feeds from, we can intercept replies to faked-synchronous calls in the handler. All that's needed is for the signal handler to recognize which message is the reply being waited for, plus a semaphore for the main thread to block on until the handler posts to it, freeing the thread. The rest of the messages can be sent to the queue to be processed later.
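Sketched with a POSIX-style semaphore (ipc_send() and enqueue() below are just stand-ins for the async primitives from the earlier sketches, and the message fields are again invented), the fake-synchronous call could look something like this:

    #include <semaphore.h>

    struct ipc_msg { unsigned id, reply_to, sender, len; char data[256]; };

    /* Stubs standing in for the async primitives from the earlier sketches. */
    static int  ipc_send(unsigned dest, const struct ipc_msg *m) { (void)dest; (void)m; return 0; }
    static void enqueue(const struct ipc_msg *m) { (void)m; }

    /* State for one outstanding synchronous call; in real life you'd want
     * one of these per thread, but one global is enough to show the idea. */
    static unsigned       waited_id;        /* 0 = nothing pending           */
    static struct ipc_msg reply_box;        /* the handler parks reply here  */
    static sem_t          reply_sem;        /* sem_init(&reply_sem, 0, 0)
                                               once at startup               */

    /* Called from the notification handler for each incoming message:
     * peel off the reply we're blocked on, everything else goes to the
     * normal event queue. */
    void dispatch_incoming(const struct ipc_msg *m)
    {
        if (waited_id && m->reply_to == waited_id) {
            reply_box = *m;
            waited_id = 0;
            sem_post(&reply_sem);           /* wake the blocked caller */
        } else {
            enqueue(m);                     /* ordinary async delivery */
        }
    }

    /* Fake-synchronous call: send asynchronously, then sleep on the
     * semaphore until the handler has intercepted the matching reply. */
    int ipc_call(unsigned dest, const struct ipc_msg *req, struct ipc_msg *rep)
    {
        waited_id = req->id;
        if (ipc_send(dest, req) < 0)
            return -1;
        sem_wait(&reply_sem);
        *rep = reply_box;
        return 0;
    }

Conveniently, sem_post() is async-signal-safe, which matters if the notification really does arrive like a signal.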
Quite trivial stuff to implement, in fact... much more trivial than implementing the POSIX API such that it conforms to all the details... but at least the basics should be trivial now.
Oh, and by then the data has already been copied.. the rest is just working with the data..
As for mm... actual API calls.. I'll call that "volume 2" and release it next year..