Hi,
gaf wrote:Using a send_receive message (the standard way in L4) this would look like the following:
- The client sends its message to the server and blocks
- The server processes the request at its priority level
- After that it returns to the caller, which can then resume its work
As you can see this behaviour resembles a normal function call, which it indeed replaces to a certain extent in a µ-kernel.
That's why I don't like the synchronous send-receive-reply stuff - too much like call/ret and not enough like multi-threading. For example, consider what happens on a dual-CPU computer where there's one thread that needs to send 3 independent requests to a server:
[tt]On CPU0 client sends request 1 and blocks (CPU1 idle)
On CPU0 server receives request 1, processes, replies and blocks (CPU1 idle)
On CPU0 client receives and handles result 1, sends request 2 and blocks (CPU1 idle)
On CPU0 server receives request 2, processes, replies and blocks (CPU1 idle)
On CPU0 client receives and handles result 2, sends request 3 and blocks (CPU1 idle)
On CPU0 server receives request 3, processes, replies and blocks (CPU1 idle)
On CPU0 client receives and handles result 3 (CPU1 idle)[/tt]
Total task switches: 6, total time spent idle: CPU0=0%, CPU1=100%
For the same thing with non-blocking messaging:
[tt]On CPU0 client sends request 1 (CPU1 idle)
On CPU0 client sends request 2, on CPU1 server receives request 1 and processes request 1
On CPU0 client sends request 3, on CPU1 server sends result 1 and receives request 2
On CPU0 client sends request 4, on CPU1 server processes request 2 and sends result 2
On CPU0 client sends request 5, on CPU1 server receives request 3 and processes request 3
On CPU0 client receives and handles result 1, on CPU1 server sends result 3 and receives request 4
On CPU0 client receives and handles result 2, on CPU1 server processes request 4 and sends result 4
On CPU0 client receives and handles result 3, on CPU1 server receives request 5 and processes request 5
On CPU0 client receives and handles result 4, on CPU1 server sends result 5 and blocks
On CPU0 client receives and handles result 5 (CPU1 idle)[/tt]
Total task switches: 0, total time spent idle: CPU0=0%, CPU1=20%
It's a very rough example, but you should understand what I mean - for multi-CPU, synchronous send-receive-reply sucks badly (and the more CPUs you've got the more it sucks). IMHO send-receive-reply is the main reason why multi-threading doesn't work as well as it should on most traditional OSs (the most important thing for multi-CPU performance is keeping all CPUs busy).
gaf wrote:Solar: Ah, exos... been there, read that, didn't like 'em.
Brendan: That would also work - I just don't like synchronous send-receive-reply
...
This is now the second time in this thread that I get an answer that goes roughly like this:
"This might work perfectly well, but for some dogmatic reasons I prefer sticking to the good ol' monolithic design".
How come nano/exo-kernels have such a bad reputation?
For the record, I don't like monolithic design much and lean more towards "largish micro-kernel" (micro-kernel with IPC, pagers, scheduler and a few other little things built into it).
How come nano/exo-kernels have such a bad reputation? I think it comes down to people ignoring what most OSs get used for - running "applications" (which includes things like Apache servers, etc). Application programmers couldn't give a rat's behind what the kernel does or how much policy could be changed - they're writing Java, VisualBasic and POSIX/ANSI C code that needs to work. Often performance is less important than development time, often it needs to be portable, and almost always the application programmer has better things to do than understand and write new "system modules" or implement different policies.
It's sort of like designing a car where the steering wheel, gearstick and pedals can be easily removed and replaced with something completely different - a waste of time considering that most people just want to go to the shops.
Cheers,
Brendan