Minimalist Microkernel Memory Management
Re:Minimalist Microkernel Memory Management
Brendan wrote: To implement any scheduling policy accurately you need to prevent IPC from causing undesirable thread switches. The only way this can be done is with message queues.
Could synchronous send-receive-reply also work? When the server replies, the client would be blocked waiting for the reply, and the message could then be immediately copied and the client unblocked. At which point the scheduler can decide to allow the server to continue running, or whatever the policy determines (maybe the client, for whatever bizarre reason, is higher priority).
Re:Minimalist Microkernel Memory Management
Hi,
QuiTeVexat wrote: Could synchronous send-receive-reply also work? When the server replies, the client would be blocked waiting for the reply, and the message could then be immediately copied and the client unblocked. At which point the scheduler can decide to allow the server to continue running, or whatever the policy determines (maybe the client, for whatever bizarre reason, is higher priority).
That would also work - I just don't like synchronous send-receive-reply ...
Cheers,
Brendan
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.
Re:Minimalist Microkernel Memory Management
Brendan wrote: A low priority server and high priority client doesn't make much design sense, so let's assume the opposite - a high priority server sending a message to a low priority client. In this case it can be assumed that the server is sending the message in response to an earlier request (it's the most likely possibility). Without an immediate task switch, the high priority server could continue to run or other tasks may run instead of the low priority task. The low priority task would eventually get CPU time (when it should, according to the scheduler) and receive the message. There's no reason for the entire system to become halted.
Using a send-receive message (the standard way in L4) this would look as follows:
- The client sends its message to the server and blocks
- The server processes the request at its priority level
- After that it returns to the caller, which can then resume its work
As you can see, this behaviour resembles a normal function call, which it indeed replaces to a certain extent in a µ-kernel. If the scheduler were allowed to re-schedule after every step of this operation (client->server, server->client), a task would risk losing its whole time-slice, as well as not being scheduled again for quite a while, every single time it sends a message - which is (as you yourself stated) done all the time in L4. A policy such as the one you described thus doesn't make a lot of sense in L4, which however doesn't mean that it couldn't be done (as QuiTeVexat already pointed out, or by extending the IPC API).
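To make the call pattern concrete, here is a minimal sketch of the client's side of such a synchronous call. None of this is the real L4 API - ipc_call(), msg_t, thread_id and the opcode are invented purely for illustration:
[pre]
#include <string.h>

typedef int thread_id;                          /* made-up handle type */
typedef struct { unsigned long words[8]; } msg_t;
enum { OP_READ_BLOCK = 1 };

/* Hypothetical kernel primitive, sketched after L4's call semantics:
   atomically send 'request' to 'server', block, and wait for the reply.
   Because the client can't run until the reply arrives anyway, the
   kernel may switch directly to the server without a scheduling pass. */
extern int ipc_call(thread_id server, const msg_t *request, msg_t *reply);

/* From the client's point of view the IPC behaves like a function call. */
int read_file_block(thread_id fs_server, unsigned long block, void *buf)
{
    msg_t req = { .words = { OP_READ_BLOCK, block } };
    msg_t rep;

    int err = ipc_call(fs_server, &req, &rep);  /* blocks here */
    if (err)
        return err;
    memcpy(buf, &rep.words[1], 7 * sizeof(unsigned long));
    return (int)rep.words[0];                   /* server's status code */
}
[/pre]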
If you really think that asynchronous IPC would solve the problem better, you can also extend the L4 mechanism quite easily to support it. Mystran posted a concept of how this could be done a few days ago.
Brendan wrote: I did mean it in a more general way, but only building things into the kernel where it makes the most sense (which doesn't really apply to the other user-pagers in L4). If the performance gains of building sigma0 into the kernel mean that page caching is unnecessary, then it'd result in a simpler system where free RAM isn't scattered across multiple free page pools and fewer "free up some RAM" messages are sent to the user-level pagers.
If you don't use any buffering technique, an integrated sigma0 server will run several times slower than any user-space sigma0 using a pool. Unfortunately I can't provide you with any numbers off-hand, but I'm absolutely sure that traffic between user-level pagers and sigma0 can be kept to a minimum and that you'll not notice it under any workload. Including something in the kernel just because you think that it could, under certain circumstances (here: lousy pagers), cost 1-3% of performance is in my opinion nothing but premature optimization at its finest.
"Modularity just helps with portability"... might be true but you somewhat under-estimate how much it actually helps. In fact you can go for a monolithic-kernel without any disadvantages under the premise that you don't only re-write your code from scratch for every single user, but also for all possible system configurations and workload. You'd then end up with a perfectly monolithic OS that suits the system it was designed for perfectly but, as it's roughly as flexible as a brick, can't be ported to any other computer or even user, leave alone another architecture. To make the whole task a bit more challenging you might aswell use a language such as brainfuck or VisualBasic rather than assembler. Now ask yourself why nobody has ever attempted to do so..Brendan wrote:What if I wrote a huge monolithic kernel in assembly, making every possible machine-specific optimization I could? In order to be able to support machine-specific code an OS doesn't have to be modular (modularity just helps with portability).
This is now the second time in this thread that I get an answer that goes roughly like this:
Solar: Ah, exos... been there, read that, didn't like 'em.
Brendan: That would also work - I just don't like synchronous send-receive-reply ...
"This might work perfectly well, but for some dogmatic reasons I prefer sticking to the good ol' monolithic design."
How come nano/exo-kernels have such a bad reputation?
Do you really think that you can do better in developing yet another monolithic kernel than a team of several hundred highly skilled (and well paid..) developers at Microsoft or in the Linux community?
regards,
gaf
Re:Minimalist Microkernel Memory Management
gaf wrote: This is now the second time in this thread that I get an answer that goes roughly like this:
Solar: Ah, exos... been there, read that, didn't like 'em.
Brendan: That would also work - I just don't like synchronous send-receive-reply ...
"This might work perfectly well, but for some dogmatic reasons I prefer sticking to the good ol' monolithic design."
Sounds like you've got that backwards - one does not need to live on the straight and narrow road. A microkernel has a blurry definition; the best way to describe it would be "somewhere between an exokernel and a monolithic kernel". As such, why shouldn't I borrow design elements from monolithic kernels, provided it doesn't compromise security or stability?
A truly skilled OS developer would see the values and disadvantages of each design model and attempt to find a common ground with the best of all worlds; sticking to one model just because of "design consistency" is impractical.
Re:Minimalist Microkernel Memory Management
My dislike of exokernels has nothing to do with dogmatism, but a lot to do with my idea of what my OS' environment should be, for application programmers on the one side and OS maintainers on the other.
I also strongly believe in cherry-picking and mixing design principles to come up with a solution that strikes the balance between efficiency, convenience and security. (As AR above.) But that's just me.
I said I didn't like exos. Doesn't mean I think bad about you trying to build one.
Every good solution is obvious once you've found it.
Re:Minimalist Microkernel Memory Management
AR wrote: Why shouldn't I borrow design elements from monolithic kernels provided it doesn't compromise the security or stability?
Because you always have to trade modularity and flexibility for the performance advantage the monolithic way may offer. As I tried to point out in earlier posts, this is in my opinion not such a good idea, because a flexible design allows more specialized solutions for applications than a monolithic kernel could, thus increasing performance in the long run.
AR wrote: A truly skilled OS developer would see the values and disadvantages of each design model and attempt to find a common ground with the best of all worlds; sticking to one model just because of "design consistency" is impractical.
And what makes you so sure that you're the chosen one who'll find the perfect compromise? Bill Gates, Linus Torvalds and a lot of others have already tried with a whole armada of skilled OS devers, and they all failed miserably. WinNT/2000 (for example) has several µ-kernel like characteristics and can be seen as an attempt to mix the two concepts.
Solar wrote: My dislike of exokernels has nothing to do with dogmatism, but a lot to do with my idea of what my OS' environment should be, for application programmers on the one side and OS maintainers on the other.
I don't think that an exokernel-like approach would keep you from realizing your ideas concerning both programmer interface and system architecture. After all, the exokernel is only the lowest layer and the "real" OS is built on top of it (-> library OS).
Don't get me wrong: I'm not here to fight any crusades for the exokernel concept. Although I personally think that it has a lot of potential, I do accept it if you want to go in another direction. I'd just like to give others the chance to look at the concept from another point of view, so that they at least know both sides of the coin before deciding against it.
regards,
gaf
Colonel Kernel
Re:Minimalist Microkernel Memory Management
My big problem with exokernels is that I simply don't understand them. I've read some papers on them, but it's hard to envision how they work in reality. I also get the sense that a lot more low-level resource management would need to be in an exokernel than would even be in a microkernel (e.g. -- disk blocks, but I'm sure there are others). This makes me vaguely uncomfortable about driver development for an exokernel... What if some new type of hardware resource comes along that the exokernel doesn't know about? If it can handle this, I guess I just don't understand how.
About IPC and scheduling in L4:
gaf wrote: Using a send-receive message (the standard way in L4) this would look as follows:
- The client sends its message to the server and blocks
- The server processes the request at its priority level
- After that it returns to the caller, which can then resume its work
Which thread does "its" refer to in the second point? The client's? (I think so, I just want to be sure... priority inheritance is generally a Good Thing.)
Another L4 question -- do pagers that swap have to keep track of disk addresses themselves, or does the kernel give them a way to stuff information into the unused bits of an invalid PTE? I'm beginning to think that with L4's MM scheme, the total size of the data structures needed to track memory would end up being larger than in the monolithic MM case...
On flexibility and separating mechanism from policy... While in general I think this is a good idea, I'm not sure whether I would take it as far as the designers of L4 did. Mainly because I think you need radically different policies only when targeting your OS at radically different uses (e.g. -- embedded vs. server). QNX for example has only one user-level pager, and you can pretty much take it (for higher-end hw) or leave it (for really low-end embedded devices with no MMU). I could see how a similar architecture (or just a little more public documentation) would allow for a new user-level pager to replace the default one if need be.
In the real world, is it really necessary to support such different policies simultaneously, at run-time, in a "hot-swappable" manner? IMO, you can have a single user-level server that supports not-quite-such radically different policies for different kinds of apps, if that's what's needed...
Top three reasons why my OS project died:
- Too much overtime at work
- Got married
- My brain got stuck in an infinite loop while trying to design the memory manager
Re:Minimalist Microkernel Memory Management
gaf wrote: After all the exo-kernel is only the lowest layer and the "real" OS is built on top of it (-> library OS).
Yep. And no-one keeps you from coming up with a libOS that feels like Windows (used by 30% of MyOS applications), Linux (used by another 30% of MyOS applications), MacOS (...30%...), or any hodgepodge of other APIs some libOS designer might have come up with (...10%...). Hey voila, a maintenance coder's nightmare.
Don't bother. I don't like exo's, and you won't convince me otherwise. I'll be happy to help you if I can, though.
Every good solution is obvious once you've found it.
Re:Minimalist Microkernel Memory Management
Colonel Kernel wrote: Another L4 question -- do pagers that swap have to keep track of disk addresses themselves, or does the kernel give them a way to stuff information into the unused bits of an invalid PTE? I'm beginning to think that with L4's MM scheme, the total size of the data structures needed to track memory would end up being larger than in the monolithic MM case...
IIRC, on a page fault the kernel sends a message to the responsible pager thread that contains the faulting address and the faulting thread, nothing more. It's then up to the pager to determine which page corresponds to the address, grab a free page in its address space somehow (i.e. from a physical memory manager), fetch the data from disk, and map it into the faulting address space. I'm quite sure L4 won't allow you to access the PTs in any way (not even for reading simple values). Again, IIRC - it's been a while since I dug into the L4 docs.
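A minimal sketch of that protocol from the pager's side, assuming it works the way I described. All of these calls (wait_for_fault, alloc_frame, page_is_on_disk, read_from_swap, map_page) are invented placeholders, not the real L4 interface:
[pre]
#include <stdint.h>

struct fault_msg {
    uintptr_t addr;      /* faulting virtual address */
    int       thread;    /* faulting thread          */
};

extern void  wait_for_fault(struct fault_msg *f);  /* block for kernel IPC */
extern void *alloc_frame(void);                    /* physical memory mgr  */
extern int   page_is_on_disk(uintptr_t page);      /* pager's own tables   */
extern void  read_from_swap(uintptr_t page, void *frame);
extern void  map_page(int thread, uintptr_t page, void *frame);

void pager_loop(void)
{
    struct fault_msg f;

    for (;;) {
        wait_for_fault(&f);                     /* kernel sends only the
                                                   address and the thread */
        uintptr_t page = f.addr & ~(uintptr_t)0xFFF;   /* 4KB align */
        void *frame = alloc_frame();            /* grab a free page       */

        if (page_is_on_disk(page))              /* disk addresses are the
                                                   pager's own problem    */
            read_from_swap(page, frame);

        map_page(f.thread, page, frame);        /* map into the faulting
                                                   space and resume it    */
    }
}
[/pre]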
But why would the needed data structures become larger than in the monolithic approach? I think that you should track page mappings (even mappings to swap space) in separate data structures anyway, building the processor-specific page tables and directories out of this information. Consider that most address spaces are unlikely to be fully (all 4GB) decked out, and when a process (almost) does deck its space out, it's likely that there are some very large but contiguous chunks mapped (files, e.g.). So if you only track the mapped and used pages, and use ranges for multiple contiguous mappings instead of having an entry for every single page, you should come along fine, I think.
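For instance, something like this (only a sketch - a real implementation would use a balanced tree instead of a fixed array, and all the names here are made up):
[pre]
#include <stdint.h>

struct map_range {
    uintptr_t vstart;    /* start address of the range (page-aligned)  */
    uintptr_t npages;    /* length in pages                            */
    uint64_t  backing;   /* frame number, swap slot, file offset, ...  */
    int       flags;     /* present / on-disk / writable / ...         */
};

struct addr_space {
    struct map_range ranges[256];  /* fixed size only for the sketch   */
    int              count;        /* entries kept sorted by vstart    */
};

/* Find the range covering 'vaddr', or NULL: O(log n) binary search.
   One entry can describe megabytes of contiguous mappings. */
struct map_range *lookup(struct addr_space *as, uintptr_t vaddr)
{
    int lo = 0, hi = as->count - 1;
    while (lo <= hi) {
        int mid = (lo + hi) / 2;
        struct map_range *r = &as->ranges[mid];
        if (vaddr < r->vstart)
            hi = mid - 1;
        else if (vaddr >= r->vstart + r->npages * 4096)
            lo = mid + 1;
        else
            return r;
    }
    return 0;   /* unmapped - fault up to the pager */
}
[/pre]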
cheers Joe
Re:Minimalist Microkernel Memory Management
Colonel Kernel wrote: I also get the sense that a lot more low-level resource management would need to be in an exokernel than would even be in a microkernel (e.g. -- disk blocks, but I'm sure there are others).
That depends on your definition of the term microkernel. As AR has already said, it's in general pretty vague and can mean anything between a kernel that supports some user-space servers for special tasks and a very low-level µ-kernel such as L4, which has a lot in common with the exokernel design.
If you compare how basic resources like memory, IRQs or CPU time are managed by the L4 nucleus and how they are managed by an exokernel, you'll find a lot of similarities. The key difference is that L4 at least tries to build a basic abstraction around the resources, while exokernels limit themselves to simply exposing the raw hardware.
Colonel Kernel wrote: What if some new type of hardware resource comes along that the exokernel doesn't know about?
In fact the exokernel itself can be seen as a big driver that allows secure access to all the system resources. It's therefore necessary to recompile it if a new device is added, but future implementations (like mine..) may of course support a more practical technique like loadable kernel modules or user-space drivers.
It is important to know that an exokernel driver doesn't have much to do with the traditional definition of a driver, because its only job is to allow "multiplexed" access to its device, not to offer any kind of abstraction. This means that it has to (logically) divide the device into several extents, each protected by a capability that serves as a key. By allowing applications/managers to allocate such extents, user-space policies can be constructed.
Here's a small example:
- root is the main-memory manager
- app1 wants to allocate 16kb of physical memory
- app2 is just a dummy..
[pre]
o--------o       o------o       o------o
| kernel | o---o | root |---o---| app1 |
o--------o       o------o   |   o------o
                            |
                            |   o------o
                            o---| app2 |
                                o------o
[/pre]
1. At system start-up root is loaded
2. Root asks the kernel for the cap to access the whole main memory
3. Root is granted the capability and goes to sleep
4. App1 is started and asks root through the (user-space defined) interface for memory
5. Root does some internal book-keeping and then returns a capability to app1 that allows it to access the requested memory
6. App1 uses the capability to map the page-frame somewhere
As you can see, the whole policy depends on root. To support multiple user-space managers, all that is needed is to replace root with a task that supports them. Pagers could then log in through root's interface just like app1 does in the example, and the whole system would just be extended by one level.
Please also note the similarities to L4's memory management:
Root is nothing else than sigma0, and while L4 doesn't explicitly use capabilities, the three memory primitives grant, map & unmap serve exactly the same purpose.
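In code, steps 4 to 6 of the example might look something like this. It's only a sketch: cap_t, request_memory and cap_map are invented names, not the actual interface of any exokernel or of L4:
[pre]
#include <stddef.h>
#include <stdint.h>

typedef uint64_t cap_t;   /* unforgeable token naming an extent */

/* Implemented by root (steps 4+5): do the internal book-keeping and
   hand back a capability for the requested frames. The interface is
   defined in user space, so any policy can sit behind it. */
extern cap_t request_memory(size_t bytes);

/* Kernel primitive (step 6): map the extent named by 'cap' at 'vaddr'
   in the caller's address space - roughly what L4's map does. */
extern int cap_map(cap_t cap, void *vaddr);

void app1_init(void)
{
    cap_t mem = request_memory(16 * 1024);   /* ask root for 16kb */
    cap_map(mem, (void *)0x40000000);        /* map it somewhere  */
}
[/pre]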
(to be continued..)
Re:Minimalist Microkernel Memory Management
Colonel Kernel wrote: Which thread does "its" refer to in the second point? The client's? (I think so, I just want to be sure... priority inheritance is generally a Good Thing.)
Hmm, good question.. For me it'd be most logical if "its" referred to the client, to reflect the semantics of a system call, but I can't guarantee you that this is also how it's done in L4.
Colonel Kernel wrote: Another L4 question -- do pagers that swap have to keep track of disk addresses themselves, or does the kernel give them a way to stuff information into the unused bits of an invalid PTE? I'm beginning to think that with L4's MM scheme, the total size of the data structures needed to track memory would end up being larger than in the monolithic MM case...
Applications can only access the page table indirectly, by using grant, map and unmap. Therefore you can't store anything there, but I have some doubts that this would have been such a good idea anyway. After all there aren't that many unused PTEs, and the data would therefore have to be scattered across several KB of memory, causing horrible access times and caching behaviour. Apart from that, you'd need a list to locate the PTEs used for the disk addresses, and probably even a second one in case the space in the PTEs isn't sufficient.
Colonel Kernel wrote: On flexibility and separating mechanism from policy... While in general I think this is a good idea, I'm not sure whether I would take it as far as the designers of L4 did. Mainly because I think you need radically different policies only when targeting your OS at radically different uses (e.g. -- embedded vs. server). QNX for example has only one user-level pager, and you can pretty much take it (for higher-end hw) or leave it (for really low-end embedded devices with no MMU). I could see how a similar architecture (or just a little more public documentation) would allow for a new user-level pager to replace the default one if need be.
It's a good idea to start with one user-level manager, otherwise you'll only get side-tracked. As long as the manager is clearly separated from the kernel, you won't have any problems if you really were to decide at a later point in time to support multiple pagers. All that is then necessary is to write a new root manager and adapt the old one to act as a child.
Colonel Kernel wrote: In the real world, is it really necessary to support such different policies simultaneously, at run-time, in a "hot-swappable" manner?
Well, there's the often quoted "eight times throughput increase compared to OpenBSD", which was achieved by a specialized user-level networking manager on the exokernel (pdf). Of course this is an extreme example, but it shows that performance can be improved significantly using specialized policies, at least for "unusual" applications.
regards,
gaf
Re:Minimalist Microkernel Memory Management
gaf wrote: And what makes you so sure that you're the chosen one who'll find the perfect compromise? Bill Gates, Linus Torvalds and a lot of others have already tried with a whole armada of skilled OS devers, and they all failed miserably. WinNT/2000 (for example) has several µ-kernel like characteristics and can be seen as an attempt to mix the two concepts.
I never said I was a "chosen one"; I personally would only claim the title of "skilled and experienced" once I had successfully implemented a monolithic, micro and exo kernel and experimented with different concepts for implementing OS features. Practical knowledge is more useful than theoretical.
Re:Minimalist Microkernel Memory Management
Hi,
gaf wrote: Using a send-receive message (the standard way in L4) this would look as follows:
- The client sends its message to the server and blocks
- The server processes the request at its priority level
- After that it returns to the caller, which can then resume its work
As you can see, this behaviour resembles a normal function call, which it indeed replaces to a certain extent in a µ-kernel.
That's why I don't like the synchronous send-receive-reply stuff - too much like call/ret and not enough like multi-threading. For example, consider what happens on a dual-CPU computer where there's one thread that needs to send several independent requests to a server:
[tt]On CPU0 client sends request 1 and blocks (CPU1 idle)
On CPU0 server receives request 1, processes, replies and blocks (CPU1 idle)
On CPU0 client receives and handles result 1, sends request 2 and blocks (CPU1 idle)
On CPU0 server receives request 2, processes, replies and blocks (CPU1 idle)
On CPU0 client receives and handles result 2, sends request 3 and blocks (CPU1 idle)
On CPU0 server receives request 3, processes, replies and blocks (CPU1 idle)
On CPU0 client receives and handles result 3 (CPU1 idle)[/tt]
Total task switches: 6, total time spent idle: CPU0=0%, CPU1=100%
For the same thing with non-blocking messaging:
[tt]On CPU0 client sends request 1, (CPU1 idle)
On CPU0 client sends request 2, on CPU1 server receives request 1 and processes request 1
On CPU0 client sends request 3, on CPU1 server sends result 1 and receives request 2
On CPU0 client sends request 4, on CPU1 server processes request 2 and sends result 2
On CPU0 client sends request 5, on CPU1 server receives request 3 and processes request 3
On CPU0 client receives and handles result 1, on CPU1 server sends result 3 and receives request 4
On CPU0 client receives and handles result 2, on CPU1 server processes request 4 and sends result 4
On CPU0 client receives and handles result 3, on CPU1 server receives request 5 and processes request 5
On CPU0 client receives and handles result 4, on CPU1 server sends result 5 and blocks
On CPU0 client receives and handles result 5, (CPU1 idle)[/tt]
Total task switches = 0, total time spent idle: CPU0=0%, CPU1=20%
It's a very rough example, but you should understand what I mean - for multi-CPU, synchronous send-receive-reply sucks badly (and the more CPUs you've got the more it sucks). IMHO send-receive-reply is the main reason why multi-threading doesn't work as well as it should on most traditional OSs (the most important thing for multi-CPU performance is keeping all CPUs busy).
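To sketch what the client's code looks like under this model (send_message() and get_message() are invented names for queue-based asynchronous primitives, not any particular kernel's API):
[pre]
enum { OP_REQUEST = 1 };

extern void send_message(int dest, int op, int arg);  /* never blocks      */
extern void get_message(int *op, int *arg);           /* blocks only while
                                                         the queue is empty */
extern void handle_result(int op, int arg);

void client(int server)
{
    /* Fire off all the independent requests first; the server can start
       on request 1 on another CPU while we're still queueing request 2. */
    for (int i = 1; i <= 5; i++)
        send_message(server, OP_REQUEST, i);

    /* Then drain the results as they arrive. */
    for (int i = 1; i <= 5; i++) {
        int op, arg;
        get_message(&op, &arg);
        handle_result(op, arg);
    }
}
[/pre]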
gaf wrote: This is now the second time in this thread that I get an answer that goes roughly like this:
Solar: Ah, exos... been there, read that, didn't like 'em.
Brendan: That would also work - I just don't like synchronous send-receive-reply ...
"This might work perfectly well, but for some dogmatic reasons I prefer sticking to the good ol' monolithic design."
How come nano/exo-kernels have such a bad reputation?
For the record, I don't like monolithic design much and lean more towards a "largish micro-kernel" (a micro-kernel with IPC, pagers, scheduler and a few other little things built into it).
How come nano/exo-kernels have such a bad reputation? I think it comes down to people ignoring what most OS's get used for - running "applications" (which includes things like Apache servers, etc). Applications programmers couldn't give a rat's behind what the kernel does or how much policy could be changed - they're writing Java, VisualBasic and POSIX/ANSI C code that needs to work. Often performance is less important than development time, often it needs to be portable, and almost always the application programmer has better things to do than understanding and writing new "system modules" or implementing different policies.
It's sort of like designing a car where the steering wheel, gearstick and pedals can be easily removed and replaced with something completely different - a waste of time considering that most people just want to go to the shops.
Cheers,
Brendan
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.
Re:Minimalist Microkernel Memory Management
Brendan wrote: As you can see this behaviour resembles a normal function call, which it indeed replaces to a certain extent in a µ-kernel.
That's why I don't like the synchronous send-receive-reply stuff - too much like call/ret and not enough like multi-threading.
A question: look at this from the side of programming *languages*. Our programming languages (at least those I know) all use synchronous procedure calls, and the most straightforward way to distribute an app is therefore to do synchronous RPC (look at CORBA or COM). When you want to use asynchronous RPC, you either have to make additional, specialized calls (begin call, check for completion, retrieve result - instead of just one call), or the borders between servers become explicit instead of hidden, which goes against the general trend (but no one says that you always should follow the trends).
That brings to mind another question: are our current programming languages (derived somehow from C or Pascal) really all that suitable for today's growing requirements in the field of concurrent programming? I've not yet seen a language that directly allows for asynchronous procedure calls.
cheers Joe
Colonel Kernel
Re:Minimalist Microkernel Memory Management
JoeKayzA wrote: That brings to mind another question: are our current programming languages (derived somehow from C or Pascal) really all that suitable for today's growing requirements in the field of concurrent programming? I've not yet seen a language that directly allows for asynchronous procedure calls.
Check out the papers on concurrency abstractions in C#: http://research.microsoft.com/Comega/
@Brendan:
IMO your examples are only really meaningful if both the client and the server are CPU bound. In terms of servers, I don't think many OS services fall under that category. In terms of clients, most clients would be stuck waiting for the server to do useful work anyway. If a client thread wants to make an asynchronous request to a server, it can always queue a request to a pool of worker threads, any of which could handle communication with the server on its behalf. Or, the server thread that handles requests could ship them off to its own thread pool to distribute the load and reply to the client immediately. Either way works, and both are common idioms in my experience.
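A minimal sketch of that worker-pool idiom, using POSIX threads. do_blocking_ipc() stands in for the synchronous call to the server, and there's no queue-overflow handling - illustration only:
[pre]
#include <pthread.h>

#define WORKERS 4
#define QSIZE   64

static int             queue[QSIZE];
static unsigned        head, tail;     /* tail - head = queued requests */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  nonempty = PTHREAD_COND_INITIALIZER;

extern void do_blocking_ipc(int request);  /* synchronous send-receive-reply */

/* Client side: queue the request and carry on - the blocking happens
   in a worker thread, not in the client. */
void async_request(int request)
{
    pthread_mutex_lock(&lock);
    queue[tail++ % QSIZE] = request;
    pthread_cond_signal(&nonempty);
    pthread_mutex_unlock(&lock);
}

static void *worker(void *arg)
{
    (void)arg;
    for (;;) {
        pthread_mutex_lock(&lock);
        while (head == tail)
            pthread_cond_wait(&nonempty, &lock);
        int request = queue[head++ % QSIZE];
        pthread_mutex_unlock(&lock);
        do_blocking_ipc(request);          /* only this thread blocks */
    }
    return 0;
}

void start_pool(void)
{
    pthread_t t;
    for (int i = 0; i < WORKERS; i++)
        pthread_create(&t, 0, worker, 0);
}
[/pre]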
Either you make kernel-level IPC synchronous and really fast, or you add more complexity in the kernel for asynchronous IPC and you try to get rid of thread pools. It isn't clear to me which approach would perform better, but to me they seem functionally equivalent from the application developer's point of view.
We need benchmarks!
Top three reasons why my OS project died:
- Too much overtime at work
- Got married
- My brain got stuck in an infinite loop while trying to design the memory manager