Microkernel Design Info

Questions about which tools to use, bugs, the best way to implement a function, etc. should go here. Don't forget to see if your question is answered in the wiki first! When in doubt, post here.

Microkernel Design Info

Post by slash »

Can anyone tell me where I might find info on designing my own microkernel?

Re:Microkernel Design Info

Post by BI lazy »

A. Tanenbaum: Operating Systems: Design and Implementation II. Very good reference, and the operating system is Minix, the fabulous microkernel OS :-)

Maybe I sound offensive, but I won't apologize: STFWFY or check out quicklinkz

Re:Microkernel Design Info

Post by Arto »

Christopher Browne's OS Pages are a good resource containing information and links.

ODP's directory probably has the most comprehensive single link list.

Aside from buying books, reading research papers such as this is probably the best method to get up to speed.

Re:Microkernel Design Info

Post by Colonel Kernel »

Tanenbaum's book is pretty good in that it covers a lot of implementation details that other books don't. It also explains the rationale for microkernels pretty well. However, Minix itself is IMO a very warped design, barely worthy of being called a microkernel (for various reasons that I don't have time to get into while I'm at work ;) ).
Top three reasons why my OS project died:
  1. Too much overtime at work
  2. Got married
  3. My brain got stuck in an infinite loop while trying to design the memory manager
Don't let this happen to you!

Re:Microkernel Design Info

Post by Dreamsmith »

Well, if you won't, I will. Having had the displeasure of writing video and mouse drivers for Minix in college, I can tell you one thing -- the ideal of having separate units whose internal implementations don't affect other separate units is a wonderful ideal, and everyone should shoot for it. But it's a myth that using a microkernel design guarantees this. Nothing could be further from the truth. It's also equally untrue that not using a microkernel design ensures, or makes more likely, that kind of spaghetti code. Minix itself provides a wonderful counterexample to AST's arguments for the superiority of microkernel design in this regard.

Choosing a microkernel design does nothing at all to advance the cause of clear and clean code, free of internal dependencies and weird interactions. Good coding practices like encapsulation and clean implementation do help ensure this. Now, if you have the good coding practices, it's every bit as easy to keep these things clean and separate in a monolithic kernel as it is in a microkernel. Using a linker to glue the parts together does nothing to downgrade the quality of the code, and gluing them together with syscalls/message passing does nothing to upgrade it.

AST's main argument is about the maintenance of the code, but well-written monolithic code is every bit as easy to maintain as well-written microkernel code, and poorly written microkernel code is every bit as buggy and error-prone to modify as poorly written monolithic code. It's pure nonsense to suggest your choice of monolithic or microkernel design in any way impacts code maintainability -- it has no effect on it at all. And the weird interactions and full-out crashes that can occur in one part of Minix as a result of a problem in another are a wonderful demonstration of this. Not enough effort was put into isolating the various parts of Minix from the internal details of the other parts, and it shows.

AST also argues in his book that a microkernel approach works better for distributed computing, because when passing a message to the filesystem manager, for example, you don't need to know whether it's local or not. This is pure bull. It's every bit as easy for a monolithic kernel to delegate its functionality to another machine -- you don't need to know whether the request is satisfied locally or not in either case, and it's no easier or harder to implement remote functionality either way.

Seems to me there was a third major argument he makes that wasn't terribly convincing either, but I don't remember what it was.

I'm not saying there aren't any advantages to a microkernel approach -- being able to mix and match binary implementations of parts of the OS at runtime is a nice one. But AST himself doesn't make a very good case for it in his book. His own reasons seem to range from questionable to completely bogus.

Re:Microkernel Design Info

Post by Colonel Kernel »

That pretty much hits the nail right on the head. The problem with Minix as I saw it was that AST didn't just structure it as co-operating tasks, but as layered tasks. Given the cost of context switching and message-passing, this was a pretty stupid mistake. Server processes ought to interact "horizontally" as peers, not vertically. As it is now, to get from client task A to server task B, you might have to go through C, D, E, and F, depending on what "layer" B is in. The definition of layers in Minix is also pretty arbitrary and the servers seem to know way too much about each other. No wonder Linus decided not to go with a microkernel, after seeing the incredible hack-job that is Minix. :P

Re:Microkernel Design Info

Post by Brendan »

Hi,

I just wanted to point out that the problems with Minix aren't problems with micro-kernels in general, and that Tanenbaum's book might still contain valuable information despite Minix. I haven't seen or read this book though - I've ignored Tanenbaum after seeing the (IMHO) lack of insight that is the design of AST's Amoeba "distributed" operating system.

In general micro-kernels have worse performance (more IPC, more context switches and higher memory overhead). They are (or should be) more secure/stable (drivers shouldn't be able to trash the entire system) and more flexible (a lot more is dynamically replaceable).

I'd also suggest that a micro-kernel would be much easier to maintain than a monolithic kernel, because you don't have to worry about maintaining the code for each device driver as part of it. On the other hand, maintaining an OS would be just as much work regardless of the type of kernel, if the interfaces between software components are well documented.

It's easier to have well documented interfaces with a micro-kernel, but with a monolithic kernel it can be difficult to figure out where the interfaces between components actually are (although this depends on the coding style and language used).


Cheers,

Brendan
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.

Re:Microkernel Design Info

Post by Pype.Clicker »

Engineering vs Computer Theory. That's what Monolithic vs Microkernel is ...

Imho, the 'Ideal' approach would be a system where 'experimental' code could be loaded at level 3 and run in a sand-boxed environment, and could later be promoted to full-featured level 0 when considered trustworthy.

Having hacked around the Linux sources (and seeing people manually turning their module's reference count to a negative value to prevent automatic removal by the system) showed me that for a project as big as a kernel, you had better define strict interfaces *as if you were in a microkernel environment*.

Now, if you succeed in making your system calls *asynchronous*, much of the "inefficiency" of the microkernel approach disappears, because you don't actually switch to another address space until you would have required its services anyway.

Re:Microkernel Design Info

Post by distantvoices »

*hehe*

One can write a micro kernel and do an awesome engineering job.

Now, concerning AST's Minix: although it contains some very interesting points, I've found the entire design a bit too confused. Take for example the file system service of Minix: not even the slightest bit of objects. I for one think that pipes shouldn't be merged into the global read and write mechanism of the block-device file system access - pipes are something like byte streams of arbitrary length - why not handle them in their own *thread*?
... the osdever formerly known as beyond infinity ...
BlueillusionOS iso image

Re:Microkernel Design Info

Post by Solar »

Brendan wrote: In general micro-kernels have worse performance (more IPC, more context switches and higher memory overhead). They are (or should be) more secure/stable (drivers shouldn't be able to trash the entire system) and more flexible (a lot more is dynamically replaceable).
As for the stability of microkernels, here's one example.

It is true that e.g. a crashing SCSI driver shouldn't take the kernel with it in a microkernel design. The problem is that you just lost your hard drive. Your user is left hanging with unsaved data, you can't re-load the driver from disk, and anyway it can crash again any time.

Yes, your kernel is still running. No, that doesn't help much.

(Just to point out that microkernels are not automagically "better".)
Every good solution is obvious once you've found it.

Re:Microkernel Design Info

Post by Dreamsmith »

Exactly! The ease of making a flexible implementation is an advantage (a monolithic kernel can be every bit as flexible, but it takes more work to get there). Stability, OTOH, is questionable. As Solar points out, merely having part of your OS crash rather than the whole thing doesn't really make you more stable -- having the rest of your system up and running after the file system crashes is a hollow victory if you have no way of reloading the file system process.

Now you can add some failsafe recovery routines to get the system up and running, even under these circumstances, but this doesn't come for free under a microkernel -- it's something you need to add on top of it. Since you can do the same thing with a monolithic kernel, adding crash recovery, claiming stability as an advantage of a microkernel design is rather questionable...

The ideal way to do things would probably be to use both methodologies as appropriate. Things that are essential and need to be accessed quickly, à la the SCSI driver, should be part of your kernel. But non-essential functions should be moved off into userspace. If you can cleanly describe your kernel as either monolithic or microkernel, you're probably not doing things as well as you could. A modern kernel should probably be both...

Re:Microkernel Design Info

Post by Colonel Kernel »

Now you can add some failsafe recovery routines to get the system up and running, even under these circumstances, but this doesn't come for free under a microkernel -- it's something you need to add on top of it. Since you can do the same thing with a monolithic kernel, adding crash recovery, claiming stability as an advantage of a microkernel design is rather questionable...
I don't really see how the same thing is possible in a monolithic kernel. Let's say you have a file system driver in kernel space that crashes. The only viable course of action I can see is to halt the machine. In the case of a microkernel, you can guarantee that none of the kernel's memory has been touched by the process that crashed. If you have a mini-file system driver for the boot device in the kernel, it ought to be possible to bring the "real" file system back up. QNX handles these problems with its "High Availability Manager".
If you can cleanly describe your kernel as either monolithic or microkernel, you're probably not doing things as well as you could. A modern kernel should probably be both...
I sort of agree, if only in the sense that you have to find a good balance in your design. I don't think this entails shoving more things back into kernel space. In that case, you pay both penalties -- the complexity of factoring things out into server processes, and the complexity of having to support two run-time environments (a user-space one for apps and a kernel-space one for drivers). I'd rather avoid the second kind of complexity entirely.

One way to achieve balance in a microkernel system is to not put everything in its own server process, but rather to pool certain components together into the same process where it makes sense. For example, you can have a process for your SCSI driver, which can load different logical file system drivers as shared libraries.

Re:Microkernel Design Info

Post by kiran »

Brendan wrote: I've ignored Tanenbaum after seeing the (IMHO) lack of insight that is the design of AST's Ameoba "distributed" operating system.
But what is the problem with Amoeba? Isn't it really a distributed operating system? I thought it was a true distributed OS, unlike Mach, which is an extension of Unix.

But forgive me if I'm ignorant.

Kiran

Re:Microkernel Design Info

Post by Dreamsmith »

Colonel Kernel wrote:I don't really see how the same thing is possible in a monolithic kernel. Let's say you have a file system driver in kernel space that crashes. The only viable course of action I can see is to halt the machine.
Good thing the people who designed the Apollo computers weren't so pessimistic... ;)

If you're not familiar with the story, when they sent the first lunar lander to the surface of the moon, the onboard computer was overloading and erroring out in a way it had never done during testing. Some tasks were running that shouldn't have been, and it was overloading the scheduler. It essentially crashed and recovered at 10 second intervals during the entire landing procedure. Thankfully, it was designed to recover from crashes rather than simply halt, or Neil Armstrong would have been the first man to go splat on the moon.

There are numerous different strategies for handling the problem. First, you don't need filesystem access if you've made sure your code pages aren't writable (otherwise, you use the same trick you suggested for the microkernel -- a basic read-only backup filesystem driver). You probably want to kill the process that was executing at the time; it's probably responsible for causing the crash, even if the crash occurred during kernel execution, and even if it's not at fault, you may have corrupted it, and you don't know how to check or fix it, because you didn't write it. You wrote the kernel, though -- you ought to know how to check and fix anything corrupted in its data structures. Think of it as a memory fsck. You probably want to essentially "reboot" the kernel, but with special routines that, rather than simply initializing your structures, also attempt to rebuild them from the old structures. If you have some kind of journal, even better: you can complete or back out of any operations that were in progress. In the end, most processes won't even know anything happened, save for the one you had to kill.

People have been designing crash recovery systems since long before the word "microkernel" was coined. It seems odd anyone would suddenly think they now have a monopoly on reliability...

Re:Microkernel Design Info

Post by Solar »

The Colonel is right. In a monolithic design, a crashing subsystem is capable of taking the rest of the system with it, because it's running in the same address space. No matter what safeguards you have in place, they could be compromised.

But Dreamsmith is also right - just making your design a microkernel one earns you zilch if you don't add such safeguards too.

If your goal is maximum stability, a microkernel with safeguards for component crash recovery is a good design decision. If you want an easy design with maximum efficiency, monolithic is probably the way to go.