Tanenbaum's Book (out of Book Recomm.)

Questions about which tools to use, bugs, the best way to implement a function, etc. should go here. Don't forget to check whether your question is already answered in the wiki first! When in doubt, post here.
gaf
Member
Posts: 349
Joined: Thu Oct 21, 2004 11:00 pm
Location: Munich, Germany

Re:Tanenbaum's Book (out of Book Recomm.)

Post by gaf »

Candy wrote:The use of monolithic in OS kernels does not always imply that it has to be a static chunk you can't change without recompiling, which is what most people still assume monolithic means.
Have you ever wondered why it's 'monolithic' and not 'polylithic'? A real monolithic operating system consists of one piece, and if you want to change anything you have to hack the source. Any design that allows more modularity already has some characteristics of a layered system, or even of a bloated µ-kernel.
Candy wrote:The division between monolithic and microkernel is that a monolithic kernel runs drivers in kernel space, the reasoning being that you can't prevent the system from crashing if a driver is behaving irresponsibly, so why bother with the extra layer if it's still going to crash. Microkernels are based on the assumption that you can recover from a device or driver crash.
That's your personal definition, and if you use it when designing your operating system, I don't mind a bit if you build a "monolithic" kernel.
My definition, however, is that systems can be categorized as monolithic, µ-kernel, or exokernel by how much flexibility/modularity they provide and to what extent the system policy is defined by the apps rather than by the operating system. Whether drivers run in user space or not is just a detail of the implementation; what really matters is the internal design. How else could you explain that exokernels, which are according to the FAQ "an attempt to drive the Microkernel concept to the extreme", run all drivers as part of the operating system in kernel space?

regards,
gaf
Solar
Member
Posts: 7615
Joined: Thu Nov 16, 2006 12:01 pm
Location: Germany
Contact:

Re:Tanenbaum's Book (out of Book Recomm.)

Post by Solar »

Sorry, but both Candy and gaf are wrong IMHO.
gaf wrote: A real monolithic operating system consists of one piece and if you want to change anything you have to hack the source.
Quoting from Wikipedia: "In computer operating systems, a monolithic kernel is a kernel which behaves as a single program, rather than as a collection of intercommunicating programs as in the microkernel design."

The Linux kernel is more than one binary, but the drivers are compiled directly against the kernel sources; they are not stand-alone programs.
Candy wrote:The division between monolithic and microkernel is that a monolithic kernel runs drivers in kernel space, the reasoning being that you can't prevent the system from crashing if a driver is behaving irresponsibly, so why bother with the extra layer if it's still going to crash.
On the same tangent, nothing was protecting AmigaOS' Exec (you saw that coming, did you? ;) ) from its drivers. Yet still, drivers were independent pieces of code (actually, a special case of shared library), and they were intercommunicating through a minimalistic kernel API.
gaf wrote: That's your personal definition, and if you use it when designing your operating system, I don't mind a bit if you build a "monolithic" kernel.
I hope you don't mind at all if someone builds a monolithic kernel. It's not like it's a crime to do so.
gaf wrote: Whether drivers run in user-space or not is just a detail of the implementation; what really matters is the internal design.
Correct. And Linux is a modular monolithic kernel. Heck, they even added a web and ftp server to the kernel, now that's monolithic...
Every good solution is obvious once you've found it.
Candy
Member
Posts: 3882
Joined: Tue Oct 17, 2006 11:33 pm
Location: Eindhoven

Re:Tanenbaum's Book (out of Book Recomm.)

Post by Candy »

Solar wrote: Sorry, but both Candy and gaf are wrong IMHO.

Quoting from Wikipedia: "In computer operating systems, a monolithic kernel is a kernel which behaves as a single program, rather than as a collection of intercommunicating programs as in the microkernel design."

The Linux kernel is more than one binary, but the drivers are compiled directly against the kernel sources; they are not stand-alone programs.
Linux drivers are not compiled directly against the kernel sources. They're compiled against the headers, but that is something you also do with any other drop-in replacement. You DO get problems with constants etc., but you do in user space too.

I stand with my reasoning that a microkernel is mainly a microkernel because:

1. the kernel does not include the drivers
2. Drivers are protected from each other, and the kernel is protected from the drivers

If it does the inverse of this, it's a monolithic kernel. That means drivers are not themselves protected, and the kernel can include drivers. Note: that's a can, not a must.

An exokernel is a monolithic kernel made by somebody with very little motivation (to put it very roughly). He does the basics, builds a basic protection mechanism, then leaves the rest to the next guy. It's still a monolithic kernel, with drivers in kernel space and all.
Candy wrote:The division between monolithic and microkernel is that a monolithic kernel runs drivers in kernel space, the reasoning being that you can't prevent the system from crashing if a driver is behaving irresponsibly, so why bother with the extra layer if it's still going to crash.
On the same tangent, nothing was protecting AmigaOS' Exec (you saw that coming, did you? ;) ) from its drivers. Yet still, drivers were independent pieces of code (actually, a special case of shared library), and they were intercommunicating through a minimalistic kernel API.
AmigaOS didn't have a microkernel in the sense in which they're being created today. AmigaOS' kernel was a kernel into which drivers were loaded, and if a driver crashed it could (and probably would) take the system down with it.

It's almost like reasoning that Linux programs are better than Windows programs because they don't crash. No, OK, they're better tested and they don't crash as often, but that's not in itself a reason why Linux systems crash less.
gaf wrote: Whether drivers run in user-space or not is just a detail of the implementation; what really matters is the internal design.
Correct. And Linux is a modular monolithic kernel. Heck, they even added a web and ftp server to the kernel, now that's monolithic...
I would call it crazy... Services don't belong in there, and stuffing them in does make the kernel vulnerable... so yes, that fits the definition I'm using for monolithic quite well.
Solar
Member
Posts: 7615
Joined: Thu Nov 16, 2006 12:01 pm
Location: Germany
Contact:

Re:Tanenbaum's Book (out of Book Recomm.)

Post by Solar »

Candy wrote: Linux drivers are not directly compiled against kernel sources. They're compiled against the headers, but that is something you also do with all other drop-in replacements. You DO get problems with constants etc, but so do you in user space.
The point is that they are compiled against headers that are deliberately kept unstable by the kernel maintainers, i.e. they change structures, identifiers etc. between kernel revisions without further warning. A driver compiled against 2.6.1 kernel headers probably won't work with a 2.6.12 kernel.

The kernel and its drivers are considered one monolithic build package.
1. the kernel does not include the drivers
2. Drivers are protected from each other, and the kernel is protected from the drivers

If it does the inverse of this it's a monolithic kernel.
"Include" is a tricky thing here. Does Windows include its drivers just because you don't get them shipped on a separate CD?

Protection is also tricky - see AmigaOS.
An exokernel is a monolithic kernel made by somebody with very little motivation (to put it very roughly). He does the basics, builds a basic protection mechanism, then leaves the rest to the next guy.
ROTFLBTCSTC!

;D 8) :D ;D 8)
AmigaOS didn't have a microkernel in the sense in which they're being created today. AmigaOS' kernel was a kernel into which drivers were loaded, and if a driver crashed it could (and probably would) take the system down with it.
The drivers were not loaded "into the kernel". I admit that the whole definition thing gets tricky if you don't have an MMU to draw a clear line between the kernel and "the rest", and I won't go into details here, but the design was clearly a microkernel.
It's almost like reasoning that Linux programs are better than Windows programs because they don't crash. No, OK, they're better tested and they don't crash as often, but that's not in itself a reason why Linux systems crash less.
I very much doubt that a run-of-the-mill microkernel could recover from a serious fault in a device driver. Even if it can reload the driver and re-initialize the device in question, well, the driver is still buggy...
Correct. And Linux is a modular monolithic kernel. Heck, they even added a web and ftp server to the kernel, now that's monolithic...
I would call it crazy... Services don't belong in there, and stuffing them in does make the kernel vulnerable... so yes, that fits the definition I'm using for monolithic quite well.
But it does reduce the memory footprint and does increase efficiency / throughput... as always, there are two sides to the coin. ;)
Every good solution is obvious once you've found it.
JoeKayzA

Re:Tanenbaum's Book (out of Book Recomm.)

Post by JoeKayzA »

Candy wrote: I stand with my reasoning that a microkernel is mainly a microkernel because:

1. the kernel does not include the drivers
2. Drivers are protected from each other, and the kernel is protected from the drivers

If it does the inverse of this, it's a monolithic kernel. That means drivers are not themselves protected, and the kernel can include drivers. Note: that's a can, not a must.
Again, we seem to have reached the point where opinions drift apart for lack of definitions. To me, the term 'microkernel' always meant that everything that gets added or loaded at runtime (mainly the drivers) runs in a separate address space and can be preempted by the system (which actually means that they are protected from each other, and that the kernel is protected from them). You might say that these are highly implementation-specific points (address spaces, threads), but I think the term 'kernel' was first introduced with multitasking systems (correct me if I'm wrong on this point), as the portion of system code that manages task and address-space switching, _at least_.

BTW: If you remember my talking about a system that offers protection by enforcing the use of a 'safe' programming language (I was talking about a Java-style virtual machine there): would you call this concept a microkernel too?

The drivers are protected from each other, and so is the kernel, because the code is guaranteed (by whatever means) not to access memory at random or to use privileged instructions. Technically, however, the generated code all runs in the same address space and all in kernel mode, so by my definitions this sounds like a super-monolithic system.



cheers Joe
Colonel Kernel
Member
Posts: 1437
Joined: Tue Oct 17, 2006 6:06 pm
Location: Vancouver, BC, Canada
Contact:

Re:Tanenbaum's Book (out of Book Recomm.)

Post by Colonel Kernel »

JoeKayzA wrote:BTW: If you remember my talking about a system that offers protection by enforcing the use of a 'safe' programming language (I was talking about a Java-style virtual machine there): would you call this concept a microkernel too?

The drivers are protected from each other, and so is the kernel, because the code is guaranteed (by whatever means) not to access memory at random or to use privileged instructions. Technically, however, the generated code all runs in the same address space and all in kernel mode, so by my definitions this sounds like a super-monolithic system.
If you replace "address space" in the definition of microkernel with "protection domain", then the safe-language approach still qualifies IMO. :)

Even if it isn't, Minix is widely regarded as a microkernel, and it has no memory protection...
Top three reasons why my OS project died:
  1. Too much overtime at work
  2. Got married
  3. My brain got stuck in an infinite loop while trying to design the memory manager
Don't let this happen to you!
Brendan
Member
Posts: 8561
Joined: Sat Jan 15, 2005 12:00 am
Location: At his keyboard!
Contact:

Re:Tanenbaum's Book (out of Book Recomm.)

Post by Brendan »

Hi,
Solar wrote:
Candy wrote: Linux drivers are not directly compiled against kernel sources. They're compiled against the headers, but that is something you also do with all other drop-in replacements. You DO get problems with constants etc, but so do you in user space.
The point is that they are compiled against headers that are purposefully defined unstable by the kernel maintainers, i.e. they change structures, identifiers etc. between kernel revisions without further warnings. A driver compiled against 2.6.1 kernel headers probably won't work with a 2.6.12 kernel.
It seems to me that back when Linux was "a single piece like stone" the developers realized how many problems this would cause, and slapped some modularity onto it without bothering to define decent/standard interfaces (that, or device driver writers ignored the decent/standard interfaces). Either way it sounds like an ugly hack.
Solar wrote: I very much doubt that an off-the-mill microkernel could recover from a serious fault in the device driver. Even if it can reload the driver and can re-initialize the device in question, well, the driver is still buggy...
If a sound card driver is buggy then it'd be nice to dump it and keep running without sound (or give the user a chance to download a better driver).

With a micro-kernel design it's actually fairly easy to prevent serious device driver faults from affecting the kernel and other software (with the exception of bus-mastering devices), but the difficulty of restoring full functionality depends on the device that failed and what it was being used for. Obviously, if a hard disk driver is being used for swap space and it crashes, then your chances of recovering are very small. For certain distributed designs it'd be possible to recover from a video or keyboard failure by logging in from another computer, or to recover from a file system failure (as long as the VFS continues working and there's enough redundancy).
Solar wrote: But it does reduce memory footprint and does increase efficiency / throughput... as always, there are two sides to the medal. ;)

The latest news tells me that Intel currently has 10 or more projects involving chips with 4 or more cores, and Microsoft's Longhorn/Vista is expected to require around 1 GB of RAM. IMHO an OS developer can afford to sacrifice some efficiency and a little memory for other features...


Cheers,

Brendan
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.
gaf
Member
Posts: 349
Joined: Thu Oct 21, 2004 11:00 pm
Location: Munich, Germany

Re:Tanenbaum's Book (out of Book Recomm.)

Post by gaf »

Solar wrote:Quoting from Wikipedia: "In computer operating systems, a monolithic kernel is a kernel which behaves as a single program, rather than as a collection of intercommunicating programs as in the microkernel design."

The Linux kernel is more than one binary, but the drivers are compiled directly against the kernel sources; they are not stand-alone programs.
You shouldn't take everything I say quite so literally...
What's the difference between several smaller apps that depend on each other and one big app that consists of a large number of procedures?
Candy wrote:I stand with my reasoning that a microkernel is mainly a microkernel because:

1. the kernel does not include the drivers
2. Drivers are protected from each other, and the kernel is protected from the drivers

If it does the inverse of this, it's a monolithic kernel.
I had no idea that operating system design could be that simple...

The two points you mention are valid for most µ-kernels, but there are exceptions, and they are definitely not appropriate for a definition as they somewhat miss the point. It's like saying that the difference between a king and a president is that the former lives in a palace (kernel mode) while the latter has to work in a parliament which is open to the public (user mode).
JoeKayzA wrote:If you remember my talking about as system that offers protection through enforcing the use of a 'safe' programming language (I was talking about a java-style virtual machine there): Would you call this concept a microkernel too?
I agree with Colonel Kernel here: what's the difference between providing protection using the TLB and providing the same protection by means of a safe language?

I personally don't think that protection is the main criterion; what really matters is the increased flexibility that can only be achieved with a more modular design. This more modular design might make protection useful, as malicious code could otherwise easily run at kernel level, but the decision whether that risk is acceptable or not should be left to the user.

regards,
gaf
Colonel Kernel
Member
Posts: 1437
Joined: Tue Oct 17, 2006 6:06 pm
Location: Vancouver, BC, Canada
Contact:

Re:Tanenbaum's Book (out of Book Recomm.)

Post by Colonel Kernel »

gaf wrote:I personally don't think that protection is the main criterion; what really matters is the increased flexibility that can only be achieved with a more modular design.
I don't think a good taxonomy is based on listing the benefits of a given approach (flexibility), but rather its attributes. So yes, protection does matter; I just happen to believe that most people take it too literally (e.g. protection == address spaces, when it could just as well be type-safe, verifiable code). But you're right that protection is not the only criterion.

Abstraction is the other criterion, and it's one that until recently I took for granted. The evidence of this is what I just mentioned about "protection == address spaces". We're mostly indoctrinated to think in terms of address spaces... when you have a hammer, everything looks like a nail.
An exokernel is a monolithic kernel made by somebody with very little motivation (put very rough). He does the basics and makes a basic protection thing, then leaves the rest of it to the next guy. It's still a monolithic kernel, with drivers in kernel space and all.
I think the reason exokernels are so misunderstood is that they're so different on the abstraction side of things, which people aren't used to. They push all abstractions out of the kernel, providing a very low-level and architecture-specific API. IMO they most definitely are not monolithic, because they don't provide the high-level abstractions that a monolithic kernel does. I wouldn't even call the things in an exokernel "drivers", because they don't provide a higher-level interface (e.g. a /dev filesystem) to the hardware they "drive"; instead they focus exclusively on multiplexing the hardware.

When you think in these terms, you begin to see a lot more possibilities for OS design. It was quite a revelation to me that threads and address spaces need not be the lowest-level abstractions in your design. I'm still wrapping my brain around it, but I get the sense that if you put your protection boundary in the right place relative to your "abstraction boundary", and if you use the right kind of protection boundary, you can get all the flexibility you need without the performance hit that many microkernels have suffered. This is just a hunch at this point, but an interesting one IMO.

I've seen other (non-OS) architectures fail by either being fast but inflexible or flexible but really slow, usually because those architectures have some critical boundary that cuts the wrong way (i.e. -- the pieces of the architecture are divided in a manner orthogonal to how they should be divided). Usually this happens because the designer got confused between the goals of the design and the means used to decompose the design (i.e. -- the "hammer-and-nail" problem). For example, an architecture that needs to be flexible might be entirely object-oriented, but that architecture's goal is to manipulate vast amounts of data, which is inefficient if each datum is represented as an object. Sounds stupid, but I've seen it happen.
Top three reasons why my OS project died:
  1. Too much overtime at work
  2. Got married
  3. My brain got stuck in an infinite loop while trying to design the memory manager
Don't let this happen to you!