microkernel development
Hello,
I've been thinking of starting a new microkernel. My previous kernels have been monolithic kernels with POSIX-like APIs.
Are microkernels more difficult to develop?
I can see lots of issues: for example, the difficulty of debugging the interaction between several different programs, needing to get shared library support working to avoid wasting lots of memory, and needing a careful design of how the different servers communicate with each other to avoid big redesigns later down the line.
Are there many real advantages to the microkernel design? (I know about the stability/security benefits of having drivers/servers run in user mode)
Can they be made fast enough to be practical? (L4 seems to be quick, but implementing a POSIX API over it would probably punish it)
I know that some aspects of this question have been done to death, and I'm not looking for a flame war (given the history of this debate), but a proper discussion.
Thanks
Re: microkernel development
abhoriel wrote: Are microkernels more difficult to develop?
I'd say no. However, there are far fewer working operating systems to study. TBH, I've never worked on a monolithic kernel.
abhoriel wrote: I can see lots of issues: for example, the difficulty of debugging the interaction between several different programs, needing to get shared library support working to avoid wasting lots of memory, and needing a careful design of how the different servers communicate with each other to avoid big redesigns later down the line.
One thing I have found very useful is a unified system log (I use the serial port) to which all processes and the kernel can log. This allows you to see the sequence of what's happening in all those communicating processes in one place. Shared libs aren't essential, but they do make debugging your system easier. A large part of your OS will be implemented in your libc or equivalent, and continually needing to rebuild the world because you changed some detail of your message passing will quickly become boring.
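To make the idea concrete, here is a minimal sketch of such a unified serial log, assuming an x86 PC with COM1 at port 0x3F8 and the usual outb/inb port I/O; the serial_*/klog names are illustrative rather than taken from any existing kernel. The kernel calls klog() directly and exposes it to user processes as a trivial "log this string" system call, so every component's output ends up in one ordered stream.

Code:

#include <stdint.h>

#define COM1 0x3F8

static inline void outb(uint16_t port, uint8_t val)
{
    __asm__ volatile ("outb %0, %1" : : "a"(val), "Nd"(port));
}

static inline uint8_t inb(uint16_t port)
{
    uint8_t val;
    __asm__ volatile ("inb %1, %0" : "=a"(val) : "Nd"(port));
    return val;
}

void serial_init(void)
{
    outb(COM1 + 1, 0x00);   /* disable serial interrupts          */
    outb(COM1 + 3, 0x80);   /* enable DLAB to set the baud rate   */
    outb(COM1 + 0, 0x01);   /* divisor = 1 -> 115200 baud         */
    outb(COM1 + 1, 0x00);
    outb(COM1 + 3, 0x03);   /* 8 data bits, no parity, 1 stop bit */
    outb(COM1 + 2, 0xC7);   /* enable and clear the FIFOs         */
}

static void serial_putc(char c)
{
    while ((inb(COM1 + 5) & 0x20) == 0)
        ;                   /* wait until the transmit buffer is empty */
    outb(COM1, (uint8_t)c);
}

/* Unified log: the kernel calls this directly, and user processes reach it
 * through a trivial system call, so all output appears in one stream. */
void klog(const char *who, const char *msg)
{
    for (const char *p = who; *p; p++) serial_putc(*p);
    serial_putc(':'); serial_putc(' ');
    for (const char *p = msg; *p; p++) serial_putc(*p);
    serial_putc('\r'); serial_putc('\n');
}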
abhoriel wrote: Are there many real advantages to the microkernel design? (I know about the stability/security benefits of having drivers/servers run in user mode)
I don't know. I like modular designs and microkernels have that.
abhoriel wrote: Can they be made fast enough to be practical? (L4 seems to be quick, but implementing a POSIX API over it would probably punish it)
If a trainstation is where trains stop, what is a workstation?
Re: microkernel development
Hi,
abhoriel wrote: Are microkernels more difficult to develop?
The micro-kernel itself is a lot easier to develop than a monolithic kernel; however, you shouldn't let that give you a false sense of optimism.
For the equivalent amount of functionality to a monolithic kernel (e.g. the micro-kernel itself, plus the drivers, services, etc in user-space, plus the communication protocols needed to make it work), micro-kernels are harder than monolithic kernels.
abhoriel wrote: I can see lots of issues: for example, the difficulty of debugging the interaction between several different programs, needing to get shared library support working to avoid wasting lots of memory, and needing a careful design of how the different servers communicate with each other to avoid big redesigns later down the line.
Debugging is actually easier (because drivers, etc. are in user-space, you can use normal "user-space debugging" tools; because everything is isolated, bugs are more likely to be instantly visible rather than "hard to diagnose random corruption of something unrelated"; and because everything is isolated, you don't need to care about other pieces when debugging one piece).
Typically (not always) shared libraries waste memory because they're "shared" by a small number of processes that each only uses part of the shared library (in other words, you waste RAM for parts of the shared library that aren't actually used by any process). In addition to wasting memory they add overhead (due to compilers and link-time optimisers not being able to inline and/or optimise the library's functions; which can include optimisations that reduce the memory consumed, like constant folding and dead code elimination). However, shared libraries are not really any worse for micro-kernels than they are for monolithic kernels (they're just always bad, except for special cases like system libraries for programming languages if a lot of processes use that programming language - e.g. "libc").
Needing careful design of the communication between pieces (drivers, services, etc) is the real problem; however (even for monolithic kernels where it's "communication between pieces in kernel") careful design of the communication is beneficial to avoid "long term churn" (where other pieces break and have to be updated/modified/rewritten because you changed something else). Note that for a hobbyist monolithic kernel there's very little chance of "long term" churn because there's very little risk of "long term" (e.g. never any real need to maintain backward compatibility with previous versions, etc).
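As a sketch of what "careful design of the communication" can mean in practice (all of the names and field sizes below are hypothetical, not from any existing kernel), one common approach is to start every message with the same small, versioned header, so that protocols can gain new message types and revisions without every driver and service having to be rebuilt in lock-step.

Code:

#include <stdint.h>

/* Fixed header at the start of every message. The layout never changes;
 * only the payload that follows it evolves with the protocol version. */
struct msg_header {
    uint32_t protocol;   /* which protocol this belongs to (e.g. VFS, network) */
    uint16_t version;    /* protocol revision the sender is speaking           */
    uint16_t type;       /* request/reply type within that protocol            */
    uint32_t length;     /* length of the payload that follows, in bytes       */
    uint32_t reply_tag;  /* echoed in the reply so the sender can match it up  */
};

enum { MSG_ERR_BAD_VERSION = 1 };

/* Receiving side: refuse revisions it doesn't understand with a well-defined
 * error instead of silently misinterpreting the payload. */
static int msg_check(const struct msg_header *h, uint16_t supported_version)
{
    return (h->version <= supported_version) ? 0 : MSG_ERR_BAD_VERSION;
}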
abhoriel wrote: Are there many real advantages to the microkernel design? (I know about the stability/security benefits of having drivers/servers run in user mode)
Security/stability, debugging, and making the importance of careful design more obvious are all benefits of micro-kernels. These also have secondary benefits (making it possible to trust third-party drivers and services, making it faster/easier for people to write drivers and services, etc).
Beyond that, it depends on how the OS is designed. If you want to put in some extra effort; it's easier to do fault tolerance with a micro-kernel, easier to do a distributed system with a micro-kernel, easier to do real time tasks with a micro-kernel, easier to do "high availability" (e.g. update drivers, etc without rebooting) with a micro-kernel, etc.
abhoriel wrote: Can they be made fast enough to be practical? (L4 seems to be quick, but implementing a POSIX API over it would probably punish it)
POSIX is extremely bad for micro-kernels - it's not designed to avoid or mitigate the additional cost of inter-process communication that an OS designed for a micro-kernel requires. Sadly, lots of micro-kernels (in the past) have implemented POSIX, and this has given micro-kernels a far worse reputation for performance than they deserve. Note: Don't get me wrong here - a micro-kernel must sacrifice some performance to gain other benefits (security, etc.); it's just that the amount of performance sacrificed is exacerbated by POSIX compliance.
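To make that cost concrete, here is a rough sketch (every identifier is hypothetical) of what a POSIX read() tends to become when the file system lives in a separate server process: each call turns into a synchronous request/reply exchange, with the copying and task switches that implies, which is exactly the overhead a purpose-built asynchronous native API could batch or overlap.

Code:

#include <stddef.h>
#include <stdint.h>
#include <sys/types.h>

/* Hypothetical request layout sent from the libc shim to the VFS server. */
struct vfs_read_req { uint16_t type; int fd; size_t count; };

/* Hypothetical IPC primitive: send a request to a server's port and block
 * until the reply arrives - this is where the task switches happen. */
extern long ipc_call(int port, const void *req, size_t req_len,
                     void *reply, size_t reply_len);

#define VFS_PORT     1       /* assumed well-known port of the VFS server */
#define VFS_MSG_READ 0x10

ssize_t read(int fd, void *buf, size_t count)
{
    struct vfs_read_req req = { VFS_MSG_READ, fd, count };

    /* One POSIX call = at least one full IPC round trip (and possibly more
     * IPC between the VFS and a disk driver) before control returns here. */
    long got = ipc_call(VFS_PORT, &req, sizeof(req), buf, count);
    return (ssize_t)got;
}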
Cheers,
Brendan
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.
Re: microkernel development
Many thanks for your comprehensive responses.
I'm going to go ahead with writing a microkernel. I guess a lot of code is required to get the whole concept running in the first place. Fortunately, I can grab a lot of code from my old kernels.
I agree about POSIX; I will probably try to be loosely compatible with it. Maybe I will try to isolate its horribleness in some kind of server, and also construct a proper API for native programs.
Re: microkernel development
Hi,
abhoriel wrote: I'm going to go ahead with writing a microkernel. I guess a lot of code is required to get the whole concept running in the first place. Fortunately, I can grab a lot of code from my old kernels.
Just be aware that during boot micro-kernels are very different - the goal is to shift things out of the kernel, and this includes boot code, and can even include most of what you'd consider "kernel initialisation". For example, I do things like build physical memory management data structures, parse ACPI tables, start other CPUs, enable paging, etc., all before starting the micro-kernel.
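As a rough illustration of that division of labour (all of the function names below are hypothetical and simply mirror the steps listed above), a boot stub along these lines does the one-off work and only enters the micro-kernel at the very end.

Code:

/* Hypothetical boot stub: everything that only needs to run once at boot is
 * done here, outside the micro-kernel proper. */

struct boot_info;                         /* memory map, ACPI pointers, etc. */

void pmm_init(struct boot_info *bi);      /* build physical memory manager data */
void acpi_parse(struct boot_info *bi);    /* walk the ACPI tables               */
void smp_start_aps(struct boot_info *bi); /* bring the other CPUs online        */
void paging_enable(struct boot_info *bi); /* build and load the page tables     */
void microkernel_entry(struct boot_info *bi) __attribute__((noreturn));

void boot_main(struct boot_info *bi)
{
    pmm_init(bi);
    acpi_parse(bi);
    smp_start_aps(bi);
    paging_enable(bi);

    /* Only now is the micro-kernel itself started; the code above never
     * runs again, so it can be reclaimed or discarded. */
    microkernel_entry(bi);
}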
abhoriel wrote: I agree about POSIX; I will probably try to be loosely compatible with it. Maybe I will try to isolate its horribleness in some kind of server, and also construct a proper API for native programs.
That's a good start; but if it's easier for people to port POSIX stuff than it is to write native software, then they'll just port POSIX stuff (and then complain that the OS has poor performance and is inferior). To avoid this, I'd recommend not supporting POSIX until after you have plenty of native software (e.g. similar to how Microsoft waited until their native APIs dominated everything that matters before they bothered with a POSIX compatibility layer that was actually usable).
Cheers,
Brendan
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.
Re: microkernel development
I dislike microkernels.
I like the hybrid kernel concept: some important drivers are put in kernel mode, some less important drivers are put in user mode, etc.
Re: microkernel development
Hi,
Most kernels are probably a mix of monolithic kernel design and microkernel design. There are kernels that are "mostly monolithic kernels" (but have some facilities for userspace drivers, therefore not being purely monolithic) and kernels that are "mostly microkernels" (but have some drivers in kernelspace, therefore not being purely microkernels).
I am, however, unsure whether there are kernels that have adopted either a purely monolithic kernel design or a purely microkernel design. Feel free to correct me on this.
Regards,
glauxosdever
Re: microkernel development
Hi,
glauxosdever wrote: Most kernels are probably a mix of monolithic kernel design and microkernel design. There are kernels that are "mostly monolithic kernels" (but have some facilities for userspace drivers, therefore not being purely monolithic) and kernels that are "mostly microkernels" (but have some drivers in kernelspace, therefore not being purely microkernels). I am, however, unsure whether there are kernels that have adopted either a purely monolithic kernel design or a purely microkernel design. Feel free to correct me on this.
This depends (slightly) on how you define "micro-kernel".
The most widely accepted definition is Jochen Liedtke's "a concept is tolerated inside the kernel only if moving it outside the kernel, i.e. permitting competing implementations, would prevent the implementation of the system's required functionality". Under this definition (which is "more strict" than my definition, despite the lack of consensus on what does/doesn't constitute "required functionality") there are many pure micro-kernels (L4, QNX, Minix 3, etc).
Note: My own definition is more like "a concept is tolerated inside the kernel if there are no advantages to moving it outside the kernel", which differs in cases where a concept could be implemented in user-space without preventing the implementation of the system's required functionality but doing so has no advantages (and only disadvantages). I also define nano-kernel as "everything in user-space where possible, even if it's completely pointless" (which is closer to Liedtke's definition of micro-kernel).
glauxosdever wrote: Most kernels are probably a mix of monolithic kernel design and microkernel design.
Micro-kernel design revolves around isolated entities (e.g. processes) communicating via some form of IPC (e.g. messaging), where (by necessity) that IPC involves switching between isolated entities (e.g. task switches) at some point. Monolithic design uses "anything can directly access anything" instead. These are mostly mutually exclusive.
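A small sketch of that contrast (every identifier here is hypothetical): in the monolithic case the VFS reaches the disk driver with a plain function call, while in the micro-kernel case the same step becomes a message to a separate driver process, which implies at least one task switch each way plus copying the data across the isolation boundary.

Code:

#include <stddef.h>
#include <stdint.h>

/* --- Monolithic: "anything can directly access anything" ------------------ */
int ata_read_sector(uint64_t lba, void *buf);   /* lives in the same kernel image */

int vfs_read_block_monolithic(uint64_t lba, void *buf)
{
    return ata_read_sector(lba, buf);           /* just a call - no switch, no copy */
}

/* --- Micro-kernel: isolated processes exchanging messages ----------------- */
struct disk_req   { uint16_t type; uint64_t lba; };
struct disk_reply { int status; uint8_t data[512]; };

/* Hypothetical primitive: send to a port and block until the reply arrives;
 * this is where the task switches between isolated entities happen. */
long ipc_call(int port, const void *req, size_t req_len,
              void *reply, size_t reply_len);

#define DISK_PORT     7      /* assumed well-known port of the disk driver process */
#define DISK_MSG_READ 0x01

int vfs_read_block_micro(uint64_t lba, void *buf512)
{
    struct disk_req   req = { DISK_MSG_READ, lba };
    struct disk_reply reply;

    if (ipc_call(DISK_PORT, &req, sizeof(req), &reply, sizeof(reply)) < 0)
        return -1;
    for (int i = 0; i < 512; i++)               /* copy the payload back out */
        ((uint8_t *)buf512)[i] = reply.data[i];
    return reply.status;
}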
It would be possible to use monolithic design in a micro-kernel (e.g. by emulating "anything can directly access anything" with remote procedure calls); and it would be possible to use micro-kernel design in a monolithic kernel (e.g. by emulating message passing with direct function calls); but in both of these cases there's a mismatch between the design of interfaces and reality, and this mismatch leads to "worse performance with no advantages" in practice (for both cases).
Cheers,
Brendan
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.
Re: microkernel development
Hi,
I agree with you, Brendan. However, some people say that the monolithic kernel vs. microkernel distinction is not really relevant, or is even troublesome, since most kernels are a mix of both worlds. For example, according to those people, Linux is a "mostly monolithic" hybrid kernel: it allows drivers in userspace, while having most drivers in kernelspace. And many microkernels have some drivers in kernelspace too, therefore being "mostly microkernel" hybrid kernels.
As you said, it all depends on which definition one adopts.
Regards,
glauxosdever
Re: microkernel development
Brendan wrote: Micro-kernel design revolves around isolated entities (e.g. processes) communicating via some form of IPC (e.g. messaging), where (by necessity) that IPC involves switching between isolated entities (e.g. task switches) at some point. Monolithic design uses "anything can directly access anything" instead. These are mostly mutually exclusive.
You might find it both useful and interesting to look at the 'level folding' techniques used in Synthesis. While Synthesis is a mostly monolithic design, and its techniques may not be immediately applicable to a micro-kernel, it may be possible to find an exokernel-esque application of folding in which the driver can run as a shared library within the calling process itself.
Rev. First Speaker Schol-R-LEA;2 LCF ELF JAM POEE KoR KCO PPWMTF
Ordo OS Project
Lisp programmers tend to seem very odd to outsiders, just like anyone else who has had a religious experience they can't quite explain to others.
Re: microkernel development
Although there is no hard boundary between them, as far as I know:
Microkernels are kernels where almost everything is in user mode.
Good for failure protection.
Example: Ghost Kernel.
Monolithic kernels are kernels where almost everything is in kernel mode (except applications).
A bad smart-card driver could lead to system failure.
Remember this: https://www.youtube.com/watch?v=er8g6D_PqvY ? It was caused by a bad Plug and Play scanner driver.
Example: all early operating systems, Windows 9x, the Linux kernel.
Hybrid kernels are kernels where the more important drivers are put in kernel mode, while the less important ones run in user mode.
A mix of monolithic and microkernel. My favorite.
Example: Mac OS X, post-9x Windows, Haiku, ReactOS, DragonFly BSD, eComStation, etc.
Re: microkernel development
Brendan wrote: Debugging is actually easier (because drivers, etc. are in user-space, you can use normal "user-space debugging" tools; because everything is isolated, bugs are more likely to be instantly visible rather than "hard to diagnose random corruption of something unrelated"; and because everything is isolated, you don't need to care about other pieces when debugging one piece).
Only if you design bad debugging interfaces. My user-level debugger can trace into drivers in the kernel, at source level. The lack of this feature in other OSes is just a lack of imagination and/or poor design choices.
Actually, this feature makes it much easier to debug than passing messages to another process, something that no debugger really can handle. You will typically only debug either the client or the server process, not both, and certainly not with the same debugger.
Additionally, I typically debug kernel server threads by starting them from user mode. That way I just create a simple user-mode app that starts the thread, and then I can debug the kernel-mode server thread with the usual application debugger.
Re: microkernel development
Hi,
rdos wrote: Only if you design bad debugging interfaces.
For monolithic there's always a risk of "one thing corrupts something entirely different without any noticeable symptoms until much much later" (where you have no idea which driver actually caused the problem or when), which will always be harder to debug than an extremely obvious page fault that tells you exactly which instruction in which process caused a problem the instant it happens. How good/bad a debugger is makes no difference when you're comparing "need to use a debugger" to "didn't need to use any debugger at all".
rdos wrote: My user-level debugger can trace into drivers in the kernel, at source level. The lack of this feature in other OSes is just a lack of imagination and/or poor design choices.
Most kernels do support this in one way or another (e.g. Linux kernel debugging via GDB), but treat it as a special case because of security concerns and the fact that people debugging something in user-space have no reason to want to debug kernel-space in the first place.
rdos wrote: Actually, this feature makes it much easier to debug than passing messages to another process, something that no debugger really can handle. You will typically only debug either the client or the server process, not both, and certainly not with the same debugger.
For micro-kernels, almost all bugs are either "process crashed" (where you rarely need to use a debugger to find out why), or "process gave a bad reply to a request message of a specific type" (where you start debugging from the point where that specific type of message is received up until a reply is sent), or "process sent a bad request" (where you work backwards to determine why a bad request was sent). You never need to debug multiple processes at the same time.
rdos wrote: Additionally, I typically debug kernel server threads by starting them from user mode. That way I just create a simple user-mode app that starts the thread, and then I can debug the kernel-mode server thread with the usual application debugger.
Your debugger is so bad that it's easier to begin debugging from the wrong place (from user-space and not from kernel-space)?
Cheers,
Brendan
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.
Re: microkernel development
Brendan wrote: For monolithic there's always a risk of "one thing corrupts something entirely different without any noticeable symptoms until much much later" (where you have no idea which driver actually caused the problem or when), which will always be harder to debug than an extremely obvious page fault that tells you exactly which instruction in which process caused a problem the instant it happens. How good/bad a debugger is makes no difference when you're comparing "need to use a debugger" to "didn't need to use any debugger at all".
FYI, this problem can be solved much more efficiently with segmentation, or by using the 16 higher bits of long-mode addresses as a memory indicator (combined with using RIP-relative code, of course). That way, you don't need the incredibly inefficient message passing (and TLB shoot-downs) of micro-kernels.
Brendan wrote: Your debugger is so bad that it's easier to begin debugging from the wrong place (from user-space and not from kernel-space)?
It's the standard debugger of the development tool chain (Open Watcom). I typically run it remotely over TCP/IP on a Windows machine. No need to reinvent the wheel.
Re: microkernel development
Hi,
rdos wrote: FYI, this problem can be solved much more efficiently with segmentation, or by using the 16 higher bits of long-mode addresses as a memory indicator (combined with using RIP-relative code, of course).
FYI, a micro-kernel isolates pieces (drivers, etc.). It doesn't matter much if you use paging for that isolation, or if you use segmentation for that isolation (e.g. L4's "small address spaces"), or even if you use managed languages for that isolation (e.g. Singularity); in all these cases you're still isolating the pieces from each other.
rdos wrote: That way, you don't need the incredibly inefficient message passing (and TLB shoot-downs) of micro-kernels.
Whenever there's some form of isolation between pieces, you need some form of IPC to "punch through" the isolation, regardless of whether that IPC is some form of messaging or something like RPC (Remote Procedure Call).
TLB shoot-down is required by anything that uses paging (including all monolithic OSs that use paging) and has nothing to do with micro-kernels.
Cheers,
Brendan
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.