
Re: Microkernels are easier to build than Monolithic kernels

Posted: Tue Sep 06, 2022 3:09 pm
by eekee
It seems I've been dreaming of the best of both worlds: restartable services and everything thoroughly tested. It's a dream. :) It might be achievable in a single-developer or small-team OS. I chose interactive Forth partly because it makes simple testing so easy, but the other day I caught myself not testing a definition because it wasn't easy to test interactively.

nexos wrote:The idea that microkernels are "slow" is based off of first-generation microkernels (MINIX and Mach mainly). MINIX is a teaching OS, not really production quality. Mach is a failure-by-design in a lot of ways. Don't get me wrong, Mach has clever ideas but is implemented very poorly. This was due to Mach's poor spatial locality and large, complicated IPC mechanism.
Smol correction: MINIX version 3 is not intended to be a teaching OS. It's aiming for full production quality with all services restartable. I last heard about it years ago, when they'd recently had to accept that they needed the MMU and seemed to have a lot of work to do to adapt. That may have been 8-10 years ago. There can't have been a major release since then or I'm sure we'd all have heard, so it doesn't seem to be progressing much faster than GNU Hurd.

Mach has poor performance? That's interesting to me, because I've recently been comparing Mach, in the form of OS X 10.5, with Windows 10 & 11. Different generations, I know, and it's only a subjective comparison, but there's this one use case where old OS X was clearly faster than new Windows. It's Eagle Mode, a zoomable UI in which the file manager reads the contents of files to display them as you zoom in and out through the directory structure. If I'm not mistaken, it uses POSIX-ish open/close/read/write; I don't see any signs of mmap. It was easily usable as my normal file manager in OS X (and Linux and FreeBSD too), but in Win10 and Win11, it stutters so much as to be uncomfortable in Win11, barely usable in Win10. (Or perhaps: uncomfortable on exfat, barely usable on NTFS.) I think it's a known fact amongst Windows programmers that "sequential IO" (as they call it) is unoptimized, but Apple must have done something interesting to Mach to put it in the same league as Linux in this use case. :)
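
To pin down what I mean by the access pattern: I haven't studied Eagle Mode's source closely, so preview_file() and the details below are my own sketch rather than its actual code, but I'd guess the preview path boils down to a plain buffered read loop with no mmap and no async I/O:

/* My guess at the access pattern in question: plain sequential reads through
 * the POSIX calls, no mmap and no async I/O. preview_file() is a made-up name. */
#include <fcntl.h>
#include <unistd.h>

long preview_file(const char *path, char *buf, long bufsize)
{
    int fd = open(path, O_RDONLY);
    if (fd < 0)
        return -1;

    long total = 0;
    ssize_t n;
    /* each read() is a separate trip through the kernel's buffered-I/O path,
     * which is exactly where an OS X vs Windows difference would show up */
    while (total < bufsize && (n = read(fd, buf + total, bufsize - total)) > 0)
        total += n;

    close(fd);
    return total;
}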

(I'm not praising Eagle Mode here; it should have been designed with more asynchronous internal interfaces. It's just interesting to see what it reveals.)
nexos wrote:Second-generation microkernels (L4) are much faster. E.g, a round trip syscall on Mach took 500 us, on L4, 16 us. I believe a Unix system based on L4's design principles could potentially take the OS world by storm. By implementing small, fast messaging using many optimizations, microkernels could get somewhat close to their monolithic counterparts.
This could be very nice, but the phrase "many optimizations" raises a warning flag in my mind. It suggests complex or counterintuitive code, perhaps with difficult debugging for OS developers.

But speaking of 1st-generation microkernel Unixes, what about QNX? It was renowned for being quick (and very small). Perhaps they invented optimizations and didn't publish.

Re: Microkernels are easier to build than Monolithic kernels

Posted: Tue Sep 06, 2022 4:49 pm
by thewrongchristian
eekee wrote:Mach has poor performance? That's interesting to me, because I've recently been comparing Mach, in the form of OS X 10.5, with Windows 10 & 11. Different generations, I know, and it's only a subjective comparison, but there's this one use case where old OS X was clearly faster than new Windows.
Mach, as used in NeXTSTEP (as well as OSF/1), was based on Mach 2.5, which was basically a hybrid kernel with the BSD kernel ported to Mach primitives. So Mach implemented the threading, MM, etc., and the BSD syscall interface was implemented in kernel mode in terms of those Mach primitives, instead of the crufty old PDP-11/VAX-like primitives used previously.
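
Very roughly, the layering looked something like this. bsd_sbrk_grow() is a name I've invented purely to illustrate, and while vm_allocate() is a real Mach VM call, the in-kernel headers and details differed, so treat it as a sketch of the idea rather than actual Mach 2.5 code:

#include <mach/mach.h>   /* userspace header; the in-kernel equivalents differ */

/* Sketch of the hybrid split: a BSD-flavoured call running in kernel mode,
 * but expressed in terms of the Mach VM primitives rather than poking at
 * page tables itself. bsd_sbrk_grow() is invented for illustration. */
kern_return_t bsd_sbrk_grow(vm_map_t task_map, vm_address_t *brk, vm_size_t increment)
{
    vm_address_t addr = *brk;
    /* ask the Mach VM layer for more address space at the current break */
    kern_return_t kr = vm_allocate(task_map, &addr, increment, VM_FLAGS_FIXED);
    if (kr == KERN_SUCCESS)
        *brk += increment;
    return kr;
}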

So it's not so far removed from Windows NT really, which isn't surprising, as Rick Rashid worked on both.

Re: Microkernels are easier to build than Monolithic kernels

Posted: Tue Sep 06, 2022 5:01 pm
by eekee
thewrongchristian wrote:
eekee wrote:Mach has poor performance? That's interesting to me, because I've recently been comparing Mach, in the form of OS X 10.5, with Windows 10 & 11. Different generations, I know, and it's only a subjective comparison, but there's this one use case where old OS X was clearly faster than new Windows.
Mach, as used in NeXTSTEP (as well as OSF/1), was based on Mach 2.5, which was basically a hybrid kernel with the BSD kernel ported to Mach primitives. So Mach implemented the threading, MM, etc., and the BSD syscall interface was implemented in kernel mode in terms of those Mach primitives, instead of the crufty old PDP-11/VAX-like primitives used previously.

So it's not so far removed from Windows NT really, which isn't surprising, as Rick Rashid worked on both.
Interesting. Thanks! That is a hybrid.

Re: Microkernels are easier to build than Monolithic kernels

Posted: Thu Nov 24, 2022 4:49 pm
by AndrewAPrice
I've been thinking about porting my OS to another architecture (ARM, to run on my Raspberry Pi), and because the kernel is a microkernel and limited in scope, it seems the simplest and cleanest option is just to rewrite the kernel from scratch for each architecture. Then, for my system library, the only thing I'd need to change is how the system calls are invoked on each architecture, and I could consolidate that code into one place to make it obvious what needs to be ported.

Re: Microkernels are easier to build than Monolithic kernels

Posted: Thu Nov 24, 2022 10:31 pm
by nullplan
AndrewAPrice wrote:I've been thinking about porting my OS to another architecture (ARM, to run on my Raspberry Pi), and because the kernel is a microkernel and limited in scope, it seems the simplest and cleanest option is just to rewrite the kernel from scratch for each architecture.
Wat? But the kernel is supposed to present the same interface to userspace. Even a microkernel needs to do that. And memory management (the one core component that must be in the kernel) isn't all that different between architectures.

Normally, kernels have architecture-specific abstractions to let the core kernel, the main logic, remain unchanged. Then porting to a new architecture is just a matter of finding out all the differences and building abstractions for them. This of course requires the kernel author to separate arch-specific and arch-independent parts from the word go. So you saying that rewriting the kernel is simpler tells me you didn't do that, and will continue to not do that, but instead will write everything again.
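
A minimal sketch of what I mean, with made-up names throughout; the point is just that the core logic compiles unchanged on every port and only the arch_* functions get rewritten per architecture:

/* arch.h -- the arch-specific interface; every name here is illustrative */
struct arch_context;   /* saved register state, defined differently per architecture */

void arch_context_switch(struct arch_context *from, struct arch_context *to);
void arch_map_page(void *root_table, unsigned long virt, unsigned long phys, int flags);
void arch_enable_interrupts(void);

/* sched.c -- arch-independent core logic, identical on x86, ARM, ... */
struct thread {
    struct arch_context *ctx;   /* opaque to the portable code */
    struct thread *next;
};

void schedule(struct thread *current, struct thread *next)
{
    /* run queues, priorities and accounting all live in portable code;
     * only the final register swap goes through the arch layer */
    arch_context_switch(current->ctx, next->ctx);
}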

I think it would be better to expend the effort of separating the arch-specific parts out from the arch-independent ones. Because your way would leave you with two independent code bases indefinitely, increasing maintenance burden. If you port to more architectures, you get even more code bases. You will go crazy fixing the same bugs n times over and over again.

Re: Microkernels are easier to build than Monolithic kernels

Posted: Fri Nov 25, 2022 4:47 pm
by AndrewAPrice
nullplan wrote:Then porting to a new architecture is just a matter of finding out all the differences and building abstractions for them. This of course requires the kernel author to separate arch-specific and arch-independent parts from the word go. So you saying that rewriting the kernel is simpler tells me you didn't do that, and will continue to not do that, but instead will write everything again.
I wasn't being entirely literal about the "from scratch" part. There will be reuse, but instead of "take the existing kernel and figure out what can be abstracted", I was thinking of "rewrite for a new architecture, but figure out what can be reused and move it into shared code."

The kernel is an implementation of the system calls. Each port needs to implement those system calls. The system call interface will vary slightly between architectures (the opcode and the registers available), but as long as we implement the same user-space C interface to the system calls, all user code (except code requiring assembly) that compiles against these syscall stubs will be portable.
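
Something like this is what I have in mind for the stubs. The function name, syscall number, and register conventions below are placeholders (I've borrowed Linux-style conventions for the example); the shared C prototype at the top is the part every port keeps identical:

/* syscall.h -- the shared user-space C interface, identical on every port */
#define SYS_SEND_MESSAGE 7   /* placeholder syscall number */
long sys_send_message(long dest, const void *msg, long len);

/* syscall_x86_64.c -- per-arch stub: number in rax, args in rdi/rsi/rdx */
#ifdef __x86_64__
long sys_send_message(long dest, const void *msg, long len)
{
    long ret;
    __asm__ volatile("syscall"
                     : "=a"(ret)
                     : "a"(SYS_SEND_MESSAGE), "D"(dest), "S"(msg), "d"(len)
                     : "rcx", "r11", "memory");
    return ret;
}
#endif

/* syscall_aarch64.c -- per-arch stub: number in x8, args in x0-x2, trap via svc */
#ifdef __aarch64__
long sys_send_message(long dest, const void *msg, long len)
{
    register long x8 __asm__("x8") = SYS_SEND_MESSAGE;
    register long x0 __asm__("x0") = dest;
    register long x1 __asm__("x1") = (long)msg;
    register long x2 __asm__("x2") = len;
    __asm__ volatile("svc #0"
                     : "+r"(x0)
                     : "r"(x8), "r"(x1), "r"(x2)
                     : "memory");
    return x0;
}
#endif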

My micro-"kernel" itself is small that it feels like it's just an abstraction on top of the kernel to provide for messaging and multitasking. My first hunch is that maybe 20% of my code isn't arch-specific, but maybe once I undertake this task it ends up being 80%.