
Request for proper terminology

Posted: Mon Apr 02, 2012 5:57 am
by turdus
I've read a lot of documentation trying to find a proper name for my kernel model, and I've found none. I would vote for macrokernel, but it's not correct and could be misleading. Wikipedia says:

The hybrid kernel, often called the macrokernel, is primarily a monolithic kernel.
which does not apply to my model.

What I have is clearly a microkernel design by nature. I have only 4 system calls (2 for async send/recv, and 2 for synchronous rpc call/dispatch). You can send messages to userspace servers (fs, terminal, net, etc.). Also, every driver is a normal userspace thread with the privilege to use I/O ports and map MMIO areas (another reason why it's not a hybrid model; see http://en.wikipedia.org/wiki/Hybrid_kernel, the picture on the right).
So far, what I described is a microkernel. My problem is that I have exactly one server which runs in kernelspace (for lack of terminology I call it "the core") and which is mapped into every thread's address space. If you communicate with that specific server, the API looks exactly like sending messages (for convenience), but no message passing is involved; it rather acts like a monolithic kernel. The query is dispatched and the results are stored in the same fashion as if they had arrived in a response message, but no task switch happens. As far as a userspace application is concerned, this "trick" is fully transparent.
My kernel therefore has two parts: sys.arch, a hardware abstraction layer (low-level IRQ and exception handling, and IPC messaging), and the aforementioned sys.core server.
I asked myself which parts cannot be restarted after a crash, or would make the system inefficient if placed in userspace. These were: memory management (physical memory and paging, but not userspace malloc) and multitasking/threading (threads cannot be restarted without it). All kernel calls that belong to one of these (map, unmap, fork, yield, etc.) are implemented by "sending messages" to the core server. Others (for example open, close, listen, accept, etc.) use normal messaging to the appropriate server, as in any microkernel.
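
Roughly, the dispatch path of the synchronous call might look like this (just a sketch; all names here are made up for illustration, not the kernel's actual identifiers): a call aimed at the core server keeps the messaging API but is executed as a plain function call, while everything else goes through real IPC.

Code:

/* hypothetical sketch of the synchronous rpc path described above */
#define CORE_SERVER 0           /* assumed id of the sys.core server */

typedef struct {
    unsigned long dest;         /* destination server (fs, net, core, ...) */
    unsigned long func;         /* requested function (map, fork, open, ...) */
    unsigned long arg[4];       /* arguments / results */
} msg_t;

/* real IPC path: block the caller, switch to the server, wait for a reply */
extern int ipc_call(msg_t *req, msg_t *resp);

/* sys.core entry point: runs in the caller's context; results are stored
 * as if a reply message had arrived, with no task switch */
extern int core_dispatch(msg_t *req, msg_t *resp);

/* the synchronous rpc system call as seen by userspace */
int sys_call(msg_t *req, msg_t *resp)
{
    if (req->dest == CORE_SERVER)
        return core_dispatch(req, resp);    /* transparent shortcut */
    return ipc_call(req, resp);             /* normal microkernel IPC */
}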

My question is, how would one call such a kernel model? Is there a better name than hybrid?

Re: Request for proper terminology

Posted: Mon Apr 02, 2012 6:33 am
by bubach
microlithic? :P

Anyway, seems like a cool concept - combining the best parts of both models. What if the core crashes, bringing down pretty much everything except sys.arch with it - can you restart it and get out of a BSOD situation? And how would you do that without anything else functioning - by keeping clean copies saved in RAM at boot time?

Re: Request for proper terminology

Posted: Mon Apr 02, 2012 6:57 am
by bluemoon
I think that still fits into microkernel.

In a microkernel there is always a small part like your sys.core, shared among all processes and servers, providing basic communication and other dirty stuff, which is unavoidable. In the end, you need a kernel to coordinate the servers. :lol:

Re: Request for proper terminology

Posted: Mon Apr 02, 2012 8:05 am
by turdus
@bubach: if the core crashes, I kill the thread. I don't restart the core server at all, simply because everything required to restart a server is in the core, which is inaccessible at that moment, and the core server does not have its own process, so it's hard to define what to restart. Therefore the core is the only server that must not fail under any circumstances, so I double-check everything in it, and also double-check each input argument. If despite this it crashes in a state that affects more than just the current thread (for example, it messes up the GDT), the core debugger starts. After that the only way out is to reboot. The debugger is like a mini OS within the OS: it has its own descriptor tables and exception handlers and does not depend on any function in the rest of the system.

@bluemoon: yes, but I provide basic communication and coordination in sys.arch (preemption, blocking, unblocking, system interrupts, etc.); the core server contains the functions that would be syscalls in a monolithic kernel. It's more like a privileged library. For example, in minix3 (which is pretty much a microkernel) you have several copies of the process list (one for every server, since they cannot see each other's address space). I put the list in the core, one copy only, and provide core functions to access it. If a server needs information from it, it has to do a core call (where mutual exclusion is guaranteed, btw). It's possible that it still fits the microkernel definition, though.
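
A rough sketch of such a core call (invented names, not the real interface): the process list lives only in the core, and servers read it through a call that takes a lock, so mutual exclusion is guaranteed.

Code:

#include <stdint.h>

typedef struct {
    uint32_t pid;
    uint32_t state;
} proc_info_t;

#define MAX_PROCS 1024

static proc_info_t proc_list[MAX_PROCS];   /* the only copy, lives in the core */
static volatile int proc_lock;             /* assumed spinlock primitive */

extern void spin_lock(volatile int *l);
extern void spin_unlock(volatile int *l);

/* core call used by servers that need process information */
int core_get_proc(uint32_t pid, proc_info_t *out)
{
    int i, found = -1;

    spin_lock(&proc_lock);                 /* mutual exclusion guaranteed */
    for (i = 0; i < MAX_PROCS; i++) {
        if (proc_list[i].pid == pid) {
            *out = proc_list[i];           /* copy out under the lock */
            found = 0;
            break;
        }
    }
    spin_unlock(&proc_lock);
    return found;
}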

Re: Request for proper terminology

Posted: Mon Apr 02, 2012 8:17 am
by bluemoon
Yes, I think it matters little whether that is packed into one file or split into two. sys.core is what I referred to as the dirty work.

Re: Request for proper terminology

Posted: Mon Apr 02, 2012 8:51 am
by gravaera
Just make up your own name if you think your design is original enough: I call mine an "emulated microkernel" because of certain design choices I made. It's not like anybody can look at your creation and see the name you gave it, and tell you what to call it instead ;)

Re: Request for proper terminology

Posted: Mon Apr 02, 2012 9:39 am
by turdus
gravaera wrote:Just make up your own name if you think your design is original enough: I call mine an "emulated microkernel" because of certain design choices I made. It's not like anybody can look at your creation and see the name you gave it, and tell you what to call it instead ;)
I'm not sure how original it is; that's why I'm asking. I think it's named "core" then :-)
Btw, "emulated microkernel" sounds interesting; would you mind describing your design choices?

Re: Request for proper terminology

Posted: Mon Apr 02, 2012 11:15 am
by gravaera
Yo:

In spirit it's a microkernel, but in implementation I code a lot of it as a monolithic kernel, at least for now. For example, there isn't one driver API, but two: one for "very performance critical" drivers, such as timers, interrupt controllers, hotswap memory detection, etc., which are done as "monolithic modules", and then the "general" API, whose drivers run in separate address spaces. In areas where performance would be compromised by a microkernel implementation, I use a monolithic implementation; but the APIs are mostly "microkernel"-like, with message queueing, etc. For example, I use an "interCpuMessager" class for transmitting messages between CPUs, with a very "microkernel"-like API; but behind it the details are more like a monolithic kernel.
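
The idea could be sketched like this (made-up C names, not the actual class): the caller sees a message-send API, but the current implementation behind it is just a direct call, the way a monolithic kernel would do it.

Code:

typedef struct {
    int   type;
    void *payload;
} cpu_msg_t;

/* handler the target CPU would normally dequeue and run */
extern void handle_cpu_message(int target_cpu, cpu_msg_t *msg);

/* "microkernel"-like API: callers think they are queueing a message */
int cpu_message_send(int target_cpu, cpu_msg_t *msg)
{
    /* monolithic implementation for now: no real queue, just a direct
     * call; can later be replaced with real queueing without changing
     * any callers */
    handle_cpu_message(target_cpu, msg);
    return 0;
}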

The goal is really to allow for easy transition into microkernel-like code in any area, should I choose to re-code some part of the kernel as such later on -- my original vision was for a "pure microkernel".

"Hybrid" may work, but my mindset, the way I personally think of the project, and more importantly my future aim for it just makes "emulated microkernel" fit more appropriately :)

--Peace out
gravaera

Re: Request for proper terminology

Posted: Mon Apr 02, 2012 12:09 pm
by turdus
@gravaera: nice. I like the way you keep it open to recoding while keeping the API unchanged. And I understand why you don't like to call it hybrid.

Re: Request for proper terminology

Posted: Mon Apr 02, 2012 12:33 pm
by Combuster
I vote microkernel.

As far as I'm concerned, anything qualifies as such when the potential number of future drivers is infinite, and each of them runs in isolation from the kernel and from each other. Some things are slow in userland, and the purer you become, the more dependent kernel space becomes on the proper working of userland instead of being resistant to it. Many people find themselves writing a monolithic core first to cover a system's basic functionality, and only add userland drivers later.

Think about it the other way: if the purists had their say, there would be no usable microkernels left.

Re: Request for proper terminology

Posted: Mon Apr 02, 2012 12:36 pm
by Brendan
Hi,
turdus wrote:My problem is that I have exactly one server which runs in kernelspace (for lack of terminology I call it "the core") and which is mapped into every thread's address space.
Jochen Liedtke (original author of L4) defined "micro-kernel" like this:
"A concept is tolerated inside the microkernel only if moving it outside the kernel, i.e., permitting competing implementations, would prevent the implementation of the system's required functionality."

What is the server in kernel space; what is your OS's required functionality, and how would shifting this server out of the kernel prevent the implementation of the system's required functionality?

Note: For my OS, required functionality includes "no pointless task switching when spawning a thread". Implementing physical memory management, virtual memory management and scheduling outside the kernel would prevent the implementation of the system's required functionality. ;)


Cheers,

Brendan