Re: Review of my OS's microkernel
Posted: Sat Feb 07, 2009 7:17 am
by itisiuk
I use a process manager server, like Minix, which handles process execution and memory management.
It sends messages to the VFS to load the process image into memory,
then it sends messages to the system process, which handles the context setup and requests memory.
Once that is done, it does a context switch and runs the new process.
It used to page fault right after this,
and I worked around it by resetting all the entry points and mapping the needed memory locations
from within the page fault handler.
I keep that code around as a sort of reincarnation server now, but it's never used since
I fixed the original bug in the code:
I kept using the current process's pdir instead of the new pdir to map things. Doh.
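For anyone curious, the fix boils down to which page directory the mappings go into. A minimal sketch of the idea (map_page, PAGE_SIZE and the flag names are only illustrative, not the actual code):

Code:
#include <stddef.h>
#include <stdint.h>

#define PAGE_SIZE   4096u
#define PTE_PRESENT 0x1u
#define PTE_RW      0x2u
#define PTE_USER    0x4u

typedef struct page_dir page_dir_t;

/* Assumed helper: installs virt -> phys in the given page directory. */
void map_page(page_dir_t *pdir, uintptr_t virt, uintptr_t phys, uint32_t flags);

/* Map one loaded segment for the process being created. The bug was passing
 * the *current* pdir here instead of new_pdir, so the mappings ended up in
 * the process manager's own address space and the new process page-faulted
 * as soon as it was switched to. */
void map_segment(page_dir_t *new_pdir, uintptr_t virt, uintptr_t phys, size_t len)
{
    for (size_t off = 0; off < len; off += PAGE_SIZE)
        map_page(new_pdir, virt + off, phys + off,
                 PTE_PRESENT | PTE_RW | PTE_USER);
}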
Re: Review of my OS's microkernel
Posted: Sat Feb 07, 2009 2:53 pm
by xyzzy
MessiahAndrw wrote:- A server (or manager) must be able to set the instruction pointer and access other processes' memory without permission (I could introduce a "spawning" state for processes, and this is only possible while in the spawning state).
If your kernel supports threads, you could have the execution server create the process without any threads, map the executable into that process's address space, and then tell the kernel to create a user-mode thread in that process that begins executing at the binary's entry point. That way you don't have to fiddle around with the instruction pointer.
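Roughly like this, if the kernel exposed calls along these lines (the syscall names here are invented purely for illustration):

Code:
#include <stdint.h>

typedef int handle_t;

/* Assumed kernel interface -- names made up for this sketch. */
handle_t  sys_process_create(void);                           /* process, no threads yet  */
void     *sys_map_file(handle_t proc, const char *path);      /* map binary into process  */
int       sys_thread_create(handle_t proc, uintptr_t entry,   /* first user-mode thread   */
                            uintptr_t stack_top);
uintptr_t elf_entry_point(const void *image);                 /* read e_entry from header */

int exec_server_spawn(const char *path)
{
    handle_t  proc  = sys_process_create();        /* empty address space, no threads */
    void     *image = sys_map_file(proc, path);    /* executable mapped by the server */
    uintptr_t entry = elf_entry_point(image);

    /* No need to touch the instruction pointer from outside: the kernel just
     * starts a fresh thread at the binary's entry point. */
    return sys_thread_create(proc, entry, 0xC0000000u /* assumed user stack top */);
}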
Re: Review of my OS's microkernel
Posted: Sat Feb 07, 2009 6:39 pm
by AndrewAPrice
I really like the idea of servers being able to spawn threads in processes.
You could have userspace-controlled exceptions/interrupts.
There is one security issue, though: servers could spawn drivers (which have a higher privilege level). I'm not sure whether this would be acceptable. (Convince me!)
Re: Review of my OS's microkernel
Posted: Sun Feb 08, 2009 2:13 pm
by JohnnyTheDon
I think it's an acceptable security risk. The truth is that servers are more trustworthy in most cases because they come with the OS, while drivers may well be third-party. Drivers don't have fewer privileges because you can't trust them, but because they don't need to do things like access I/O ports, and the restriction prevents them from doing bad things if they crash and go haywire.
In any case, if a server is compromised you are in trouble, whether it's allowed to spawn drivers or not. If you have an infected file server, it can read your passwords and send them over the network. If you have an infected network server, it can monitor all your internet traffic. Servers have extremely high privileges by virtue of their function.
Re: Review of my OS's microkernel
Posted: Sun Feb 08, 2009 5:36 pm
by Craze Frog
Normally only the root task should be able to spawn processes. If other processes want to spawn processes, they ask the root task to do it for them. This is good because it allows the safe implementation of any security policy on top of the microkernel.
Always remember that the API of the microkernel should facilitate the creation of the operating system API. (Unlike a monolithic kernel, where the kernel API and the OS API are the same.) Thus, the kernel does not need to make any sense whatsoever to the application developer (since he always sees the OS API instead), as long as it allows the simple implementation of any OS policy on top of it.
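In message-passing terms it could look something like this (the message layout, kernel_spawn and policy_allows are only placeholders, not any particular kernel's API):

Code:
enum { MSG_SPAWN_REQUEST = 1, MSG_SPAWN_REPLY = 2 };

struct spawn_request {
    int  type;             /* MSG_SPAWN_REQUEST                      */
    char path[64];         /* binary the caller wants started        */
};

struct spawn_reply {
    int type;              /* MSG_SPAWN_REPLY                        */
    int status;            /* 0 on success, negative error otherwise */
    int new_pid;
};

/* Assumed primitives: only the root task is allowed to call kernel_spawn(). */
int kernel_spawn(const char *path, int *out_pid);
int policy_allows(int requester_pid, const char *path);   /* the OS security policy */

/* Root task side: other processes send it a spawn request over IPC, so the
 * security policy lives entirely outside the microkernel. */
void handle_spawn_request(int requester_pid, const struct spawn_request *req,
                          struct spawn_reply *rep)
{
    rep->type    = MSG_SPAWN_REPLY;
    rep->new_pid = -1;
    if (!policy_allows(requester_pid, req->path)) {
        rep->status = -1;                /* denied by OS-level policy */
        return;
    }
    rep->status = kernel_spawn(req->path, &rep->new_pid);
}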
Re: Review of my OS's microkernel
Posted: Mon Feb 16, 2009 9:16 pm
by elderK
Amen, to both.
~k
Re: Review of my OS's microkernel
Posted: Tue Feb 17, 2009 6:08 am
by jal
Craze Frog wrote:Normally only the root task should be able to spawn processes. If other processes want to spawn processes, they ask the root task to do it for them. This is good because it allows the safe implementation of any security policy on top of the microkernel.
I don't entirely agree. Spawning processes could very well be limited to a certain task (e.g. a process manager service) that is not the 'root task' (i.e. the kernel).
JAL
Re: Review of my OS's microkernel
Posted: Mon Apr 06, 2009 3:02 am
by Andy1988
I'm currently trying to work out an appropriate design for a microkernel architecture myself.
I looked a bit at the Minix design approach, but I don't really like it. There are too many dependencies between the different servers.
My kernel currently consists of the basic scheduling stuff, where several queues of processes are maintained and switched to, as well as the basic IPC mechanisms.
I don't really know how to manage memory in such a microkernel yet. Sure, I've got some kind of kernel heap for allocating my internal structures inside the kernel.
But how does a userspace process (and thus also a driver or a server) allocate memory? I thought about a server which is responsible for this: it manages all the memory allocation for every process (even for itself).
This server has a pool of memory it gets from the kernel for all the userspace programs, which would be the rest of the memory in the system that is not used by the kernel or reserved for hardware access like DMA.
If a process wants memory, it dispatches a message; the memory server catches it, allocates physical memory out of its pool, swaps some memory out if necessary, and asks the kernel to update the corresponding page directories of the processes.
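As a rough sketch of what I have in mind (every name here is just a placeholder, nothing is implemented yet):

Code:
#include <stddef.h>
#include <stdint.h>

enum { MSG_MEM_ALLOC = 1, MSG_MEM_REPLY = 2 };

struct mem_request {
    int    type;        /* MSG_MEM_ALLOC                                 */
    int    sender_pid;  /* filled in by the kernel's IPC path            */
    size_t length;      /* bytes requested, rounded up to whole pages    */
};

struct mem_reply {
    int       type;     /* MSG_MEM_REPLY                                 */
    int       status;   /* 0 = ok                                        */
    uintptr_t vaddr;    /* where the new pages were mapped in the caller */
};

/* Assumed interfaces available to the memory server: */
uintptr_t pool_alloc_pages(size_t npages);                     /* from the server's pool */
int       swap_out_pages(size_t npages);                       /* make room, 0 = ok      */
int       sys_map_into(int pid, uintptr_t phys, size_t npages, /* kernel updates the     */
                       uintptr_t *vaddr_out);                  /* target page directory  */

void handle_mem_request(const struct mem_request *req, struct mem_reply *rep)
{
    size_t    npages = (req->length + 4095) / 4096;
    uintptr_t phys   = pool_alloc_pages(npages);
    if (phys == 0 && swap_out_pages(npages) == 0)   /* pool empty: try swapping */
        phys = pool_alloc_pages(npages);

    rep->type   = MSG_MEM_REPLY;
    rep->status = (phys != 0)
        ? sys_map_into(req->sender_pid, phys, npages, &rep->vaddr)
        : -1;
}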
I also don't know how to communicate with all these processes in a generic way.
I thought about some kind of pipes in a management namespace. A process can create these pipes and define which processes (by kind: application, driver, server) can hook into them to get data out or put data in.
Then a process can set a hook on such a pipe, wait for data on it, and do some work. So it's a kind of synchronous event system.
I could even route interrupts through these pipes.
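The pipe interface might then look something like this (again, all names are purely hypothetical):

Code:
enum proc_kind { KIND_APPLICATION = 0, KIND_DRIVER = 1, KIND_SERVER = 2 };

typedef int pipe_t;

/* Assumed management-namespace pipe API (placeholder names): */
pipe_t pipe_create(const char *name, unsigned reader_kinds, unsigned writer_kinds);
pipe_t pipe_open(const char *name);
int    pipe_hook(pipe_t p);                          /* register interest             */
int    pipe_wait(pipe_t p, void *buf, unsigned len); /* block until data is available */
int    pipe_put(pipe_t p, const void *buf, unsigned len);

/* Driver side: publish keyboard scancodes; only servers may hook and read. */
void keyboard_driver_publish(unsigned char scancode)
{
    static pipe_t kbd = -1;
    if (kbd < 0)
        kbd = pipe_create("mgmt:/input/keyboard",
                          1u << KIND_SERVER,    /* readers allowed, by kind */
                          1u << KIND_DRIVER);   /* writers allowed, by kind */
    pipe_put(kbd, &scancode, sizeof scancode);
}

/* Server side: hook the pipe and handle events synchronously. */
void input_server_loop(void)
{
    pipe_t kbd = pipe_open("mgmt:/input/keyboard");
    pipe_hook(kbd);
    for (;;) {
        unsigned char sc;
        if (pipe_wait(kbd, &sc, sizeof sc) > 0) {
            /* translate the scancode and forward it to the focused app */
        }
    }
}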
And how do I communicate with the kernel? For example, a driver that needs to do some DMA work, or spawning a process? I need a way to determine which kind of process just made a syscall and to prevent ordinary applications from doing this stuff. Otherwise malware could just access the kernel directly, bypassing my security system implemented higher up in a server, or mess around with my processes directly -> BAD!
@MessiahAndrw
How do your drivers communicate with low-level things? Through the kernel, with special syscalls?
Re: Review of my OS's microkernel
Posted: Mon Apr 06, 2009 6:57 am
by jal
Andy1988 wrote:I looked a bit at the Minix design approach, but I don't really like it. There are too many dependencies between the different servers.
Any type of OS design will have heavy dependencies between its core OS components; there's really nothing you can do about that.
JAL
Re: Review of my OS's microkernel
Posted: Thu Apr 09, 2009 11:31 pm
by mystran
If I ever resume the work on my kernel, one of the items high on the TODO list is process loading. To give a really good design overview of the kernel:
Originally a microkernel design, but rather than trying to strip everything out, I ended up (after banging my head against the wall) drawing the line at keeping everything "essential" in the kernel: memory management, asynchronous message passing, block-level disk access and cache (since intelligent memory management needs to communicate with these anyway), console, timers and the scheduler (since one needs timers for scheduling and the scheduler for timers). The plan is still to pull stuff like filesystem services out of the kernel (hopefully one day).
Anyway, when it comes to process loading, my idea is basically to let the process load itself. The actual design: when a process wants to start another process, it loads (well, technically gets a handle for, since there's no reason to actually map the file) a runtime linker and asks the kernel for a new process with the runtime linker as the base image, passing the new process a handle to the actual binary (and to the filesystem, once I get those out of the kernel). The runtime linker is a flat binary which needs no relocation; it uses the normal method for mapping the actual binary and any library dependencies into the process, does the relevant relocations, and jumps into the binary's startup code. This also means the kernel need not support any binary format: if you want to load ELF binaries you just use a runtime linker which knows how to parse ELF, and if you want PE you use another runtime linker. Why should the kernel care about such application-level details?
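On the parent's side the whole thing boils down to something like this sketch (the handle and call names are made up just to show the intent, not my real API):

Code:
typedef int handle_t;

/* Assumed primitives (illustrative names only): */
handle_t sys_handle_open(const char *path);                   /* no mapping, just a handle */
int      sys_process_create(handle_t base_image,              /* runtime linker image      */
                            const handle_t *inherit, int n);  /* handles passed to child   */

int start(const char *binary_path)
{
    handle_t rtld = sys_handle_open("/boot/rtld");   /* flat binary, no relocation needed */
    handle_t bin  = sys_handle_open(binary_path);    /* the ELF/PE/... to actually run    */

    /* The kernel only maps the runtime linker; the new process then maps the
     * real binary and its libraries itself, relocates them, and jumps to the
     * entry point. The kernel never has to know what ELF or PE is. */
    handle_t inherit[1] = { bin };
    return sys_process_create(rtld, inherit, 1);
}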
Re: Review of my OS's microkernel
Posted: Thu Apr 30, 2009 7:55 am
by Benk
Is there any point in having a scheduler in a user process? IMHO a memory manager is a line ball call, and communication with it should be infrequent, but a scheduler?
We seem to be doing it just for the sake of fitting the "microkernel" design.
The purpose of running things in user space is
1) To keep the kernel small so it has fewer bugs
2) To keep a crashing program from affecting anything else.
But consider:
1) The code for most schedulers is less than 500 lines
2) If you put it in user space you still need to put the privileged instructions in the kernel, plus the API interop, and my guess is that instead of 500 lines in the kernel you will have 300 in the kernel and 400 in user space, and they are very chatty.
3) If your scheduler crashes, your OS is gone, so the protection argument is redundant.
4) Scheduling needs to be fast.
5) It's a critical API and you need to deal with security.
Regards,
Ben
Re: Review of my OS's microkernel
Posted: Fri May 01, 2009 1:31 am
by jal
Benk wrote:a memory manager is a line ball call
This is an international forum. I don't think many people know this expression (including me).
1) The code for most schedulers is less than 500 lines
I seriously doubt whether you have seen the scheduling code of, say, Windows Vista or OS X, and I'm pretty sure the scheduler of Linux is more than 500 lines. Of course, a simple scheduler can easily fit in 500 lines (hell, it can fit in one :)), but claiming anything about "most schedulers" is ridiculous.
2) If you put it in user space you still need to put the privileged instructions in the kernel
Yes, the actual iret (with software task switching) is in the kernel, but that's in the kernel anyway, as the timer IRQ enters there. There aren't many other privileged instructions a scheduler needs.
instead of 500 lines in the kernel you will have 300 in the kernel and 400 in user space, and they are very chatty.
I need fewer than those 300 lines, but even then: it's not about line count.
3) If your scheduler crashes, your OS is gone, so the protection argument is redundant.
If your IRQ driver crashes, your OS is gone as well. If your memory manager crashes, your OS is very likely gone as well (mine is, at least). Still, there are good reasons to keep them outside the kernel (as well as good reasons to keep them in, of course).
4) Scheduling needs to be fast.
Not necessarily. On a real-time system, scheduling needs to be predictable, not fast. And even on a desktop OS, scheduling only needs to be fast enough, which in CPU cycles may very well mean "slow".
5) It's a critical API and you need to deal with security.
I'm not getting that. You always need to deal with security, and IMHO dealing with security is a lot easier if everything is in user processes than if everything resides in the kernel.
JAL
Re: Review of my OS's microkernel
Posted: Sun May 03, 2009 2:55 am
by Benk
I had a whole reply, but the web gods swallowed it (god, I hate HTML and Web 2.0), so sorry if this is brief and to the point.
Anyway, it comes down to the fact that I see no reason for the scheduler to be in user space, nor have you given one (not saying there isn't one). The natural place is the kernel, since then you expose no API and have no overhead. It is also still good programming to expose as small a kernel API, and as few data structures, as possible.
Acceptable reasons would be things like:
- Interrupts already come into user space with the necessary privilege. Since after an interrupt you often call the dispatcher (and pre-empt on timer interrupts), having the scheduler in user space is not such a big deal in that case.
- You can restart the scheduler if it's in user space (though this is a difficult problem, due to the state of the run lists at the point the previous scheduler died).
etc.
I think having everything in user space, nanokernels and so on is just a fashion started by Minix (which was a teaching aid). Just keep in mind that you need a solid reason; most commercial microkernels have the MM and scheduler in the kernel. The reasons for a microkernel are:
- The kernel is small and hence has fewer bugs.
- Apps and services are isolated in independent user processes, and a service can be restarted (since they are independent).
For the MM and the scheduler, these do not hold true.
Regards,
Ben
Re: Review of my OS's microkernel
Posted: Sun May 03, 2009 3:48 am
by Benk
More comments:
"Drivers (the process name will be prefixed with "d\") can access IO ports, terminate a process/thread belonging to another process, send processes to sleep/wake them up, map physical memory locations into their local space, create/destroy device objects (which are merely references to the driver processes, with a unique per-device ID), listen if an IRQ fires, and transform in to (and back out of) a VM86 task."
Why not have drivers access all this through a HAL? A large number of OS bugs are due to bad drivers, and as long as they have direct access to critical things like IO ports you are at their mercy. It's better to have all this DMA/IO code safe, checked and bedded down, so that an intern-level programmer can't crash your system by writing bad code - though it still doesn't help if they go to the wrong port. You could also add security here, so that a given driver is only allowed access to certain ports. In addition, you can port to different hardware much more easily.
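For example, the HAL write path could check a per-driver port grant before touching the hardware. A sketch (all names invented):

Code:
#include <stdint.h>

struct port_range { uint16_t first, last; };

struct driver_ctx {
    const struct port_range *allowed;   /* port ranges granted to this driver */
    int                      n_allowed;
};

/* Assumed privileged primitive, implemented once and "bedded down": */
void raw_outb(uint16_t port, uint8_t value);

static int port_allowed(const struct driver_ctx *drv, uint16_t port)
{
    for (int i = 0; i < drv->n_allowed; i++)
        if (port >= drv->allowed[i].first && port <= drv->allowed[i].last)
            return 1;
    return 0;
}

/* All drivers go through this instead of touching I/O ports directly. */
int hal_outb(const struct driver_ctx *drv, uint16_t port, uint8_t value)
{
    if (!port_allowed(drv, port))
        return -1;              /* deny: driver was not granted this port */
    raw_outb(port, value);
    return 0;
}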
'Shared memory is reference-counted memory (divided into 4KB pages) that can exist in more than one process's virtual address space. Shared memory and pipes have random keys as well as their IDs, to somewhat delay a malicious process from eavesdropping where it shouldn't. Example usage of shared memory:
- Share the application's window's contents with the window manager and/or the graphics driver.
- Implement an application specific method of IPC over shared memory.'
Rather than shared memory, is it worth implementing something like Singularity's shared heap, where only one process can own a piece of memory at a time? Applications can still gain access to it via an API or a message: when a message/API call arrives with a pointer into the shared heap (and a size), it transfers ownership (= write access) from the sender. This removes a whole class of bugs and security issues. Also, if an app which uses shared memory crashes, you don't need to terminate the other users of that memory unless the crashed app was the current owner - what is the state of shared memory normally if an application that uses it crashes? It also encourages apps to do the work needed and then release the memory. Note that ownership is per process, so threads within a process can still access it. It would also make your design a bit different from the other microkernels.
Obviously this works really well with asynchronous communication, but that is a whole different kettle of fish.
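In API terms it might look something like this (all names made up; the point is just the ownership transfer):

Code:
#include <stddef.h>

typedef int region_t;     /* handle to a block in the shared (exchange-style) heap */

/* Assumed primitives: the kernel tracks exactly one owner per region and
 * only the current owner has it mapped writable. */
region_t region_alloc(size_t len);                  /* caller becomes the owner     */
void    *region_ptr(region_t r);                    /* valid only while owning it   */
int      msg_send_region(int dest_pid, region_t r); /* send + transfer ownership    */
region_t msg_recv_region(int *from_pid);            /* receive + become the owner   */

/* Producer side: fill a buffer, then hand it over. After msg_send_region()
 * returns, this process no longer owns the region and must not touch it. */
void send_block(int consumer_pid, const char *data, size_t len)
{
    region_t r = region_alloc(len);
    char    *p = region_ptr(r);
    for (size_t i = 0; i < len; i++)
        p[i] = data[i];
    msg_send_region(consumer_pid, r);   /* ownership (write access) moves here */
}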
Regards,
Ben
Re: Review of my OS's microkernel
Posted: Mon May 04, 2009 4:00 am
by jal
Benk wrote:most commercial microkernels have the MM and scheduler in the kernel
I'm not familiar with that many microkernels, but I have no reason to doubt what you are claiming here. Keeping them in the kernel does seem the most likely choice.
The reasons for a microkernel are:
- The kernel is small and hence has fewer bugs.
I'm not sure whether this is a good reason. Even a monolithic kernel can be modularized.
- Apps and services are isolated in independent user processes, and a service can be restarted (since they are independent).
This is the same fallacy you mentioned before: it is not just about the possibility of restarting a service, but also about shielding the system from a bad service. Crashing is its ultimate demise, but before that it can easily wreak havoc on the rest of your system.
For the MM and the scheduler, these do not hold true.
You are setting up a straw man; I never claimed they did (although theoretically it could be possible to do so for super-robust systems, e.g. by keeping their data in a known location). It is indeed not possible to restart the MM or the scheduler (without effectively rebooting the system), but by shielding them in user space they at least cannot trash user processes when they fail (though they may perhaps cause data corruption on disk, etc.).
JAL