A hybrid OS design for higher performance
Posted: Tue Aug 14, 2012 9:16 am
There are several kernel designs (models), such as the microkernel, the exokernel, the modular kernel, the monolithic kernel...
However, each has its own advantages and pitfalls. The idea is to create an OS that mixes several models so as to get all of their benefits, while being careful not to inherit their pitfalls.
The main kernel would be a block, with a table indexing the available functions and a mechanism that identifies a function by name and returns its effective address (similar to how hardware interrupts are dispatched through a vector table). It would then be possible to add blocks to the core kernel, registering their functions so they can be accessed easily. This is the modular part of the design. However, unlike in modular kernels, kernel modules would be added by the kernel itself or by the system configuration, not directly by the user. The idea is to load only the parts of the kernel we actually need.
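Here is a minimal sketch of what I mean by the function table; the names kfn_register and kfn_lookup are invented for the example:

[code]
/* Sketch of the kernel's function table: the kernel or a module
 * registers functions by name, and the kernel resolves a name to an
 * effective address, much like looking up an interrupt vector.
 * All names here are placeholders. */
#include <stddef.h>
#include <string.h>

#define MAX_KFN 256

struct kfn_entry {
    const char *name;   /* exported function name */
    void       *addr;   /* effective address in kernel space */
};

static struct kfn_entry kfn_table[MAX_KFN];
static size_t kfn_count = 0;

/* Called by the kernel, or by a module when it is loaded. */
int kfn_register(const char *name, void *addr)
{
    if (kfn_count >= MAX_KFN)
        return -1;
    kfn_table[kfn_count].name = name;
    kfn_table[kfn_count].addr = addr;
    kfn_count++;
    return 0;
}

/* Resolves a function name to its effective address. */
void *kfn_lookup(const char *name)
{
    for (size_t i = 0; i < kfn_count; i++)
        if (strcmp(kfn_table[i].name, name) == 0)
            return kfn_table[i].addr;
    return NULL;
}
[/code]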
For system calls, an application would contain a number of unresolved symbols. When the system loads the executable, it matches each unresolved symbol to a known function name, maps the function into the process's address space, and fills the hole with the mapped address. This is what I call ghost symbols. We could then add kernel modules that provide new functions, to implement new functionality. Finally, those APIs that can be implemented in user space should be implemented that way, much like Windows DLLs.
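To illustrate the resolution step, here is roughly what the loader would do with each unresolved symbol. The reloc structure is a simplified stand-in for real relocation records, and kfn_lookup comes from the sketch above:

[code]
/* Sketch of ghost-symbol resolution at load time. For each unresolved
 * symbol in the executable image, the loader asks the kernel's table
 * for the address and patches it into the image. */
#include <stddef.h>

extern void *kfn_lookup(const char *name);  /* from the table sketch */

struct reloc {
    const char *symbol;  /* name of the unresolved function */
    void      **slot;    /* location in the image to patch  */
};

int resolve_ghost_symbols(struct reloc *relocs, size_t n)
{
    for (size_t i = 0; i < n; i++) {
        void *addr = kfn_lookup(relocs[i].symbol);
        if (addr == NULL)
            return -1;           /* unknown symbol: refuse to run */
        *relocs[i].slot = addr;  /* fill the hole with the mapped address */
    }
    return 0;
}
[/code]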
With such a system, booting is quite involved. My approach is to load, from the reserved sectors of a disk, a minimal system that understands a few common filesystems. It would then be able to load the kernel and set up the environment.
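As a rough sketch of the sequence (every name below is a placeholder, since the real code depends on the platform):

[code]
/* Sketch of the minimal pre-kernel stage loaded from the reserved
 * sectors. fs_mount, fs_read_file, setup_environment and the path are
 * all placeholders; the point is only the order of the steps. */
extern void  fs_mount(void);                  /* tiny read-only FS driver */
extern void *fs_read_file(const char *path);
extern void  setup_environment(void);         /* memory map, tables, etc. */

typedef void (*kernel_entry_t)(void);

void stage2_main(void)
{
    fs_mount();
    void *image = fs_read_file("/boot/kernel");  /* load the kernel image */
    setup_environment();
    ((kernel_entry_t)image)();                   /* jump to the kernel    */
}
[/code]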
I think this kernel design is secure, since applications never access the kernel directly, so it should not be possible for them to corrupt its code.
But the real strength of this system is that the ghost functions could be loaded from anywhere. I thought of making a module for each domain (e.g. a module for windowing, a module for printing, a module for networking...). This would make the system extremely easy to update, since the core functionality is implemented in independent modules and the core kernel would rarely have to change. My drivers would be implemented as special kernel modules running in kernel mode. And since the APIs that use these drivers are also loaded into the kernel, they can communicate with the drivers directly.
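As a sketch, a domain module could look like this; the kmodule structure and all the names are placeholders I made up:

[code]
/* Sketch of a domain module (networking here). The kernel calls the
 * module's init function at load time, in kernel mode, and the module
 * registers its functions in the kernel's table. */
#include <stddef.h>

extern int kfn_register(const char *name, void *addr); /* table sketch */

struct kmodule {
    const char *name;
    int       (*init)(void);   /* runs in kernel mode at load time */
};

static int net_send(const void *buf, size_t len)
{
    (void)buf; (void)len;      /* real driver work would go here */
    return 0;
}

static int net_init(void)
{
    /* Expose this module's API as ghost-callable functions. */
    return kfn_register("net_send", (void *)net_send);
}

struct kmodule net_module = { "Networking", net_init };
[/code]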
The only requirement on executables is that they allow unresolved symbols.
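For example, an application would simply declare the function and call it as if it were local; the symbol stays unresolved in the binary and gets filled in at load time (k_print is an invented name):

[code]
/* Application side: k_print is a ghost symbol. It is declared but never
 * defined in the program; the loader fills in the kernel's address
 * before the process starts. */
extern int k_print(const char *msg);

int main(void)
{
    k_print("hello from user space\n");
    return 0;
}
[/code]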
The kernel would then act as a central dispatcher. When an application needs a service, it calls the kernel through a ghost function call, which in turn calls the driver that does the actual work. This delegation gives a good level of isolation, since an application never sees how the kernel implements a service.
Concretely, we would have a kernel process and application processes. How about it?
PS: I have already started to implement it, but I have not yet reached a point where it can run.