eekee wrote:
kerravon wrote:
So perhaps we need to define "simple". Simple to debug? Simple to understand how the OS works? Simple to maintain the OS code? Simple to add debug at the kernel level if something isn't behaving correctly?
Indeed, it's always good to figure out where you most need to simplify and where you can tolerate complexity. As I've learned by considering how to do without various OS features from memory protection to files, everything you can simplify complexifies something else.
For a given set of responsibilities that you give to a system, there's going to be some minimum number of bits needed to describe a system that can meet those responsibilities, and the size of the HDL code for the hardware, plus the size of the firmware, plus the size of the OS, plus the size of the middleware, plus the size of the application(s), is going to be greater than or equal to that minimum, regardless of how you divide it up between them. On top of that, the information-theoretically smallest description of a system is not always the most convenient, easiest to understand, or most performant implementation of it. The PDP-8/S was probably the simplest implementation of the PDP-8 architecture ever built, in terms of the number of transistors or the number of bits needed to describe the physical layout, but a bit-serial implementation of a given architecture probably takes more human labor to design than one with a full-width ALU and bus.
Quote:
As for unikernels, I agree with vvaltchev when he wrote that bundling the OS as a library is just marketing. A unikernel is a simple kernel; a library is just one way of packaging it.
Well, pretty much any kernel I've ever heard of is basically just a library. On pre-MMU/pre-protection architectures, the kernel was basically a library or set of libraries that provided a convenient software interface to the hardware. With the advent of protection and memory management, the kernel retained the job of "shared library for hardware access", except that now nothing else *could* access the hardware. It also gained the job of providing every process with its own virtual machine of sorts (not in the modern sense of "virtual machine", where the VM has access to virtualized hardware that behaves the same way as the bare metal, complete with its own protection and mapping capabilities, but in the sense that each process has the illusion of having *a* machine of some sort all to itself). In each "virtual machine", the C runtime (or equivalent) now played the role that the OS had originally played, acting as a software interface to the "hardware" (i.e., the OS system-call interface).
This, I think, is the problem with microkernels: services that are best implemented as libraries end up shoved out into their own "machines", communicating, after a fashion, over a "network". With a monolithic kernel, the divide between kernel and userspace allows these services to behave as libraries while preventing userspace code from messing with their data. There are ways hardware could be designed to allow services to still be called as libraries while isolating them not only from userspace but from each other, but such designs are not currently common.