This post continues in the wake of this previous thread, where the primary topic of discussion was schemes for the driver-hardware interface and the impact those decisions would have on inter-process communication in microkernels. It quickly became an interesting conversation, but I decided to start a new topic after sitting down, consolidating my views and objectives, and doing some reading on the subject, rather than being forced to tackle those matters on the fly.
As was established in the previous thread, I'm looking to build a microkernel architecture where security, modularity, and portability of the kernel are not just marketing buzzwords or succinct summaries - they're top-level design criteria. My line of thinking was that I would prefer to abstract drivers' access to hardware through driver-specific, system call-based I/O facilities in the kernel, as this produces a number of benefits for the architecture I'm looking to develop:
- It further separates the kernel and drivers. If common hardware (say, an Ethernet or graphics card) were shared between two instances of the kernel on different architectures, the driver would simply need recompiling against the new system call interface, calling convention, and instruction set, but its source would not have to change, because it does not rely on a particular architecture's I/O protocols.
- It enables better security and compartmentalization, at least with respect to nefarious or malfunctioning drivers. Because the kernel would mediate driver-hardware access, it could vet each request against the role the driver is assigned to perform (see the sketch after this list). The canonical example presented in the previous thread was that of a compromised, keylogging keyboard driver. The driver, upon loading, would announce itself to the kernel as a keyboard driver (my thoughts, at least initially, are something analogous to PCI class codes). For the keylogger to do anything with the intercepted data, it would have to transmit it to the network stack or the hard disk interface, and this would be prohibited by the kernel (if implemented correctly!) on the grounds that those categories of access are outside the scope of a keyboard driver.
- Redirection. It enables the kernel to "virtualize" a device - while most obvious for things such as /dev/null and /dev/random, it would also allow the kernel to stand in for a resource that is not physically present on that machine at all (such as networked drives or systems like NASes, compute farms, etc.) and present it to the requesting process as though it were physically present, without requiring the requesting process to be cognizant of that device's actual status.
- It enables better reliability and development-friendliness, for much the same reasons - a kernel system call gives the kernel visibility into what drivers are doing and what information they are requesting, which is important for driver development, debugging, and reverse-engineering of proprietary drivers (brings this to mind). It also allows the kernel to take action should a driver exceed a threshold of illegal accesses (or simply malfunction), such as forcibly unloading the module entirely or taking other administrative action against it.
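To make the vetting idea in the second bullet a little more concrete, here's the kind of kernel-side check I have in mind. This is a minimal sketch with entirely invented names (the syscall, the class and resource enums, the driver struct - none of it is a real or finalized API): the driver declares a class when it registers, and every hardware-access syscall is checked against a class-to-resource policy before the kernel touches the hardware on the driver's behalf.

```c
#include <errno.h>
#include <stdint.h>

typedef enum {
    DRV_CLASS_KEYBOARD,
    DRV_CLASS_NETWORK,
    DRV_CLASS_STORAGE
} drv_class_t;

typedef enum {
    HW_RES_PS2_PORT,     /* legacy keyboard controller ports */
    HW_RES_NIC_MMIO,     /* network card registers           */
    HW_RES_AHCI_MMIO     /* SATA/AHCI controller registers   */
} hw_resource_t;

typedef struct {
    int         id;
    drv_class_t declared_class;  /* announced when the driver registers */
} driver_t;

/* Policy: which resource categories each driver class may touch at all. */
static int class_may_access(drv_class_t cls, hw_resource_t res)
{
    switch (cls) {
    case DRV_CLASS_KEYBOARD: return res == HW_RES_PS2_PORT;
    case DRV_CLASS_NETWORK:  return res == HW_RES_NIC_MMIO;
    case DRV_CLASS_STORAGE:  return res == HW_RES_AHCI_MMIO;
    default:                 return 0;
    }
}

/* Kernel-side handler for a hypothetical "write to hardware" syscall. */
long sys_hw_write(driver_t *caller, hw_resource_t res,
                  uint64_t offset, uint64_t value)
{
    if (!class_may_access(caller->declared_class, res))
        return -EPERM;   /* a keyboard driver poking the NIC gets rejected here */

    /* ...otherwise perform the port/MMIO write on the driver's behalf... */
    (void)offset;
    (void)value;
    return 0;
}
```

The point is that the policy lives in the kernel rather than in the driver, so a compromised driver binary can't quietly grant itself access to categories outside its declared role.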
Now, obviously, this involves a number of ugly delay steps that have to be optimized out. Even assuming the IPC is fairly expedient (which I'm not counting on for the first 1,000 builds of my kernel!), this process involves three separate context switches simply to get the message to the hardware that "hey, this program wants this". An even better illustration is what occurs when drivers are compounded, such as SATA drivers that abstract over PCIe, or high-level USB device drivers that must interact with the USB stack and bus. The above image gets uglier.
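Roughly what I mean, in sketch form (the endpoints and the ipc_call primitive are invented placeholders, and each of these functions would really live in its own server/address space): every layer's work is another synchronous request/reply through the kernel.

```c
#include <stddef.h>

/* Placeholder for whatever synchronous IPC primitive ends up existing:
 * send a request to another task's endpoint and block until it replies. */
extern long ipc_call(int endpoint, const void *req, size_t len);

enum { EP_SATA_DRIVER = 2, EP_PCIE_BUS_DRIVER = 3 };

/* Filesystem server: a file read bottoms out in a block read, which means
 * one full request/reply round trip to the SATA driver's server. */
long fs_read_block(unsigned long lba)
{
    return ipc_call(EP_SATA_DRIVER, &lba, sizeof lba);
}

/* SATA driver server: before it can program the controller, it needs the
 * PCIe bus driver for things like config-space reads and BAR mapping,
 * which is yet another round trip, with the kernel involved in every hop. */
long sata_probe_controller(void)
{
    unsigned int cfg_request = 0;   /* stand-in for a config-space read request */
    return ipc_call(EP_PCIE_BUS_DRIVER, &cfg_request, sizeof cfg_request);
}
```

Each of those calls is a full send-and-block-for-reply, which is exactly where the context switch count balloons.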
So, I've begun looking into other means of tackling the issue that still retain the core objectives. When I was looking into ways microkernels handle these issues, I managed to stumble across the wiki for GNU Hurd, built on the old Mach microkernel, which notes:
Wikipedia wrote:
The servers collectively implement the POSIX API, with each server implementing a part of the interface. For instance, the various filesystem servers each implement the filesystem calls.

This appears to be a potential solution to the IPC and latency issues described above, at the cost of making the interface a little hairier. I'd have to sit down and work out how to route system calls to the right driver in scenarios where multiple drivers of that type are present (I've tacked a very rough sketch of one approach onto the end of this post). For what it's worth, though, it's interesting to see some of the means by which other microkernels tackle these problems. So, to put the question fairly broad-side-of-barn: does the "system call"-based kernel API scheme still have merit, and can it be optimized to the point of efficiency? I'm not looking for a microkernel that lounges comfortably in pedagogy, another MINIX - I'm looking for one that shows potential, even if it consumes my life force to realize. If so, what are some of your thoughts on how a "better version" of this scheme might pan out?
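And since I mentioned the routing problem above, here's the very rough shape of what I've been doodling - again, hypothetical names and structures throughout, not a committed design: a per-class table of registered driver instances that the kernel dispatches to, where an "instance" could just as easily be a proxy for a networked device, which would also cover the redirection bullet from earlier.

```c
#include <errno.h>
#include <stddef.h>

#define MAX_INSTANCES 8

typedef enum {
    DRV_CLASS_STORAGE,
    DRV_CLASS_NETWORK,
    DRV_CLASS_COUNT
} drv_class_t;

typedef struct {
    const char *name;
    int         ipc_endpoint;  /* local driver task, or a proxy for a networked device */
} driver_instance_t;

static driver_instance_t instance_table[DRV_CLASS_COUNT][MAX_INSTANCES];
static size_t            instance_count[DRV_CLASS_COUNT];

/* Placeholder for whatever IPC primitive ends up carrying the request. */
extern int ipc_forward(int endpoint, const void *req, size_t len);

/* Called when a driver (or a remote-device proxy) registers itself. */
int register_driver(drv_class_t cls, const char *name, int endpoint)
{
    if (cls >= DRV_CLASS_COUNT || instance_count[cls] == MAX_INSTANCES)
        return -ENOSPC;
    instance_table[cls][instance_count[cls]++] =
        (driver_instance_t){ .name = name, .ipc_endpoint = endpoint };
    return 0;
}

/* Kernel-side dispatch for a class-addressed syscall: "send this request
 * to storage device #1", without the caller knowing whether that device
 * is a local SATA disk or a NAS reached through a network proxy. */
int route_request(drv_class_t cls, size_t instance, const void *req, size_t len)
{
    if (cls >= DRV_CLASS_COUNT || instance >= instance_count[cls])
        return -ENODEV;
    return ipc_forward(instance_table[cls][instance].ipc_endpoint, req, len);
}
```

The open question for me is how a caller should name an instance (index, handle, path-like string?) without reintroducing a pile of lookup overhead on every call, and whether this table really belongs in the kernel or in a separate device-manager server.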