Question about a design of syscalls for microkernels
Posted: Fri Jun 18, 2021 6:52 pm
So, I've been mulling over this idea in my mind but thought I'd bring it to you guys.
I know that, typically, a microkernel uses message passing/IPC to communicate between processes and the kernel. My idea is to change this in a couple ways:
- The kernel would contain only the absolute minimum set of syscalls; as much code as possible would run in userspace. It would provide syscalls for threading, processes, memory allocation and such, plus PCI device access, but that would be it.
- Device access would be handled by userspace servers via shared memory. A server would initialize and access the PCI/CXL bus through the kernel to get the info it needed, but from then on it would allocate a shared memory buffer and communicate with the device that way.
- System calls would go through a similar mechanism. For the majority of libc tasks (printing to the console, accessing the network, communicating with the filesystem, ...), applications would send request packets to the server in question. The request and response exchange would happen "under the hood" in libc, so applications wouldn't be aware of it. When libc was initialized for a given process, it would either set up a shared buffer with each of the servers it needed, or go through a central dispatch server that handled the communication. To the application it would just be calling fread/fopen/fclose/... with no idea that the underlying interface worked this way, which in turn would make porting apps simpler. (A rough sketch of what one of these request slots might look like follows this list.)
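
To make the request-packet idea a bit more concrete, here's a rough sketch of what a single request/response slot in a buffer shared between libc and, say, a filesystem server could look like. Every name, constant and field here is made up for illustration; it's just one way such a protocol could be laid out, not a real implementation:

Code:
#include <stdatomic.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical layout of one request/response slot in a buffer shared
 * between an application's libc and a userspace filesystem server.
 * All names and sizes are invented for the sake of the sketch. */
enum fs_op { FS_OPEN, FS_READ, FS_WRITE, FS_CLOSE };

struct fs_request {
    _Atomic uint32_t state;   /* 0 = free, 1 = submitted, 2 = completed */
    uint32_t op;              /* one of enum fs_op */
    uint64_t handle;          /* server-assigned file handle */
    uint64_t offset;          /* file offset for reads/writes */
    uint32_t length;          /* number of payload bytes */
    int32_t  result;          /* filled in by the server: count or -errno */
    uint8_t  payload[4096];   /* request/response data */
};

/* Roughly what a libc fwrite() could boil down to: fill in a free slot,
 * mark it submitted, and wait for the server to mark it completed. */
static int32_t fs_submit(struct fs_request *slot, uint32_t op,
                         uint64_t handle, uint64_t offset,
                         const void *buf, uint32_t len)
{
    slot->op = op;
    slot->handle = handle;
    slot->offset = offset;
    slot->length = len;
    memcpy(slot->payload, buf, len);

    /* Release store so the server sees the filled-in fields. */
    atomic_store_explicit(&slot->state, 1, memory_order_release);

    /* Spin for simplicity; a real libc would block on whatever wait
     * primitive the kernel provides instead of burning CPU here. */
    while (atomic_load_explicit(&slot->state, memory_order_acquire) != 2)
        ;

    return slot->result;
}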
One advantage I could see would be that I could (possibly) bypass traditional IPC entirely. IPC could be handled via shared buffers instead of going through the kernel or VFS layer as is done on Linux/BSD. That would eliminate the overhead of syscalls (I think) as well as the need to allocate VFS objects for pipes and such. It might also be riskier; for example, it might not be as secure. But I thought I'd post it here and see what you guys thought.
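
For what it's worth, here's the matching server-side loop for the same illustrative slot layout as above, just to show that the fast path doesn't have to enter the kernel at all once the buffer is mapped. Again, everything here is invented for the sketch, and in practice both sides would block on some kernel wait/wake primitive instead of spinning, which is where a little kernel involvement sneaks back in:

Code:
#include <stdatomic.h>
#include <stdint.h>

/* Same illustrative slot layout as in the previous sketch, showing only
 * the fields this loop touches. */
struct fs_request {
    _Atomic uint32_t state;   /* 0 = free, 1 = submitted, 2 = completed */
    uint32_t op;
    uint32_t length;
    int32_t  result;
};

/* Handle one request. Neither side makes a syscall on this path; the
 * kernel is only needed to set up the shared mapping and, realistically,
 * to block/wake the two sides when there is nothing to do. */
static void fs_serve_one(struct fs_request *slot)
{
    /* Poll for a submitted request (a real server would block here). */
    while (atomic_load_explicit(&slot->state, memory_order_acquire) != 1)
        ;

    /* ... dispatch on slot->op here; this stub just pretends it worked ... */
    slot->result = (int32_t)slot->length;

    /* Publish the result; the client's wait loop observes this store. */
    atomic_store_explicit(&slot->state, 2, memory_order_release);
}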