interesting I/O architecture
Posted: Tue Jun 09, 2009 6:57 pm
I've hit a somewhat gray area in my OS design relating to I/O. I have a sort of hybrid architecture, but for effectively all I/O purposes it's a microkernel. My design, as you will see, is quite unorthodox, but I think I have filled in the major holes. If you think something is totally wrong, please tell me, but don't assume I haven't weighed my decisions.
Here's my original plan. Drivers are privileged processes that can be communicated with via a simple, optionally synchronous message-passing system. User processes *are* allowed to send messages directly to drivers, but the driver knows the sender's PID, so it can filter requests, and certain important message types are restricted anyway.

The VFS server exists only as a sort of directory service: a user process may request the driver PID and file ID of, say, "/home/nick/important_homework.txt" from the VFS, and then request a file handle from that driver using the returned information. This means the drivers themselves are responsible for tracking which files each process has open, as well as for enforcing permissions on those files.

Because user processes are responsible for their own self-destruction (i.e. even the kernel may not directly kill a process), they can make sure all file handles are closed before they call exit(). Even if the handles are not freed, the driver can check whether the holding process is still alive when another process tries to open that descriptor. By the way, this "killing architecture" is actually secure and reasonable because the kernel may, at any time, hand control of the user process to a piece of read-only, trusted, but unprivileged code mapped into all address spaces (called the "libsys"), so as long as the libsys is written correctly, everything works smoothly.

When fork()ing, the child process may reopen all file handles before fork() itself returns. I'm planning to have a table of file descriptors, set up by either the C library or the libsys, that is used by read() and write() calls, which are just wrappers around messages.
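To make the path concrete, here's a minimal sketch of how I picture open() and read() working in the C library/libsys. Everything here is a placeholder of my own invention: the message layout, msg_call(), VFS_PID, the message type numbers, and the fd-slot handling are all simplified assumptions, not final interfaces.

#include <stdint.h>
#include <string.h>

#define VFS_PID    2   /* well-known PID of the VFS server (assumed) */
#define MSG_LOOKUP 1   /* VFS: path -> (driver PID, file ID)         */
#define MSG_OPEN   2   /* driver: file ID -> handle                  */
#define MSG_READ   3   /* driver: (handle, length) -> data           */

struct msg {
    uint32_t type;
    uint32_t arg[4];
    char     data[256];
};

/* synchronous send-and-wait-for-reply primitive (assumed kernel call) */
extern int msg_call(int pid, const struct msg *req, struct msg *reply);

/* fd table kept by the C library/libsys: which driver owns which handle */
static struct { int driver_pid; uint32_t handle; } fd_table[64];

int open(const char *path, int flags) {
    struct msg m = { .type = MSG_LOOKUP }, r;

    /* 1. ask the VFS to resolve the path to (driver PID, file ID) */
    strncpy(m.data, path, sizeof m.data - 1);
    if (msg_call(VFS_PID, &m, &r) < 0)
        return -1;

    /* 2. ask that driver for a handle; it sees our PID and checks
       permissions on its own */
    int driver_pid = (int)r.arg[0];
    m.type   = MSG_OPEN;
    m.arg[0] = r.arg[1];               /* file ID from the VFS reply */
    m.arg[1] = (uint32_t)flags;
    if (msg_call(driver_pid, &m, &r) < 0)
        return -1;

    /* 3. remember the (driver, handle) pair locally; slot choice is
       simplified here */
    int fd = 3;
    fd_table[fd].driver_pid = driver_pid;
    fd_table[fd].handle     = r.arg[0];
    return fd;
}

long read(int fd, void *buf, unsigned long len) {
    struct msg m = { .type = MSG_READ }, r;
    m.arg[0] = fd_table[fd].handle;
    m.arg[1] = (uint32_t)(len < sizeof r.data ? len : sizeof r.data);
    if (msg_call(fd_table[fd].driver_pid, &m, &r) < 0)
        return -1;
    memcpy(buf, r.data, r.arg[0]);     /* reply carries byte count in arg[0] */
    return (long)r.arg[0];
}

Note that the VFS drops out of the picture entirely after the lookup; all subsequent traffic goes straight to the driver.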
I know that everyone always seems to say "don't let users send messages to drivers!" Is there some reasoning behind this that makes it a bad design even when the drivers can filter requests easily? My messages are preemptible as well, and somewhat DoS-proof by design, so sending too many messages, even asynchronously, won't be a problem.
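For reference, this is the kind of filtering I mean, as a hypothetical driver-side message loop. msg_recv(), msg_reply(), the MSG_PRIVILEGED threshold, and the policy functions are all assumed names, not real interfaces:

#include <stdint.h>

struct msg { uint32_t type; uint32_t arg[4]; char data[256]; };

#define MSG_PRIVILEGED 0x100   /* types >= this are restricted (assumed) */
#define MSG_EACCES     0xFFF   /* error reply type (assumed)             */

extern int  msg_recv(struct msg *m, int *sender_pid);  /* assumed kernel call */
extern int  msg_reply(int pid, const struct msg *m);   /* assumed kernel call */
extern int  pid_is_trusted(int pid);                   /* driver policy stub  */
extern void handle_request(int pid, struct msg *m);    /* driver-specific     */

void driver_loop(void) {
    struct msg m;
    int sender;

    for (;;) {
        if (msg_recv(&m, &sender) < 0)
            continue;

        /* the kernel stamps every message with the sender's PID, so
           restricted types can be rejected right here in the driver */
        if (m.type >= MSG_PRIVILEGED && !pid_is_trusted(sender)) {
            m.type = MSG_EACCES;
            msg_reply(sender, &m);
            continue;
        }
        handle_request(sender, &m);
    }
}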
Is there a major problem in handing the job of managing file handles to the drivers themselves? I thought it would be much more flexible: as a driver writer, you could make files act however you want, and even make things that are not files look like them (à la Plan 9). The drivers have *plenty* of address space for this bookkeeping, but the kernel does not, which is one of the many reasons I'm pushing so much into userspace.
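The bookkeeping I have in mind is roughly a per-driver handle table like the sketch below. pid_alive() is an assumed kernel/libsys query, and the table layout is just illustrative:

#include <stdint.h>

extern int pid_alive(int pid);   /* assumed: is this PID still running? */

#define MAX_HANDLES 256

struct handle {
    int      owner_pid;   /* process that opened the file             */
    uint32_t file_id;     /* driver-internal object behind the handle */
    int      in_use;
};

static struct handle handles[MAX_HANDLES];

/* allocate a handle, reclaiming slots whose owner died without closing */
int handle_alloc(int owner, uint32_t file_id) {
    for (int i = 0; i < MAX_HANDLES; i++) {
        if (handles[i].in_use && !pid_alive(handles[i].owner_pid))
            handles[i].in_use = 0;           /* stale: owner never closed it */
        if (!handles[i].in_use) {
            handles[i] = (struct handle){ owner, file_id, 1 };
            return i;
        }
    }
    return -1;                               /* table full */
}

/* every read/write first checks that the caller actually owns the handle */
int handle_valid(int h, int caller_pid) {
    return h >= 0 && h < MAX_HANDLES && handles[h].in_use
        && handles[h].owner_pid == caller_pid;
}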
Are there any gaping holes or naive misinterpretations of the hardware in my design?
P.S. Please, if you can help it, don't try to argue that the "libsys"/voluntary exit() concept won't work. I've made many specific design decisions that really do make it secure; I'm confident it works, and without intimate knowledge of my design, nobody else will understand why.