The OS-periment's IPC design is looking for "testers"
Posted: Sat Jun 11, 2011 4:57 pm
TOSP: the context
I'm the sole developer of a project called the OS-periment. This project aims at producing a modern OS that is specifically designed with the needs of personal computing in mind. By personal computers, I mean desktops, laptops, tablets (if they ever get standard and open hardware one day), and as a whole everything with a relatively large screen that is operated by a single user at a time and that does not require this user to have in-depth training or to read through enormous manuals in order to do the most basic things. Implementation is done in C and C++, save for a few assembly snippets here and there.
It has become apparent to me that such a concept, when pushed sufficiently far, calls for optimization down to the lowest layers of the OS stack. Take security models as an example. The user-based security infrastructure at the core of most of today's server-centric operating systems is poorly suited to the job, because it is designed to protect a sysadmin from malicious users on large-scale multi-user machines, not to protect an average user from malicious software. Users have to invoke root privileges all the time for tasks that are trivial from their point of view, like setting the system clock or installing software, so in the end those privileges become meaningless. A security model that is fine-tuned for personal computing should, on the contrary, be application-based: give each piece of *software* only a limited set of permissions to play with, while keeping the user as an almighty overlord who can do whatever he wants. Code to do this exists for many modern kernels, but it appeared too late and never became a central part of the associated OS's culture. As an example, try to do anything unusual or fancy on an SELinux-based distribution and it will end up getting in your way. Linux and its huge existing software library have simply not been designed for that.
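To make the contrast concrete, here is a minimal sketch (my own illustration, not actual TOSP code, all names made up) of what an application-based check could look like: the kernel consults a capability set attached to the calling application rather than the identity of the user who launched it.

Code:
// Hypothetical illustration of an application-based security check.
// Permissions are attached to the application, not to the user account.
#include <cstdint>

// Capabilities an application may be granted at install time or by
// explicit user consent (names are invented for the example).
enum Capability : std::uint32_t {
    CAP_SET_CLOCK      = 1u << 0,
    CAP_INSTALL_SW     = 1u << 1,
    CAP_READ_DOCUMENTS = 1u << 2,
    CAP_NETWORK        = 1u << 3,
};

// Security context carried by each process: which program it is running
// and the capability bitmask that program was granted.
struct ProcessSecurityContext {
    const char*   application_id;
    std::uint32_t capabilities;
};

// A system call handler would check the caller's capabilities instead of
// asking "is this root?".
bool is_allowed(const ProcessSecurityContext& ctx, std::uint32_t needed) {
    return (ctx.capabilities & needed) == needed;
}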
Taking sides also helps with some traditional OS design compromises. Take microkernels vs monolithic kernels, i.e. reliability vs raw kernel performance. On most modern single-user machines, I believe that raw performance is not the most vital concern, provided it remains sufficient, as long as reactivity to user input, which is the most important performance criterion there, can be ensured through other means, like carefully prioritizing interactive tasks above background ones. Therefore, I take the extra reliability and cleanliness of microkernels over the throughput of monolithic kernels without hesitating a second.
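As a rough illustration of that "responsiveness through scheduling rather than raw throughput" idea (again my own sketch, not actual TOSP code), interactive tasks would simply live in a higher priority band than background work:

Code:
// Hypothetical priority bands: interactive work preempts everything else,
// background work only runs when nothing more urgent is ready.
enum class TaskClass { Interactive, Normal, Background };

int base_priority(TaskClass c) {
    switch (c) {
        case TaskClass::Interactive: return 20; // UI, input handling
        case TaskClass::Normal:      return 10; // ordinary computation
        case TaskClass::Background:  return 0;  // indexing, backups, ...
    }
    return 10;
}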
Help needed!
One important area of microkernel design is inter-process communication. We have isolated processes from each other, yet they still have to hand each other work to do from time to time. How do they do that? That's the core question behind IPC. My personal answer would be a non-blocking variant of RPC, using either asynchronous task queues or pop-up threads depending on the problem being solved by the server process that receives the remote calls. But since design mistakes bite very hard when they are found late in the OS design process, I'm asking all the theory enthusiasts around here whether they would be interested in having a look at my current design and reporting any potential problems they see with it.
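To give a feel for what I mean by the two dispatch styles, here is a small user-space simulation (purely illustrative, with made-up names, not the actual TOSP interface): in the first style the server drains calls from a queue at its own pace, in the second each incoming call pops up as a fresh thread inside the server.

Code:
// Illustrative simulation of the two server-side dispatch styles.
#include <cstdio>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

using RemoteCall = std::function<void()>;

// Style 1: asynchronous task queue - calls are enqueued and a single server
// thread drains them in order, so the server keeps control of concurrency.
class QueueServer {
public:
    void submit(RemoteCall call) {               // the client returns immediately
        std::lock_guard<std::mutex> lock(mutex_);
        queue_.push(std::move(call));
    }
    void drain() {                               // runs inside the server process
        for (;;) {
            RemoteCall call;
            {
                std::lock_guard<std::mutex> lock(mutex_);
                if (queue_.empty()) return;
                call = std::move(queue_.front());
                queue_.pop();
            }
            call();
        }
    }
private:
    std::mutex mutex_;
    std::queue<RemoteCall> queue_;
};

// Style 2: pop-up threads - each incoming call gets its own short-lived
// thread inside the server, so independent requests run concurrently.
void popup_dispatch(RemoteCall call, std::vector<std::thread>& threads) {
    threads.emplace_back(std::move(call));
}

int main() {
    QueueServer server;
    server.submit([] { std::puts("queued call 1 handled"); });
    server.submit([] { std::puts("queued call 2 handled"); });
    server.drain();

    std::vector<std::thread> popups;
    popup_dispatch([] { std::puts("pop-up thread call handled"); }, popups);
    for (auto& t : popups) t.join();
    return 0;
}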
Many thanks in advance, here's the link: http://theosperiment.wordpress.com/2011 ... model-rc3/