Designing for a language other than C.
Posted: Sun May 08, 2005 3:35 pm
In the series of Mystran's random babblings, today we will touch on the issue of designing an operating system from a slightly different point of view than most of us normally take. Hopefully this will provide some insights to someone (including me ;D).
I think most of us take it for granted that the operating system itself is written in some Algol derivative. In fact, in most cases that derivative will be C or C++, although we have at least one Pascal user here as well. Every now and then there's discussion about writing the kernel in some "higher level" language, but I think many of us still assume that the final operating system will feel somewhat like existing systems.
What I mean is that when we design things like memory management or IPC, we tend to think in terms of what can be done in a half-sane way with a language like C or C++.
Suppose, for example, that we were writing an OS for a dynamically typed, functional language, say Lisp. There would be several natural things to support. First of all, since a Lisp implementation can trivially support a copying garbage collector, there would be little need for explicit serialization: just make a copy of the structure into a buffer area, and transfer the buffer.
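To make the buffer-copy idea concrete, here's a small Python sketch of my own (not from the post): tuples stand in for Lisp cells, and the transfer works exactly like a copying collector's evacuation pass, walking the graph and copying each cell into a contiguous buffer while preserving sharing through forwarding pointers.

```python
# Illustration only: transfer a structure the way a copying GC would --
# walk the object graph and copy each cell into a contiguous buffer
# (the "message"), recording forwarding pointers so shared cells keep
# a single address on the receiving side.

def copy_to_buffer(root):
    buffer = []      # the message area; a cell's index is its new address
    forwarded = {}   # forwarding pointers: id(old cell) -> buffer index

    def copy(cell):
        if not isinstance(cell, tuple):    # atoms are copied by value
            return cell
        if id(cell) in forwarded:          # already evacuated: reuse address
            return ('ref', forwarded[id(cell)])
        index = len(buffer)
        forwarded[id(cell)] = index
        buffer.append(None)                # reserve the slot before recursing,
        buffer[index] = tuple(copy(part) for part in cell)
        return ('ref', index)              # so shared cells get one address

    return buffer, copy(root)
```

Since the runtime already has this machinery for GC, the kernel's IPC path gets structure transfer essentially for free.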
But what happens if the structure contains functions? There are two alternatives. Either copy the function as normal data (not hard to do, since dynamic compilation is likely to be a feature anyway), or make a remote pointer and wrap it inside a stub that calls back into the other process. Since all processes are assumed to run Lisp, there's little point in specifying the types of interfaces: any function can be called remotely.
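The remote-pointer alternative might look like the following Python sketch (names like `RemoteStub` and `export` are mine, and a plain callable stands in for the real IPC channel): the copied structure carries a stub that forwards calls back to the function still living in the owning process.

```python
# Illustration of the "remote pointer + stub" option: instead of copying
# a function's code, the sender exports it under an id and the receiver
# gets a stub that forwards every call over the IPC channel.

class RemoteStub:
    """Stands in for a function that still lives in another process."""
    def __init__(self, channel, function_id):
        self.channel = channel
        self.function_id = function_id

    def __call__(self, *args):
        # a real kernel would marshal the arguments through the IPC path;
        # here 'channel' is just a callable standing in for that.
        return self.channel(self.function_id, args)

def export(registry, fn):
    """On the owning side: register fn so it can be called by id."""
    function_id = len(registry)
    registry.append(fn)
    return function_id

def make_channel(registry):
    # the owning side's dispatch loop, reduced to a lookup-and-apply
    return lambda function_id, args: registry[function_id](*args)
```

Because every process speaks the same language, the stub needs no interface description; it just applies whatever arguments it is given.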
Now, let's assume that we have a language such as Scheme, where continuations are first-class and tail-call elimination is guaranteed. Now the IPC system should allow one to transfer a continuation from one process to another. But if we can do this, why not allow that continuation to be forwarded elsewhere? On regular operating systems this kind of thing is quite hard to implement, but it's not hard to support directly in an IPC design.
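Python has no first-class continuations, so this sketch of mine fakes one as an explicit closure, CPS-style; the mailboxes are plain queues rather than real IPC. The point is only the plumbing: once a suspended computation is an ordinary value, any IPC that moves values can move it, and whoever holds it can forward it onward before resuming.

```python
# A continuation modeled as a closure: process A suspends "1 + ?" and
# mails it to B; B never resumes it, just forwards it to C; C finally
# supplies the missing value and the computation completes there.

from collections import deque

mailboxes = {'A': deque(), 'B': deque(), 'C': deque()}

def send(process, continuation):
    mailboxes[process].append(continuation)

# A suspends its computation as a continuation and sends it to B
send('B', lambda value: 1 + value)

# B forwards the continuation to C without resuming it
send('C', mailboxes['B'].popleft())

# C resumes: the rest of A's computation runs here, in C
k = mailboxes['C'].popleft()
result = k(41)
```

In a real Scheme kernel the continuation would of course capture the whole control context, not one expression, but the transfer and forwarding logic stays this simple.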
On the other hand, let's consider a situation where all programs were running E code, which relies heavily on events. Explicit multi-threading would now be redundant. Each of the system's processors could just take events from a global event queue and execute them in whatever process's address space they happened to belong to. Scheduling would now work at the event level, and events, not processes, would have priorities.
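A minimal sketch of that scheduler, as I read the idea (not E's actual runtime): one global priority queue where each entry carries its own priority and the process it belongs to, and any processor can pull the next entry and run it.

```python
# Event-level scheduling: a single global priority queue of
# (priority, process, handler) entries. Priority belongs to the event;
# the owning process is just an attribute the dispatcher uses to pick
# an address space.

import heapq
import itertools

class EventScheduler:
    def __init__(self):
        self.queue = []
        self.counter = itertools.count()   # tie-breaker keeps FIFO order

    def post(self, priority, process, handler):
        # lower number = higher priority
        heapq.heappush(self.queue,
                       (priority, next(self.counter), process, handler))

    def run_one(self):
        priority, _, process, handler = heapq.heappop(self.queue)
        # a real kernel would switch to the process's address space here
        return process, handler()
```

Note that nothing here knows about threads: concurrency comes from several processors calling `run_one` on the shared queue.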
One can probably think of tons of other language features (or programming conventions) that would benefit from kernel-level implementation, but wouldn't be practical in a regular imperative language like C.
Paul Graham talks about the Blub paradox, and I think it tends to affect OS development as well. I hope my writing helps you think different.