Solar wrote:
Colonel Kernel wrote: Who should be doing the peeking and the poking? IMO, it should not be application developers. This leaves kernel developers and driver developers.
Unfair assumption, IMHO. For example, many of the more involved applications (databases and graphics applications, just to name two) benefit greatly from e.g. custom memory handling, implemented on top of the generic memory handling of the system. No matter how good your garbage collector or how customizable your memory management system, there will always be someone requiring that extra bit of performance or efficiency.
As long as that "someone" is trusted and not some random Joe off the street. My point is that there's no reason to give all applications the same access to the system just because some of them need specializations for performance reasons. Why not differentiate between the two as a matter of policy?
I agree that a certain amount of specialization is needed to get good performance, but I'm not convinced that such specialization has to break type-safety all the time. Your two examples are worth looking at since I think they show two different ways to deal with performance in such a system.
I concede that databases are a tricky example... A serious DBMS would probably do better to have its own dedicated OS, since it tends to take over all the resources of the machine anyway. DBMSes have their own memory management and often their own concurrency management as well. So, why not have a DBOS (this has been done before BTW)? Now, why couldn't that DBOS implement its memory and thread management as a trusted base, and everything else as type-safe code?
For graphics applications, what kind of specialization did you have in mind? I'm thinking it would be mainly related to memory handling -- allocating large blocks to store raster images, avoiding array bounds checking, etc. One of the things that MS Research has found with their work on Bartok and Singularity is that by keeping processes "closed" so that all the code that will run in a process is known ahead of time, it becomes possible to apply much more aggressive whole-program optimizations than were previously possible. A lot of array bounds checks and other run-time checks can be optimized out as a consequence.
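To make that concrete, here is a sketch of the kind of check such optimization removes. It's written in Rust purely for illustration (Singularity itself used Sing#, and nothing here is specific to that project):

```rust
// A sketch of bounds-check elimination. Naively, every a[i] implies a
// runtime check that i < a.len(); because the loop bound *is* a.len(),
// the optimizer can prove the check always passes and delete it from
// the generated code.
fn sum(a: &[u32]) -> u32 {
    let mut total = 0;
    for i in 0..a.len() {
        total += a[i];
    }
    total
}

fn main() {
    println!("{}", sum(&[1, 2, 3, 4]));
}
```

An optimizer that can see the loop bound can prove the check redundant within one function; a closed process just extends that visibility to the whole program.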
Also, research into dependent types could yield promising results. In a dependently-typed language, you can tie types to run-time values to a certain extent. For example, a dynamically-allocated array's bounds could become part of its type identity, giving the compiler enough information to omit bounds checks nearly all the time.
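Rust's const generics (a feature from long after this discussion, and only a compile-time shadow of real dependent types) give a flavor of the idea. The length lives in the type, so no index check survives to run time:

```rust
// The length N is part of the array's type, [f32; N], so the compiler
// knows statically that every index in 0..N is in bounds -- there is
// nothing left to check at run time, whatever N the caller picks.
fn dot<const N: usize>(a: &[f32; N], b: &[f32; N]) -> f32 {
    let mut s = 0.0;
    for i in 0..N {
        s += a[i] * b[i];
    }
    s
}

fn main() {
    let x = [1.0f32, 2.0, 3.0];
    let y = [4.0, 5.0, 6.0];
    println!("{}", dot(&x, &y)); // N = 3 is inferred from the types
}
```

A true dependently-typed language would let a runtime value (say, a length read from a file) play the role of N in the same way.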
I highly recommend Tim Sweeney's presentation The Next Mainstream Programming Language. There's a lot of neat stuff in there on how advances in static typing can give us both safety and good performance.
I still don't understand how you can object to a theoretical systems architecture because it prevents you from hacking, when no one will force you to use it. Seriously, it's bizarre.
This is called constructive criticism.
Of course you can go ahead and build a "perfect" system, but you should actually be happy if people step up and tell you why it wouldn't be as "perfect" for others, before you learn it the hard and frustrating way at the end of a long development.
To put it in other words, I don't understand how you can object to a criticism when no one is forcing you to take it into account. (Note big smiley --> :) )
I think you misunderstand me... the use of type-safe languages in OS architecture is not my idea (it's not even a new idea), nor is it something I'm going to attempt in my own OS project. I'm not even telling other people that they should embrace this idea (which would be the height of hubris).
What I've been getting totally exasperated trying to say is that this idea is powerful and seductive because of the problems it promises to solve. I think we should at least seek to understand it fully so we can evaluate its advantages and disadvantages.
To put it simply, I am actively seeking constructive criticism of someone else's idea, but instead what I find is a lot of misunderstanding at best, and downright FUD at worst. It's just frustrating trying to have an intelligent conversation about something when the first thing people bring to the table are their prejudices about Java, .NET, Microsoft, VMs, interpreted languages, GC, etc., which, as I've been trying to say, are either incidental or completely unrelated to the core (and very broad) idea of an OS based on language safety.
To use an analogy, the reaction I see is like someone who prefers a car with a manual transmission saying that automatic transmissions should never have been invented.
So far in my attempts at sparking real debate on this idea, I've raised a pretty coherent (IMO) objection -- that it may limit language choice too much by imposing certain requirements on the proof-carrying code that compilers targeting the system must produce.
Today I thought of another objection. I recently watched a lecture by Dave Patterson (of "Hennessy & Patterson" fame) about the future challenges our industry faces because of the shift towards explicit parallelism. One thing he mentioned was that as the feature size of CPUs shrinks (65nm and falling), the number of "soft" errors increases. This means random hardware weirdness that no amount of static code verification can solve. Perhaps MMUs are better suited than static verification to mitigating the consequences of such "soft" errors. In engineering terms, perhaps a better way to make things robust is to expect them to fail, but design a way for easy recovery (Erlang is based on this principle).
So, above are two pieces of constructive criticism. Can I maybe get more than just "I don't like safe languages!" and "gimme back my pointers!" from other people...?
If progress was measured in terms...
{Warning sounds} Leaving the ground of discussion, entering argument...
Yes, I apologize for getting a little testy. I've just found it very frustrating to get this idea across, as I said above.
The OSes of the future should be designed in such a way that the amount of code that you'd be inclined to write in such a special-purpose "systems programming language" is as small as possible.
Who has the authority to define what "the OSes of the future", all of them, should look like?
You quoted me out of context. Here is the full quote:
I think you've largely missed the point of the idea though, which is this: The OSes of the future should be designed in such a way that the amount of code that you'd be inclined to write in such a special-purpose "systems programming language" is as small as possible.
This is a hypothesis, not a diktat. It is the same one put forth by the microkernel folks: Too much uber-privileged software leads to reliability and security problems. The kernels of Windows, Linux, and OS X are millions of lines of code. How many vulnerabilities do you think lurk in there, just waiting to be discovered? The type-safety folks are basically saying that you can have your cake (a microkernel) and eat it too (good performance; zero-copy IPC; etc.). So it's up to us to figure out if there is any arsenic in the cake.
Taking away pointers and adding garbage collection might be a good thing for some, but you are taking away a freedom and replacing it with a feature - which not everyone might be OK with.
I suppose the ability to write unsafe code and expect it to run fast could be considered a freedom... There is no reason such unsafe code couldn't be run in a separate address space for safety/backwards compatibility/etc. It will just have slower IPC to the rest of the system. Given the problems such code causes though, I think the trade-off is worth it.
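Hardware address spaces are the OS-level version of that quarantine. As an aside, a language can enforce the same policy at a finer grain. Here is a hypothetical Rust-style sketch (sum_unchecked is a made-up example; the safe indexed loop would usually optimize just as well), where the unverifiable part is fenced into one auditable spot:

```rust
// The same quarantine at expression granularity: the unverifiable
// operation is permitted, but only inside one narrow, marked block;
// callers only ever see the safe wrapper.
fn sum_unchecked(a: &[u32]) -> u32 {
    let mut total = 0;
    for i in 0..a.len() {
        // SAFETY: i < a.len() by the loop bound. This one human-checked
        // invariant is the entire trusted surface of the function.
        total += unsafe { *a.get_unchecked(i) };
    }
    total
}

fn main() {
    println!("{}", sum_unchecked(&[1, 2, 3]));
}
```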
As far as GC goes, what about the ability to choose your own GC? GCs are trusted code, at least for now, but if someone has root and really feels like it, they can create and install a new GC... It would be only slightly more dangerous than installing a kernel-mode driver is today. I think it's nuts, but I'm especially paranoid.
In the future, when type-safe GCs become possible, even this issue will become moot.
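As a rough sketch of what "replaceable but trusted" memory management can look like, here is a Rust example that swaps an allocator rather than a GC (the names are made up, but the trust structure is the same: the plug-in point must be explicitly marked unsafe, i.e. trusted):

```rust
use std::alloc::{GlobalAlloc, Layout, System};
use std::sync::atomic::{AtomicUsize, Ordering};

// A swappable memory manager that the rest of the program reaches only
// through ordinary allocation -- but note the `unsafe impl`: whoever
// installs it is vouching for invariants the compiler cannot check.
// That is what "trusted code" means in practice.
struct CountingAlloc {
    live: AtomicUsize,
}

unsafe impl GlobalAlloc for CountingAlloc {
    unsafe fn alloc(&self, layout: Layout) -> *mut u8 {
        self.live.fetch_add(layout.size(), Ordering::Relaxed);
        unsafe { System.alloc(layout) } // delegate the real work
    }
    unsafe fn dealloc(&self, ptr: *mut u8, layout: Layout) {
        self.live.fetch_sub(layout.size(), Ordering::Relaxed);
        unsafe { System.dealloc(ptr, layout) }
    }
}

#[global_allocator]
static ALLOC: CountingAlloc = CountingAlloc { live: AtomicUsize::new(0) };

fn main() {
    let v = vec![0u8; 1024]; // this allocation goes through CountingAlloc
    println!("{} bytes live", ALLOC.live.load(Ordering::Relaxed));
    drop(v);
}
```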
The principle of least privilege says we should not be granting things kernel-like powers unless they really, really need it.
Uh-huh... and by designing a system in a way that it requires something to be written in a specific way (manifest, "safe" language etc.), you take away the ability to do it differently if one really, really needs it.
I am not saying that it's a bad tradeoff per se (IMHO it very much depends on the implementation), I just want to make you aware that it is actually a tradeoff.
Of course it is a tradeoff, but I think appealing to "someone, somewhere might need this and we just can't predict it" is a cop out. IMO the tradeoff exists at a different level, like with your database example. Maybe "type-safe OSes" are good for desktop systems and web servers, but not for DB servers... or maybe they are. Maybe they'll be terrible for embedded systems and mobile devices. Maybe not. Without some prognostication, we will never figure it out.