
Comparing KeyKOS and Unununium: Persistency?

Posted: Tue Oct 03, 2006 7:05 pm
by jvff
Hi,

I was doing some nanokernel research on Wikipedia, which introduced me to KeyKOS and reintroduced me to Unununium. Reading through their design goals, I noted some similar aspects. Both aim for a high level of abstraction (for example, orthogonal persistence in UUU and single-level store in KeyKOS) with minimal management overhead (the "kernel-less" design in UUU and the capability-based, or keyed, system in KeyKOS).

Although very interesting, these designs need new methods to "overcome" traditional system organization. One example is UUU, where process protection happens at the application source-code level (the site says they plan to run apps in a virtual machine; perhaps a dynamic recompiler would be better, but either way it is a nice solution for both code portability and process/system/"kernel?" protection). Another example is the KeyKOS checkpoint system, which (if I understood correctly) saves process and data states to maintain persistency.

Persistency confuses me. Not the idea itself, but the reason for it and how it fits with the hardware and applications. Is full persistency a real necessity? From what I see, if you redirected every memory write to a hard-disk write, you might be better off just giving the user a piece of paper and a calculator so he can run programs by hand. Even if you queued the operations and only flushed them periodically, you would still have major overhead.
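
To make the overhead point concrete, here is a rough C sketch of what I mean (the page table, sizes, and function names are all made up for illustration, not taken from KeyKOS or UUU): write-through pays one disk write per store, while a periodic checkpoint only flushes the pages that are actually dirty.

[code]
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

#define PAGE_COUNT 1024
#define PAGE_SIZE  4096

struct page {
    unsigned char data[PAGE_SIZE];
    bool dirty;                     /* modified since the last checkpoint */
};

static struct page page_table[PAGE_COUNT];
static long disk_writes;            /* crude cost counter for comparison */

/* Stand-in for writing one page image to backing store. */
static void disk_write_page(size_t index, const unsigned char *data)
{
    (void)index; (void)data;
    disk_writes++;
}

/* Write-through: every single store costs a disk write --
 * the "paper and calculator" scenario. */
static void store_write_through(size_t page, size_t off, unsigned char v)
{
    page_table[page].data[off] = v;
    disk_write_page(page, page_table[page].data);
}

/* Write-back: stores only mark the page dirty; a periodic checkpoint
 * flushes whatever actually changed, amortizing the disk cost. */
static void store_write_back(size_t page, size_t off, unsigned char v)
{
    page_table[page].data[off] = v;
    page_table[page].dirty = true;
}

static void checkpoint(void)
{
    for (size_t i = 0; i < PAGE_COUNT; i++) {
        if (page_table[i].dirty) {
            disk_write_page(i, page_table[i].data);
            page_table[i].dirty = false;
        }
    }
}

int main(void)
{
    for (size_t i = 0; i < 100000; i++)
        store_write_through(i % PAGE_COUNT, i % PAGE_SIZE, 1);
    printf("write-through: %ld disk writes\n", disk_writes);   /* 100000 */

    disk_writes = 0;
    for (size_t i = 0; i < 100000; i++)
        store_write_back(i % PAGE_COUNT, i % PAGE_SIZE, 1);
    checkpoint();
    printf("write-back:    %ld disk writes\n", disk_writes);   /* 1024 */
    return 0;
}
[/code]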

A possible solution is to use the process/task/thread/object independence of the capability-based implementation and assign checkpointing priorities, checkpointing higher-priority data and processes more frequently. But in this scenario you would also need to track dependencies on lower-priority processes (consider checkpointing a high-priority process just after it has copied data from a low-priority one that is still awaiting a checkpoint).
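
Something like this hypothetical C sketch is what I have in mind (all names are invented, and it ignores cyclic dependencies, which a real system could not): before a high-priority object is flushed, anything dirty that it copied data from gets flushed too, so the saved image stays consistent.

[code]
#include <stdbool.h>
#include <stddef.h>

#define MAX_DEPS 8

struct object {
    int priority;                   /* higher = checkpointed more often    */
    bool dirty;                     /* modified since its last checkpoint  */
    struct object *deps[MAX_DEPS];  /* objects it copied data from         */
    size_t dep_count;
};

/* Stand-in for writing the object's state to stable storage. */
static void flush_to_disk(struct object *o)
{
    o->dirty = false;
}

/* Record that `reader` copied data from `source` since the last checkpoint. */
static void note_dependency(struct object *reader, struct object *source)
{
    if (reader->dep_count < MAX_DEPS)
        reader->deps[reader->dep_count++] = source;
}

/* Checkpoint an object together with every dirty object it depends on,
 * regardless of priority. (Cycles are not handled in this sketch.) */
static void checkpoint_object(struct object *o)
{
    for (size_t i = 0; i < o->dep_count; i++)
        if (o->deps[i]->dirty)
            checkpoint_object(o->deps[i]);
    o->dep_count = 0;
    if (o->dirty)
        flush_to_disk(o);
}

int main(void)
{
    struct object low  = { .priority = 1, .dirty = true };
    struct object high = { .priority = 9, .dirty = true };

    note_dependency(&high, &low);   /* high copied data from low        */
    checkpoint_object(&high);       /* must drag low along with it      */
    return 0;
}
[/code]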

For the programmer the advantage is quite obvious: instead of dealing with hard disks, memory, IO, etc., you simply access an object/node/file. However, making the OS decide what lives in memory and what lives in storage assumes that it can do a better job than the programmer. That is probably true in most cases, but there may be cases where the programmer has a better scheme that explicitly uses a fast cache and a slow store, and would be better off without a single-level store.
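
The closest analogy I can think of on a conventional OS is mmap: the program just touches memory and the kernel decides when pages move between RAM and disk. This is only an analogy, without the transparency or transactional guarantees a KeyKOS-style single-level store would give:

[code]
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

struct counter {
    long value;   /* persistent state, accessed like ordinary memory */
};

int main(void)
{
    int fd = open("counter.dat", O_RDWR | O_CREAT, 0644);
    if (fd < 0 || ftruncate(fd, sizeof(struct counter)) < 0)
        return 1;

    struct counter *c = mmap(NULL, sizeof *c, PROT_READ | PROT_WRITE,
                             MAP_SHARED, fd, 0);
    if (c == MAP_FAILED)
        return 1;

    c->value++;                       /* looks like a plain memory write  */
    printf("run number %ld\n", c->value);

    msync(c, sizeof *c, MS_SYNC);     /* the "checkpoint": force to disk  */
    munmap(c, sizeof *c);
    close(fd);
    return 0;
}
[/code]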

Another thing that has me thinking... what's the point of guaranteed persistency? Viewing all data storage as one is a nice feature, but constantly keeping non-volatile storage synchronized with volatile memory? The only advantages I see are fast boot-up (if the hardware hasn't changed), fast power-off, and possibly crash reliability (which depends on what crashed: was it a kid tripping over the power cord, or a rootkit that corrupted the system state? In the latter case, quickly booting the saved state simply crashes again).

So, after a long post, here are my questions. What is the REAL point of persistency (in the sense of keeping the disk synchronized with memory and presenting a single-level store)? What is the most efficient way to implement it? And is it even worth it (or, where is it worth it)?

Thank you, and sorry for the long post,


JVFF

PS: Sorry for the bad English :/