First, I think this idea is not entirely new. Second, there have been rumours that Microsoft would ship this sort of thing in Windows 8, in the form of WinFS (which was originally developed for Windows Longhorn/Vista but was cut before release).
In short, the whole idea here is to change the underlying paradigm of the operating system from being file based (most operating systems see everything as a file; that is the central idea of the UNIX operating system) to being based on a relational (object/relational) database system.
It has the advantage that everything you store on disk is in relational form, and with smart indexes this could speed up searching. Second, you no longer need to care about where you store your data, only about how you will retrieve it. For example, instead of a normal Save As dialog, where you have to enter the exact location for your data, you would attach tags or meta-tags to it; where on disk it is stored is entirely up to the operating system.
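To make that concrete, here is a minimal sketch in Python of what "save by tags, retrieve by query" could look like from an application's point of view. The `TagStore` class and its methods are entirely hypothetical stand-ins for what the OS would expose; a real system would back this with its relational store.

```python
import uuid
from typing import Any

# Hypothetical in-memory stand-in for an OS-level, tag-based object store.
class TagStore:
    def __init__(self) -> None:
        self._objects: dict[str, tuple[set[str], Any]] = {}

    def save(self, data: Any, *, tags: set[str]) -> str:
        """Store data under a set of tags; the caller never chooses a path."""
        oid = str(uuid.uuid4())          # the OS picks identity and location
        self._objects[oid] = (tags, data)
        return oid

    def find(self, *tags: str) -> list[Any]:
        """Retrieve everything whose tag set contains all the given tags."""
        wanted = set(tags)
        return [data for (t, data) in self._objects.values() if wanted <= t]

store = TagStore()
store.save(b"...report text...", tags={"report", "2011", "project-x"})
store.save(b"...holiday photo...", tags={"photo", "2011"})

print(store.find("report", "2011"))   # -> [b'...report text...']
```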
The hierarchical directory structure and file system would still be implemented in some form (mostly for compatibility reasons), but you could optionally maintain alternative, parallel directory trees/file hierarchies over the same data.
In essence, though, this is a very far-reaching shift in operating system design, affecting all applications that run on this operating system (although, for compatibility, one might think of a special layer that presents the data as a traditional hierarchical file system, so that existing applications would still be able to run on this OS).
This new design has various implications, as not only user data but also data used by the kernel/OS would be internally represented in the form of relations (in fact, tables).
You could, for instance, implement object dependencies in this relational schema.
For example, suppose you made a document and included a picture. Now, when you update that picture, the document is instantly updated too.
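As a rough illustration, using SQLite from Python purely as a stand-in for the OS's internal relational store (the table and column names are made up), object dependencies could be stored like this, so that updating the picture immediately tells the system which documents need refreshing:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE objects (
        id   INTEGER PRIMARY KEY,
        kind TEXT NOT NULL,          -- 'document', 'picture', ...
        data BLOB
    );
    CREATE TABLE dependencies (      -- 'dependent embeds/uses dependency'
        dependent  INTEGER REFERENCES objects(id),
        dependency INTEGER REFERENCES objects(id)
    );
""")

con.execute("INSERT INTO objects (id, kind, data) VALUES (1, 'picture',  x'00')")
con.execute("INSERT INTO objects (id, kind, data) VALUES (2, 'document', x'00')")
con.execute("INSERT INTO dependencies (dependent, dependency) VALUES (2, 1)")

# The picture changes; ask the store which objects must be refreshed.
con.execute("UPDATE objects SET data = x'ff' WHERE id = 1")
stale = con.execute(
    "SELECT dependent FROM dependencies WHERE dependency = 1"
).fetchall()
print(stale)   # -> [(2,)]  i.e. document 2 embeds picture 1 and gets updated
```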
Another possibility is to store versioned information in the database. Every object in the database gets a version number, and you can revert the system to a previous point in time. Of course you need to be able to limit the number of versions, or include features like automatic backup of old revisions (back up the data but retain the metadata, so that searches can still find older revisions and fetch them from backup).
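A sketch of the versioning idea, again with SQLite as a stand-in and an invented table name: every write creates a new row with a version number and timestamp, and "reverting to a point in time" is just a query for the latest version at or before that moment:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""
    CREATE TABLE object_versions (
        object_id INTEGER,
        version   INTEGER,
        saved_at  TEXT,              -- ISO timestamp
        data      BLOB,
        PRIMARY KEY (object_id, version)
    )
""")

rows = [(1, 1, "2011-05-01T10:00:00", b"draft"),
        (1, 2, "2011-05-02T09:30:00", b"review copy"),
        (1, 3, "2011-05-03T16:45:00", b"final")]
con.executemany("INSERT INTO object_versions VALUES (?, ?, ?, ?)", rows)

def as_of(object_id: int, moment: str) -> bytes:
    """Return the object's contents as they were at the given moment."""
    row = con.execute("""
        SELECT data FROM object_versions
        WHERE object_id = ? AND saved_at <= ?
        ORDER BY version DESC LIMIT 1
    """, (object_id, moment)).fetchone()
    return row[0]

print(as_of(1, "2011-05-02T12:00:00"))   # -> b'review copy'
```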
For application development, this could open new frontiers. Source modules would no longer need to be stored as flat files, but could be stored in their grammatical/syntactic form. Of course you need specialised (grammar-aware) editors to be able to do that.
In fact this approach is used in the Intentional Programming project (see for example Wikipedia: Intentional Programming, Intentional Software, and Jetbrains/Metaprogramming: Intentional Programming).
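To give a flavour of "source as structure rather than flat text", here is a small sketch using Python's own ast module: the module is parsed once, and what you would store (and edit) is the tree, not the character stream. How a real grammar-aware editor or the Intentional Programming tooling actually does this is of course far more elaborate.

```python
import ast

source = """
def area(radius):
    return 3.14159 * radius * radius
"""

tree = ast.parse(source)

# Walk the tree the way a structural store would: every node is an object
# with a type and children, not a line of text.
for node in ast.walk(tree):
    if isinstance(node, (ast.FunctionDef, ast.Return, ast.BinOp)):
        print(type(node).__name__)

# The flat-file form is just one possible rendering of the stored structure.
print(ast.unparse(tree))   # Python 3.9+: regenerate text from the tree
```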
Manipulating software would become a lot easier then. You cannot introduce syntactic bugs, because the only code you can enter in the editor is syntactically correct, so only semantic checks remain to be taken care of (and those can partly be checked automatically). In fact, in many cases we might not even do the coding ourselves, but use specialised code generators, meta-programming features, or a domain workbench to generate code.
Changing a function or variable name in one module would automatically change that name in all the modules that reference the same function or variable, and so on.
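That rename behaviour falls out naturally if references point at an identifier's ID rather than at its spelling, as in this toy sketch (the structure and names are invented purely for illustration):

```python
# Symbols are stored once, by ID; modules reference the ID, not the text.
symbols = {101: "calc_total"}                 # id -> current spelling
module_a = ["call", 101]                      # 'call symbol 101'
module_b = ["call", 101]

def render(module: list) -> str:
    """Render a module reference back to source text."""
    return f"{symbols[module[1]]}()"

symbols[101] = "compute_total"                # a single rename ...
print(render(module_a), render(module_b))     # ... is visible everywhere
# -> compute_total() compute_total()
```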
Build tools like the compiler, linker, make and other build tools would become (almost) obsolete, as the dependencies of an object are recorded as metadata when the object is created, along with which processing object uses it as a source for creating a target, and so on.
The simple model is: everything is an object. A target object is created by a single processing object (with optional process parameters) from one or more source objects (both the processing object and the source objects are themselves target objects, and so on recursively), and the ultimate source objects consist of user sessions entering that data.
Implementing this OS on different platforms automatically gives you source compatibility between them: you simply transfer the sources to the other platform (you just point at the final target objects you want to transfer, and the system figures out recursively from which sources they were built), recompile and rebuild there, and you have your target system.
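A toy version of that object model (hypothetical structure, in Python): each target records the processor and sources it was built from, so both "rebuild this" and "which sources do I transfer to another platform?" are simple recursions over the same metadata:

```python
from dataclasses import dataclass, field

@dataclass
class Obj:
    name: str
    builder: "Obj | None" = None        # the processing object (compiler, linker, ...)
    sources: list["Obj"] = field(default_factory=list)

# hello.c and the tools are 'final' sources, entered/installed by user sessions.
hello_c  = Obj("hello.c")
compiler = Obj("cc")
linker   = Obj("ld")
hello_o  = Obj("hello.o", builder=compiler, sources=[hello_c])
hello    = Obj("hello",   builder=linker,   sources=[hello_o])

def source_closure(obj: Obj) -> set[str]:
    """Everything that must be transferred to rebuild obj elsewhere."""
    if not obj.sources:                  # a final source object
        return {obj.name}
    result: set[str] = set()
    for dep in [obj.builder, *obj.sources]:
        if dep is not None:
            result |= source_closure(dep)
    return result

print(sorted(source_closure(hello)))     # -> ['cc', 'hello.c', 'ld']
```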
In fact, such an operating system has many advantages over traditional file system architectures, so why has nobody implemented this yet?
Some (at least partial) implementations of this idea already exist, in the form of tagging filesystems. But these only deal with user data (adding metadata to files, not based on a real relational file or database system; see for example oyepa and TagFS), and I guess not with the internal data of the underlying operating system.
I don't know whether this concept would make the concept of a filesystem completely obsolete (I guess not, if only for the sake of interoperability). As to the question of how to implement a database without an underlying filesystem: this can be done; you just design the database to take a partition or extended partition directly as its data location. Perhaps only the boot procedure needs a very simple filesystem, just enough to boot up and start the RDBMS, and from then on the OS can take advantage of the RDBMS.
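A very rough sketch of "the database takes the partition as its data location": open the block device directly and read/write fixed-size pages at computed offsets, with no filesystem in between. The device path is just an example (and needs root privileges); a real RDBMS storage engine would of course add caching, journaling, free-space management and so on.

```python
import os

PAGE_SIZE = 4096
DEVICE = "/dev/sdb1"        # example raw partition, purely illustrative

def write_page(fd: int, page_no: int, payload: bytes) -> None:
    """Write one fixed-size page at its computed offset on the raw device."""
    assert len(payload) <= PAGE_SIZE
    os.pwrite(fd, payload.ljust(PAGE_SIZE, b"\x00"), page_no * PAGE_SIZE)

def read_page(fd: int, page_no: int) -> bytes:
    """Read one fixed-size page back from the raw device."""
    return os.pread(fd, PAGE_SIZE, page_no * PAGE_SIZE)

fd = os.open(DEVICE, os.O_RDWR)          # no filesystem involved at all
write_page(fd, 7, b"row data for the catalogue table")
print(read_page(fd, 7)[:32])
os.close(fd)
```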