My idea is to represent bootable "objects" as a pipeline of device drivers / device mappers / filesystem drivers / network drivers / etc.
For example, if you want to refer to the file /boot/kernel on the second floppy drive, which is formatted as minixfs, you would write:
Code: Select all
fd(1) | minixfs(/boot/kernel)
Code: Select all
sata(0) | msdospt(3) | extmsdospt(6) | nilfs(/home/someone/programming_projects/myOS/kernel-latest.img.gz) | gunzip()
Code: Select all
eth(0, 123.456.789.01) | nfs(/var/image.hd) | 7unzip() | bsdPT(1) | ufs(/floppy.mod)
Of course these are rather complex examples, but they show how flexible the approach is.
Some parts of it aren't particularly difficult to implement. We just need to represent the pipeline as program data (an array of structures, each with a callback and an argument string, is enough) and write a function that builds this representation from a string passed to it (in the form shown above). Then the pipeline is called: we call the topmost "driver", passing it the argument string and the pipeline it has to operate on (which is the same as the original pipeline, but without the former topmost item). This process repeats until the "driver" being called is a device driver that doesn't read from a pipe, but from the hardware instead...
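To make that concrete, here is a minimal C sketch of such a representation. All the names (pipe_stage, driver_fn, pipeline_read) are made up for illustration, and a real implementation would also need error handling and the parser for the pipeline string:

Code: Select all
#include <stddef.h>

struct pipe_stage;

/* Each driver receives its argument string plus the remainder of the
   pipeline, so it can in turn read from the stage(s) below it. */
typedef int (*driver_fn)(const char *args,
                         struct pipe_stage *below, size_t n_below,
                         void *buf, size_t off, size_t len);

struct pipe_stage {
    driver_fn   call;   /* e.g. fd_read, minixfs_read, gunzip_read */
    const char *args;   /* e.g. "1" or "/boot/kernel"              */
};

/* Call the topmost driver (the last one written, e.g. minixfs in
   "fd(1) | minixfs(/boot/kernel)"), handing it the pipeline without
   that item.  A hardware driver at the bottom simply ignores `below`. */
static int pipeline_read(struct pipe_stage *stages, size_t n,
                         void *buf, size_t off, size_t len)
{
    if (n == 0)
        return -1;                        /* empty pipeline */
    struct pipe_stage *top = &stages[n - 1];
    return top->call(top->args, stages, n - 1, buf, off, len);
}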
There are cases where the scheme falls a bit short...
For example, to parse a PC partition table you need to know the underlying device's geometry, and that information isn't easy to get through a plain read interface. A workaround could be to turn the concept of "reads from a pipeline" into "messages sent to a pipeline". This way, the interface can easily be extended to allow writes and ioctl-like requests. (This would be needed to support network filesystems too, because they work over a protocol, not over an array of bytes/sectors.)
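As a rough idea of what that extension could look like (the message types and the GET_GEOMETRY request below are invented for the example, not taken from any existing interface):

Code: Select all
#include <stddef.h>

enum pipe_msg_type { PIPE_READ, PIPE_WRITE, PIPE_IOCTL };

#define GET_GEOMETRY 1                     /* example ioctl-like request */

struct disk_geometry { unsigned cylinders, heads, sectors; };

struct pipe_msg {
    enum pipe_msg_type type;
    size_t             offset;   /* PIPE_READ / PIPE_WRITE            */
    void              *buf;
    size_t             len;
    int                request;  /* PIPE_IOCTL, e.g. GET_GEOMETRY     */
    void              *arg;      /* its argument, e.g. a geometry     */
};

struct pipe_stage;

/* The driver callback now receives a message instead of a raw read,
   so msdospt() could send a GET_GEOMETRY message down the rest of the
   pipeline before interpreting the partition table, and a network
   filesystem driver could map the messages onto protocol requests. */
typedef int (*driver_fn)(const char *args,
                         struct pipe_stage *below, size_t n_below,
                         struct pipe_msg *msg);

struct pipe_stage {
    driver_fn   call;
    const char *args;
};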
Another layout that I don't know how to handle is logical volume managers. With support for ioctl-like requests, perhaps a driver could be written to serve as the low-level interface to the physical volumes. This layer would take, as arguments, the "candidates" to be physical volumes, reject any that isn't one (or even kill the pipeline...), and match the random keys needed to group the physical volumes into volume groups. Then another layer (which could be shared by all LVM formats) would act as a wrapper, forwarding read requests to the low-level layer. For example:
Code: Select all
lvmspace(ide(0)|msdospt(1), ide(2), scsi(3)) | lvmLV(myVG, myLV)
These thoughts only take Linux LVM into account, as I don't know how other LVM schemes work...
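Just to sketch how the two layers could fit together: everything below is invented for illustration and only loosely modelled on Linux LVM (PV label, volume group metadata, extents), and it reuses pipe_stage and pipeline_read from the first sketch above:

Code: Select all
#include <stddef.h>

struct pipe_stage;                         /* from the first sketch */
int pipeline_read(struct pipe_stage *stages, size_t n,
                  void *buf, size_t off, size_t len);

#define EXTENT_SIZE (4u * 1024 * 1024)     /* assumed extent size */

/* One accepted candidate: the sub-pipeline that reaches it. */
struct phys_vol {
    struct pipe_stage *stages;
    size_t             n_stages;
};

/* LV extent number -> (which PV, byte offset of that extent on the PV) */
struct extent_map { size_t pv; size_t pv_off; };

/* Lower layer ("lvmspace"): the surviving candidates plus the extent
   mapping recovered from the on-disk volume group metadata. */
struct lvm_space {
    struct phys_vol   pvs[8];
    size_t            n_pvs;
    struct extent_map map[64];             /* indexed by LV extent */
};

/* Upper layer ("lvmLV"): translate an LV-relative read into a read on
   one physical volume and forward it down that PV's sub-pipeline.
   (Reads crossing an extent boundary would need to be split; omitted.) */
static int lvm_lv_read(struct lvm_space *sp, size_t lv_off,
                       void *buf, size_t len)
{
    struct extent_map m = sp->map[lv_off / EXTENT_SIZE];
    return pipeline_read(sp->pvs[m.pv].stages, sp->pvs[m.pv].n_stages,
                         buf, m.pv_off + lv_off % EXTENT_SIZE, len);
}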
I think I will implement something like this in my boot loader. There is, however, one big reason why it isn't practical as an operating system's file I/O interface: it doesn't take things like caching or mounts into account.
It could, however, be a valid way to organize the I/O subsystem layers inside a kernel, provided the kernel inserts the cache layers at the correct places in the pipeline (or if the kernel doesn't want to support caching at all).
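For instance, a cache could simply be another stage the kernel splices into the array. Here is a deliberately tiny, purely hypothetical one that only remembers the last block it forwarded (again reusing pipe_stage and pipeline_read from the first sketch):

Code: Select all
#include <stddef.h>
#include <string.h>

struct pipe_stage;                         /* from the first sketch */
int pipeline_read(struct pipe_stage *stages, size_t n,
                  void *buf, size_t off, size_t len);

#define CACHE_BLOCK 512

static unsigned char cache_buf[CACHE_BLOCK];
static size_t        cache_off;
static size_t        cache_len;
static int           cache_valid;

/* A pass-through stage: answers a repeat read of the last block from
   memory, otherwise forwards the read down the pipeline and keeps a copy. */
static int cache_call(const char *args,
                      struct pipe_stage *below, size_t n_below,
                      void *buf, size_t off, size_t len)
{
    (void)args;
    if (cache_valid && off == cache_off && len <= cache_len) {
        memcpy(buf, cache_buf, len);
        return 0;                                    /* hit */
    }
    if (pipeline_read(below, n_below, buf, off, len) < 0)
        return -1;                                   /* miss: forward down */
    if (len <= CACHE_BLOCK) {                        /* remember the block */
        memcpy(cache_buf, buf, len);
        cache_off   = off;
        cache_len   = len;
        cache_valid = 1;
    }
    return 0;
}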
JJ