
"recursive" device driver design

Posted: Sun Mar 30, 2008 3:40 pm
by bewing
If you have some sort of machine hardware device, writing a driver for it is pretty straightforward. To access an ATA hard disk, the hardware driver part is only 20 lines of code. Similarly for all the timers, graphics cards, serial I/O ports, floppy disk drives, Ethernet PCI boards, etc., etc.
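For reference, here is roughly the kind of thing I mean by "only 20 lines": a minimal sketch of a polled LBA28 PIO read of one sector from the primary ATA channel, with no error handling or timeouts, assuming x86 with GCC-style inline asm for the port I/O helpers.

Code:
#include <stdint.h>

static inline void outb(uint16_t port, uint8_t v) {
    __asm__ volatile ("outb %0, %1" : : "a"(v), "Nd"(port));
}
static inline uint8_t inb(uint16_t port) {
    uint8_t v;
    __asm__ volatile ("inb %1, %0" : "=a"(v) : "Nd"(port));
    return v;
}
static inline uint16_t inw(uint16_t port) {
    uint16_t v;
    __asm__ volatile ("inw %1, %0" : "=a"(v) : "Nd"(port));
    return v;
}

/* Read one 512-byte sector from the primary ATA channel (I/O base 0x1F0),
   master drive, using 28-bit LBA and polled PIO. */
void ata_pio_read_sector(uint32_t lba, uint16_t *buf)
{
    outb(0x1F6, 0xE0 | ((lba >> 24) & 0x0F));   /* drive/head + LBA bits 24-27 */
    outb(0x1F2, 1);                              /* sector count = 1 */
    outb(0x1F3, lba & 0xFF);                     /* LBA bits 0-7   */
    outb(0x1F4, (lba >> 8) & 0xFF);              /* LBA bits 8-15  */
    outb(0x1F5, (lba >> 16) & 0xFF);             /* LBA bits 16-23 */
    outb(0x1F7, 0x20);                           /* command: READ SECTORS */

    while ((inb(0x1F7) & 0x88) != 0x08)          /* wait: BSY clear, DRQ set */
        ;
    for (int i = 0; i < 256; i++)                /* 256 words = 512 bytes */
        buf[i] = inw(0x1F0);
}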

Devices on buses seem a little more complex. If you want to talk to a USB floppy drive, then you need to have a driver for that floppy drive, and it also needs to interface with the USB driver -- to somehow "tunnel" the floppy commands through the USB driver, onto the proper USB bus.

And then you have drivers for things that can be connected in multiple ways. Let's say a printer, printing some kind of OS-specific bitmap. You need a printer driver that converts the bitmap into the printer's preferred format, as a byte stream. But where do you send the byte stream? If you hardcode the location of the printer, then it's easy, of course. But in reality, the printer might be hooked to a parallel port, a COM port, a USB port, or maybe a TCP/IP "port".

And then you have the even worse scenario of a packet of www data. It has to go through a multistage packetizing process -- where each stage is theoretically designed to have its own driver. Each driver stage takes the input packet, wraps it in another packet, and passes it along to some specific next driver, until it eventually reaches a driver that tells some Ethernet board to send the packet out onto a network.
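A sketch of that wrapping idea (all names here are hypothetical): each stage prepends its own header to whatever it was handed and passes the result to the next stage, until the last stage would hand it to the NIC driver.

Code:
#include <stdlib.h>
#include <string.h>

/* Hypothetical packet buffer: header space grows toward the front. */
struct pkt {
    unsigned char *data;   /* current start of the packet           */
    size_t         len;    /* bytes from data to the end of payload */
};

/* Each layer prepends its header and passes the packet down. */
static void layer_wrap(struct pkt *p, const void *hdr, size_t hdrlen)
{
    p->data -= hdrlen;                /* assumes headroom was reserved */
    p->len  += hdrlen;
    memcpy(p->data, hdr, hdrlen);
}

/* "tcp" -> "ip" -> "eth" -> hardware, each stage only knows the next one. */
void send_www_payload(const void *payload, size_t len)
{
    unsigned char *buf = malloc(len + 128);   /* 128 bytes of headroom */
    if (!buf)
        return;
    struct pkt p = { .data = buf + 128, .len = len };
    memcpy(p.data, payload, len);

    unsigned char tcp_hdr[20] = {0}, ip_hdr[20] = {0}, eth_hdr[14] = {0};
    layer_wrap(&p, tcp_hdr, sizeof tcp_hdr);  /* transport stage */
    layer_wrap(&p, ip_hdr,  sizeof ip_hdr);   /* network stage   */
    layer_wrap(&p, eth_hdr, sizeof eth_hdr);  /* link stage      */

    /* nic_transmit(p.data, p.len);  -- hypothetical Ethernet driver entry */
    free(buf);
}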

So: all of these things come down to "services". An application wants to print a bitmap -- it enumerates all the printing "services" available to the user -- the user picks one -- the application spools the bitmap to the print service -- which handles all the mucky details in the background. However, as OS designers, creating the process for the mucky details is our job. The thing is, in my mind I'm seeing that one service (like a print service) has to be implemented in terms of other services. The printer driver has to spool its byte stream to a "printer hardware service" -- which may be a direct connection to a parallel port driver -- or it may be a USB emulated serial port driver service, that is then connected to a USB hardware driver.

Is this reasonable? Is this the way it all typically gets handled? That "generic" hardware device drivers must have an identical interface to virtual device services that can be enumerated? Which may, in turn, call other virtual device services to perform their functions?
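To make the question concrete, here is a minimal sketch (the names are mine, not from any real OS) of what such an "identical interface" might look like: the printer driver only ever talks to a generic byte sink, and that sink may be a parallel port driver, a USB serial wrapper, or a TCP connection underneath.

Code:
#include <stddef.h>

/* Generic byte-sink service: anything that can accept a byte stream. */
struct byte_sink {
    int  (*write)(struct byte_sink *self, const void *buf, size_t len);
    void *priv;   /* instance data: port number, USB endpoint, socket, ... */
};

/* The printer driver converts a bitmap into the printer's byte format
   and spools it into whatever sink it was handed at init time. */
int print_bitmap(struct byte_sink *out, const void *bitmap, size_t size)
{
    /* ... convert the bitmap to the printer's byte stream (omitted) ... */
    return out->write(out, bitmap, size);   /* parallel port, USB, or TCP */
}

/* One possible back end: a parallel port driver exposing the same interface. */
static int lpt_write(struct byte_sink *self, const void *buf, size_t len)
{
    (void)self; (void)buf; (void)len;
    /* real code would outb() each byte to the parallel port data/strobe regs */
    return 0;
}
struct byte_sink lpt0_sink = { .write = lpt_write, .priv = 0 };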

Posted: Mon Mar 31, 2008 5:21 am
by Ready4Dis
Well, I am going to listen in, because I am kind of curious also. In my OS, I have two types of drivers: one that is run and searches for hardware, and another that is loaded once the hardware is found (basically it has a list of supported hardware based on the PCI data). So, on boot-up, I (will) run my PCI code first to find and enable PCI devices (so, for example, if it finds an IDE controller, it can take it out of legacy mode, or leave it in until my driver is more complete). Then I run all the other drivers that search for hardware (floppy disk driver, ATA/ATAPI driver, etc.). So, my PCI driver might report that it found a USB hub, and search for the USB hub driver. Once loaded, the USB driver searches for devices and loads their drivers.

Now, in my OS, all disk drivers must have a common format (block devices in *nix); at their most basic they need to read and write at a specific block, and tell me how large each block is and how many blocks there are. So, the USB mass storage driver must know about the USB driver, and since the USB driver is the one that loads the mass storage driver, it can give it information if required. I haven't gotten all the details worked out yet, but basically certain drivers must be aware of other drivers, in a generic manner.

So, if I wanted to write a printer driver, it would send the data to a spool service, which would then send it to the printer driver (of your choice), which knows where the printer is (and can then modify the data accordingly). So, I could write a printer driver that outputs to a text document, writes to a USB port, etc. The destination the print data is directed to could be seen as a character device; then you can treat serial, parallel, USB, serial over network, etc. as the same. The driver is loaded by whatever module found it (if the USB driver found it, that's where it got loaded from, so it knows where the printer is coming from, and stores a link of sorts to it -- for a printer it would be something like a character device).

Like I said, not all the details are hammered out, but that's the gist of what I'm envisioning right now. If anybody has any better insight or a clearer description, I'd love to be enlightened a bit, because it's difficult getting it all to work together in a nice generic way.
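A sketch of the two-phase idea above (structure names are hypothetical): the bus enumerator records the vendor/device IDs it found, and each loadable driver publishes the IDs it supports, so the right driver can be picked and attached.

Code:
#include <stdint.h>
#include <stddef.h>

/* IDs a driver claims to support, as read from PCI config space. */
struct pci_id { uint16_t vendor, device; };

struct pci_device;   /* filled in by the PCI enumeration pass */

struct pci_driver {
    const char          *name;
    const struct pci_id *ids;                /* terminated by {0, 0} */
    int (*attach)(struct pci_device *dev);   /* called once a match is found */
};

/* After enumeration, match a discovered device against the driver table. */
struct pci_driver *find_driver(struct pci_driver **table, size_t n,
                               uint16_t vendor, uint16_t device)
{
    for (size_t i = 0; i < n; i++)
        for (const struct pci_id *id = table[i]->ids; id->vendor; id++)
            if (id->vendor == vendor && id->device == device)
                return table[i];
    return NULL;   /* leave the device in legacy mode / unclaimed */
}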

One more thing, don't think of it as floppy commands, think of it as block device commands, aka read/write. The commands you send to an IDE controller, for example, are completely different from the ones you would send to a USB mass storage device (external IDE drive), so neither driver would be anywhere close; you would simply write a driver for USB mass storage and one for IDE, each with a standard interface for reading block devices. So for the OS, it would load the IDE driver and the USB mass storage driver, and not care what they used or how they got their info. Now, the IDE driver might use a DMA driver, while the USB mass storage driver uses the USB driver, but to the kernel they are both block devices (one may have a flag saying it's removable while the other doesn't, but they are treated as the same).
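In code, that standard block device interface might look roughly like this (a sketch, not taken from any particular OS): the IDE driver and the USB mass storage driver each fill in the same structure, and the kernel never looks underneath it.

Code:
#include <stdint.h>

/* What the kernel sees: every disk-like driver exports one of these. */
struct blockdev {
    uint32_t block_size;     /* bytes per block                */
    uint64_t block_count;    /* how many blocks there are      */
    unsigned removable : 1;  /* hint flag, otherwise identical */
    int (*read) (struct blockdev *dev, uint64_t lba, uint32_t count, void *buf);
    int (*write)(struct blockdev *dev, uint64_t lba, uint32_t count, const void *buf);
    void *priv;              /* IDE channel, USB pipe handle, ... */
};

/* The IDE driver and the USB mass storage driver each register one, e.g.:
       struct blockdev hda = { .block_size = 512, .read = ide_read,
                               .write = ide_write, ... };
       struct blockdev sda = { .block_size = 512, .removable = 1,
                               .read = usbms_read, .write = usbms_write, ... };
   The kernel only ever calls dev->read() / dev->write().                   */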

Posted: Tue Apr 01, 2008 5:48 am
by mrvn
Hi,

Recursive drivers are totally the way to go.

Under Unix/Linux you see this at multiple levels. For example you have two kinds of devices:
- block device: PATA, SATA, SCSI disks, tapes, CD-ROM, DVD, floppy, ramdisk, ...
- character device: mice, keyboards, ...
Hidden away you have even more drivers stacked one on top of the other. For example, PATA, SATA and SCSI disks are all driven by a generic SCSI layer nowadays in Linux. And that all sits below the block cache, scatter/gather handling and the actual user-visible block device driver.

Or look at filesystems. You have one VFS layer in the kernel and tons of filesystems that connect through that.

Whenever you have 2 or more things that have a common set of operations then think of using a common interface.

The drivers also don't have to be chained in a straight line or tree. You can have loops in them:

Say you have a filesystem on LVM with cryptography and RAID 1+0:

FS - Block Device - Logical Volume - Block Device - dm-crypt - Block Device - RAID 0 - Block Device - RAID 1 - Block Device - SCSI Device - SATA disk

It will loop through the Block Device interface again and again.
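A sketch of how one of those stages loops back through the same interface (the struct names are hypothetical): the crypto layer is itself a block device, and its read is implemented by reading from the block device below it and then decrypting.

Code:
#include <stdint.h>

struct blockdev {
    uint32_t block_size;
    uint64_t block_count;
    int (*read)(struct blockdev *dev, uint64_t lba, uint32_t count, void *buf);
    void *priv;
};

/* Private data of the crypto layer: the device underneath plus a key. */
struct crypt_priv {
    struct blockdev *lower;     /* RAID, LVM, or a plain disk - it can't tell */
    uint8_t          key[32];
};

static int crypt_read(struct blockdev *dev, uint64_t lba, uint32_t count, void *buf)
{
    struct crypt_priv *c = dev->priv;
    int r = c->lower->read(c->lower, lba, count, buf);   /* recurse downward */
    if (r == 0) {
        /* decrypt_blocks(buf, count * dev->block_size, c->key);  (omitted) */
    }
    return r;
}

/* Stacking: c->lower may itself be an LVM or RAID block device whose own
   read() calls yet another blockdev, and so on down to the real disk.      */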

That is the beauty of having a generic ABI shared between lots of drivers. You can connect them any which way.

MfG
Mrvn

Posted: Tue Apr 01, 2008 6:28 am
by z180
A nice idea is a device tree. Every device in my system has a general type (char, block), an extended type, and a parent driver pointer for the device tree.
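A minimal sketch of such a node (the field names are my guesses at what is described above):

Code:
/* General device class, roughly the Unix char/block split. */
enum dev_class { DEV_CHAR, DEV_BLOCK };

struct device {
    enum dev_class  cls;        /* general type                       */
    int             ext_type;   /* extended type: disk, mouse, NIC... */
    struct device  *parent;     /* driver/bus this device hangs off   */
    const char     *name;
    void           *drvdata;    /* driver-private state               */
};

/* e.g. a USB floppy: fd0.parent = &usb_hub0, usb_hub0.parent = &pci_bus0 */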

It would also be possible to stack devices the way vnodes can be stacked (someone wrote about it; when I read it I thought of a journalling FS, a RAM FS stacked over a read-only FS allowing temporary writes, and an LZ compression FS, but my VFS can't handle stacking yet).

I also thought about STREAMS-like message-queue device connections, where a message is passed from driver to driver in a variable connection hierarchy.
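A sketch of that STREAMS-like idea (names hypothetical): every driver in the chain exposes a put routine, works on the message, and hands it to whatever happens to be connected next.

Code:
#include <stddef.h>

struct stream_msg {
    void   *data;
    size_t  len;
};

/* One module in the variable connection hierarchy. */
struct stream_module {
    const char           *name;
    struct stream_module *next;   /* set up when the chain is assembled */
    void (*put)(struct stream_module *self, struct stream_msg *m);
};

/* Typical put routine: transform the message, then pass it downstream. */
static void example_put(struct stream_module *self, struct stream_msg *m)
{
    /* ... modify m->data / m->len here ... */
    if (self->next)
        self->next->put(self->next, m);
}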

Posted: Tue Apr 01, 2008 6:53 am
by ProgressGirl
I find the Unix route a lot more satisfying because a driver can be set up to forward over anything. For example, I could forward:
/dev/fd0 over /dev/usb0
or
/dev/fd0 over /dev/inet0
or even
/dev/fd0 over /dev/hda0!

Posted: Tue Apr 01, 2008 2:30 pm
by bewing
I'm mostly worrying about how (precisely) to set it up. Do I have two types? User services and system services? A printer spooler would be a user service. Any user app can call it, and it's non-locking. It would be a two-step service, maybe: user app -> metadata driver -> spooler (to disk, until the print job is ready to print). Then the spooler would call a locking system service to send each print job's complete metadata file through a printer driver -- the spooler would need to be a "system manager process" to have enough privileges to do this.

My current visualization is that I'm going to have a "services" list, with a bunch of drivers loaded in memory. The drivers can be pipelined together to form each service. The service list describes and enumerates each pipeline. Each pipeline stage stores a structure of arguments for the driver at that stage, except for the input stream. The arguments might describe a particular instance of a hardware device for a hardware driver, or an encryption key for a compression driver. The drivers do not necessarily have to have a standard interface, because any individual quirks can be handled when initializing the service pipeline entry, if necessary -- except that the last stage of the pipeline (the hardware stage) needs to be passed extra arguments (and does need to be standardized). Hardware drivers would include interrupt handlers, and would be permanently memory resident, so they could keep track of the state of the device.
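A sketch of how that service list might look in memory (all names hypothetical): each pipeline is an array of stages, each stage pairs a driver with its frozen per-instance argument block, and the last stage is the resident hardware driver.

Code:
#include <stddef.h>

/* One loaded driver: processes an input stream, produces an output stream. */
struct driver {
    const char *name;
    int (*process)(void *args, const void *in, size_t in_len,
                   void **out, size_t *out_len);
};

/* One stage of a service pipeline: a driver plus its frozen arguments
   (device instance, encryption key, compression level, ...). */
struct pipeline_stage {
    struct driver *drv;
    void          *args;
};

/* One entry in the enumerable "services" list. */
struct service {
    const char            *description;   /* what gets shown to the user  */
    size_t                 nstages;
    struct pipeline_stage *stages;        /* stages[nstages-1] = hardware */
};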

It would be really nice if a call to a service could just set up the pipeline to work automatically -- i.e. set up n FIFO buffers between the pipelined driver stages (plus the input stage), and have all the drivers running concurrently. Perhaps a new instance of each driver gets dynamically loaded when a service that uses that driver gets called -- for "virtual" drivers that are not permanently resident. But this doesn't quite work for cached stages of the pipeline. The last half of the pipeline doesn't run until the cache decides to flush itself, and it almost certainly won't run in the same order that the service was called (repeatedly, with different input -- that was cached). This is sort of like the print spooler case above -- a pipeline can conceivably stall in the middle, at a caching point.

Alternatively, I could have some sort of service manager that parses the service pipeline, accepts a complete bytestream from one driver (or the app), and passes it to the next driver in the pipeline. But there is a problem with bytestream allocation here. Each driver needs to allocate a new bytestream (but how big? and who allocates it?), and the bytestream from the previous driver needs to be freed (only after it's been processed!).
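One way the allocation question could go in that scheme, sketched here under the assumption that each driver allocates its own output buffer: the manager only does the hand-off, frees the previous stage's buffer once the next stage has consumed it, and a nonzero return code from any stage propagates straight back up.

Code:
#include <stdlib.h>
#include <string.h>
#include <stddef.h>

/* Same hypothetical stage layout as in the sketch above. */
struct driver {
    const char *name;
    /* The driver allocates *out itself and reports its size in *out_len. */
    int (*process)(void *args, const void *in, size_t in_len,
                   void **out, size_t *out_len);
};
struct pipeline_stage { struct driver *drv; void *args; };

/* Run a whole service pipeline over one input bytestream. */
int run_service(struct pipeline_stage *stages, size_t nstages,
                const void *in, size_t in_len)
{
    void  *cur;
    size_t cur_len = in_len;
    int    rc = 0;

    /* Copy the caller's input so every buffer in the loop is ours to free. */
    cur = malloc(in_len);
    if (!cur)
        return -1;
    memcpy(cur, in, in_len);

    for (size_t i = 0; i < nstages && rc == 0; i++) {
        void  *next = NULL;
        size_t next_len = 0;
        rc = stages[i].drv->process(stages[i].args, cur, cur_len,
                                    &next, &next_len);
        free(cur);            /* previous buffer freed only after processing */
        cur = next;
        cur_len = next_len;
    }
    free(cur);                /* output of the last (hardware) stage, if any */
    return rc;                /* nonzero rc propagates the failing stage up  */
}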

Services that perform read operations have their pipelines run backwards, from the hardware driver back to the app that requested the data.

But how do I handle propagating errors out of a service pipeline? Just let the top level driver report a failure to the app, and the app gets to query the service for detailed error info?

And my whole visualization looks buggy when I contemplate a "read" process, combined with a cache. If the read is satisfied by a cache, then (by definition) it shouldn't go through the hardware driver at all. Maybe a cache stage should always pretend that it's an actual hardware driver? So it is the terminal stage of its service? And the cache accesses "real" hardware services to flush / fill its buffers?