Hi,
Combuster wrote:For my OS; it's a micro-kernel. If one process (e.g. file system code) establishes a connection to another process (e.g. "mounts" a storage device driver) and sends a message saying "I want 1234 bytes at offset 12345678", then that message goes directly to the other process. It is possible to have a middle-man (e.g. something in between the file system code and the storage device driver that does encryption), but this increases the communication overheads (task switching between processes, etc).
Even so, if you include a copy of the same encryption source code in every driver, you end up with the same bugs and the same updates repeated across all of them. And since you also want the option of doing encryption on a laptop hard drive, you'll still put that code in one place and include it in every driver.
More like I'd put the encryption in the kernel for all software to use.
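To make the overhead in the quote above concrete, here's a minimal sketch of that message path. All names are invented for illustration; the point is only that routing the same "read 1234 bytes at offset 12345678" request through a middle-man costs one extra message delivery (and, on a real micro-kernel, one extra task switch) in each direction:

```c
/* Hypothetical sketch of the message path described above.  The
 * "endpoints" are plain function calls here; a real micro-kernel
 * would deliver these messages between separate address spaces. */
#include <assert.h>
#include <stdint.h>

struct read_request {
    uint64_t offset;   /* e.g. 12345678 */
    uint32_t length;   /* e.g. 1234     */
};

static int hops;       /* counts message deliveries for the sketch */

static int driver_handle(const struct read_request *req)
{
    (void)req;
    return 0;          /* pretend the device performed the read */
}

/* Direct path: file system -> storage device driver. */
static int send_to_driver(const struct read_request *req)
{
    hops++;
    return driver_handle(req);
}

/* Path through a middle-man (e.g. a software encryption process):
 * same request, one extra delivery. */
static int send_via_middleman(const struct read_request *req)
{
    hops++;                       /* FS -> middle-man */
    return send_to_driver(req);   /* middle-man -> driver */
}
```

Counting `hops` for both paths shows the direct route takes one delivery and the middle-man route takes two, which is where the extra task-switching overhead comes from.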
Combuster wrote:Your code to the UDI environment is in one place, and it's included in every driver.
In the "driver driver" shim thing that wasn't supposed to be necessary if the OS uses UDI as its native driver interface?
Combuster wrote:Of course my example was not for my OS and was not for any one specific OS. If there are 1000 very different OSs and 5% of those OSs decide to put encryption in the storage device driver, then 5% of OSs are screwed.
Of course they aren't. Any driver without encryption can be turned into a driver with encryption because it's not hardware related at all.
Sure; but which side of the "driver interface" abstraction is encryption on? On the driver's side (where it has to be if the hardware supports encryption), or in the file system/s where it can't work, or in layer/s of unnecessary bloat between the file system and driver?
Combuster wrote:Encryption has no business in hardware drivers, except if the device does encryption in hardware.
If one specific device's hardware supports encryption, then the OS's standard interface used by all of its device drivers should include (e.g.) a "set_encryption_key()" function; and if one specific device's hardware doesn't support encryption then the OS's standard interface used by all of its device drivers should not include (e.g.) a "set_encryption_key()" function?
Did you deliberately put that contradiction in there as an attempt to turn a statement into something completely unrelated, or was that a coincidental interpretation error?
It's a deliberate contradiction. Either the driver always takes care of encryption (via hardware, a library, a kernel API, or its own code), or the driver never takes care of encryption (and it's done by other layers that can't take advantage of any encryption built into the device), or it's a bad/inconsistent abstraction.
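One consistent way to resolve this (a sketch only, with invented names, and XOR standing in for a real cipher purely to keep it short) is the "driver always takes care of encryption" option: every storage driver exposes the same `set_encryption_key()` entry point, and a driver for a device without a crypto engine implements it with a software fallback inside the driver itself:

```c
/* Hypothetical sketch: the storage driver interface always includes
 * set_encryption_key().  A device with hardware crypto would program
 * the controller; this driver has no such hardware, so it transforms
 * the data in software before it reaches the device.  The XOR below
 * is a toy stand-in and is NOT real encryption. */
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

struct storage_driver {
    int (*set_encryption_key)(struct storage_driver *drv,
                              const uint8_t *key, size_t len);
    int (*write_block)(struct storage_driver *drv,
                       uint64_t lba, uint8_t *buf, size_t len);
    uint8_t key[16];
    size_t key_len;
};

/* Software fallback for a device with no crypto engine. */
static int sw_set_key(struct storage_driver *drv,
                      const uint8_t *key, size_t len)
{
    if (len == 0 || len > sizeof drv->key)
        return -1;
    memcpy(drv->key, key, len);
    drv->key_len = len;
    return 0;
}

static int sw_write_block(struct storage_driver *drv,
                          uint64_t lba, uint8_t *buf, size_t len)
{
    (void)lba;
    for (size_t i = 0; i < len; i++)    /* toy transform in-place */
        buf[i] ^= drv->key[i % drv->key_len];
    /* ...then hand `buf` to the device as usual... */
    return 0;
}
```

The file system above this interface never needs to know whether the key ended up in a hardware register or in the driver's software path, which is exactly what keeps the abstraction consistent.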
Combuster wrote:Some OSs may be so stupid and broken that they put I/O scheduling in the block layer (where it's unable to make decisions that take into account device specific things, like where the disk drive's heads happen to be)
If they're stupid enough to suffer from general dementia, then perhaps. But most schedulers will at least make an attempt to remember what the last written block was.
Remembering where the last written block was is fairly useless for generic code (on the wrong side of the abstraction) that doesn't know if the device is SSD or mechanical disk or hybrid; or (for mechanical disk) if "block number 12345" is on the same cylinder as "block number 12346" or not.
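For comparison, here's what driver-side scheduling can do that "remember the last written block" can't: pick the pending request with the shortest seek from where the heads actually are. This is a sketch with invented names and assumed geometry (real drives don't expose simple cylinder maths any more); the point is that only the driver has the geometry to compute the distance at all:

```c
/* Hypothetical sketch: a shortest-seek-first request picker living in
 * the disk driver, where head position and geometry are known.  The
 * sectors-per-cylinder figure is assumed for illustration. */
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define SECTORS_PER_CYLINDER 1008   /* assumed geometry, not real */

static uint32_t cylinder_of(uint64_t lba)
{
    return (uint32_t)(lba / SECTORS_PER_CYLINDER);
}

/* Return the index of the pending request whose cylinder is closest
 * to the current head cylinder, or -1 if the queue is empty. */
int pick_next_request(uint32_t head_cyl, const uint64_t *pending, int count)
{
    int best = -1;
    uint32_t best_dist = UINT32_MAX;
    for (int i = 0; i < count; i++) {
        uint32_t cyl = cylinder_of(pending[i]);
        uint32_t dist = (cyl > head_cyl) ? cyl - head_cyl : head_cyl - cyl;
        if (dist < best_dist) {
            best_dist = dist;
            best = i;
        }
    }
    return best;
}
```

Generic block-layer code on the wrong side of the abstraction can't implement `cylinder_of()` at all, and for an SSD the whole notion of seek distance is meaningless, which is the point being made above.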
Combuster wrote:Do you even know what a device driver interface is? It's supposed to be a clean abstraction that covers a super-set of features that all devices provide.
Therefore all video cards must provide an audio interface because they might have an HDMI port? Therefore all sound cards have to provide a storage interface because old SoundBlasters came with their own CD bus controller when ATAPI wasn't a thing yet? Or a simple hard disk has to know the proper responses to anything a Blu-ray burner might be doing?
Here's the part you deliberately ignored for your misguided and juvenile attempt to take my words out of context: "
Of course there are many different categories for devices (e.g. storage device, sound, printer, network, video, etc) and therefore (to keep the abstraction for a specific category clean) there's a set of different device driver interfaces (one for each category).
Basically; one interface for storage devices (that includes things like read, write, media change, secure erase, trim, etc); one interface for sound (where nothing used for storage devices makes any sense; that includes things like sending/receiving streams of digitised sound and/or MIDI, volume and balance controls, mixers, effects like echo, etc); one interface for printers (where nothing used for storage devices or sound cards makes any sense); etc."
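The "one interface per category" idea above can be sketched as two unrelated operation tables that share nothing, rather than one super-interface covering both. All of the names here are invented for illustration:

```c
/* Hypothetical sketch: separate per-category driver interfaces.
 * Nothing in the storage table makes sense for a sound card and
 * vice versa, so they are distinct types with no common super-set. */
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

struct storage_ops {
    int (*read)(uint64_t lba, void *buf, size_t len);
    int (*write)(uint64_t lba, const void *buf, size_t len);
    int (*trim)(uint64_t lba, size_t len);
    int (*secure_erase)(void);
    int (*media_changed)(void);
};

struct sound_ops {
    int (*submit_pcm)(const int16_t *samples, size_t count);
    int (*set_volume)(int left, int right);
    int (*send_midi)(const uint8_t *msg, size_t len);
};

/* A driver fills in only the operations its category defines; a
 * no-op trim for a device that supports the call but has nothing
 * to discard might look like this. */
static int null_trim(uint64_t lba, size_t len)
{
    (void)lba;
    (void)len;
    return 0;
}
```

Because the two tables are different types, a file system can only ever be handed a `struct storage_ops`, so the "must a hard disk answer Blu-ray burner requests?" problem never arises at the type level.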
Combuster wrote:This is why in practice, interfaces actually are a subset rather than a superset, and they offer extensions when they are relevant for that device.
SCSI is not a clean abstraction
Are you suggesting that because a bus protocol is too generic, you're not allowed to write drivers for the host controller, and by extension have no option but to leave its child devices unsupported as well?
I'm suggesting that the SCSI command set is not a clean abstraction for any of the different categories of devices. Furthermore, I will suggest that UDI doesn't provide adequate abstractions for any of the different categories of devices; except for SCSI controllers and USB controllers.
Cheers,
Brendan