Driver API design

rdos
Member
Posts: 3286
Joined: Wed Oct 01, 2008 1:55 pm

Re: Driver API design

Post by rdos »

Brendan wrote:Um. Imagine a disk with 4 partitions and 4 different file systems (one in each partition). Who decides where the disk heads should move to next?
The virtual drive queues do. Accesses that must complete in order are always served before scheduled reads, for reliability reasons. So the driver will always serve FAT metadata changes in sequential order first, and then use disc-scheduling algorithms for other requests. Optionally, a driver might serve scheduled accesses in between, but it must never reorder accesses that must complete in order. It is up to the filesystem to issue suitable requests (complete in order, schedulable reads, delayed writes).
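A rough sketch of the idea in C (hypothetical names and structures, not the actual RDOS driver interface): each virtual drive keeps separate queues per request class, and the ordered queue is always drained in submission order before any elevator-scheduled requests are considered.

```c
/* Hypothetical request classes; names are illustrative only. */
enum req_kind {
    REQ_ORDERED,    /* must complete in submission order (e.g. FAT metadata) */
    REQ_SCHEDULED,  /* read that the disc scheduler may reorder */
    REQ_DELAYED     /* write that can be deferred and coalesced */
};

struct disk_req {
    enum req_kind    kind;
    unsigned long    lba;       /* starting sector */
    void            *buf;
    unsigned         count;     /* sectors */
    struct disk_req *next;
};

struct drive_queue {
    struct disk_req *ordered;   /* FIFO: never reordered */
    struct disk_req *scheduled; /* candidates for elevator/nearest-first picks */
    struct disk_req *delayed;   /* background writeback when otherwise idle */
};

/* Placeholder disc-scheduling step: pick the request nearest the head. */
static struct disk_req *pick_nearest(struct disk_req *list, unsigned long head)
{
    struct disk_req *best = list;
    for (struct disk_req *r = list; r; r = r->next) {
        unsigned long dr = r->lba    > head ? r->lba    - head : head - r->lba;
        unsigned long db = best->lba > head ? best->lba - head : head - best->lba;
        if (dr < db)
            best = r;
    }
    return best;
}

/* Choose the next request to issue: ordered traffic always wins. */
static struct disk_req *next_request(struct drive_queue *q, unsigned long head)
{
    if (q->ordered)
        return q->ordered;                    /* strict submission order */
    if (q->scheduled)
        return pick_nearest(q->scheduled, head);
    return q->delayed;                        /* may be NULL: nothing to do */
}
```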
rdos
Member
Posts: 3286
Joined: Wed Oct 01, 2008 1:55 pm

Re: Driver API design

Post by rdos »

Brendan wrote:There's nothing experimental about it - the disk driver (and anything between the disk driver and the file system) has to support some write synchronisation; typically in the form of atomic writes (where any previous writes are flushed, then the requested sequence of writes occurs while no other writes are allowed to intervene).
The ordered request scheme I described above takes care of this, and probably more efficiently. The filesystem queues these writes as ordered requests; it does not need to flush or wait for their completion. Also, ordered writes for one filesystem can be intermixed with ordered writes for other filesystems, as well as with potentially delayed writes and scheduled reads.
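To illustrate the difference (again with hypothetical names, building on the sketch above): the filesystem just tags its metadata writes as ordered and returns immediately; there is no flush barrier, and the per-partition ordered streams can be interleaved by the driver.

```c
/* Stand-ins for the driver's queue plumbing (hypothetical). */
struct disk_req *alloc_req(void);
void append_ordered(struct drive_queue *q, struct disk_req *r);

/* Hypothetical submission path: no flush, no wait for completion.
 * Each partition/filesystem has its own drive_queue, so ordered streams
 * from different filesystems may be interleaved with each other and with
 * scheduled reads and delayed writes. */
void fs_submit_metadata_write(struct drive_queue *q, unsigned long lba,
                              void *buf, unsigned count)
{
    struct disk_req *r = alloc_req();
    r->kind  = REQ_ORDERED;
    r->lba   = lba;
    r->buf   = buf;
    r->count = count;
    append_ordered(q, r);   /* FIFO append; the driver preserves this order */
    /* the caller continues without waiting for the write to hit the disc */
}
```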
Owen
Member
Posts: 1700
Joined: Fri Jun 13, 2008 3:21 pm
Location: Cambridge, United Kingdom

Re: Driver API design

Post by Owen »

rdos wrote:
Brendan wrote:There's nothing experimental about it - the disk driver (and anything between the disk driver and the file system) has to support some write synchronisation; typically in the form of atomic writes (where any previous writes are flushed, then the requested sequence of writes occurs while no other writes are allowed to intervene).
The ordered request scheme I described above takes care of this, and probably more efficiently. The filesystem queues these writes as ordered requests; it does not need to flush or wait for their completion. Also, ordered writes for one filesystem can be intermixed with ordered writes for other filesystems, as well as with potentially delayed writes and scheduled reads.
So you complete disk writes in the order they arrive, ensuring that the disk spends 80% of its time seeking?
rdos
Member
Posts: 3286
Joined: Wed Oct 01, 2008 1:55 pm

Re: Driver API design

Post by rdos »

Owen wrote:So you complete disk writes in the order they arrive, ensuring that the disk spends 80% of its time seeking?
In the current implementation of FAT, I only do this for disc writes related to metadata (directory entries and allocation tables), not for file data. This is because errors in these areas, caused by power losses or reboots, could leave the disc full of errors that I cannot correct for. Since RDOS targets embedded systems, corrupt discs are not acceptable.
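In terms of the earlier sketch, the classification could look like this (hypothetical, not the actual RDOS FAT driver): only allocation-table and directory-entry writes go into the ordered stream, while plain file data takes the scheduled/delayed paths.

```c
/* Hypothetical classification in a FAT driver: only metadata writes are
 * ordered, because a half-completed metadata write can corrupt the volume,
 * while a half-written data cluster only damages that one file. */
enum req_kind classify_fat_write(int is_alloc_table, int is_dir_entry)
{
    if (is_alloc_table || is_dir_entry)
        return REQ_ORDERED;   /* must survive power loss in a known order */
    return REQ_DELAYED;       /* file data can be deferred and coalesced */
}
```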
rdos
Member
Posts: 3286
Joined: Wed Oct 01, 2008 1:55 pm

Re: Driver API design

Post by rdos »

berkus wrote:Shouldn't metadata updates collect in the metadata log area, with a single atomic write to switch to the new version, then?
In reality, there is no such thing as an atomic write. Any write can be interrupted by a power failure and then complete only partially. By making careful modifications to directory entries and allocation tables, and ensuring that these happen in a known order, it is possible to limit the damage to lost allocation clusters and minor errors in directory entries that the FAT driver can correct at runtime.
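As an example of such a known order, here is a sketch (hypothetical helpers, not RDOS's actual code) of extending a file by one cluster: the allocation-table updates go first and the directory entry last, so a power failure at any point leaves at worst a lost cluster chain that a later scan can reclaim.

```c
#define FAT_EOC 0x0FFFFFFFu   /* end-of-chain marker (FAT32) */

/* Minimal per-file bookkeeping for the sketch. */
struct fat_file {
    unsigned first_cluster;
    unsigned last_cluster;
    unsigned dir_lba;      /* sector holding the directory entry */
    unsigned dir_index;    /* entry index within that sector */
    unsigned size;
};

/* Hypothetical ordered writes; the driver completes them in this sequence. */
void fat_write_entry(unsigned cluster, unsigned value);
void dir_update_entry(unsigned lba, unsigned index, unsigned size,
                      unsigned first_cluster);

void fat_append_cluster(struct fat_file *f, unsigned new_cluster)
{
    /* 1. Mark the new cluster as end-of-chain.
     *    Power loss after this step: the cluster is merely "lost". */
    fat_write_entry(new_cluster, FAT_EOC);

    /* 2. Link it onto the file's existing chain.
     *    Power loss after this step: the chain is longer than the
     *    directory entry claims, which is still recoverable. */
    fat_write_entry(f->last_cluster, new_cluster);

    /* 3. Update the directory entry last, so the file only "owns" the
     *    new data once all allocation-table updates are on disc. */
    dir_update_entry(f->dir_lba, f->dir_index, f->size, f->first_cluster);

    f->last_cluster = new_cluster;
}
```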