Driver API design

Discussions on more advanced topics such as monolithic vs micro-kernels, transactional memory models, and paging vs segmentation should go here. Use this forum to expand and improve the wiki!
rdos
Member
Posts: 3286
Joined: Wed Oct 01, 2008 1:55 pm

Re: Driver API design

Post by rdos »

It differs in the sense that FileName can only be an ordinary file in the filesystem; it cannot be a device or any other special file. The formatting options (text mode) are not supported; buffers are treated as raw bytes only. Handles for different devices cannot be mixed (for instance, a socket handle or serial-port handle cannot be used in place of a file handle). When implementing the usual C functions, this complexity has to be added on top of the native interface.

Also, even though there is an access mode when opening a file (for historical reasons only), it is not implemented natively; it too needs to be implemented in the C runtime library in order to comply with the C standard.
rdos
Member
Posts: 3286
Joined: Wed Oct 01, 2008 1:55 pm

Re: Driver API design

Post by rdos »

Combuster wrote:
rdos wrote:Not. Asynchronous IO is only good for people that don't know about threads.
People that still can't deal with asynchronous systems should not be in the industry at all. Try to download something off the internet with the ability to cancel it through the UI. Do you honestly believe that such a task is allowed to be beyond a professional developer's skills?
This is why you do not mix up local file IO with sockets. A socket interface is necessarily different from a local file IO interface, which is why they have different interfaces. This is also solved without asynchronous IO in RDOS. When a thread is waiting for IO from a socket, it can combine this with waits for other objects, like a timer. A second thread can also cancel the request with a signal. There is no need for the complexity of asynchronous IO to solve this.
gerryg400
Member
Posts: 1801
Joined: Thu Mar 25, 2010 11:26 pm
Location: Melbourne, Australia

Re: Driver API design

Post by gerryg400 »

rdos wrote:It differs in the sense that FileName can only be an ordinary file in the filesystem; it cannot be a device or any other special file. The formatting options (text mode) are not supported; buffers are treated as raw bytes only. Handles for different devices cannot be mixed (for instance, a socket handle or serial-port handle cannot be used in place of a file handle). When implementing the usual C functions, this complexity has to be added on top of the native interface.

Also, even though there is an access mode when opening a file (for historical reasons only), it is not implemented natively; it too needs to be implemented in the C runtime library in order to comply with the C standard.
To me it seems that the only difference between your interface and the POSIX interface is that you have removed some features. Of course there is nothing wrong with that. However, you began by saying that you don't like the C file interface. I can't see how you have modified the interface, except for changing the names.
If a trainstation is where trains stop, what is a workstation ?
Combuster
Member
Posts: 9301
Joined: Wed Oct 18, 2006 3:45 am
Libera.chat IRC: [com]buster
Location: On the balcony, where I can actually keep 1½m distance
Contact:

Re: Driver API design

Post by Combuster »

rdos wrote:This is also solved without asynchronous IO in RDOS. When a thread is waiting for IO from a socket, it can combine this with waits for other objects
That's actually an implementation of asynchronous I/O
"Certainly avoid yourself. He is a newbie and might not realize it. You'll hate his code deeply a few years down the road." - Sortie
[ My OS ] [ VDisk/SFS ]
Brendan
Member
Posts: 8561
Joined: Sat Jan 15, 2005 12:00 am
Location: At his keyboard!
Contact:

Re: Driver API design

Post by Brendan »

Hi,
rdos wrote:
Owen wrote:I'm heavily in favor of UDI; yes, its complex, but its also highly flexible and very high performance. I think there is much good to be said for an interface which, when implemented as a wrapper for the platform's native driver interface, was found to produce higher performance drivers than those written directly for the native interface.
I don't think this is possible. Complexity always comes with a cost. It might be possible on single-threaded test applications, but when these are threaded and updated to use a clean interface, they will outperform the complex versions easily.
I'd assume that it'd be easy for UDI to give better performance than the native interface if the native interface isn't very good.

For example, there's many ways to improve disk performance; including queuing requests and performing them out of order (to minimise time wasted repositioning disk heads and/or to do more important request ahead of less important requests); allowing pending requests to be cancelled before they're actually performed (maybe the process that made the request was terminated), detecting sequential reads and implementing read-ahead (which requires some buffering/caching), postponing writes when there's more important reads to do (more buffering/caching, plus the need for flushing caches to disk when necessary), etc.

Of course it's fairly obvious that there's a compromise between "simple" and "performance".
rdos wrote:My native file API looks like this:
Yeah - that looks nice and simple to me...

To be honest, it looks like an API that applications would use when asking a simple kernel to perform file I/O; and doesn't look like an API that a kernel would use to ask a device driver to perform I/O. There are differences.


Cheers,

Brendan
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.
Kevin
Member
Posts: 1071
Joined: Sun Feb 01, 2009 6:11 am
Location: Germany
Contact:

Re: Driver API design

Post by Kevin »

Brendan wrote:For example, there's many ways to improve disk performance; including queuing requests and performing them out of order (to minimise time wasted repositioning disk heads and/or to do more important request ahead of less important requests); allowing pending requests to be cancelled before they're actually performed (maybe the process that made the request was terminated), detecting sequential reads and implementing read-ahead (which requires some buffering/caching), postponing writes when there's more important reads to do (more buffering/caching, plus the need for flushing caches to disk when necessary), etc.
But that's all stuff that a disk driver can't (or shouldn't) decide. The driver's job is just to access the hardware. Fancy policies involving things like request reordering should probably be implemented at a higher level, so they can be shared across all drivers.
Developer of tyndur - community OS of Lowlevel (German)
NickJohnson
Member
Posts: 1249
Joined: Tue Mar 24, 2009 8:11 pm
Location: Sunnyvale, California

Re: Driver API design

Post by NickJohnson »

Kevin wrote:
Brendan wrote:For example, there's many ways to improve disk performance; including queuing requests and performing them out of order (to minimise time wasted repositioning disk heads and/or to do more important request ahead of less important requests); allowing pending requests to be cancelled before they're actually performed (maybe the process that made the request was terminated), detecting sequential reads and implementing read-ahead (which requires some buffering/caching), postponing writes when there's more important reads to do (more buffering/caching, plus the need for flushing caches to disk when necessary), etc.
But that's all stuff that a disk driver can't (or shouldn't) decide. The driver's job is just to access the hardware. Fancy policies involving things like request reordering should probably be implemented at a higher level, so they can be shared across all drivers.
I disagree: would you use the same I/O scheduling for a hard drive, a write-once CD, a flash drive, and a RAIDed drive across NFS? Drivers know much more about the way their devices should be accessed than a high level policy can (without a _lot_ of extra information being passed around).
Kevin
Member
Posts: 1071
Joined: Sun Feb 01, 2009 6:11 am
Location: Germany
Contact:

Re: Driver API design

Post by Kevin »

So instead of having the drivers pass the information the I/O scheduler needs, you would duplicate the whole scheduling code in every single driver? To me that sounds just wrong.
Developer of tyndur - community OS of Lowlevel (German)
NickJohnson
Member
Posts: 1249
Joined: Tue Mar 24, 2009 8:11 pm
Location: Sunnyvale, California

Re: Driver API design

Post by NickJohnson »

Well, if you have a microkernel, you would probably have the scheduler in some sort of shared library, which would be neither space-consuming nor redundant. The scheduler would be able to be sufficiently adapted (using function pointers to override builtin policies, probably) for each driver. For a monolithic kernel, I suppose doing that is equivalent to giving information to a global scheduler, so our arguments are sort of the same.
Kevin
Member
Posts: 1071
Joined: Sun Feb 01, 2009 6:11 am
Location: Germany
Contact:

Re: Driver API design

Post by Kevin »

Right, if you use a shared lib, you're basically doing it at a higher level and using a generic scheduler for all of them. So that's really the same thing.
Developer of tyndur - community OS of Lowlevel (German)
rdos
Member
Posts: 3286
Joined: Wed Oct 01, 2008 1:55 pm

Re: Driver API design

Post by rdos »

Combuster wrote:
rdos wrote:This is also solved without asynchronous IO in RDOS. When a thread is waiting for IO from a socket, it can combine this with waits for other objects
That's actually an implementation of asynchronous I/O
Maybe, but when I think about asynchronous IO I think about the Win32 implementation, with callbacks and all kinds of nasty things. In my implementation, there is still a single blocking call for IO, which does not define a callback system for progress or premature termination. Instead, this must be planned in advance, using timeouts or signal objects for premature termination.
rdos
Member
Posts: 3286
Joined: Wed Oct 01, 2008 1:55 pm

Re: Driver API design

Post by rdos »

Brendan wrote:To be honest, it looks like an API that applications would use when asking a simple kernel to perform file I/O; and doesn't look like an API that a kernel would use to ask a device driver to perform I/O. There are differences.
Yes, you are correct. It is the higher-level interface towards the filesystem. There are several layers in between. The middle layer consists of a virtual filesystem driver that can install different types of filesystems behind a common interface. It also provides a lower-level interface for hardware devices. The middle layer also buffers file contents in physical memory for fast retrieval by applications, which by far determines the performance of file IO, especially for small requests.

As for disk scheduling, it is mostly up to the filesystem to decide what is allowed with regard to reordering requests. For instance, the FAT driver requires changes to metadata to be performed sequentially, while file data can be reordered and prefetched freely by the driver. Experimental drivers for fail-safe filesystems required ordered execution of most requests. This is a performance-safety tradeoff that usually is coded into a particular filesystem implementation.
rdos
Member
Posts: 3286
Joined: Wed Oct 01, 2008 1:55 pm

Re: Driver API design

Post by rdos »

Kevin wrote:So instead of having the drivers pass the information the I/O scheduler needs, you would duplicate the whole scheduling code in every single driver? To me that sounds just wrong.
Neither is optimal. Whether scheduling is a good idea usually depends both on the operation performed (read vs write) and on the type of data (metadata like file/dir entries vs file data). The effect of reordering (and possible loss of) data is different for directory structure and file-data contents, and this affects performance and reliability. The only piece of code that can make these decisions is the filesystem implementation.
rdos
Member
Posts: 3286
Joined: Wed Oct 01, 2008 1:55 pm

Re: Driver API design

Post by rdos »

To be a little more specific: disc drivers in RDOS are not passive recipients of higher-level calls. They contain a server thread that waits for requests to become available, gets the requests, executes them against the physical drive, and signals their completion to the virtual drive module.

The IDE drive for instance runs a loop that looks like this in pseudocode:

for (;;)
{
    count = GetDiscRequestArray(&req);
    if (req.IsRead)
    {
        for (i = 0; i < count; i++)
        {
            ReadSector();
            DiscRequestCompleted(req.ioblock);
        }
    }
    else
    {
        for (i = 0; i < count; i++)
        {
            WriteSector();
            DiscRequestCompleted(req.ioblock);
        }
    }
}

The act of signalling completion will potentially wake up blocked threads that are waiting for the IO request. The interface for the physical drive is simple and does not contain any asynchronous IO; the driver does not need to synchronize calls and it does not need to bother with disc scheduling. What it might bother with is ensuring that multiple, consecutive sectors are handled efficiently.
Brendan
Member
Posts: 8561
Joined: Sat Jan 15, 2005 12:00 am
Location: At his keyboard!
Contact:

Re: Driver API design

Post by Brendan »

Hi,
rdos wrote:
Brendan wrote:To be honest, it looks like an API that applications would use when asking a simple kernel to perform file I/O; and doesn't look like an API that a kernel would use to ask a device driver to perform I/O. There are differences.
Yes, you are correct. It is the higher-level interface towards the filesystem. There are several layers in between. The middle layer consists of a virtual filesystem driver that can install different types of filesystems behind a common interface. It also provides a lower-level interface for hardware devices. The middle layer also buffers file contents in physical memory for fast retrieval by applications, which by far determines the performance of file IO, especially for small requests.
Ok..

"applications ----> kernel API --[you are here]--> VFS & file data caches ----> file system layer --[we are here]--> drivers"
rdos wrote:As for disk-scheduling, it is mostly up to the filesystem to decide what is allowed in regards to reordering requests.
Um. Imagine a disk with 4 partitions and 4 different file systems (one in each partition). Who decides where the disk heads should move to next?
rdos wrote:For instance, the FAT driver requires changes to metadata to be performed sequentially, while file data can be reordered and prefectched freely by the driver. Experimental drivers for fail-safe flesystems exclusively required ordered execution of most requests. This is a preformance-safety tradeoff that usually is coded into a particular filesystem implementation.
There's nothing experimental about it - the disk driver (and anything between the disk driver and the file system) has to support some write synchronisation; typically in the form of atomic writes (where any previous writes are flushed, then the requested sequence of writes occur while no other writes are allowed to intervene).


Cheers,

Brendan
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.