
Re: Driver API design

Posted: Sat Sep 11, 2010 6:28 am
by rdos
It differs in the sense that FileName can only be an ordinary file in the filesystem; it cannot be a device or any other special file. The formatting options (text mode) are not supported; buffers are treated as raw bytes only. Handles for different devices cannot be mixed (for instance, a socket handle or serial-port handle cannot be used in place of a file handle). When implementing the usual C functions, this complexity has to be added on top of the native interface.

Also, even though (for historical reasons only) there is an access mode when opening a file, it is not implemented natively; it too has to be implemented in the C runtime library in order to comply with the C standard.

Re: Driver API design

Posted: Sat Sep 11, 2010 6:34 am
by rdos
Combuster wrote:
rdos wrote:Not. Asynchronous IO is only good for people that don't know about threads.
People that still can't deal with asynchronous systems should not be in the industry at all. Try to download something off the internet with the ability to cancel it through the UI. Do you honestly believe that such a task is allowed to be beyond a professional developer's skills?
This is why you do not mix up local file-IO with sockets. A socket interface is necessarily different from a local file-IO interface, which is why they have different interfaces. This is also solved without asynchronous IO in RDOS. When a thread is waiting for IO from a socket, it can combine this with waits for other objects, like a timer. A second thread can also cancel the request with a signal. There is no need for the complexity of asynchronous IO in order to solve this.

Re: Driver API design

Posted: Sat Sep 11, 2010 6:47 am
by gerryg400
rdos wrote:It differs in the sense that FileName can only be an ordinary file in the filesystem; it cannot be a device or any other special file. The formatting options (text mode) are not supported; buffers are treated as raw bytes only. Handles for different devices cannot be mixed (for instance, a socket handle or serial-port handle cannot be used in place of a file handle). When implementing the usual C functions, this complexity has to be added on top of the native interface.

Also, even though (for historical reasons only) there is an access mode when opening a file, it is not implemented natively; it too has to be implemented in the C runtime library in order to comply with the C standard.
To me it seems that the only difference between your interface and the Posix interface is that you have removed some features. Of course there is nothing wrong with that. However, you began by saying that you don't like the C file interface. I can't see how you have modified the interface, apart from changing the names.

Re: Driver API design

Posted: Sat Sep 11, 2010 6:56 am
by Combuster
rdos wrote:This is also solved without asynchronous IO in RDOS. When a thread is waiting for IO from a socket, it can combine this with waits for other objects
That's actually an implementation of asynchronous I/O.

Re: Driver API design

Posted: Sat Sep 11, 2010 9:08 am
by Brendan
Hi,
rdos wrote:
Owen wrote:I'm heavily in favor of UDI; yes, its complex, but its also highly flexible and very high performance. I think there is much good to be said for an interface which, when implemented as a wrapper for the platform's native driver interface, was found to produce higher performance drivers than those written directly for the native interface.
I don't think this is possible. Complexity always comes with a cost. It might be possible on single-threaded test applications, but when these are threaded and updated to use a clean interface, they will outperform the complex versions easily.
I'd assume that it'd be easy for UDI to give better performance than the native interface if the native interface isn't very good.

For example, there are many ways to improve disk performance; including queuing requests and performing them out of order (to minimise time wasted repositioning disk heads and/or to do more important requests ahead of less important ones); allowing pending requests to be cancelled before they're actually performed (maybe the process that made the request was terminated); detecting sequential reads and implementing read-ahead (which requires some buffering/caching); postponing writes when there's more important reads to do (more buffering/caching, plus the need for flushing caches to disk when necessary); etc.
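As a concrete illustration of the queuing idea, here is a minimal sketch of an elevator-style request queue in C. All of the names (disk_request, queue_insert, pick_next) are hypothetical, and a real driver would also need locking, priorities and cancellation on top of this:

#include <stddef.h>
#include <stdint.h>

/* One pending disk request; hypothetical structure for illustration. */
struct disk_request {
    uint64_t lba;               /* first sector of the request */
    int      write;             /* nonzero for writes */
    struct disk_request *next;
};

/* Singly-linked queue kept sorted by LBA, so servicing it sweeps the
   disk head mostly in one direction (a simple elevator policy). */
static struct disk_request *queue;

void queue_insert(struct disk_request *req)
{
    struct disk_request **p = &queue;
    while (*p && (*p)->lba < req->lba)
        p = &(*p)->next;
    req->next = *p;
    *p = req;
}

/* Pick the pending request at or beyond the current head position;
   wrap around to the lowest LBA when the end of the sweep is reached. */
struct disk_request *pick_next(uint64_t head_lba)
{
    struct disk_request **p = &queue;
    while (*p && (*p)->lba < head_lba)
        p = &(*p)->next;
    if (!*p)
        p = &queue;             /* wrap: start the sweep over */
    struct disk_request *req = *p;
    if (req)
        *p = req->next;         /* unlink the chosen request */
    return req;
}

Read-ahead, priorities and cancellation would then be layers built on top of the same queue.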

Of course it's fairly obvious that there's a compromise between "simple" and "performance".
rdos wrote:My native file API looks like this:
Yeah - that looks nice and simple to me...

To be honest, it looks like an API that applications would use when asking a simple kernel to perform file I/O; and doesn't look like an API that a kernel would use to ask a device driver to perform I/O. There are differences.


Cheers,

Brendan

Re: Driver API design

Posted: Sat Sep 11, 2010 9:47 am
by Kevin
Brendan wrote:For example, there are many ways to improve disk performance; including queuing requests and performing them out of order (to minimise time wasted repositioning disk heads and/or to do more important requests ahead of less important ones); allowing pending requests to be cancelled before they're actually performed (maybe the process that made the request was terminated); detecting sequential reads and implementing read-ahead (which requires some buffering/caching); postponing writes when there's more important reads to do (more buffering/caching, plus the need for flushing caches to disk when necessary); etc.
But that's all stuff that a disk driver can't (or shouldn't) decide. The driver's job is just to access the hardware. Fancy policies involving things like request reordering should probably be implemented at a higher level, so they can be shared across all drivers.

Re: Driver API design

Posted: Sat Sep 11, 2010 9:56 am
by NickJohnson
Kevin wrote:
Brendan wrote:For example, there are many ways to improve disk performance; including queuing requests and performing them out of order (to minimise time wasted repositioning disk heads and/or to do more important requests ahead of less important ones); allowing pending requests to be cancelled before they're actually performed (maybe the process that made the request was terminated); detecting sequential reads and implementing read-ahead (which requires some buffering/caching); postponing writes when there's more important reads to do (more buffering/caching, plus the need for flushing caches to disk when necessary); etc.
But that's all stuff that a disk driver can't (or shouldn't) decide. The driver's job is just to access the hardware. Fancy policies involving things like request reordering should probably be implemented at a higher level, so they can be shared across all drivers.
I disagree: would you use the same I/O scheduling for a hard drive, a write-once CD, a flash drive, and a RAIDed drive across NFS? Drivers know much more about the way their devices should be accessed than a high-level policy can (without a _lot_ of extra information being passed around).

Re: Driver API design

Posted: Sat Sep 11, 2010 10:08 am
by Kevin
So instead of having the drivers pass the information the I/O scheduler needs, you would duplicate the whole scheduling code in every single driver? To me that sounds just wrong.

Re: Driver API design

Posted: Sat Sep 11, 2010 10:16 am
by NickJohnson
Well, if you have a microkernel, you would probably put the scheduler in some sort of shared library, which would be neither space-consuming nor redundant. The scheduler could then be adapted to each driver (probably using function pointers to override the built-in policies). For a monolithic kernel, I suppose doing that is equivalent to giving the information to a global scheduler, so our arguments are more or less the same.
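For what it's worth, a minimal sketch in C of what such a shared scheduler library might look like, with a function-pointer hook for the per-driver policy; the names (io_sched, sched_policy, fifo_pick) are made up for illustration:

#include <stddef.h>
#include <stdint.h>

struct io_request {
    uint64_t lba;
    struct io_request *next;
};

/* Per-driver policy: given the pending list and the current head
   position, choose (and unlink) the next request to service. */
typedef struct io_request *(*sched_policy)(struct io_request **pending,
                                           uint64_t head_lba);

struct io_sched {
    struct io_request *pending;
    uint64_t           head_lba;
    sched_policy       pick;      /* driver overrides this hook */
};

/* Default policy: plain FIFO, suitable when reordering is unsafe
   (e.g. a write-once medium). */
static struct io_request *fifo_pick(struct io_request **pending,
                                    uint64_t head_lba)
{
    (void)head_lba;
    struct io_request *req = *pending;
    if (req)
        *pending = req->next;
    return req;
}

struct io_request *sched_next(struct io_sched *s)
{
    sched_policy pick = s->pick ? s->pick : fifo_pick;
    struct io_request *req = pick(&s->pending, s->head_lba);
    if (req)
        s->head_lba = req->lba;   /* track head position for the policy */
    return req;
}

A rotating-disk driver would install an elevator policy here, while a flash or write-once driver could keep the FIFO default, so the machinery is shared but the policy is per-device.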

Re: Driver API design

Posted: Sat Sep 11, 2010 10:28 am
by Kevin
Right, if you use a shared lib, you're basically doing it at a higher level and using a generic scheduler for all of them. So that's really the same thing.

Re: Driver API design

Posted: Sat Sep 11, 2010 10:38 am
by rdos
Combuster wrote:
rdos wrote:This is also solved without asynchronous IO in RDOS. When a thread is waiting for IO from a socket, it can combine this with waits for other objects
That's actually an implementation of asynchronous I/O.
Maybe, but when I think about asynchronous IO, I think about the Win32 implementation with callbacks and all kinds of nasty things. In my implementation there is still a single blocking call for IO, which does not define a callback system for progress or premature termination. Instead, premature termination must be planned for in advance, using timeouts or signal objects.
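A rough sketch in C of the blocking pattern described here; every name below (wait_for_any, create_timeout, the signal object and so on) is a hypothetical stand-in, since the actual RDOS calls are not shown in this thread:

/* Hypothetical handle type and primitives; not the real RDOS API. */
typedef int handle_t;

handle_t create_timeout(int milliseconds);  /* signalled after n ms */
handle_t create_signal(void);
void     signal_raise(handle_t sig);        /* called from a second thread */
handle_t wait_for_any(handle_t *handles, int count);  /* blocks; returns
                                                         the woken handle */
int      socket_read(handle_t sock, void *buf, int size);

int read_with_cancel(handle_t sock, handle_t cancel_sig,
                     void *buf, int size)
{
    handle_t timeout = create_timeout(5000);
    handle_t waits[3] = { sock, timeout, cancel_sig };

    /* One blocking call; no callbacks. The timeout and the signal
       object cover premature termination. */
    if (wait_for_any(waits, 3) != sock)
        return -1;                     /* timed out or cancelled */

    return socket_read(sock, buf, size);  /* data is ready now */
}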

Re: Driver API design

Posted: Sat Sep 11, 2010 10:54 am
by rdos
Brendan wrote:To be honest, it looks like an API that applications would use when asking a simple kernel to perform file I/O; and doesn't look like an API that a kernel would use to ask a device driver to perform I/O. There are differences.
Yes, you are correct. It is the higher-level interface towards the filesystem; there are several layers in between. The middle layer consists of a virtual filesystem driver that plugs different types of filesystems into a common interface. It also provides a lower-level interface for hardware devices. The middle layer additionally buffers file contents in physical memory for fast retrieval by applications, which by far determines file-IO performance, especially for small requests.
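The common way to put several filesystems behind one interface like this is an operations table that the VFS dispatches through; a minimal sketch in C, with hypothetical names rather than the actual RDOS structures:

#include <stddef.h>
#include <stdint.h>

struct fs_instance;   /* one mounted filesystem */

/* Hook table each filesystem driver fills in when it registers. */
struct fs_ops {
    int (*open) (struct fs_instance *fs, const char *name);
    int (*read) (struct fs_instance *fs, int file,
                 void *buf, size_t size, uint64_t offset);
    int (*write)(struct fs_instance *fs, int file,
                 const void *buf, size_t size, uint64_t offset);
    int (*close)(struct fs_instance *fs, int file);
};

struct fs_instance {
    const struct fs_ops *ops;   /* e.g. FAT, ext2, ... */
    void *private_data;         /* driver-specific state */
};

/* The VFS entry point just dispatches; the file-content cache
   described above would sit here, in front of ops->read. */
int vfs_read(struct fs_instance *fs, int file,
             void *buf, size_t size, uint64_t offset)
{
    return fs->ops->read(fs, file, buf, size, offset);
}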

As for disk scheduling, it is mostly up to the filesystem to decide what is allowed in regards to reordering requests. For instance, the FAT driver requires changes to metadata to be performed sequentially, while file data can be reordered and prefetched freely by the driver. Experimental drivers for fail-safe filesystems required strictly ordered execution of most requests. This is a performance-safety tradeoff that is usually coded into a particular filesystem implementation.

Re: Driver API design

Posted: Sat Sep 11, 2010 11:05 am
by rdos
Kevin wrote:So instead of having the drivers pass the information the I/O scheduler needs, you would duplicate the whole scheduling code in every single driver? To me that sounds just wrong.
Neither extreme is optimal. Whether scheduling is a good idea usually depends both on the operation performed (read vs write) and on the type of data (metadata like file/directory entries vs file data). The effects of reordering (and possible loss) of data are different for directory structures and for file-data contents, and this affects both performance and reliability. The only piece of code that can make these decisions is the filesystem implementation.
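One way to express this without moving the whole scheduler into the filesystem is a per-request ordering constraint that the filesystem sets and the lower layers must honour; a hypothetical sketch in C:

#include <stdint.h>

/* Ordering constraint the filesystem attaches to each request;
   hypothetical names, for illustration only. */
enum req_order {
    REQ_REORDERABLE,   /* file data: scheduler may reorder freely */
    REQ_ORDERED,       /* metadata: keep issue order relative to other
                          REQ_ORDERED requests on this volume */
    REQ_BARRIER        /* complete everything queued before this one */
};

struct disk_req {
    uint64_t       lba;
    uint32_t       count;
    int            write;
    enum req_order order;
};

/* The scheduler only considers swapping two requests when both
   sides allow it. */
static int may_reorder(const struct disk_req *a, const struct disk_req *b)
{
    return a->order == REQ_REORDERABLE && b->order == REQ_REORDERABLE;
}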

Re: Driver API design

Posted: Sat Sep 11, 2010 11:38 am
by rdos
To be a little more specific: disc drivers in RDOS are not passive recipients of higher-level calls. They contain a server thread that waits for requests to become available, fetches the requests, executes them against the physical drive, and signals their completion to the virtual drive module.

The IDE driver, for instance, runs a loop that looks like this in pseudocode:

for (;;)
{
    /* Block until the virtual drive module has queued work; count is
       the number of sectors described by the request array. */
    count = GetDiscRequestArray(&req);

    if (req.IsRead)
    {
        for (i = 0; i < count; i++)
        {
            ReadSector();
            /* Signal completion; this may wake threads blocked on
               the io request. */
            DiscRequestCompleted(req.ioblock);
        }
    }
    else
    {
        for (i = 0; i < count; i++)
        {
            WriteSector();
            DiscRequestCompleted(req.ioblock);
        }
    }
}

Signalling completion will potentially wake up blocked threads that are waiting on the io request. The interface to the physical drive is simple and does not contain any asynchronous IO; the driver does not need to synchronize calls, and it does not need to bother with disc scheduling. What it might bother with is ensuring that multiple consecutive sectors are handled efficiently.

Re: Driver API design

Posted: Sat Sep 11, 2010 11:58 am
by Brendan
Hi,
rdos wrote:
Brendan wrote:To be honest, it looks like an API that applications would use when asking a simple kernel to perform file I/O; and doesn't look like an API that a kernel would use to ask a device driver to perform I/O. There are differences.
Yes, you are correct. It is the higher-level interface towards the filesystem; there are several layers in between. The middle layer consists of a virtual filesystem driver that plugs different types of filesystems into a common interface. It also provides a lower-level interface for hardware devices. The middle layer additionally buffers file contents in physical memory for fast retrieval by applications, which by far determines file-IO performance, especially for small requests.
Ok..

"applications ----> kernel API --[you are here]--> VFS & file data caches ----> file system layer --[we are here]--> drivers"
rdos wrote:As for disk-scheduling, it is mostly up to the filesystem to decide what is allowed in regards to reordering requests.
Um. Imagine a disk with 4 partitions and 4 different file systems (one in each partition). Who decides where the disk heads should move to next?
rdos wrote:For instance, the FAT driver requires changes to metadata to be performed sequentially, while file data can be reordered and prefetched freely by the driver. Experimental drivers for fail-safe filesystems required strictly ordered execution of most requests. This is a performance-safety tradeoff that is usually coded into a particular filesystem implementation.
There's nothing experimental about it - the disk driver (and anything between the disk driver and the file system) has to support some write synchronisation; typically in the form of atomic writes (where any previous writes are flushed, then the requested sequence of writes occurs while no other writes are allowed to intervene).
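A minimal sketch in C of what such an atomic-write interface might look like; all names are hypothetical, not any particular system's API:

#include <stddef.h>
#include <stdint.h>

/* Flush everything already queued, then hold off other writers so
   nothing can intervene until the group is committed. */
int disk_atomic_begin(int drive);

/* Queue one write belonging to the current atomic group. */
int disk_write(int drive, uint64_t lba, const void *buf, size_t sectors);

/* Commit the group to the medium, then let other writers proceed. */
int disk_atomic_end(int drive);

/* Example: a FAT-style driver updating two pieces of metadata that
   must reach the disk as one unit. */
int update_metadata(int drive, uint64_t fat_lba, uint64_t dir_lba,
                    const void *fat_sector, const void *dir_sector)
{
    if (disk_atomic_begin(drive) < 0)
        return -1;
    disk_write(drive, fat_lba, fat_sector, 1);
    disk_write(drive, dir_lba, dir_sector, 1);
    return disk_atomic_end(drive);
}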


Cheers,

Brendan