Driver API design
Re: Driver API design
It differs in the sense that FileName can only be an ordinary file in the filesystem; it cannot be a device or any other special file. Formatting options (text mode) are not supported; buffers are treated as raw bytes only. Handles for different devices cannot be mixed (for instance, a socket handle or serial-port handle cannot be used in place of a file handle). When implementing the usual C functions, that complexity has to be added on top of the native interface.
Also, even though (for historical reasons only) there is an access mode when opening a file, it is not implemented in the native interface; it too has to be implemented in the C runtime library in order to comply with the C standard.
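As a purely illustrative sketch of what such a narrowed interface could look like (the names below are hypothetical, not the actual RDOS entry points), note the dedicated handle type, the absence of special files and text-mode translation, and the lack of access modes:

/* Hypothetical sketch only; not the real RDOS API. */
typedef struct FileHandle FileHandle;   /* opaque; distinct from socket or serial-port handles */

FileHandle *OpenFile(const char *FileName);                   /* ordinary files only, no devices   */
FileHandle *CreateFile(const char *FileName);
long GetFileSize(FileHandle *handle);
int  ReadFile(FileHandle *handle, void *buf, unsigned size);  /* raw bytes, no text-mode handling  */
int  WriteFile(FileHandle *handle, const void *buf, unsigned size);
void CloseFile(FileHandle *handle);

/* Access modes and text translation would then be added in the C runtime
   library on top of this, in order to comply with the C standard. */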
Re: Driver API design
This is why you do not mix up local file-IO with sockets. A socket interface is necessarily different from a local-file-IO interface, which is why they have different interfaces. This is also solved without asynchronous IO in RDOS. When a thread is waiting for IO from a socket, it can combine this with waits for other objects, like a timer. A second thread can also cancel the request with a signal. There is no need for the complexity of asynchronous IO in order to solve this.
Combuster wrote: People that still can't deal with asynchronous systems should not be in the industry at all. Try to download something off the internet with the ability to cancel it through the UI. Do you honestly believe that such a task is allowed to be beyond a professional developer's skills?
rdos wrote: Not. Asynchronous IO is only good for people that don't know about threads.
Re: Driver API design
To me it seems that the only difference between your interface and the POSIX interface is that you have removed some features. Of course there is nothing wrong with that. However, you began by saying that you don't like the C file interface. I can't see how you have modified the interface, except for changing the names.
rdos wrote: It differs in the sense that FileName can only be an ordinary file in the filesystem; it cannot be a device or any other special file. Formatting options (text mode) are not supported; buffers are treated as raw bytes only. Handles for different devices cannot be mixed (for instance, a socket handle or serial-port handle cannot be used in place of a file handle). When implementing the usual C functions, that complexity has to be added on top of the native interface.
Also, even though (for historical reasons only) there is an access mode when opening a file, it is not implemented in the native interface; it too has to be implemented in the C runtime library in order to comply with the C standard.
If a trainstation is where trains stop, what is a workstation ?
Re: Driver API design
That's actually an implementation of asynchronous I/O.
rdos wrote: This is also solved without asynchronous IO in RDOS. When a thread is waiting for IO from a socket, it can combine this with waits for other objects
Re: Driver API design
Hi,
I'd assume that it'd be easy for UDI to give better performance than the native interface if the native interface isn't very good.
rdos wrote: I don't think this is possible. Complexity always comes with a cost. It might be possible on single-threaded test applications, but when these are threaded and updated to use a clean interface, they will outperform the complex versions easily.
Owen wrote: I'm heavily in favor of UDI; yes, it's complex, but it's also highly flexible and very high performance. I think there is much good to be said for an interface which, when implemented as a wrapper for the platform's native driver interface, was found to produce higher-performance drivers than those written directly for the native interface.
For example, there are many ways to improve disk performance; including queuing requests and performing them out of order (to minimise time wasted repositioning disk heads and/or to do more important requests ahead of less important requests); allowing pending requests to be cancelled before they're actually performed (maybe the process that made the request was terminated); detecting sequential reads and implementing read-ahead (which requires some buffering/caching); postponing writes when there are more important reads to do (more buffering/caching, plus the need for flushing caches to disk when necessary); etc.
Of course it's fairly obvious that there's a compromise between "simple" and "performance".
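As a rough sketch of the kind of request queuing and cancellation described above (everything here is invented for illustration; it is not any particular OS's or driver's API), a queue of pending requests only needs a little per-request information to allow reordering, prioritising and cancelling:

#include <stddef.h>

/* Illustrative sketch only: a minimal sorted request queue that supports
   reordering by disk position, priorities, and cancelling pending requests. */
typedef struct DiskRequest {
    unsigned long long lba;        /* starting sector */
    unsigned           count;      /* number of sectors */
    int                is_write;
    int                priority;   /* higher = more urgent */
    int                cancelled;  /* set if the owning process was terminated */
    struct DiskRequest *next;
} DiskRequest;

static DiskRequest *queue_head;

/* Insert sorted by priority, then by LBA, so requests can be serviced in an
   order that reduces head movement while urgent requests jump the queue. */
void queue_request(DiskRequest *req)
{
    DiskRequest **p = &queue_head;
    while (*p && ((*p)->priority > req->priority ||
                 ((*p)->priority == req->priority && (*p)->lba <= req->lba)))
        p = &(*p)->next;
    req->next = *p;
    *p = req;
}

/* Pop the next request to execute, dropping ones that were cancelled
   while they waited in the queue. */
DiskRequest *next_request(void)
{
    DiskRequest *req;
    while ((req = queue_head) != NULL) {
        queue_head = req->next;
        if (!req->cancelled)
            return req;
        /* a cancelled request would be completed with an error here */
    }
    return NULL;
}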
Yeah - that looks nice and simple to me...
rdos wrote: My native file API looks like this:
To be honest, it looks like an API that applications would use when asking a simple kernel to perform file I/O; and doesn't look like an API that a kernel would use to ask a device driver to perform I/O. There are differences.
Cheers,
Brendan
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.
Re: Driver API design
But that's all stuff that a disk driver can't (or shouldn't) decide. The driver's job is just to access the hardware. Fancy policies involving things like request reordering should probably be implemented at a higher level, so they can be shared across all drivers.
Brendan wrote: For example, there are many ways to improve disk performance; including queuing requests and performing them out of order (to minimise time wasted repositioning disk heads and/or to do more important requests ahead of less important requests); allowing pending requests to be cancelled before they're actually performed (maybe the process that made the request was terminated); detecting sequential reads and implementing read-ahead (which requires some buffering/caching); postponing writes when there are more important reads to do (more buffering/caching, plus the need for flushing caches to disk when necessary); etc.
Re: Driver API design
I disagree: would you use the same I/O scheduling for a hard drive, a write-once CD, a flash drive, and a RAIDed drive across NFS? Drivers know much more about the way their devices should be accessed than a high-level policy can (without a _lot_ of extra information being passed around).
Kevin wrote: But that's all stuff that a disk driver can't (or shouldn't) decide. The driver's job is just to access the hardware. Fancy policies involving things like request reordering should probably be implemented at a higher level, so they can be shared across all drivers.
Brendan wrote: For example, there are many ways to improve disk performance; including queuing requests and performing them out of order (to minimise time wasted repositioning disk heads and/or to do more important requests ahead of less important requests); allowing pending requests to be cancelled before they're actually performed (maybe the process that made the request was terminated); detecting sequential reads and implementing read-ahead (which requires some buffering/caching); postponing writes when there are more important reads to do (more buffering/caching, plus the need for flushing caches to disk when necessary); etc.
Re: Driver API design
So instead of having the drivers pass the information the I/O scheduler needs, you would duplicate the whole scheduling code in every single driver? To me that sounds just wrong.
Re: Driver API design
Well, if you have a microkernel, you would probably have the scheduler in some sort of shared library, which would be neither space-consuming nor redundant. The scheduler could be adapted sufficiently for each driver (probably using function pointers to override built-in policies). For a monolithic kernel, I suppose doing that is equivalent to giving information to a global scheduler, so our arguments are sort of the same.
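A hedged sketch of what that might look like (all names are invented for illustration): the shared scheduler library exposes an ops table with default policies, and a driver overrides only the hooks where its device really differs.

#include <stddef.h>

/* Hypothetical sketch of a shared I/O scheduler library whose policies a
   driver can override via function pointers; names are invented. */
typedef struct io_request io_request;   /* opaque to drivers */

typedef struct io_sched_ops {
    void        (*enqueue)(io_request *req);          /* where to put a new request      */
    io_request *(*dequeue)(void);                     /* pick the next request to run    */
    int         (*may_merge)(const io_request *a,
                             const io_request *b);    /* allow merging adjacent requests */
} io_sched_ops;

/* Defaults provided by the scheduler library (e.g. elevator ordering, merging on). */
extern const io_sched_ops io_sched_default_ops;
extern void io_sched_register(const io_sched_ops *ops);

/* A driver for a device with no seek penalty (say, flash) keeps the default
   queueing but turns merging off. */
static int flash_may_merge(const io_request *a, const io_request *b)
{
    (void)a; (void)b;
    return 0;
}

static io_sched_ops flash_ops;

void flash_driver_init(void)
{
    flash_ops = io_sched_default_ops;       /* start from the shared defaults */
    flash_ops.may_merge = flash_may_merge;  /* override only what differs     */
    io_sched_register(&flash_ops);
}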
Re: Driver API design
Right, if you use a shared lib, you're basically doing it at a higher level and using a generic scheduler for all of them. So that's really the same thing.
Re: Driver API design
Maybe, but when I think about asynchronous IO I think about the Win32 implementation with callbacks and all kinds of nasty things. In my implementation, there is still a single blocking call for IO, which does not define a callback system for progress or premature termination. That instead must be planned in advance, with timeouts or signal objects used for premature termination.
Combuster wrote: That's actually an implementation of asynchronous I/O.
rdos wrote: This is also solved without asynchronous IO in RDOS. When a thread is waiting for IO from a socket, it can combine this with waits for other objects
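The RDOS wait primitives themselves aren't shown in this thread, but a rough POSIX analogue of the idea (one blocking call, with cancellation arranged in advance through a timeout and a signalling object rather than a completion callback) might look like this; the cancel descriptor, e.g. the read end of a pipe, stands in for a signal object:

#include <poll.h>

/* Illustrative POSIX analogue only: block on a socket, but also wake up
   on a timeout or when another thread signals cancel_fd. */
int wait_for_data(int sock_fd, int cancel_fd, int timeout_ms)
{
    struct pollfd fds[2] = {
        { .fd = sock_fd,   .events = POLLIN },
        { .fd = cancel_fd, .events = POLLIN },
    };

    int n = poll(fds, 2, timeout_ms);
    if (n <= 0)
        return -1;                  /* timeout or error */
    if (fds[1].revents & POLLIN)
        return -2;                  /* cancelled by another thread */
    return 0;                       /* socket is readable; do the blocking read now */
}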
Re: Driver API design
Yes, you are correct. It is the higher-level interface towards the filesystem. There are several layers in between. The middle layer consists of a virtual filesystem driver that can install different types of filesystems into a common interface. It also provides a lower-level interface for hardware devices. The middle layer also buffers file contents in physical memory for fast retrieval by applications, which by far determines the performance of file-IO, especially for small requests.
Brendan wrote: To be honest, it looks like an API that applications would use when asking a simple kernel to perform file I/O; and doesn't look like an API that a kernel would use to ask a device driver to perform I/O. There are differences.
As for disk scheduling, it is mostly up to the filesystem to decide what is allowed with regard to reordering requests. For instance, the FAT driver requires changes to metadata to be performed sequentially, while file data can be reordered and prefetched freely by the driver. Experimental drivers for fail-safe filesystems required ordered execution of almost all requests. This is a performance-safety tradeoff that usually is coded into a particular filesystem implementation.
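One way to picture that split (purely hypothetical, not the actual RDOS structures): the filesystem tags each request it issues, and the layers below only reorder what they are explicitly allowed to reorder.

/* Hypothetical tags a filesystem implementation might attach to its requests. */
typedef enum {
    REQ_UNORDERED,   /* file data: may be reordered, merged and prefetched freely      */
    REQ_ORDERED,     /* metadata (e.g. FAT updates): must complete in issue order      */
    REQ_BARRIER      /* fail-safe filesystems: earlier writes must hit the disc first  */
} req_ordering;

typedef struct fs_disc_request {
    unsigned long long sector;
    unsigned           count;
    int                is_write;
    req_ordering       ordering;  /* set by the filesystem, honoured by the layers below */
} fs_disc_request;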
Re: Driver API design
Neither is optimal. Whether it is a good idea to use scheduling usually depends both on the operation performed (read vs. write) and on the type of data (metadata like file/dir entries vs. file data). The effects of reordering (and possible loss of) data are different for directory structure and for file-data contents, and this affects performance and reliability. The only piece of code that can make these decisions is the filesystem implementation.
Kevin wrote: So instead of having the drivers pass the information the I/O scheduler needs, you would duplicate the whole scheduling code in every single driver? To me that sounds just wrong.
Re: Driver API design
To be a little more specific: disc drivers in RDOS are not passive agents of higher-level calls. They contain a server thread that waits for requests to become available, gets the requests, executes them against the physical drive, and signals their completion to the virtual drive module.
The IDE driver, for instance, runs a loop that looks like this in pseudocode:
for (;;)
{
    /* Block until the virtual drive module has queued requests for this drive. */
    count = GetDiscRequestArray(&req);

    if (req.IsRead)
    {
        for (i = 0; i < count; i++)
        {
            ReadSector();                       /* transfer one sector from the device */
            DiscRequestCompleted(req.ioblock);  /* signal completion to the virtual drive module */
        }
    }
    else
    {
        for (i = 0; i < count; i++)
        {
            WriteSector();                      /* transfer one sector to the device */
            DiscRequestCompleted(req.ioblock);  /* signal completion to the virtual drive module */
        }
    }
}
Signalling that the contents are complete will potentially wake up blocked threads that are waiting for the I/O request. The interface for the physical drive is simple and does not contain any asynchronous IO; the driver does not need to synchronize calls, and it does not need to bother about disc scheduling. What it might bother with is ensuring that multiple, consecutive sectors are handled efficiently.
Re: Driver API design
Hi,
"applications ----> kernel API --[you are here]--> VFS & file data caches ----> file system layer --[we are here]--> drivers"
Cheers,
Brendan
Ok..
rdos wrote: Yes, you are correct. It is the higher-level interface towards the filesystem. There are several layers in between. The middle layer consists of a virtual filesystem driver that can install different types of filesystems into a common interface. It also provides a lower-level interface for hardware devices. The middle layer also buffers file contents in physical memory for fast retrieval by applications, which by far determines the performance of file-IO, especially for small requests.
Brendan wrote: To be honest, it looks like an API that applications would use when asking a simple kernel to perform file I/O; and doesn't look like an API that a kernel would use to ask a device driver to perform I/O. There are differences.
"applications ----> kernel API --[you are here]--> VFS & file data caches ----> file system layer --[we are here]--> drivers"
Um. Imagine a disk with 4 partitions and 4 different file systems (one in each partition). Who decides where the disk heads should move to next?
rdos wrote: As for disk scheduling, it is mostly up to the filesystem to decide what is allowed with regard to reordering requests.
There's nothing experimental about it - the disk driver (and anything between the disk driver and the file system) has to support some write synchronisation; typically in the form of atomic writes (where any previous writes are flushed, then the requested sequence of writes occurs while no other writes are allowed to intervene).
rdos wrote: For instance, the FAT driver requires changes to metadata to be performed sequentially, while file data can be reordered and prefetched freely by the driver. Experimental drivers for fail-safe filesystems required ordered execution of almost all requests. This is a performance-safety tradeoff that usually is coded into a particular filesystem implementation.
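A hedged sketch of how such an atomic write sequence could look from the file system's side (all names below are invented for illustration, not any existing API):

struct disc;
struct write_batch;

/* These three would be provided by the layer between file system and driver. */
void begin_atomic_writes(struct disc *d);
void submit_writes(struct disc *d, struct write_batch *batch);
void end_atomic_writes(struct disc *d);

/* Hypothetical illustration: earlier writes are flushed, then this batch
   runs with no other writes allowed to intervene. */
void fat_update_metadata(struct disc *d, struct write_batch *batch)
{
    begin_atomic_writes(d);
    submit_writes(d, batch);
    end_atomic_writes(d);
}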
Cheers,
Brendan
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.