How to identify the source of a disk interrupt

Posted: Mon May 16, 2016 12:25 pm
by kemosparc
Hi,

I have implemented two disk functions, read and write, based on polling.

Now I would like to convert them to use interrupts.

My problem is that I have multiple processes accessing the disk at the same time, and I was wondering if I can identify the source of the interrupt.

Or do I have to queue the disk operation requests coming from system calls and serve them one by one, not proceeding to the next request until the current one is served?

Thanks,
Karim.

Re: How to identify the source of a disk interrupt

Posted: Mon May 16, 2016 12:36 pm
by Schol-R-LEA
I would say that some sort of synchronization and/or multiplexing is needed, at the very least. If it is a monolithic kernel design, you will almost certainly want to queue the requests, with one or more queues per file system driver (which may or may not map one-to-one to the disk drivers, BTW, so you may also need to synchronize the file system drivers' access to the disk drivers).
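
As a rough sketch of what such a per-driver queue might look like, assuming a hypothetical controller that handles one outstanding command at a time, with disk_start() standing in for whatever code actually programs the hardware. Since only the head of the queue is ever in flight, the IRQ handler can always tell which request (and therefore which process) the interrupt belongs to:

#include <stdint.h>
#include <stddef.h>

typedef struct disk_request {
    struct disk_request *next;
    int      pid;          /* process that issued the request          */
    int      write;        /* 0 = read, 1 = write                      */
    uint64_t lba;          /* starting sector                          */
    uint32_t count;        /* number of sectors                        */
    void    *buffer;       /* data buffer                              */
    volatile int done;     /* set by the IRQ handler on completion     */
} disk_request_t;

typedef struct {
    disk_request_t *head;  /* request currently being serviced         */
    disk_request_t *tail;
    /* a lock goes here once you have more than one CPU                */
} disk_queue_t;

static disk_queue_t queue;

/* Stand-in for the code that actually programs the controller. */
static void disk_start(disk_request_t *req)
{
    (void)req;
}

/* Called from the read()/write() system-call path. */
void disk_submit(disk_request_t *req)
{
    req->next = NULL;
    req->done = 0;
    if (queue.tail)
        queue.tail->next = req;
    else
        queue.head = req;
    queue.tail = req;

    /* If nothing was in flight, kick the hardware now; otherwise the
       IRQ handler will start this request when its turn comes. */
    if (queue.head == req)
        disk_start(req);
}

/* Called from the disk IRQ handler: with one command in flight at a time,
   the interrupt always refers to the request at the head of the queue. */
void disk_irq(void)
{
    disk_request_t *req = queue.head;
    if (!req)
        return;                     /* spurious interrupt */

    req->done = 1;                  /* wake the blocked process here */
    queue.head = req->next;
    if (!queue.head)
        queue.tail = NULL;
    else
        disk_start(queue.head);
}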

For a classic microkernel, you could have the message-passing system serve as the queuing mechanism; again, each file system would have its own message queue, as would each disk driver. It may be possible to short-circuit the requests to go directly to the disk drivers if there is only one partition per drive and no drive spanning, albeit this would require the disk driver itself to handle the file system management as well.

In an exokernel or hypervisor (or any other system that primarily multiplexes the hardware into virtual machines without providing abstractions), you would have to multiplex the drive access instead, and probably would need to have the applications/virtual domains request exclusive access during writes.

Re: How to identify the source of a disk interrupt

Posted: Mon May 16, 2016 1:25 pm
by Brendan
Hi,
kemosparc wrote:Or do I have to queue the disk operation requests coming from system calls and serve them one by one, not proceeding to the next request until the current one is served?
That's a step in the right direction, but still "improvable". If you have a queue of pending requests, you can optimise the order in which they're done to improve performance (instead of doing requests in "first come, first served" order). This is called "IO scheduling", and every modern OS does it in one way or another.
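
As a very rough illustration of the idea (a sketch only, reusing request/queue types like those in the earlier reply), here is a simplified elevator-style insert that keeps the pending list sorted by starting LBA; a real scheduler also has to worry about starvation, priorities, write barriers, and so on:

#include <stdint.h>
#include <stddef.h>

/* Minimal request/queue types, mirroring the per-driver queue sketched
   in the earlier reply. */
typedef struct io_req {
    struct io_req *next;
    uint64_t       lba;   /* starting sector of the request */
    /* buffer, sector count, owning process, ... */
} io_req_t;

typedef struct {
    io_req_t *head;       /* pending requests, kept sorted by LBA */
} io_queue_t;

/* Insert a request so the pending list stays sorted by starting LBA.
   Servicing the list front to back then sweeps the head across the disk
   in one direction (a very simplified elevator-style scheduler). */
void io_submit_sorted(io_queue_t *q, io_req_t *r)
{
    io_req_t **p = &q->head;

    while (*p && (*p)->lba <= r->lba)
        p = &(*p)->next;

    r->next = *p;
    *p = r;
}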


Cheers,

Brendan