Interrupt Architecture
Posted: Tue Dec 23, 2014 8:13 pm
Branched From http://forum.osdev.org/viewtopic.php?f= ... 15#p243970 By kmcguire
Brendan wrote: It actually would be nice to get rid of IRQs and replace them with "hardware messages"; where a hardware message interrupts the currently running code (just like an IRQ would have), but consists of a "sender ID" (to identify the device that sent the IRQ) and some sort of device specific "status dword" (to identify the reason why the device is requesting attention - so the driver can figure out what it needs to do before/without touching the device's registers).

I like this idea, but it sounds like what basically already happens to some extent, especially on ARM with full-featured interrupt controllers. You already have to query the interrupt controller to find out which device interrupted, which is effectively its "sender ID", and then most devices have one or more MMIO registers you query for status.
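To make the contrast concrete, here is a minimal C sketch of the two models. Every name in it (read_intc_pending, mmio_read_status, dispatch, hw_message_t) and every value is hypothetical, standing in for whatever a real interrupt controller and device would provide; it is not any existing API.

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical stand-ins for real hardware accesses, just so the sketch runs. */
static uint32_t read_intc_pending(void)        { return 3; }        /* "which line fired?" */
static uint32_t mmio_read_status(uint32_t dev) { return dev * 16; } /* device status register */
static void dispatch(uint32_t dev, uint32_t why)
{
    printf("device %u, reason %u\n", (unsigned)dev, (unsigned)why);
}

/* Conventional model: the IRQ itself carries no data, so the handler first asks
 * the interrupt controller which device fired, then does at least one MMIO read
 * of that device's status register to learn why. */
static void classic_irq_handler(void)
{
    uint32_t source = read_intc_pending();
    uint32_t reason = mmio_read_status(source);
    dispatch(source, reason);
}

/* Hardware-message model from the quote: the interrupt already carries a sender
 * ID and a device-specific status dword, so the handler can dispatch without
 * touching the device's registers first. */
typedef struct {
    uint32_t sender_id;  /* which device sent the message */
    uint32_t status;     /* device-specific "why" dword */
} hw_message_t;

static void hw_message_handler(hw_message_t msg)
{
    dispatch(msg.sender_id, msg.status);
}

int main(void)
{
    classic_irq_handler();                                     /* two lookups before dispatch */
    hw_message_handler((hw_message_t){ .sender_id = 3, .status = 48 }); /* dispatch immediately */
    return 0;
}
```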
Not having to do MMIO sounds like it could be an improvement (but see below). However, you usually have to do more than just check the status, so the remaining MMIO may end up taking most of the time needed to service the device. I could see a device such as a timer benefiting from this, where all you need is the interrupt itself even though the timer has other functions.
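As a rough illustration of that point, here is a hedged sketch; the register names, the UART_RX_READY bit, and the helper functions are invented for the example and do not belong to any real device. Even with the status dword delivered in the message, a UART receive handler still spends most of its time on per-byte MMIO reads, while a timer tick handler needs no further device access at all.

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical MMIO read helper; register addresses would come from the device. */
static inline uint32_t mmio_read(volatile uint32_t *reg) { return *reg; }

#define UART_RX_READY (1u << 0)   /* invented status bit: "receive FIFO has data" */

/* Even if the interrupt message hands us the status for free, a UART receive
 * handler still drains the FIFO with one MMIO read per byte, so those reads
 * (not the single status check we saved) dominate the service time. */
size_t uart_rx_handler(uint32_t status_from_message,
                       volatile uint32_t *uart_status_reg,
                       volatile uint32_t *uart_data_reg,
                       uint8_t *buf, size_t cap)
{
    size_t n = 0;
    uint32_t status = status_from_message;            /* the one MMIO read we avoided */
    while ((status & UART_RX_READY) && n < cap) {
        buf[n++] = (uint8_t)mmio_read(uart_data_reg); /* per-byte MMIO, unavoidable */
        status = mmio_read(uart_status_reg);          /* re-check for more data */
    }
    return n;
}

/* A periodic timer is the opposite case: once we know the interrupt fired there
 * may be nothing left to read, so delivering the status in the message saves
 * essentially the whole device access. */
static volatile uint64_t ticks;
void timer_tick_handler(void)
{
    ticks++;   /* no device registers touched at all */
}
```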
The only real difference is the hardware message part, and I am not sure that would be any real improvement. It is already very fast for a device to simply raise a logic level to tell the interrupt controller it wants to interrupt. A message of any kind would require more than one bit, which either adds latency (if serial) before the device can go back to what it was doing, or requires more complicated circuitry (if parallel), which adds heat and cost. You might well find devices implementing a single line to a dedicated chip that does the actual sending of the hardware message, in which case all you gained was more cost and little if any performance benefit. Not to mention the bit-skew problems of parallel links, so you're back to serial, which means you have just added more latency or more cost for no gain. And the time it takes the CPU to jump to the interrupt handler already adds significant latency to servicing the device anyway.
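For a rough sense of scale, here is a tiny back-of-envelope calculation with assumed numbers (a 64-bit message, 32-bit sender ID plus 32-bit status dword, pushed one bit per cycle over a 100 MHz single-wire link; neither figure comes from the post): serializing the message keeps the device signalling for hundreds of nanoseconds, versus roughly one clock edge to raise a dedicated interrupt line.

```c
#include <stdio.h>

int main(void)
{
    /* Illustrative assumptions only, not measurements: 100 MHz serial link,
     * 64-bit message (32-bit sender ID + 32-bit status dword). */
    const double bit_time_ns  = 10.0;  /* 1 / 100 MHz */
    const int    message_bits = 64;

    double serial_message_ns = message_bits * bit_time_ns; /* time just to shift the message out */
    double level_irq_ns      = bit_time_ns;                /* ~one edge on a dedicated IRQ line */

    printf("serial message: %.0f ns, plain IRQ line: ~%.0f ns\n",
           serial_message_ns, level_irq_ns);
    return 0;
}
```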
It sounds like the gain is simplifying the whole interrupt system from the programmer's point of view, but in turn you complicate it at the circuit level, adding more latency, more cost, or both. I do not think that is worth doing, although I could be wrong. I do like the idea because it sounds novel and interesting, but it would take more research to really figure it out.