Interrupt Architecture

Post by Pancakes »

Branched from http://forum.osdev.org/viewtopic.php?f= ... 15#p243970 by kmcguire
Brendan wrote: It actually would be nice to get rid of IRQs and replace them with "hardware messages"; where a hardware message interrupts the currently running code (just like an IRQ would have), but consists of a "sender ID" (to identify the device that sent the IRQ) and some sort of device specific "status dword" (to identify the reason why the device is requesting attention - so the driver can figure out what it needs to do before/without touching the device's registers).
I like this idea, but it sounds like something that already happens to some extent, especially on ARM with full-featured interrupt controllers. You already have to ask the interrupt controller which device interrupted, which is effectively its "sender ID", and then most devices have one or more MMIO registers you query for the status.
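To make that concrete, here is a rough sketch of that two-step lookup on an ARM system with a GICv2-style controller. Only the GICC_IAR/GICC_EOIR offsets follow the usual GIC CPU interface layout; the base addresses, the UART status register, and the interrupt number are made up purely for illustration:

[code]
#include <stdint.h>

/* Illustrative addresses -- the real values depend on the SoC's memory map. */
#define GICC_BASE    0x2C002000u            /* GIC CPU interface (hypothetical) */
#define GICC_IAR     (GICC_BASE + 0x00Cu)   /* Interrupt Acknowledge Register   */
#define GICC_EOIR    (GICC_BASE + 0x010u)   /* End Of Interrupt Register        */
#define UART_STATUS  0x1000903Cu            /* device status register (made up) */
#define UART_IRQ     37u                    /* interrupt number (made up)       */

static inline uint32_t mmio_read(uintptr_t addr)
{
    return *(volatile uint32_t *)addr;
}

static inline void mmio_write(uintptr_t addr, uint32_t val)
{
    *(volatile uint32_t *)addr = val;
}

void irq_handler(void)
{
    /* Step 1: ask the interrupt controller "who?" -- this is the "sender ID". */
    uint32_t iar = mmio_read(GICC_IAR);
    uint32_t irq = iar & 0x3FFu;

    /* Step 2: ask the device itself "why?" -- more MMIO. */
    if (irq == UART_IRQ) {
        uint32_t status = mmio_read(UART_STATUS);
        /* ...dispatch to the driver based on the status bits... */
        (void)status;
    }

    /* Step 3: tell the controller we are done with this interrupt. */
    mmio_write(GICC_EOIR, iar);
}
[/code]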

Not having to do MMIO sounds like it could be an improvement (but see below). However, you usually end up having to do more than just check the status, so the remaining MMIO accesses may still take up most of the time spent servicing the device. I could see a device such as a timer benefiting from this, where all you needed was the interrupt itself even though the timer has other functions.

The only real difference is the hardware message part, and I am not sure that would be a real improvement. It is already quite fast for a device to raise a logic level to signal the interrupt controller that it wants to interrupt. A message of some sort would require more than one bit, which either increases latency (if serial) before the device can go back to what it was doing, or requires more complicated circuitry (if parallel), which increases heat and cost. You might find devices implementing a single line to a dedicated chip that does all the sending of the hardware message, in which case all you gained was more cost and little if any performance benefit. Not to mention the problems of bit skew with parallel, so you're back to serial, which means you just added more latency or more cost with no gain. And the time it takes the CPU to actually jump to the interrupt handler already adds enough latency to servicing the device.

It sounds like the gain is an interrupt system that is simpler from the programmer's point of view, but in turn you complicate it at the circuit level, adding more latency, more cost, or both. I do not think that is worth doing, although I could be wrong. I do like the idea because it sounds novel and interesting, but it would take more research to really figure it out.

Re: Systems Software Research is Irrelevant, by Rob Pike

Post by Brendan »

Hi,
Pancakes wrote:
Brendan wrote: It actually would be nice to get rid of IRQs and replace them with "hardware messages"; where a hardware message interrupts the currently running code (just like an IRQ would have), but consists of a "sender ID" (to identify the device that sent the IRQ) and some sort of device specific "status dword" (to identify the reason why the device is requesting attention - so the driver can figure out what it needs to do before/without touching the device's registers).
I like this idea, but it sounds like something that already happens to some extent, especially on ARM with full-featured interrupt controllers. You already have to ask the interrupt controller which device interrupted, which is effectively its "sender ID", and then most devices have one or more MMIO registers you query for the status.

Not having to do MMIO sounds like it could be an improvement (but see below). However, you usually end up having to do more than just check the status, so the remaining MMIO accesses may still take up most of the time spent servicing the device. I could see a device such as a timer benefiting from this, where all you needed was the interrupt itself even though the timer has other functions.

The only real difference is the hardware message part, and I am not sure that would be a real improvement.
If you compare it to MSI (where the device writes a configurable 32-bit value to a configurable address), you'll see the differences are very minor (as far as the device is concerned).
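For reference, "configurable value to a configurable address" looks roughly like this from the OS side. This is only a sketch: it assumes the 32-bit-address MSI variant and ignores multi-message support, and pci_cfg_read16 and friends are placeholder names for config-space accessors, not any particular kernel's API.

[code]
#include <stdint.h>

/* Assumed config-space accessors -- placeholder names, not a real kernel API. */
uint16_t pci_cfg_read16(uint8_t bus, uint8_t dev, uint8_t fn, uint8_t off);
void     pci_cfg_write16(uint8_t bus, uint8_t dev, uint8_t fn, uint8_t off, uint16_t val);
void     pci_cfg_write32(uint8_t bus, uint8_t dev, uint8_t fn, uint8_t off, uint32_t val);

/* Program a function's MSI capability (32-bit address variant) so the device
 * writes 'data' to 'addr' when it wants attention.  'cap' is the offset of
 * the MSI capability structure found by walking the capability list. */
void msi_enable(uint8_t bus, uint8_t dev, uint8_t fn, uint8_t cap,
                uint32_t addr, uint16_t data)
{
    pci_cfg_write32(bus, dev, fn, cap + 0x04, addr);    /* Message Address */
    pci_cfg_write16(bus, dev, fn, cap + 0x08, data);    /* Message Data    */

    uint16_t ctrl = pci_cfg_read16(bus, dev, fn, cap + 0x02);
    pci_cfg_write16(bus, dev, fn, cap + 0x02, ctrl | 0x0001u);  /* MSI Enable bit */
}
[/code]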
Pancakes wrote: It is already quite fast for a device to raise a logic level to signal the interrupt controller that it wants to interrupt.
It's actually quite slow. For example (PIC chips): the CPU gets a signal from the interrupt controller, then the CPU asks the interrupt controller which interrupt it is, then the PIC tells the CPU which interrupt, then software converts that into some sort of device identifier and notifies the device driver, then the device driver asks the device what it wants. That's a whole pile of "back-and-forth" that does little more than add additional latency. For IO APICs it's not much better. For ARM (as far as I know) it's worse, because the CPU doesn't automatically ask the interrupt controller which interrupt it was, so there's additional software overhead.
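The INTA handshake itself isn't visible to software, but the tail end of the back-and-forth is. A rough sketch (x86, legacy 8259 PIC, GCC-style inline assembly), using the keyboard purely as an example device:

[code]
#include <stdint.h>

/* x86 port I/O helpers (GCC/Clang inline assembly). */
static inline uint8_t inb(uint16_t port)
{
    uint8_t v;
    __asm__ volatile ("inb %1, %0" : "=a"(v) : "Nd"(port));
    return v;
}

static inline void outb(uint16_t port, uint8_t val)
{
    __asm__ volatile ("outb %0, %1" : : "a"(val), "Nd"(port));
}

#define PIC1_CMD  0x20   /* master 8259 command port */
#define PIC_EOI   0x20   /* end-of-interrupt command */
#define KBD_DATA  0x60   /* PS/2 keyboard data port -- just an example device */

/* The vector only tells us which IRQ line fired; the driver still has to ask
 * the device why, and then acknowledge the PIC -- two more round trips. */
void irq1_handler(void)
{
    uint8_t scancode = inb(KBD_DATA);   /* ask the device what it wants */
    (void)scancode;                     /* ...handle it...              */
    outb(PIC1_CMD, PIC_EOI);            /* then tell the PIC we are done */
}
[/code]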
Pancakes wrote: A message of some sort would require more than one bit, which either increases latency (if serial) before the device can go back to what it was doing, or requires more complicated circuitry (if parallel), which increases heat and cost. You might find devices implementing a single line to a dedicated chip that does all the sending of the hardware message, in which case all you gained was more cost and little if any performance benefit.
For MSI, it's already 32 bits of data being sent by the device. That's plenty (e.g. maybe a 16-bit device ID and a 16-bit status).
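If the payload were split that way, the kernel's side of it could be trivial. A sketch with a purely illustrative field layout:

[code]
#include <stdint.h>

/* Purely illustrative split of a 32-bit "hardware message" payload into a
 * sender ID and a status word, along the lines suggested above. */
struct hw_message {
    uint16_t device_id;   /* which device sent it   */
    uint16_t status;      /* why it wants attention */
};

static struct hw_message decode_hw_message(uint32_t payload)
{
    struct hw_message m;
    m.device_id = (uint16_t)(payload >> 16);
    m.status    = (uint16_t)(payload & 0xFFFFu);
    return m;
}

/* The driver can often act on 'status' without touching the device's
 * registers at all, which is where the latency win would come from. */
[/code]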

Mostly, I'd want to improve IRQ latencies (by removing historical baggage) and reduce the hassle for hardware (by removing historical baggage), while making nothing worse (as it's so similar to what hardware is already doing anyway).


Cheers,

Brendan
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.

Re: Systems Software Research is Irrelevant, by Rob Pike

Post by Pancakes »

I guess you are right, Brendan. I took a closer look, and it does seem the designs have become quite complex and have strayed from the simpler methods of the old days. I had thought the interconnect between the interrupt controllers on modern architectures would have been built with less latency. However, I think I now see why.

They are basically making the best decision by improving reuse of modular components, which decreases the cost of devices, SoCs, and motherboards. For systems that do not need the lowest latency possible, you end up with a general-purpose system that is flexible and reduces cost. This hinges on the argument that most systems really do not need ultra-low latency when servicing interrupts anymore, because of buffering and the ability to move some work off onto external devices/blocks/chips.

If a system really did have a device that needed ultra-low service latency, then maybe it is more cost effective to have the device do most of that work in its own block. And of course that device could have firmware/software to allow it to be updated, or maybe even its own RAM, sort of like a GPU, to do the ultra-low-latency work.

I think what we are seeing is the natural evolution of the "system". That may be why the interrupt controller is moving onto a bus that already uses messages, and also why it does not use a single message: it is more flexible not to.

Like I said, after looking closer I think I see why it works the way it currently does. So I am now even more inclined to think your idea might not really bring anything better to current systems.

If you want that ultra-low latency, then move out of the general-purpose, cost-effective interconnect architecture and do the work closer to the actual device (or on the actual device).

Your idea of using fewer messages has merit, but would you really gain much by using fewer messages?

Re: Systems Software Research is Irrelevant, by Rob Pike

Post by Brendan »

Hi,
Pancakes wrote: I guess you are right, Brendan. I took a closer look, and it does seem the designs have become quite complex and have strayed from the simpler methods of the old days. I had thought the interconnect between the interrupt controllers on modern architectures would have been built with less latency. However, I think I now see why.
At least for 80x86; modern hardware is plagued by backward compatibility. IBM wanted to slap together something for the desktop market, and (thinking it'd be dead in a few years anyway) emptied a trash can containing old chips left over from other projects onto their workbench and joined them together with duct tape; and for some incredibly insane reason we're still putting up with most of the ancient crud 40 years later, even though every single piece of it has been replaced with something better. :lol:

To reduce motherboard size/complexity, reduce pin count, reduce manufacturing costs and improve speed (e.g. due to problems with parallel like the "bit skew" you mentioned earlier), almost everything has switched to serial links (QuickPath, HyperTransport, DMI, PCI Express, etc.). To make serial links work you must use messages of some kind. For example, you might send an "assert message" followed by a "de-assert message" to emulate an ancient signal connected to a dedicated pin that no longer exists.
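PCI Express already does exactly this for the legacy INTx pins (Assert_INTx / Deassert_INTx messages). A rough model of how such a "virtual wire" behaves; the names and types here are made up for illustration, not taken from any real implementation:

[code]
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical model of a legacy interrupt pin emulated with messages,
 * roughly how PCI Express Assert_INTx / Deassert_INTx virtual wires behave. */
enum intx_msg { ASSERT_INTA, DEASSERT_INTA };

struct virtual_wire {
    uint32_t assert_count;   /* several devices may share the emulated "wire" */
};

/* The "pin" is considered asserted while at least one sender has asserted it;
 * returns the current level of the emulated signal. */
static bool handle_intx_message(struct virtual_wire *w, enum intx_msg m)
{
    if (m == ASSERT_INTA)
        w->assert_count++;
    else if (w->assert_count > 0)
        w->assert_count--;
    return w->assert_count > 0;
}
[/code]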
Pancakes wrote: Your idea of using fewer messages has merit, but would you really gain much by using fewer messages?
Would the gains (cheaper/simpler hardware, simpler software, lower latency IRQ handling for things like 10 gigabit ethernet cards) justify the disadvantages (compatibility)?

In my opinion (for mainstream 80x86 hardware and software), there should be something like a 10-year compatibility limit, beyond which that disadvantage simply doesn't exist. Other people (hardware manufacturers) seem to like the fact that (e.g.) the A20 gate still exists in 2014 just because a programmer wrote some dodgy software that relies on "1 MiB wrap-around" in 1979. :)


Cheers,

Brendan
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.

Re: Interrupt Architecture

Post by Pancakes »

I was reading a book published back in 2004 for the ARM32 which, interestingly enough, mostly diagrams the interrupt controller still directly connected to the CPU. It looks, however, like most current SoC designs place the interrupt controller onto a message-oriented bus of some type.