Hi,
limp wrote:
I know that my Southbridge (ICH7) has both PIT and HPET timers and that PIT and HPET0 are using the same IRQ line (IRQ0 on PIC / INTI2 on I/O APIC). You said that:
Brendan wrote:
Otherwise, if the PIT exists and is using IRQ0 then you'd have to make sure HPET is using a different interrupt (and then mask the PIT's IRQ in the PIC and/or IO APIC), and this is worse because it's a little more hassle and you end up with an IRQ line that's wasted by the slow/obsolete/deprecated PIT.
What do you mean by that? How can I use a different interrupt for HPET (as it is hardwired on PIC's IRQ0)?
You also said that:
Brendan wrote:
If the "LegacyReplacement" bit is set then you can ignore the PIT and don't have to worry about it (which makes things easier). Otherwise, if the PIT exists....
Does this mean that I can set the "LegacyReplacement" bit only if the PIT doesn't exist? That doesn't make much sense.
Think of it like this:
Code: Select all
if(BIOS_or_firmware_left_the_LegacyReplacement_bit_set()) {
    /* Can't assume a PIT exists */
    just_use_HPET();
} else {
    /* PIT is mostly useless anyway */
    just_use_HPET();
}
For determining which IRQ you can use: if you're using the PIC then I don't really know, and you may have no choice other than to use IRQ0 and IRQ8 (and to make sure the PIT and RTC, if they exist, don't try to use these IRQs and cause conflicts). If you're using IO APICs then you get to choose which IO APIC input to use (or can use MSI instead, if it's supported); but you still have to make sure nothing else is using the IRQ/s and causing conflicts (which includes the PIT and RTC, but also includes any other "edge triggered" IRQ, as they can't be shared like "level triggered" PCI IRQs can).
limp wrote:
One other thing is that I want to exclusively use PIC and Local APIC but not I/O APIC.
In this case, if I setup PIC and LAPIC controllers and don't even consider I/O APIC, would I have a problem?
That should work. The problem is that the PIC isn't very good for a variety of reasons (see below).
limp wrote:
That is, if I mask PIT's IRQ on the PIC, is there any case that a PIT interrupt would be triggered by the I/O APIC?
It's possible to configure both the PIC and the IO APIC to send IRQs to the CPU (and I strongly recommend against doing that). For backward compatibility, the default state of the IO APIC is "all IRQs disabled", so if you don't touch it you don't have to worry.
limp wrote:
The HPET's FSB routing capability sounds quite interesting to me.
MSI (used to reduce IRQ sharing on PCI) and the HPET's "FSB routing" capability require IO APICs.
limp wrote:
Do you happen to know if I would get faster interrupts in this case? Do you consider it as a method with lower latency / greater level of determinism compared to standard routing? What about the acknowledgement of the IRQ? Is it needed? In PIC, we're sending an EOI instruction, what about in FSB IRQs?
There's a lot of things here. First, all legacy/ISA devices are slow - due to original ISA bus timing, any IO port access to a legacy/ISA device costs about 1 us. This includes the PIC, PIT, serial ports, parallel ports, the PS/2 keyboard controller, the ISA DMA chips, etc. For example, sending the EOIs to the slave PIC and the master PIC will cost you 2 us (or about 6000 cycles on a modern CPU). When you've got modern PCI devices (e.g. a video card doing a vertical retrace IRQ 60 times per second, plus an ethernet card under load, plus USB and SATA disk controllers, etc) sending the EOIs to the PIC chip/s can add up to a performance problem. Worse, PCI IRQs are almost always routed to the slave PIC, so you have to use 2 IO port accesses for the (master and slave) EOIs. For example, regardless of how fast the CPU is, at about 500000 IRQs per second (through the slave PIC) you'd spend 100% of CPU time doing EOIs alone.
For IO APICs, interrupts from the local APIC and MSI, you still have to send an EOI. However, in this case the EOI is sent by doing a write to the local APIC (which is built into the CPU), and an EOI probably costs about 10 cycles (instead of about 3000 cycles).
The next thing to consider is IRQ sharing. If an IRQ is shared by 4 devices, then you need to run 4 different IRQ handlers each time that IRQ occurs (and you can assume that the IRQ will happen a lot more often when there's 4 devices connected to it too). That adds up to more overhead. For modern computers there's lots of PCI devices (for example, the computer I'm using now has 8 USB controllers, 2 SATA controllers, 2 video cards, 2 audio cards and 2 ethernet cards and some other stuff - a minimum of 16 PCI IRQs) and the PIC chips typically only have 4 inputs left over for PCI. With 16 PCI IRQs and 4 PIC chip inputs, you end up with an average of 4 devices sharing each of those IRQs (but that's only an average - you could have 3 IRQs with 2 devices sharing each of them, and 12 devices sharing one IRQ). For IO APICs there's typically 24 inputs (where about 12 of them are used for legacy/ISA) which leaves you with about 12 IRQs for 16 devices. That's a lot less IRQ sharing (and a lot less overhead caused by IRQ sharing). It's still not quite "good enough" for modern systems though, which is why PCI added support for MSI (and then made it mandatory for PCI-X devices). With MSI, devices don't use IO APIC inputs, and you could have about 224 devices (one per usable interrupt vector) all using separate interrupts without any IRQ sharing (if all devices support MSI).
Finally there's IRQ balancing. In modern computers (with multiple CPUs) you can spread the IRQ handling overhead across multiple CPUs; so you don't end up with one CPU constantly being pounded. You can also use a special "send to lowest priority CPU" mode so that IRQs don't interrupt important stuff (e.g. very high priority tasks; CPUs that are in critical sections, etc). Of course you have to use IO APICs for that. With IO APICs and MSI you also get a lot more control over IRQ priorities.
How much difference each of these things make depends on a lot of things - how many devices, how many IRQs per second, how many CPUs, how well implemented the software is, etc.
Cheers,
Brendan