Hi,
Two questions about PCI devices:
1. Is it possible to change the IRQ numbers assigned to PCI devices by the BIOS? Is it as simple
as writing a proper value to the proper PCI config space register (just like with BARs), or is there more to it?
I am working in legacy PIC mode, so my question is about the interrupts as seen by the PIC.
I've found conflicting information about this, so maybe someone has relevant experience? Please
note I don't much care about obscure legacy 486/586 or Pentium II/III class systems, since
my kernel targets newer "P4 class and up" CPUs anyway due to the use of SSE2 instructions and the NX bit.
EDIT: before someone points it out, I have read the Wiki where it says that this field is writable, but
I did find contrary information elsewhere, hence my question about your experience with this.
2. Since I am working in legacy PIC mode, why am I seeing IRQ numbers like 255 or other high values (67? 145?)?
Obviously that happens on real hardware only, and most devices have reasonable values, but a few (2-3) do not.
Just to point out, these are PCI Express devices, and my PCI driver is not PCIe-aware yet. But as far as I know these
should behave like normal PCI devices.
EDIT: And now this one is a beauty, and I can't find anything about it anywhere.
3. Can a device with a 64-bit memory BAR map more than 4GB? In other words,
can a memory range indicated by a 64-bit BAR be LARGER than 4GB? It would
seem absurd that a device might want that much address space, but then
I can imagine a graphics card with 8GB of dedicated RAM, or a Tesla with
even more. Or would that have to be split into separate ranges?
Thanks for any insight.
Cheers,
Theesem
[SOLVED] PCI : IRQ lines for PCI devices / BAR sizes
Last edited by theesemtheesem on Wed Apr 02, 2014 8:02 am, edited 1 time in total.
Re: PCI question: IRQ lines for PCI devices / PCI BAR sizes
Hi,
theesemtheesem wrote:
1. Is it possible to change the IRQ numbers assigned to PCI devices by the BIOS? Is it as simple as writing a proper value to the proper PCI config space register (just like with BARs), or is there more to it? I am working in legacy PIC mode, so my question is about the interrupts as seen by the PIC.
(...)
For most systems it's possible; but it's not as simple as writing a proper value to the PCI configuration space register.
Let's start from the start. For a PCI bus there are 4 "PCI IRQs". In PCI configuration space there's a (read-only) "Interrupt Pin" field, which says which interrupt line the device uses at the "PCI slot". To reduce the chance of different devices using the same PCI IRQ, PCI slots are wired in a strange way - the first IRQ at the first slot might be connected to PCI IRQ A, the first IRQ at the second slot might be connected to PCI IRQ B and so on. If you know how PCI slots are hard-wired you can determine which PCI IRQ/s a device is connected to. In general, you don't know how the PCI slots are hard-wired.
This gives us 4 PCI IRQs at the PCI host controller. Of course you can have 2 or more PCI host controllers where each one has 4 PCI IRQs; and some chipsets cheat and have additional PCI IRQs for built-in devices without looking like there's 2 or more PCI host controllers. In any case, whatever PCI IRQs there are get hard-wired to IO APIC inputs. This means that if you know how PCI slots are hard-wired and know how the PCI host controller/s are hard-wired, then you can determine how devices are mapped to PCI IRQs and how PCI IRQs are mapped to IO APIC inputs. In general, you don't know how any of this is hard-wired.
In addition to being connected to the IO APIC; the PCI IRQs are also connected to PIC chips for backward compatibility. ISA devices are edge-triggered and can't do IRQ sharing, and may be connected to any of the PIC chip's inputs. Because there's no sane way to determine which PIC inputs the ISA devices use (specifically, ISA cards plugged into ISA slots that weren't built into the motherboard); there's no sane way to hard-wire PCI IRQs to the PIC chip inputs. For this reason there's a "PCI IRQ to PIC chip router". Typically, for each PCI IRQ there's a set of PIC chip inputs that can be selected in the "PCI IRQ to PIC chip router"; and firmware and software can change that. The firmware does what it can (typically with BIOS options to determine which PIC chip IRQs to use in old systems with ISA slots, and with a default configuration for newer systems that don't have to worry about ISA slots). Of course different motherboards have different "PCI IRQ to PIC chip routers"; and if you know where it is and how to configure it (which you usually don't) then it's unlikely you're going to do a better job of configuring it than the firmware did anyway.
In PCI configuration space there is an "Interrupt Line" field. This is a read-write field that does nothing - software can store any value it wants in this field and it makes no difference. By convention, (on 80x86) firmware uses this field to tell the OS which PIC input the device is connected to, so that the OS can just look at this field and doesn't need to know anything about how PCI slots are connected to PCI IRQs or how PCI IRQs are mapped/routed to PIC inputs.
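For example, reading the "Interrupt Pin" and "Interrupt Line" fields with configuration mechanism #1 could look something like this - a minimal sketch only, where the outl()/inl() and pci_cfg_* helper names are just illustrative placeholders for whatever your kernel already provides:
Code: Select all
#include <stdint.h>

/* Port I/O - replace with whatever your kernel already provides. */
static inline void outl(uint16_t port, uint32_t val)
{
    __asm__ volatile ("outl %0, %1" : : "a"(val), "Nd"(port));
}

static inline uint32_t inl(uint16_t port)
{
    uint32_t val;
    __asm__ volatile ("inl %1, %0" : "=a"(val) : "Nd"(port));
    return val;
}

#define PCI_CONFIG_ADDRESS 0xCF8
#define PCI_CONFIG_DATA    0xCFC

/* Configuration mechanism #1: write the address, read the dword. */
uint32_t pci_cfg_read32(uint8_t bus, uint8_t dev, uint8_t func, uint8_t offset)
{
    uint32_t address = (1u << 31)              /* enable bit */
                     | ((uint32_t)bus  << 16)
                     | ((uint32_t)dev  << 11)
                     | ((uint32_t)func << 8)
                     | (offset & 0xFC);        /* dword-aligned register offset */
    outl(PCI_CONFIG_ADDRESS, address);
    return inl(PCI_CONFIG_DATA);
}

uint8_t pci_cfg_read8(uint8_t bus, uint8_t dev, uint8_t func, uint8_t offset)
{
    return (uint8_t)(pci_cfg_read32(bus, dev, func, offset) >> ((offset & 3) * 8));
}

/* "Interrupt Line" is at offset 0x3C, "Interrupt Pin" at 0x3D in the common header. */
uint8_t pci_interrupt_line(uint8_t bus, uint8_t dev, uint8_t func)
{
    return pci_cfg_read8(bus, dev, func, 0x3C);
}

uint8_t pci_interrupt_pin(uint8_t bus, uint8_t dev, uint8_t func)
{
    return pci_cfg_read8(bus, dev, func, 0x3D);  /* 0 = no INTx pin, 1..4 = INTA#..INTD# */
}
A return value of 0 from the "Interrupt Pin" field means the function doesn't use an INTx# pin at all.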
theesemtheesem wrote:
2. Since I am working in legacy PIC mode, why am I seeing IRQ numbers like 255 or other high values (67? 145?)? Obviously that happens on real hardware only, and most devices have reasonable values, but a few (2-3) do not. Just to point out, these are PCI Express devices, and my PCI driver is not PCIe-aware yet. But as far as I know these should behave like normal PCI devices.
The value 255 means "unknown" or "no connection", and most likely means that the PCI device either doesn't use an IRQ or isn't connected to any PIC chip input. Values 16 to 254 are reserved and shouldn't happen.
theesemtheesem wrote:
3. Can a device with a 64-bit memory BAR map more than 4GB? In other words, can a memory range indicated by a 64-bit BAR be LARGER than 4GB?
Yes - otherwise it'd be a 32-bit BAR.
theesemtheesem wrote:
It would seem absurd that a device might want that much address space, but then I can imagine a graphics card with 8GB dedicated ram, or a Tesla with even more. Or would that have to be split into separate ranges?
The problem is backward compatibility (e.g. people trying to use devices on 32-bit systems or on 64-bit systems running 32-bit OSs). Because of this PCI devices tend to be limited to 512 MiB of space. Of course if you've got 8 devices that all want 512 MiB of space then a 32-bit system is still going to have trouble; and in that case I'd expect a BIOS to ignore half of the devices (e.g. only configure 2 of the 8 video cards) and let the OS worry about configuring the others for 64-bit.
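As an aside, a 64-bit memory BAR announces itself through the type bits in the low dword (bits 2:1 = 10b), and the next BAR slot holds the upper 32 bits of the base. Combining the pair might look like this - a sketch that reuses the hypothetical pci_cfg_read32() helper from above:
Code: Select all
#include <stdint.h>

/* Assumed: the pci_cfg_read32() helper sketched earlier in the thread. */
uint32_t pci_cfg_read32(uint8_t bus, uint8_t dev, uint8_t func, uint8_t offset);

/* BARs start at offset 0x10 in a type 0 header (BAR0..BAR5). */
uint64_t pci_memory_bar_base(uint8_t bus, uint8_t dev, uint8_t func, int bar_index)
{
    uint8_t  offset = 0x10 + (uint8_t)(bar_index * 4);
    uint32_t low    = pci_cfg_read32(bus, dev, func, offset);

    if (low & 1)                          /* bit 0 set: I/O BAR, not memory */
        return 0;

    uint64_t base = low & ~0xFull;        /* strip type/prefetchable bits */

    if (((low >> 1) & 0x3) == 0x2) {      /* type 10b: 64-bit BAR, next slot is the high half */
        uint32_t high = pci_cfg_read32(bus, dev, func, offset + 4);
        base |= (uint64_t)high << 32;
    }
    return base;
}
Note that the high dword consumes the next BAR slot, so a 64-bit BAR at index 0 means BAR1 is not a separate BAR in its own right.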
Cheers,
Brendan
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.
Re: PCI question: IRQ lines for PCI devices / PCI BAR sizes
Hi,
Thanks Brendan for a very comprehensive answer. Just to clarify:
Brendan wrote:
In PCI configuration space there is an "Interrupt Line" field. This is a read-write field that does nothing - software can store any value it wants in this field and it makes no difference. By convention, (on 80x86) firmware uses this field to tell the OS which PIC input the device is connected to, so that the OS can just look at this field and doesn't need to know anything about how PCI slots are connected to PCI IRQs or how PCI IRQs are mapped/routed to PIC inputs.
So despite this field being writable it really does not matter what I put into it, yes? In other words, whatever I write into that field
will just get ignored and no change will happen?
Brendan wrote:
The value 255 means "unknown" or "no connection", and most likely means that the PCI device either doesn't use an IRQ or isn't connected to any PIC chip input. Values 16 to 254 are reserved and shouldn't happen.
OK, that makes sense. As I said, these are PCI Express devices. I have tested my PCI code on two machines; both have PCIe graphics cards, and on both machines these cards have IRQ 255. But then, as far as I know, PCIe devices should use message signalled interrupts and not legacy PCI 'pin interrupts', so that makes sense. The strange values 67 and 145 show up on a laptop, and are assigned to chipset components (67 to the Intel Firmware Hub Device, 145 to the Intel Sandy Bridge Memory Controller). Funny thing is these don't have any IRQs assigned when I check them in Windows device manager, so as you said these values are just some garbage.
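Incidentally, whether a device can use MSI at all shows up in its capability list (capability ID 0x05, or 0x11 for MSI-X), and checking for it is just a walk of that list. A minimal sketch, again using the hypothetical pci_cfg_* helpers from earlier in the thread:
Code: Select all
#include <stdint.h>
#include <stdbool.h>

/* Assumed: the config-space read helpers sketched earlier in the thread. */
uint32_t pci_cfg_read32(uint8_t bus, uint8_t dev, uint8_t func, uint8_t offset);
uint8_t  pci_cfg_read8(uint8_t bus, uint8_t dev, uint8_t func, uint8_t offset);

static uint16_t pci_cfg_read16(uint8_t bus, uint8_t dev, uint8_t func, uint8_t offset)
{
    return (uint16_t)(pci_cfg_read32(bus, dev, func, offset) >> ((offset & 2) * 8));
}

/* Walk the capability list looking for one capability ID (0x05 = MSI, 0x11 = MSI-X). */
bool pci_has_capability(uint8_t bus, uint8_t dev, uint8_t func, uint8_t wanted_id)
{
    uint16_t status = pci_cfg_read16(bus, dev, func, 0x06);
    if (!(status & (1 << 4)))             /* Status bit 4: capability list present */
        return false;

    uint8_t ptr = pci_cfg_read8(bus, dev, func, 0x34) & 0xFC;
    while (ptr != 0) {
        uint8_t id   = pci_cfg_read8(bus, dev, func, ptr);      /* capability ID */
        uint8_t next = pci_cfg_read8(bus, dev, func, ptr + 1);  /* next pointer */
        if (id == wanted_id)
            return true;
        ptr = next & 0xFC;
    }
    return false;
}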
Brendan wrote:
Yes - otherwise it'd be a 32-bit BAR.
(...)
The problem is backward compatibility (e.g. people trying to use devices on 32-bit systems or on 64-bit systems running 32-bit OSs). Because of this PCI devices tend to be limited to 512 MiB of space. Of course if you've got 8 devices that all want 512 MiB of space then a 32-bit system is still going to have trouble; and in that case I'd expect a BIOS to ignore half of the devices (e.g. only configure 2 of the 8 video cards) and let the OS worry about configuring the others for 64-bit.
I understand all that, but somehow that still does not answer my question. What I want to know is whether it is possible to map, let's say, 8GB using a single 64-bit BAR. BARs have their address (which uses two 32-bit BARs to make one 64-bit BAR) and size, and it's the size I'm asking about. Can a device map more than 4GB using one 64-bit BAR? Not map ABOVE 4GB in terms of address, but map MORE than 4GB in terms of size (i.e. occupy a range of 0x200000000 to 0x400000000)?
Cheers,
Theesem
Re: PCI question: IRQ lines for PCI devices / PCI BAR sizes
Hi,
theesemtheesem wrote:
Brendan wrote:
In PCI configuration space there is an "Interrupt Line" field. (...)
So despite this field being writable it really does not matter what I put into it, yes? In other words, whatever I write into that field will just get ignored and no change will happen?
Correct.
theesemtheesem wrote:
OK, that makes sense. As I said, these are PCI Express devices. (...) The strange values 67 and 145 show up on a laptop, and are assigned to chipset components (67 to the Intel Firmware Hub Device, 145 to the Intel Sandy Bridge Memory Controller). Funny thing is these don't have any IRQs assigned when I check them in Windows device manager, so as you said these values are just some garbage.
Even for bridges (which use a "type = 1" configuration space format and not the "type = 0" format that normal devices use) the "Interrupt Line" field (which still exists) shouldn't have dodgy values. I suspect that you'll find the "Interrupt Pin" field is 0x00 (there are no IRQs anyway) and that the firmware violates the PCI specs (which say that firmware should set "Interrupt Line" to 255 if there are no IRQs).
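Telling the two header layouts apart is just a one-byte read of the Header Type register, and the Interrupt Line/Pin fields sit at the same offsets (0x3C/0x3D) in both layouts. A small sketch, with kprintf() standing in for whatever logging your kernel happens to have:
Code: Select all
#include <stdint.h>

/* Assumed: pci_cfg_read8() from the earlier sketch, and some kernel log function. */
uint8_t pci_cfg_read8(uint8_t bus, uint8_t dev, uint8_t func, uint8_t offset);
void kprintf(const char *fmt, ...);

/* Header Type (offset 0x0E): bits 6..0 select the layout (0x00 = general device,
 * 0x01 = PCI-to-PCI bridge, 0x02 = CardBus bridge); bit 7 flags a multi-function
 * device. Interrupt Line (0x3C) and Interrupt Pin (0x3D) exist in type 0 and type 1. */
void pci_dump_irq_fields(uint8_t bus, uint8_t dev, uint8_t func)
{
    uint8_t header_type = pci_cfg_read8(bus, dev, func, 0x0E) & 0x7F;
    uint8_t int_pin     = pci_cfg_read8(bus, dev, func, 0x3D);
    uint8_t int_line    = pci_cfg_read8(bus, dev, func, 0x3C);

    kprintf("%02x:%02x.%x: header type %u, interrupt pin %u, interrupt line %u\n",
            bus, dev, func, header_type, int_pin, int_line);
}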
theesemtheesem wrote:
I understand all that, but somehow that still does not answer my question. What I want to know is whether it is possible to map, let's say, 8GB using a single 64-bit BAR. (...) Can a device map more than 4GB using one 64-bit BAR? Not map ABOVE 4GB in terms of address, but map MORE than 4GB in terms of size (i.e. occupy a range of 0x200000000 to 0x400000000)?
In theory, it's possible for a device to have a huge (e.g. 2 TiB) memory mapped IO area with a single 64-bit BAR; and also possible for a (64-bit) OS to configure and use that huge memory mapped IO area.
In practice, this would cause compatibility problems (it'd be impossible for 32-bit systems to use the device) so device manufacturers don't use huge memory mapped IO areas (even though they could if they wanted to, and even though they probably will eventually).
Cheers,
Brendan
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.
Re: PCI question: IRQ lines for PCI devices / PCI BAR sizes
Hi,
Once again thanks to Brendan for his insight.
Brendan wrote:
In theory, it's possible for a device to have a huge (e.g. 2 TiB) memory mapped IO area with a single 64-bit BAR; and also possible for a (64-bit) OS to configure and use that huge memory mapped IO area.
In practice, this would cause compatibility problems (it'd be impossible for 32-bit systems to use the device) so device manufacturers don't use huge memory mapped IO areas (even though they could if they wanted to, and even though they probably will eventually).
So, while theoretically possible, it is not done in practice and I should not concern myself with it. Good!
Thanks!
Cheers,
Andrew
Re: [SOLVED] PCI : IRQ lines for PCI devices / BAR sizes
What, no? You should totally concern yourself with it.
It's really simple. You write a helper function that tells you where the BAR is located and how big it is (and whether it is 32-bit or 64-bit, and so on). You then check whether it is fully mappable on your particular processor architecture (for instance, a 64-bit BAR might still sit in accessible memory on a 32-bit system). If the check fails (the area is above or crosses the 4 GiB boundary on a 32-bit system), then you print an error message to the system log and simply don't use the device. (Alternatively, it may be possible to remap it, though I don't have any experience with that.)
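Such a helper boils down to the usual BAR probe: read the base, write all-ones to find out which address bits are writable, compute the size from that mask, restore the original value, and then decide whether the range fits below 4 GiB. A rough sketch, assuming the pci_cfg_read32()/pci_cfg_write32() config-space helpers exist in your kernel (the write is the obvious counterpart of the read shown earlier):
Code: Select all
#include <stdint.h>
#include <stdbool.h>

/* Assumed config-space helpers, as sketched earlier in the thread. */
uint32_t pci_cfg_read32(uint8_t bus, uint8_t dev, uint8_t func, uint8_t offset);
void     pci_cfg_write32(uint8_t bus, uint8_t dev, uint8_t func, uint8_t offset, uint32_t value);

struct pci_bar {
    uint64_t base;
    uint64_t size;      /* 0 means "no BAR implemented here" */
    bool     is_io;
    bool     is_64bit;
};

/* Size a memory BAR the usual way: write all-ones, read back the writable mask,
 * restore the original value. In real code you would also clear the memory-decode
 * bit in the Command register (offset 0x04, bit 1) around this, so the device
 * doesn't respond while the BAR temporarily holds a bogus address. */
struct pci_bar pci_probe_bar(uint8_t bus, uint8_t dev, uint8_t func, int index)
{
    struct pci_bar bar = {0};
    uint8_t  off = 0x10 + (uint8_t)(index * 4);
    uint32_t lo  = pci_cfg_read32(bus, dev, func, off);

    bar.is_io = lo & 1;
    if (bar.is_io)
        return bar;                        /* I/O BARs are ignored in this sketch */

    bar.is_64bit = ((lo >> 1) & 0x3) == 0x2;
    uint32_t hi  = bar.is_64bit ? pci_cfg_read32(bus, dev, func, off + 4) : 0;
    bar.base     = ((uint64_t)hi << 32) | (lo & ~0xFull);

    pci_cfg_write32(bus, dev, func, off, 0xFFFFFFFF);
    uint32_t lo_mask = pci_cfg_read32(bus, dev, func, off);
    pci_cfg_write32(bus, dev, func, off, lo);              /* restore low dword */

    if (lo_mask == 0)                                      /* nothing writable: BAR not implemented */
        return bar;

    uint64_t mask = lo_mask & ~0xFull;
    if (bar.is_64bit) {
        pci_cfg_write32(bus, dev, func, off + 4, 0xFFFFFFFF);
        mask |= (uint64_t)pci_cfg_read32(bus, dev, func, off + 4) << 32;
        pci_cfg_write32(bus, dev, func, off + 4, hi);      /* restore high dword */
    } else {
        mask |= 0xFFFFFFFF00000000ull;                     /* no high half: treat it as fixed */
    }

    bar.size = ~mask + 1;       /* e.g. mask 0xFFFFFFFFFFF00000 -> size 0x100000 (1 MiB) */
    return bar;
}

/* A plain 32-bit kernel can only use the region if it sits entirely below 4 GiB. */
bool pci_bar_fits_below_4g(const struct pci_bar *bar)
{
    return bar->size != 0 && bar->base + bar->size <= 0x100000000ull;
}
The caller should skip the next BAR index whenever is_64bit comes back set, since that slot held the upper half of the base.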
Re: [SOLVED] PCI : IRQ lines for PCI devices / BAR sizes
Hi,
I mean 'not concern myself with the size of the BAR possibly being huge', not 'not concern myself with the device at all'.
Obviously if the device wants some memory located above 4GB and the OS is 32-bit, it should be remapped or not used at all.
I only meant not concerning myself with the possibility that a device will request 21738GB or 1TB or just 4.1GB of address space, as it is very unlikely. That was in terms of "should I make provisions in the driver for BAR sizes above 32 bits and is that common", not in terms of "I don't want to support devices that have the capability to be mapped above the 4GB barrier".
Cheers,
Andrew
Re: [SOLVED] PCI : IRQ lines for PCI devices / BAR sizes
Hi,
theesemtheesem wrote:
I mean 'not concern myself with the size of the BAR possibly being huge', not 'not concern myself with the device at all'. (...) That was in terms of "should I make provisions in the driver for BAR sizes above 32 bits and is that common", not in terms of "I don't want to support devices that have the capability to be mapped above the 4GB barrier".
Devices with large BARs might not exist today, but will exist eventually (possibly in the next 5 years, and possibly before your OS would've been usable).
If a 64-bit OS can't handle a device with a (e.g.) 512 GiB BAR, then that 64-bit OS is broken. There are no sane excuses.
For a 32-bit OS, you need to be able to map the device's area into the physical address space (with the BAR), and then map it into a virtual address space (with paging). If the OS supports PAE then the physical address space is a minimum of 64 GiB, and if it's a micro-kernel (each device driver has its own virtual address space) then supporting many devices each with 1 GiB BARs should be easy. For other cases (no PAE and only 32-bit physical addresses, and/or monolithic kernels where everything has to be squeezed into the same kernel space) you'll be lucky to support a single device with a 512 MiB BAR.
Cheers,
Brendan
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.