Hello All,
I have a few questions regarding PCI Host Bridges and the PCI Config Space and I was hoping someone here would be able to offer some help.
First, looking at these images from an Intel Xeon CPU datasheet (http://www.intel.co.uk/content/dam/www/ ... -vol-2.pdf):
It shows CPUBUSNO 0 and CPUBUSNO 1. CPUBUSNO 0 is always PCI bus 0 in the PCI config space, whereas CPUBUSNO 1 can change; in the example I was looking at it was PCI bus 255.
When I look at the ACPI DSDT for the system, it shows two PCI host bridges, one that accepts buses 0-254 and another that accepts only bus 255, which makes sense.
First question: would I be right in assuming that the two host bridges have not been mapped into the PCI config space? Device 0 on bus 0 is mapped to the DMI2 host bridge, and there is nothing on bus 255.
Second question, taken from the OSDev wiki page on PCI enumeration:
"The final step is to handle systems with multiple PCI host controllers correctly. Start by checking if the device at bus 0, device 0 is a multi-function device. If it's not a multi-function device, then there is only one PCI host controller and bus 0, device 0, function 0 will be the PCI host controller responsible for bus 0. If it is a multifunction device, then bus 0, device 0, function 0 will be the PCI host controller responsible for bus 0; bus 0, device 0, function 1 will be the PCI host controller responsible for bus 1, etc (up to the number of functions supported)."
Can you always be sure that B0:D0:F0 is going to be a PCI host bridge? In the Xeon example above it isn't; couldn't it be mapped to any device on bus 0?
Third question: in a system with multiple PCI host bridges, like the Xeon example, even if both bridges were mapped into the config space, would the second host bridge (the one that produces bus 255) really be located at B0:D0? Wouldn't it be at B255:D0, if anywhere? The OSDev page says that a second PCI host bridge would be at bus 0, device 0, function 1 and be responsible for bus 1. What if bus 1 were a bus behind a bridge on bus 0? That doesn't seem to make much sense, and you'd also be limited to eight PCI buses (one per function).
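For concreteness, here is roughly what the check described in that wiki quote looks like in code. This is only a sketch under a few assumptions: "mechanism #1" configuration access through I/O ports 0xCF8/0xCFC, platform-provided outl()/inl() port helpers, and a hypothetical check_bus() routine that enumerates one bus.

/* Sketch of the wiki's described host controller check.
 * Assumes config "mechanism #1" (ports 0xCF8/0xCFC); outl(), inl() and
 * check_bus() are hypothetical helpers provided elsewhere. */
#include <stdint.h>

#define PCI_CONFIG_ADDRESS 0xCF8
#define PCI_CONFIG_DATA    0xCFC

extern void outl(uint16_t port, uint32_t value);   /* platform port I/O  */
extern uint32_t inl(uint16_t port);
extern void check_bus(uint8_t bus);                /* enumerate one bus  */

uint32_t pci_config_read32(uint8_t bus, uint8_t dev, uint8_t func, uint8_t offset)
{
    uint32_t address = (1u << 31)                  /* enable bit              */
                     | ((uint32_t)bus  << 16)
                     | ((uint32_t)dev  << 11)
                     | ((uint32_t)func << 8)
                     | (offset & 0xFC);            /* dword-aligned register  */
    outl(PCI_CONFIG_ADDRESS, address);
    return inl(PCI_CONFIG_DATA);
}

void find_host_controllers(void)
{
    /* Header Type is byte 2 of the dword at config offset 0x0C. */
    uint8_t header_type = (pci_config_read32(0, 0, 0, 0x0C) >> 16) & 0xFF;

    if ((header_type & 0x80) == 0) {
        check_bus(0);                   /* single host controller, bus 0 */
        return;
    }
    for (uint8_t func = 0; func < 8; func++) {
        if ((pci_config_read32(0, 0, func, 0x00) & 0xFFFF) == 0xFFFF)
            break;                      /* no more host controllers      */
        check_bus(func);                /* function N handles bus N      */
    }
}

It is exactly this "function N is responsible for bus N" mapping that my second and third questions are about.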
If anyone can help clear things up for me it would be greatly appreciated.
Kind Regards,
Hallam
Re: PCI Host Bridges and the PCI Config Space
Hello,
Anyone?...
Regards,
Hallam
Re: PCI Host Bridges and the PCI Config Space
Unfortunately, I doubt many people on this site have ever developed any PCI bus code for a Xeon system, or any system with multiple PCI busses.
If you actually have a physical system with two PCI busses, and the second bus handles bus number 255, then the wiki page is obviously wrong. However, keep in mind that a Xeon system is not technically an x86 compatible system. This may explain the difference, although the wiki being wrong is also distinctly possible.
Edit: forget what I said about the x86 compatibility... I was thinking about the Itanium... :/
Project: OZone
Source: GitHub
Current Task: LIB/OBJ file support
"The more they overthink the plumbing, the easier it is to stop up the drain." - Montgomery Scott
Re: PCI Host Bridges and the PCI Config Space
Hi,
As far as I know, the OSDev wiki page's advice about systems with multiple host controllers was correct for older machines. However, in general the PCI specs don't say anything useful about PCI host controllers, and the wiki page only reflects "common usage on 80x86 at the time", not something enforced by any actual standard.
Now; for your specific case...
This is likely to be the single most complex arrangement that has ever existed (e.g. the sort of thing that someone with a good working knowledge of older chipsets could expect to study for 6 months before they fully understand everything this hardware actually does). There are 4 things I'd want to point out:
a) The first step in understanding any of this is to read Alice's Adventures in Wonderland. The hardware is literally an illusion pretending to be an illusion, that's emulating a third illusion for the sake of backward compatibility.
b) If you have a look at everything on "CPUBUSNO(1)", there are no normal PCI devices - it's all low level chip specific stuff. I would assume that Intel has deliberately "hidden" the entire bus from general purpose code (e.g. by not having a pretend PCI host bridge for it mapped into PCI configuration space so that general purpose code doesn't find it); so that only "chip specific code" (e.g. firmware) will find any of it (and it won't break/confuse old OSs like Windows 95). More specifically; I expect Intel have (mis)used PCI configuration space for this simply because they were too lazy to invent a whole new "QPI configuration space" mechanism; and none of the stuff involved with "CPUBUSNO(1)" has anything to do with PCI (as defined by PCISIG) in the first place.
c) Both "CPUBUSNO(0)" and "CPUBUSNO(1)", and all devices shown in those diagrams, are part of the physical CPU. For a system with (e.g.) a 4-socket motherboard and 4 physical CPUs you'd have four different "CPUBUSNO(0)" and four different "CPUBUSNO(1)"; and in that case you'd probably end up with the first chip's "CPUBUSNO(1)" pretending to be on PCI bus 252, the second chip's "CPUBUSNO(1)" pretending to be on PCI bus 253, etc. Of course I do mean "pretending to be" - there are no actual PCI buses involved.
d) ACPI and its DSDT are just a set of lies that the firmware tells Windows to make Windows happy. These lies need to be plausible, but beyond that they don't necessarily reflect the actual hardware and can't be trusted (unless your software happens to handle all hardware in the same way that Windows does, and therefore your software needs to be told the same lies).
Cheers,
Brendan
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.
Re: PCI Host Bridges and the PCI Config Space
The best way to handle PCI is to do a scan at boot time and remember all the devices installed. Don't bother with bridges, with doing configuration (the BIOS does that in typical systems), or with ACPI. You only need ACPI to figure out IRQ routing, unless you do detection, which can probably work too. If a PCI function has MSI support, use that instead of legacy IRQs.
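In rough C, that kind of boot-time scan could look something like the sketch below. It assumes a pci_config_read32() helper like the one sketched earlier in the thread, and simply probes every possible bus/device/function.

#include <stdint.h>

/* Assumed to exist elsewhere (see the earlier sketch in this thread). */
uint32_t pci_config_read32(uint8_t bus, uint8_t dev, uint8_t func, uint8_t offset);

struct pci_function {
    uint8_t  bus, dev, func;
    uint16_t vendor_id, device_id;
};

#define MAX_PCI_FUNCTIONS 256
static struct pci_function found[MAX_PCI_FUNCTIONS];
static int found_count;

void pci_scan_all(void)
{
    for (int bus = 0; bus < 256; bus++) {
        for (int dev = 0; dev < 32; dev++) {
            for (int func = 0; func < 8; func++) {
                uint32_t id = pci_config_read32(bus, dev, func, 0x00);
                if ((id & 0xFFFF) == 0xFFFF) {
                    if (func == 0)
                        break;          /* no device in this slot at all   */
                    continue;           /* this function absent, try next  */
                }
                if (found_count < MAX_PCI_FUNCTIONS) {
                    found[found_count].bus       = (uint8_t)bus;
                    found[found_count].dev       = (uint8_t)dev;
                    found[found_count].func      = (uint8_t)func;
                    found[found_count].vendor_id = (uint16_t)(id & 0xFFFF);
                    found[found_count].device_id = (uint16_t)(id >> 16);
                    found_count++;
                }
                /* If function 0 isn't multi-function, skip functions 1-7. */
                if (func == 0) {
                    uint8_t hdr = (pci_config_read32(bus, dev, 0, 0x0C) >> 16) & 0xFF;
                    if ((hdr & 0x80) == 0)
                        break;
                }
            }
        }
    }
}

After that, matching drivers against the found[] table is all you normally need; in a typical system the firmware has already assigned the bus numbers and BARs.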
Re: PCI Host Bridges and the PCI Config Space
Hello All, thanks for the responses.
Sorry for the late reply!...
SpyderTL, yes I think the wiki page may be wrong, "bus 0, device 0, function 1 will be the PCI host controller responsible for bus 1" doesn't seem to make much sense.
Brendan, I'll reply via PM.
rdos, thanks for the advice.
Regards,
Hallam