
Sandy Bridge logical view

Posted: Sat Dec 22, 2012 1:18 pm
by cianfa72
Hi,

I'm new to this forum...

I have a laptop based on a Core i5-2450M processor (Sandy Bridge) running Win 7. Now from WinDbg I see:

lkd> !pci
PCI Segment 0 Bus 0
00:0 8086:0104.09 Cmd[0006:.mb...] Sts[2090:c....] Intel Host Bridge SubID:144d:c606
01:0 8086:0101.09 Cmd[0007:imb...] Sts[0010:c....] Intel PCI-PCI Bridge 0->0x1-0x1
02:0 8086:0126.09 Cmd[0407:imb...] Sts[0090:c....] Intel VGA Compatible Controller SubID:144d:c606
16:0 8086:1c3a.04 Cmd[0006:.mb...] Sts[0010:c....] Intel Other Communications Controller SubID:144d:c606
1a:0 8086:1c2d.04 Cmd[0006:.mb...] Sts[0290:c....] Intel USB2 Controller SubID:144d:c606
1b:0 8086:1c20.04 Cmd[0006:.mb...] Sts[0010:c....] Intel Class:4:3:0 SubID:144d:c606
1c:0 8086:1c10.b4 Cmd[0006:.mb...] Sts[0010:c....] Intel PCI-PCI Bridge 0->0x2-0x2
1c:3 8086:1c16.b4 Cmd[0007:imb...] Sts[0010:c....] Intel PCI-PCI Bridge 0->0x3-0x3
1d:0 8086:1c26.04 Cmd[0006:.mb...] Sts[0290:c....] Intel USB2 Controller SubID:144d:c606
1f:0 8086:1c49.04 Cmd[0007:imb...] Sts[0210:c....] Intel ISA Bridge SubID:144d:c606
1f:2 8086:1c03.04 Cmd[0007:imb...] Sts[02b0:c6...] Intel Class:1:6:1 SubID:144d:c606
1f:3 8086:1c22.04 Cmd[0003:im....] Sts[0280:.....] Intel SMBus Controller SubID:144d:c606

So, from my understanding, it seems that logically (from a configuration software perspective) there exists a PCI bus 0 spanning the Sandy Bridge processor and the PCH (i.e. the whole system). For example, the integrated DRAM memory controller (IMC) inside the processor is PCI bus 0, device 0, function 0...

So do these newer architectures (e.g. Sandy Bridge) share the same logical view as the older ones (based on processor + MCH chipsets)?

Thanks

Re: Sandy Bridge logical view

Posted: Sun Dec 23, 2012 8:39 am
by cianfa72
Any ideas? Thanks.

Re: Sandy Bridge logical view

Posted: Mon Dec 24, 2012 1:29 am
by Brendan
Hi,
cianfa72 wrote:So do these newer architectures (e.g. Sandy Bridge) share the same logical view as the older ones (based on processor + MCH chipsets)?
For an extremely high level logical view (e.g. "a box that does stuff"), an ancient Z80 system is the same as a modern Sandy Bridge system. For an extremely low level logical view (e.g. a huge logic diagram showing NAND gates) a Nehalem system is very different to a Sandy Bridge system.

For differences caused by building the memory controller into the CPU (and nothing else), that may or may not affect an OS/kernel; the only significant difference is NUMA, which only exists when there are 2 or more physical chips (and therefore wouldn't exist in a "single multi-core chip" laptop).


Cheers,

Brendan

Re: Sandy Bridge logical view

Posted: Mon Dec 24, 2012 3:06 am
by cianfa72
Thanks for the answer.

I'll take this opportunity to ask the experts for some details...

IIUC, from a logical point of view the Host Bridge/DRAM controller (bus 0, device 0, function 0) acts as an initiator/target PCI interface on the "virtual PCI bus 0" side (initiator, or bus master, when it has to forward CPU accesses to PCI memory-mapped devices; target, or slave, when PCI bus-mastering devices initiate DMA to system RAM).

Now I would expect to find BAR entries used to decode DMA accesses to main RAM (i.e. when the Host Bridge/DRAM controller acts as a bus 0 PCI target)... but I do not see them...

lkd> !pci 100 0 0 0

PCI Configuration Space (Segment:0000 Bus:00 Device:00 Function:00)
Common Header:
00: VendorID 8086 Intel Corporation
02: DeviceID 0104
04: Command 0006 MemSpaceEn BusInitiate
06: Status 2090 CapList FB2BCapable InitiatorAbort
08: RevisionID 09
09: ProgIF 00
0a: SubClass 00 Host Bridge
0b: BaseClass 06 Bridge Device
0c: CacheLineSize 0000
0d: LatencyTimer 00
0e: HeaderType 00
0f: BIST 00
10: BAR0 00000000
14: BAR1 00000000
18: BAR2 00000000
1c: BAR3 00000000
20: BAR4 00000000
24: BAR5 00000000
28: CBCISPtr 00000000
2c: SubSysVenID 144d
2e: SubSysID c606
30: ROMBAR 00000000
34: CapPtr e0
3c: IntLine 00
3d: IntPin 00
3e: MinGnt 00
3f: MaxLat 00
Device Private:
40: fed19001 00000000 fed10001 00000000
50: 00000211 00000019 af90000f ab000001
60: f8000005 00000000 fed18001 00000000
70: ff800000 00000000 ff800c00 0000007f
80: 11111130 00111111 0000001a 00000000
90: 00000001 00000001 4fd00001 00000001
a0: 00000001 00000001 4fe00001 00000001
b0: aba00001 ab800001 ab000001 afa00001
c0: 00000000 00000000 00000000 00000000
d0: 00000000 00000000 00000000 00000000
e0: 010c0009 e280619e 14000090 00000000
f0: 01000000 00000000 00060fb8 00000000
Capabilities:
e0: CapID 09 Vendor Specific Capability
e1: NextPtr 00


What am I missing? Thanks.

Re: Sandy Bridge logical view

Posted: Mon Dec 24, 2012 5:00 am
by Brendan
Hi,
cianfa72 wrote:IIUC, from a logical point of view the Host Bridge/DRAM controller (bus 0, device 0, function 0) acts as an initiator/target PCI interface on the "virtual PCI bus 0" side (initiator, or bus master, when it has to forward CPU accesses to PCI memory-mapped devices; target, or slave, when PCI bus-mastering devices initiate DMA to system RAM).

Now I would expect to find BAR entries used to decode DMA accesses to main RAM (i.e. when the Host Bridge/DRAM controller acts as a bus 0 PCI target)... but I do not see them...
Think of it as 3 separate things. The first thing is the piece that determines whether a memory access should go to DRAM or to the PCI bus. The second thing is the memory controller that handles reads/writes to DRAM. The third thing is the PCI host controller.

Normally, the first thing (the piece that determines if a memory access should go to DRAM or to PCI, which is not a PCI device in any way) pretends to be a PCI device just so that there's a sane way to configure it (e.g. using a "pretend device's PCI configuration space"). The second thing has almost no configuration and whatever configuration it does have tends to be slapped into the same "pretend device's PCI configuration space" as the first thing. The third thing (the actual PCI host controller) has its own PCI configuration space like you'd expect, but this is separate to the "pretend device's PCI configuration space".

Of course, because the memory controller is not a PCI device at all (and is just pretending so that it can have PCI configuration space), its configuration uses the part of the PCI configuration space that you've labelled "Device Private", and most of the stuff in the "Common Header" area only exists for compatibility and isn't actually used (e.g. things like the PCI Command Register and PCI Status Register are probably documented as "everything hardwired to disabled", none of the BARs would be used for anything, etc).

If you download the datasheet/s for the specific CPU, you'll find all the information about the PCI configuration space for these pieces. This includes all the important stuff in that "Device Private" area.
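
If it helps, here's a minimal sketch of how software could read that "pretend device's" configuration space itself, using the standard PCI configuration mechanism #1 (I/O ports 0xCF8/0xCFC). This assumes ring 0 (or equivalent I/O privileges) and GCC-style inline assembly, so treat it as an illustration rather than something you'd run as-is under Windows:

Code: Select all

#include <stdint.h>

/* Port I/O helpers (x86, GCC inline asm) - only usable with I/O privileges. */
static inline void outl(uint16_t port, uint32_t value)
{
    __asm__ volatile ("outl %0, %1" : : "a"(value), "Nd"(port));
}

static inline uint32_t inl(uint16_t port)
{
    uint32_t value;
    __asm__ volatile ("inl %1, %0" : "=a"(value) : "Nd"(port));
    return value;
}

/* Read a 32-bit value from PCI configuration space via mechanism #1 (0xCF8/0xCFC). */
static uint32_t pci_config_read32(uint8_t bus, uint8_t dev, uint8_t func, uint8_t offset)
{
    uint32_t address = 0x80000000u
                     | ((uint32_t)bus  << 16)
                     | ((uint32_t)dev  << 11)
                     | ((uint32_t)func << 8)
                     | (offset & 0xFCu);

    outl(0xCF8, address);
    return inl(0xCFC);
}

/* The "pretend device" for the memory controller is bus 0, device 0, function 0;
   its interesting configuration lives in the "Device Private" area (offset 0x40 and up). */
uint32_t read_host_bridge_private(uint8_t offset)
{
    return pci_config_read32(0, 0, 0, offset);
}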


Cheers,

Brendan

Re: Sandy Bridge logical view

Posted: Mon Dec 24, 2012 7:57 am
by cianfa72
Quoting from the datasheet http://www.intel.com/content/dam/doc/da ... asheet.pdf:

"The Host interface positively decodes an address towards DRAM if the incoming address is less than the value programmed in TOLUD register"

In my 4 GB laptop the available system memory (DRAM) is 0xab000000 = 2736 MB (after subtracting the stolen graphics memory + TSEG from the TOLUD value), so host interface accesses in the range 0x0 - 0xab000000 are positively decoded towards DRAM...
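
Just to double-check my arithmetic, here is a small sketch of that decode and of the MB conversion (the 0xab000000 limit is the value I derived above, so treat it as my assumption rather than something read straight from a single register):

Code: Select all

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* Limit I derived above: TOLUD minus stolen graphics memory and TSEG (assumption). */
    const uint64_t dram_limit = 0xab000000ULL;

    printf("DRAM below the limit: %llu MB\n",
           (unsigned long long)(dram_limit / (1024 * 1024)));   /* prints 2736 MB */

    /* The host interface would positively decode an address towards DRAM like this: */
    uint64_t address = 0x12345678ULL;
    if (address < dram_limit)
        printf("0x%llx -> DRAM\n", (unsigned long long)address);
    else
        printf("0x%llx -> PCI / DMI\n", (unsigned long long)address);

    return 0;
}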

Now, coming back to your post, IIUC the third "thing" (the PCI host controller) is bus 0, device 1, function 0:

lkd> !pci 100 0 1 0

PCI Configuration Space (Segment:0000 Bus:00 Device:01 Function:00)
Common Header:
00: VendorID 8086 Intel Corporation
02: DeviceID 0101
04: Command 0007 IOSpaceEn MemSpaceEn BusInitiate
06: Status 0010 CapList
08: RevisionID 09
09: ProgIF 00
0a: SubClass 04 PCI-PCI Bridge
0b: BaseClass 06 Bridge Device
0c: CacheLineSize 0010 BurstDisabled
0d: LatencyTimer 00
0e: HeaderType 81
0f: BIST 00
10: BAR0 00000000
14: BAR1 00000000
18: PriBusNum 00
19: SecBusNum 01
1a: SubBusNum 01
1b: SecLatencyTmr 00
1c: IOBase 30
1d: IOLimit 30
1e: SecStatus 0000
20: MemBase e000
22: MemLimit e0f0
24: PrefMemBase b001
26: PrefMemLimit c1f1
28: PrefBaseHi 00000000
2c: PrefLimitHi 00000000
30: IOBaseHi 0000
32: IOLimitHi 0000
34: CapPtr 88
38: ROMBAR 00000000
3c: IntLine 10
3d: IntPin 01
3e: BridgeCtrl 0000
Device Private:
40: 00000000 00000000 00000000 00000000
50: 00000000 00000000 00000000 00000000
60: 00000000 00000000 00000000 00000000
70: 00000000 00000000 00000000 0a000000
80: c8039001 00000008 0000800d c606144d
90: 0000a005 00000000 00000000 00000000
a0: 01420010 00008000 00000000 02212d02
b0: 11010052 000c2580 00480000 00000000
c0: 00000000 00000800 00000000 00000000
d0: 00000002 00000000 00000000 00000000
e0: 00000000 00000000 00000000 00000000
f0: 00000000 00010000 00000000 00100000
Capabilities:
88: CapID 0d Subsystem ID Capability
89: NextPtr 80
8c: SubVendorID 144d
8e: SubSystemID c606
.......

Now in my mind a piece is missing: how are accesses coming from the DMI interface (initiated by a DMA bus-mastering device) managed by the DRAM controller? Do specific registers exist for that?

Thx

Re: Sandy Bridge logical view

Posted: Mon Dec 24, 2012 9:10 am
by Brendan
Hi,
cianfa72 wrote:Now in my mind a piece is missing: how are accesses coming from the DMI interface (initiated by a DMA bus-mastering device) managed by the DRAM controller? Do specific registers exist for that?
Let's have a diagram!

Code: Select all

     ___________
    |           |
    | CPU cores |
    |___________|
         |
     ____|______________      _________________
    |                   |    |                 |
    | Memory Controller |----| DRAM Controller |--- DRAM chip/s
    |___________________|    |_________________|
           |
     ______|______________
    |                     |
    | PCI Host Controller |
    |_____________________|
           |
     ______|_____
    |            |
    | PCI Bus/es |--- PCI devices
    |____________|
There is only one way for accesses to get from PCI devices to DRAM chips.

Also note that the DRAM isn't mapped to the physical address space directly, and the memory controller can do some mangling (e.g. mapping the RAM that would've been just below 4 GiB to somewhere above 4 GiB). This "physical address to RAM chip address" conversion has to be done, so PCI can't bypass the memory controller and talk directly to the DRAM controller.

Note: for more complex systems (NUMA), the memory controller has a third option - the access can be sent to DRAM, or sent to the PCI host controller, or sent to a different CPU/chip's memory controller; and for some CPUs/chipsets (I'm not sure about yours) there's an IOMMU layer inserted between the memory controller and PCI host controller. This means that "device addresses" from PCI devices get mangled by the IOMMU into "physical addresses", which might be forwarded from one memory controller to another, and then mangled a bit more by the memory controller into "RAM chip addresses".
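
If it helps, here's a very rough sketch of the whole upstream path; the IOMMU lookup and the remapping of RAM above 4 GiB are purely conceptual here (all the values and names below are placeholders I made up, not the real registers), so check the datasheets before relying on any of it:

Code: Select all

#include <stdint.h>

/* Step 1 (only if an IOMMU is present and enabled): translate the "device address"
   the PCI device used into a physical address. In reality this is a page-table walk
   keyed by the requesting device; here it's just an identity mapping as a placeholder. */
static uint64_t iommu_translate(uint64_t device_address)
{
    return device_address;
}

/* Step 2: the memory controller converts a physical address into a "RAM chip address",
   e.g. un-doing the remapping of RAM that was moved from just below 4 GiB to above 4 GiB. */
static uint64_t physical_to_dram_address(uint64_t physical_address)
{
    const uint64_t remap_base  = 0x100000000ULL;  /* placeholder */
    const uint64_t remap_limit = 0x14FFFFFFFULL;  /* placeholder */
    const uint64_t tolud       = 0xAB000000ULL;   /* placeholder (top of low usable DRAM) */

    if (physical_address >= remap_base && physical_address <= remap_limit)
        return tolud + (physical_address - remap_base);  /* remapped RAM lives here in the chips */
    return physical_address;
}

/* A DMA write from a PCI device ends up going through both steps before hitting DRAM. */
uint64_t dma_target_in_dram(uint64_t device_address)
{
    return physical_to_dram_address(iommu_translate(device_address));
}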


Cheers,

Brendan

Re: Sandy Bridge logical view

Posted: Mon Dec 24, 2012 10:11 am
by cianfa72
But in the case of the IOMMU (which, as you said, is inserted between the PCI host controller and the memory controller), is the mangling performed only in the upstream direction (from a bus-mastering PCI device initiating DMA accesses towards system RAM)?

Re: Sandy Bridge logical view

Posted: Wed Jan 02, 2013 3:26 pm
by cianfa72
Is that right?

Thanks for your help...

Re: Sandy Bridge logical view

Posted: Wed Jan 02, 2013 3:40 pm
by Brendan
Hi,
cianfa72 wrote:But in the case of the IOMMU (which, as you said, is inserted between the PCI host controller and the memory controller), is the mangling performed only in the upstream direction (from a bus-mastering PCI device initiating DMA accesses towards system RAM)?
Yes.


Cheers,

Brendan

Re: Sandy Bridge logical view

Posted: Thu Jan 03, 2013 2:48 pm
by cianfa72
Thanks Brendan... just to complete the "big picture" I'd like to ask about two points:

First: reading "PCI Express Architecture" I learned about the Root Complex details: there is a Host/PCI bridge and one or more "virtual PCI-PCI" bridges (one for each PCI Express root port in the Root Complex).

From the Intel specs for my chipset (see the link above), the "Uncore" includes a Device 0, a multifunction Device 1 and a Device 2 (IGD: Integrated Graphics Device). Now, coming back to your diagram, I believe that:

Device 0 implements the first two "things" (memory controller + DRAM controller) + the Host/PCI bridge (using a "pretend device's PCI configuration space");
Device 1's functions implement the "virtual PCI-PCI" bridges (are these the PCI host controllers you referred to in this thread?)

Second: from the specs it seems "positive decoding" is involved in decoding memory ranges to either DRAM or PCI (I thought all ranges decoded to PCI or DMI should only be "subtractively decoded").

Does it make sense? :shock:

Re: Sandy Bridge logical view

Posted: Thu Jan 03, 2013 7:16 pm
by Brendan
Hi,
cianfa72 wrote:From the Intel specs for my chipset (see the link above), the "Uncore" includes a Device 0, a multifunction Device 1 and a Device 2 (IGD: Integrated Graphics Device). Now, coming back to your diagram, I believe that:

Device 0 implements the first two "things" (memory controller + DRAM controller) + the Host/PCI bridge (using a "pretend device's PCI configuration space");
Device 1's functions implement the "virtual PCI-PCI" bridges (are these the PCI host controllers you referred to in this thread?)
To me it looks like:
  • Bus 0, device 0, function 0 = Memory controller and DRAM controller
  • Bus 0, device 1, functions 0 to 2 = PCI host bridges (probably connected to PCI slots on motherboard)
  • Bus 0, device 6, function 0 = PCI host bridge (probably used internally for onboard video only)
  • Bus X, device 2, function 0 = onboard video (I'd assume bus is determined by corresponding host bridge "secondary bus" register)
cianfa72 wrote:Second: from the specs it seems "positive decoding" is involved in decoding memory ranges to either DRAM or PCI (I thought all ranges decoded to PCI or DMI should only be "subtractively decoded").
Subtractive decoding is "everything that wasn't positively decoded". As an example, consider this:

Code: Select all

if( (address > start1) && (address < end1) ) {
    // Positively decoded, send to "somewhere1"
} else if( (address > start2) && (address < end2) ) {
    // Positively decoded, send to "somewhere2"
} else {
    // Subtractively decoded, send to "somewhere3"
}
Basically, the Memory controller positively decodes accesses to send to the DRAM controller; then the PCI host controllers positively decode accesses to send to the corresponding PCI buses; then whatever is left over is the "subtractively decoded" stuff sent to DMI.
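
Applied to your specific chipset, it'd look roughly like this (the DRAM limit and the bridge window come from the dumps you've posted, assuming I'm reading the bridge's MemBase/MemLimit registers correctly, and the ordering is simplified; treat it as a sketch rather than a description of the actual silicon):

Code: Select all

#include <stdint.h>

enum target { TARGET_DRAM, TARGET_PCIE_PEG, TARGET_DMI };

/* Values below come from the dumps earlier in this thread, but are simplifications. */
static enum target decode(uint64_t address)
{
    const uint64_t dram_limit    = 0xAB000000ULL;  /* TOLUD-derived limit from your earlier post */
    const uint64_t peg_mem_base  = 0xE0000000ULL;  /* from the 0/1/0 bridge MemBase  (e000) */
    const uint64_t peg_mem_limit = 0xE0FFFFFFULL;  /* from the 0/1/0 bridge MemLimit (e0f0) */

    if (address < dram_limit)
        return TARGET_DRAM;       /* positively decoded by the memory controller */
    if (address >= peg_mem_base && address <= peg_mem_limit)
        return TARGET_PCIE_PEG;   /* positively decoded by that PCI host controller */
    return TARGET_DMI;            /* everything left over, subtractively decoded */
}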


Cheers,

Brendan

Re: Sandy Bridge logical view

Posted: Fri Jan 04, 2013 12:18 pm
by cianfa72
Brendan wrote: To me it looks like:
  • Bus 0, device 0, function 0 = Memory controller and DRAM controller
  • Bus 0, device 1, functions 0 to 2 = PCI host bridges (probably connected to PCI slots on motherboard)
  • Bus 0, device 6, function 0 = PCI host bridge (probably used internally for onboard video only)
  • Bus X, device 2, function 0 = onboard video (I'd assume bus is determined by corresponding host bridge "secondary bus" register)
Quoting the Intel specs:

"2.4 Processor Register Introduction
The processor contains two sets of software accessible registers, accessed using the Host processor I/O address space — Control registers and internal configuration registers

• Control registers are I/O mapped into the processor I/O space, which control access to PCI and PCI Express configuration space (see Section 2.4.1)

• Internal configuration registers residing within the processor are partitioned into three logical device register sets (“logical” since they reside within a single physical device). The first register set is dedicated to Host Bridge functionality (that is, DRAM configuration, other chipset operating parameters and optional features). The second register block is dedicated to Host-PCI Express Bridge functions (controls PCI Express interface configurations and operating parameters). The third register block is for the internal graphics functions"

So the (memory controller + DRAM controller) device should be the "Host Bridge" device in Intel's description...

The register blocks dedicated to Host-PCI Express Bridge functions should be the "virtual" PCI-PCI bridges (see for example the following description taken from the same Intel manual):

"2.10.2 DID6—Device Identification Register
This register, combined with the Vendor Identification register, uniquely identifies any PCI device.
..........
B/D/F/Type: 0/6/0/PCI
Address Offset: 2–3h
Reset Value: 010Dh
Access: RO-FW
Size: 16 bits
Bit 15:0 (Attr: RO-FW, Reset Value: 010Dh, RST/PWR: Uncore) - Device Identification Number MSB (DID_MSB):
Identifier assigned to the processor root port (virtual PCI-to-PCI bridge, PCI Express Graphics port)"

As a final point, as you can see, the onboard video (IGD, device 2 function 0) resides on bus 0 (it is not attached behind a PCI bridge on a secondary bus):

lkd> !pci 2 ff
PCI Segment 0 Bus 0
00:0 8086:0104.09 Cmd[0006:.mb...] Sts[2090:c....] Intel Host Bridge SubID:144d:c606
01:0 8086:0101.09 Cmd[0007:imb...] Sts[0010:c....] Intel PCI-PCI Bridge 0->0x1-0x1
02:0 8086:0126.09 Cmd[0407:imb...] Sts[0090:c....] Intel VGA Compatible Controller SubID:144d:c606
16:0 8086:1c3a.04 Cmd[0006:.mb...] Sts[0010:c....] Intel Other Communications Controller SubID:144d:c606
1a:0 8086:1c2d.04 Cmd[0006:.mb...] Sts[0290:c....] Intel USB2 Controller SubID:144d:c606
1b:0 8086:1c20.04 Cmd[0006:.mb...] Sts[0010:c....] Intel Class:4:3:0 SubID:144d:c606
1c:0 8086:1c10.b4 Cmd[0006:.mb...] Sts[0010:c....] Intel PCI-PCI Bridge 0->0x2-0x2
1c:3 8086:1c16.b4 Cmd[0007:imb...] Sts[0010:c....] Intel PCI-PCI Bridge 0->0x3-0x3
1d:0 8086:1c26.04 Cmd[0006:.mb...] Sts[0290:c....] Intel USB2 Controller SubID:144d:c606
1f:0 8086:1c49.04 Cmd[0007:imb...] Sts[0210:c....] Intel ISA Bridge SubID:144d:c606
1f:2 8086:1c03.04 Cmd[0007:imb...] Sts[02b0:c6...] Intel Class:1:6:1 SubID:144d:c606
1f:3 8086:1c22.04 Cmd[0003:im....] Sts[0280:.....] Intel SMBus Controller SubID:144d:c606
PCI Segment 0 Bus 0x2
00:0 8086:08ae.00 Cmd[0006:.mb...] Sts[0010:c....] Intel Other Network Controller SubID:8086:1005
PCI Segment 0 Bus 0x3
00:0 10ec:8168.06 Cmd[0407:imb...] Sts[0010:c....] Realtek Ethernet Controller SubID:144d:c606

Do you agree with this description ? Thanks for your patience !

Re: Sandy Bridge logical view

Posted: Fri Jan 04, 2013 3:45 pm
by Brendan
Hi,
cianfa72 wrote:
Brendan wrote: To me it looks like:
  • Bus 0, device 0, function 0 = Memory controller and DRAM controller
  • Bus 0, device 1, functions 0 to 2 = PCI host bridges (probably connected to PCI slots on motherboard)
  • Bus 0, device 6, function 0 = PCI host bridge (probably used internally for onboard video only)
  • Bus X, device 2, function 0 = onboard video (I'd assume bus is determined by corresponding host bridge "secondary bus" register)
Quoting the Intel specs:

"2.4 Processor Register Introduction
The processor contains two sets of software accessible registers, accessed using the Host processor I/O address space — Control registers and internal configuration registers

• Control registers are I/O mapped into the processor I/O space, which control access to PCI and PCI Express configuration space (see Section 2.4.1)

• Internal configuration registers residing within the processor are partitioned into three logical device register sets (“logical” since they reside within a single physical device). The first register set is dedicated to Host Bridge functionality (that is, DRAM configuration, other chipset operating parameters and optional features). The second register block is dedicated to Host-PCI Express Bridge functions (controls PCI Express interface configurations and operating parameters). The third register block is for the internal graphics functions"

So the (memory controller + DRAM controller) device should be the "Host Bridge" device in Intel's description...
Intel's terminology is bad/confusing (too many different things called "host bridge"), and isn't the terminology I've been using (and isn't the terminology I'd recommend anyone else uses).
cianfa72 wrote:As a final point, as you can see, the onboard video (IGD, device 2 function 0) resides on bus 0 (it is not attached behind a PCI bridge on a secondary bus).
I can see "power on" default values, but I can't see what the firmware would set any of these registers to during boot. For example, for all of the "Host-PCI Express Bridges" the secondary bus number is 0 (until the firmware sets it to something sane). I mostly just assumed that one of those "Host-PCI Express Bridges" would be used for the inbuilt graphics, and that the inbuilt graphics was "bus 0" because that's the power-on default (and not because it's sane).

Of course I don't know for sure and I'm just guessing - I don't have the computer with this chip to play with.

Also note that I have been ignoring your "!pci" dumps because I don't know what this utility is or if it's reliable, and because the information it displays doesn't correspond to things documented in the datasheet. For example, "bus 0, device 1, function 1" isn't mentioned in any of them. This tells me that either the "!pci" utility is dodgy or the hardware is not what the datasheet describes.


Cheers,

Brendan

Re: Sandy Bridge logical view

Posted: Sat Jan 05, 2013 8:03 am
by cianfa72
Brendan wrote:I can see "power on" default values, but I can't see what the firmware would set any of these registers to during boot. For example, for all of the "Host-PCI Express Bridges" the secondary bus number is 0 (until the firmware sets it to something sane). I mostly just assumed that one of those "Host-PCI Express Bridges" would be used for the inbuilt graphics, and that the inbuilt graphics was "bus 0" because that's the power-on default (and not because it's sane).
"power-on" registers's default values, IIUC, are the values that we can read in the "Reset value" column of Intel manual....right ?
AFAIK the BIOS at startup enumerates the overall system, setting up the chipset registers' values accordingly, and then when the OS boots (Win 7 in this case) I suppose it will not modify them at all...
Brendan wrote:Also note that I have been ignoring your "!pci" dumps because I don't know what this utility is or if it's reliable, and because the information it displays doesn't correspond to things documented in the datasheet. For example, "bus 0, device 1, function 1" isn't mentioned in any of them. This tells me that either the "!pci" utility is dodgy or the hardware is not what the datasheet describes.
!pci is a debugger extension command available in Windows local kernel debugging (lkd). I don't know the details, but I suppose it walks the PCI hierarchy showing each device's configuration space...
I believe that some bus/device/functions (e.g. bus 0, device 1, function 1) are not mentioned because of the DEVEN configuration register setting (see the following quote from the Intel manual):

"2.5.13 DEVEN—Device Enable Register
This register allows for enabling/disabling of PCI devices and functions that are within the processor package. In the following table the bit definitions describe the behavior of all combinations of transactions to devices controlled by this register. All the bits in this register are Intel TXT Lockable.

B/D/F/Type: 0/0/0/PCI
Address Offset: 54–57h
Reset Value: 0000209Fh
Access: RW-L, RO, RW
Size: 32 bits
BIOS Optimal Default 000000h
Bit 31:15 (RO, reset 0h): Reserved
Bit 14 (RO, reset 0h): Reserved
Bit 13 (RW-L, reset 1b, Uncore): PEG60 Enable (D6F0EN)
  0 = Disabled. Bus 0 Device 6 Function 0 is disabled and hidden.
  1 = Enabled. Bus 0 Device 6 Function 0 is enabled and visible.
  This bit will be set to 0b and remain 0b if PEG60 capability is disabled.
Bit 12:8 (RO, reset 0h): Reserved
Bit 7 (RO, reset 0h): Reserved
Bit 6:5 (RO, reset 0h): Reserved
Bit 4 (RW-L, reset 1b, Uncore): Internal Graphics Engine (D2EN)
  0 = Disabled. Bus 0 Device 2 is disabled and hidden.
  1 = Enabled. Bus 0 Device 2 is enabled and visible.
  This bit will be set to 0b and remain 0b if Device 2 capability is disabled.
Bit 3 (RW-L, reset 1b, Uncore): PEG10 Enable (D1F0EN)
  0 = Disabled. Bus 0 Device 1 Function 0 is disabled and hidden.
  1 = Enabled. Bus 0 Device 1 Function 0 is enabled and visible.
  This bit will be set to 0b and remain 0b if PEG10 capability is disabled.
Bit 2 (RW-L, reset 1b, Uncore): PEG11 Enable (D1F1EN)
  0 = Disabled. Bus 0 Device 1 Function 1 is disabled and hidden.
  1 = Enabled. Bus 0 Device 1 Function 1 is enabled and visible.
  This bit will be set to 0b and remain 0b if PEG11 is disabled by strap (PEG0CFGSEL).
Bit 1 (RW-L, reset 1b, Uncore): PEG12 Enable (D1F2EN)
  0 = Disabled. Bus 0 Device 1 Function 2 is disabled and hidden.
  1 = Enabled. Bus 0 Device 1 Function 2 is enabled and visible.
  This bit will be set to 0b and remain 0b if PEG12 is disabled by strap (PEG0CFGSEL).
Bit 0 (RO, reset 1b, Uncore): Host Bridge (D0EN)
  Bus 0 Device 0 Function 0 may not be disabled and is therefore hardwired to 1."

Here is the configuration space for bus 0, device 0, function 0; the offset 54–57h dword is the second value on the "50:" line below:

Code: Select all

lkd> !pci 100 0 0 0
PCI Configuration Space (Segment:0000 Bus:00 Device:00 Function:00)
Common Header:
    00: VendorID       8086 Intel Corporation
    02: DeviceID       0104
    04: Command        0006 MemSpaceEn BusInitiate 
    06: Status         2090 CapList FB2BCapable InitiatorAbort 
    08: RevisionID     09
    09: ProgIF         00
    0a: SubClass       00 Host Bridge
    0b: BaseClass      06 Bridge Device
    0c: CacheLineSize  0000
    0d: LatencyTimer   00
    0e: HeaderType     00
    0f: BIST           00
   ............
Device Private:
    40: fed19001 00000000 fed10001 00000000
    50: 00000211 00000019 af90000f ab000001
    .......
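
Decoding the 0x00000019 dword at offset 0x54 with the bit positions from the DEVEN quote above seems to confirm this (the bit assignments are taken from the quote, and this is only my interpretation):

Code: Select all

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint32_t deven = 0x00000019; /* dword at offset 0x54 in the dump above */

    /* Bit positions taken from the DEVEN description quoted above. */
    printf("D0EN   (0/0/0 host bridge) : %u\n", (deven >> 0)  & 1);  /* 1 = visible */
    printf("D1F0EN (0/1/0 PEG10)       : %u\n", (deven >> 3)  & 1);  /* 1 = visible */
    printf("D1F1EN (0/1/1 PEG11)       : %u\n", (deven >> 2)  & 1);  /* 0 = hidden  */
    printf("D1F2EN (0/1/2 PEG12)       : %u\n", (deven >> 1)  & 1);  /* 0 = hidden  */
    printf("D2EN   (0/2/0 IGD)         : %u\n", (deven >> 4)  & 1);  /* 1 = visible */
    printf("D6F0EN (0/6/0 PEG60)       : %u\n", (deven >> 13) & 1);  /* 0 = hidden  */

    return 0;
}

If that's right, PEG11 (0/1/1), PEG12 (0/1/2) and PEG60 (0/6/0) are disabled and hidden, which would explain why only 00:0, 01:0 and 02:0 from the uncore show up in the !pci listing.
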
By the way, to me it is a bit strange that the NVIDIA video card (shown by Windows Device Manager as bus 1, device 0, function 0) is not shown at all in the !pci hierarchy (it should be behind the 0/1/0 Host-PCI Express bridge, as shown by Device Manager).
[Attachment: NVIDIA video card as shown by Windows Device Manager]
Any ideas? Thanks!