Hi, I guess I should introduce myself first, this being my first actual post on the forums. I've been a programmer for about ten years now, but I've never really been satisfied with the way things are done. Recently I have been working on designing an OS which is significantly different from any other OS out there. I'd rather not go into details at this point, but I think it will represent a significant step forward in how we think about the role of an OS, and about program design in general.
Recently I've been looking into how I should go about implementing support for PCI devices. I found the most informative resources other than the official PCI specs to be this page and this page. However, a few questions come to mind.
First, is it safe to assume that the information on PCI BIOS functions on both pages is both current, and also reasonably compatible with legacy versions of the PCI standards? By the way, the layout of x86 interrupts and IO ports is quite the mess, isn't it?
Second, and more importantly, is there a way to read information from the PCI BIOS via memory mapped IO instead of using IO ports? It's not really a necessity, but it would be very convenient from an implementation perspective. For security reasons, it would be nice to keep PCI functionality modularized in a virtual memory space which only provides access to the PCI subsystem and nothing else. I'm aware you can accomplish the same security by controlling access to I/O ports outside of Ring 0, but making everything memory mapped would make it a lot more straightforward to actually implement device functionality. At least, in my current design it would.
Third, if memory mapped IO to the PCI subsystem is not possible, how exactly do you control access to IO ports? I haven't been able to dig up much information on this subject.
Memory Mapped PCI Subsystem Access
Re: Memory Mapped PCI Subsystem Access
Hi,
Xzyx987X wrote: First, is it safe to assume that the information on PCI BIOS functions on both pages is both current, and also reasonably compatible with legacy versions of the PCI standards?

It's safe to assume that using the PCI BIOS is both slow and annoying, especially if the OS is running in protected mode (or long mode). It's better to write some functions in your kernel that can be used to access PCI configuration space directly (e.g. with I/O ports), and only use the "Installation Check" BIOS function once during boot (to determine whether you need to use "PCI configuration space access mechanism #1" or "PCI configuration space access mechanism #2").

In this case it's easy to have fall-backs - e.g. some sort of kernel option that the end-user can use to tell the OS how to access PCI configuration space, used when the PCI BIOS isn't present or doesn't work.

Xzyx987X wrote: Second, and more importantly, is there a way to read information from the PCI BIOS via memory mapped IO instead of using IO ports?

Yes, but it's not supported on older hardware - it's part of "PCI Express", and it also increases the size of PCI configuration space (e.g. 4 KiB of configuration space for each device). Unfortunately, information on how this works is a little hard to find because the PCI SIG expects you to pay a lot of cash just to see the relevant specifications.

However, if you get the documentation for a recent Intel chipset it will show you how it works (as long as you know the "base address" for the memory mapped PCI configuration space area/s). To find the base address you're meant to search for an ACPI table with the identifier "MCFG". The layout of this ACPI table is in the (expensive) PCI specifications, but it's possible to figure it out from other sources (e.g. the Linux kernel source code). To save everyone some hassle, here it is:

Offset  Size     Description
0x0000  4 bytes  ACPI table signature (must be "MCFG")
0x0004  4 bytes  Length of ACPI table in bytes
0x0008  1 byte   Table revision
0x0009  1 byte   Table checksum
0x000A  6 bytes  OEM ID
0x0010  8 bytes  Manufacturer model ID
0x0018  4 bytes  OEM revision
0x001C  4 bytes  Vendor ID for utility that created table
0x0020  4 bytes  Revision of utility that created table
0x0024  8 bytes  Reserved
0x002C  Varies   List of (one or more) 16 byte entries (see below)

Each 16 byte entry looks like this:

Offset  Size     Description
0x0000  8 bytes  Base address for memory mapped configuration space
0x0008  2 bytes  PCI Segment Group Number (should match a "_SEG" object in the ACPI AML code if non-zero)
0x000A  1 byte   Starting bus number
0x000B  1 byte   Ending bus number
0x000C  4 bytes  Reserved

Note: This means that (in theory) you can have 256 PCI buses with a separate memory mapped area for each bus (and a separate 16 byte entry in the ACPI table for each area). In practice, I'd assume that most sane computers just have one memory mapped area for all PCI buses (and one 16 byte entry in the ACPI table). I have no idea what the "PCI Segment Group Number" is for (or what the "_SEG" object in ACPI AML code is used for). Also, host controllers that use memory mapped PCI configuration space must also support "PCI configuration space access mechanism #1" (and it's possible to use both at the same time).

Xzyx987X wrote: Third, if memory mapped IO to the PCI subsystem is not possible, how exactly do you control access to IO ports? I haven't been able to dig up much information on this subject.

"PCI configuration space access mechanism #1" and "PCI configuration space access mechanism #2" (which is rare, and obsolete now) are both described in old versions of the PCI specification (which can be downloaded for free).
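For what it's worth, pulling those two structures out of a raw MCFG table is only a few lines of C. This is a rough sketch (names like `mcfg_parse` are mine, not from any spec, and it assumes a little-endian host, which is fine on x86):

```c
#include <stdint.h>
#include <stddef.h>
#include <string.h>

/* One 16 byte entry from the MCFG table (layout as in the table above). */
struct mcfg_entry {
    uint64_t base;      /* base of memory mapped configuration space */
    uint16_t segment;   /* PCI Segment Group Number */
    uint8_t  start_bus;
    uint8_t  end_bus;
};

/* Parse the raw MCFG table at `table` (its length comes from the header).
 * Copies up to `max` entries into `out` and returns the number found,
 * or -1 if the signature/length look wrong. Little-endian host assumed. */
static int mcfg_parse(const uint8_t *table, struct mcfg_entry *out, int max)
{
    uint32_t len;
    size_t off;
    int n = 0;

    if (memcmp(table, "MCFG", 4) != 0)
        return -1;
    memcpy(&len, table + 4, 4);          /* total table length in bytes */
    if (len < 0x2C)
        return -1;                       /* too short to hold any entry */
    for (off = 0x2C; off + 16 <= len && n < max; off += 16, n++) {
        memcpy(&out[n].base,    table + off,     8);
        memcpy(&out[n].segment, table + off + 8, 2);
        out[n].start_bus = table[off + 10];
        out[n].end_bus   = table[off + 11];
    }
    return n;
}
```

In a kernel you'd call this with the physical address of the MCFG table mapped somewhere, after validating the table checksum.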
Cheers,
Brendan
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.
Re: Memory Mapped PCI Subsystem Access
Brendan wrote: It's safe to assume that using the PCI BIOS is both slow and annoying, especially if the OS is running in protected mode (or long mode). It's better to write some functions in your kernel that can be used to access PCI configuration space directly (e.g. with I/O ports); and only use the "Installation Check" BIOS function once during boot (to determine if you need to use "PCI configuration space access mechanism #1" or "PCI configuration space access mechanism #2").

Okay, so I don't suppose you would know where I can get a list of all the PCI related IO ports? Or is it just ports 0x0CF8 and 0x0CFC that I have to worry about?
Brendan wrote: In this case, it's easy to have fall-backs - for e.g. you have some sort of kernel option that the end-user can use to tell the OS how to access PCI configuration space that's used when the PCI BIOS isn't present or doesn't work.

Would there actually be any cases where you would have a functional PCI bus with no PCI BIOS implemented? Seems like one of those bridges you could cross after you came upon it. Incidentally, my OS does not have a "kernel" per se. To put it more accurately, there is no one area of the OS you could section off and call a "kernel".
Brendan wrote: Yes, but it's not supported on older hardware - it's part of "PCI Express", and also increases the size of PCI configuration space (e.g. 4 KiB of configuration space for each device). Unfortunately information on how this works is a little hard to find because PCI SIG expect you to pay a lot of cash just to see the relevant specifications.

No kidding. Open standard my @$$.
Brendan wrote: However, if you get the documentation for a recent Intel chipset it will show you how it works (as long as you know the "base address" for the memory mapped PCI configuration space area/s). To find the base address you're meant to search for an ACPI table with the identifier "MCFG".

Do you really have to search the whole memory space to find the table? Isn't there a way to narrow it down?
Brendan wrote: The layout of this ACPI table is in the (expensive) PCI specifications; but it's possible to figure it out from other sources (e.g. the Linux kernel source code). To save everyone some hassle, here it is: [table snipped]

What is the format of the "memory mapped configuration space" that the entries point to?

Brendan wrote: Also, host controllers that use memory mapped PCI configuration space must also support "PCI configuration space access mechanism #1" (and it's possible to use both at the same time).

Yeah, and you would of course need to use that method if you were dealing with a PC with only vanilla PCI support. It seems like in my case the easiest way to deal with this would be to write two separate "/Device/Bus" implementations, one for vanilla PCI, and one for PCI Express. That way I could at least make full use of memory mapped IO on hardware that supports it.
Brendan wrote: "PCI configuration space access mechanism #1" and "PCI configuration space access mechanism #2" (which is rare, and obsolete now) are both described in old versions of the PCI specification (that can be downloaded for free).

I think you may have misunderstood what I was asking. What I wanted to know was: how do you securely control what has access to IO ports, and what doesn't, outside of Ring 0?
Re: Memory Mapped PCI Subsystem Access
Hi,
Xzyx987X wrote: Okay, so I don't suppose you would know where I can get a list of all the PCI related IO ports? Or is it just ports 0x0CF8 and 0x0CFC that I have to worry about?

For accessing PCI configuration space, you only have to worry about I/O ports 0x0CF8 and 0x0CFC. For each PCI device's I/O ports you'd need to read the device's BARs (in PCI configuration space). The same goes for memory mapped PCI devices (read which areas the device uses for memory mapped I/O from its BARs).
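To make the 0x0CF8/0x0CFC part concrete, here's a sketch of how the address dword for access mechanism #1 is typically put together (the `outl`/`inl` names in the comment stand in for whatever port I/O wrappers your OS provides):

```c
#include <stdint.h>

/* Build the dword written to I/O port 0x0CF8 ("PCI configuration space
 * access mechanism #1"): enable bit | bus | device | function | offset.
 * The dword-aligned register offset lands in bits 7:2. */
static uint32_t pci_cfg_addr(uint8_t bus, uint8_t dev, uint8_t func, uint8_t off)
{
    return 0x80000000u                      /* bit 31: enable */
         | ((uint32_t)bus          << 16)   /* bits 23:16: bus */
         | ((uint32_t)(dev  & 0x1F) << 11)  /* bits 15:11: device */
         | ((uint32_t)(func & 0x07) << 8)   /* bits 10:8: function */
         | (off & 0xFC);                    /* bits 7:2: offset */
}

/* In a real kernel the read would then be (x86, Ring 0):
 *   outl(0x0CF8, pci_cfg_addr(bus, dev, func, off));
 *   value = inl(0x0CFC);
 */
```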
Xzyx987X wrote: Would there actually be any cases where you would have a functional PCI bus with no PCI BIOS implemented? Seems like one of those bridges you could cross after you came upon it.

In general (for a high quality OS), it's a bad idea to assume that the BIOS doesn't have bugs.
Xzyx987X wrote: Do you really have to search the whole memory space to find the table? Isn't there a way to narrow it down?

You'd need to search the area from 0x000F0000 to 0x000FFFFF and the first 1 KiB of the EBDA until you find the "ACPI Root System Description Pointer". Once you've found this it will tell you where the ACPI tables are. Then you'd need to find an ACPI table that has the identifier "MCFG". It might be good to read about finding/using ACPI tables in your nearest ACPI specification.
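A sketch of what that search might look like, written over an arbitrary buffer so the logic can be exercised outside a kernel (`rsdp_find` is my name, and only the 20-byte ACPI 1.0 checksum is validated here; ACPI 2.0+ adds an extended part):

```c
#include <stdint.h>
#include <stddef.h>
#include <string.h>

/* Scan a memory region for the ACPI "Root System Description Pointer".
 * The RSDP signature is "RSD PTR " and is aligned on a 16 byte boundary;
 * the first 20 bytes must sum to zero (mod 256). In a kernel, `mem`
 * would be the mapped 0x000F0000 area or the start of the EBDA.
 * Returns the offset of a valid RSDP, or -1 if none is found. */
static long rsdp_find(const uint8_t *mem, size_t len)
{
    size_t i, j;

    for (i = 0; i + 20 <= len; i += 16) {
        if (memcmp(mem + i, "RSD PTR ", 8) != 0)
            continue;
        uint8_t sum = 0;                /* ACPI 1.0 checksum: 20 bytes */
        for (j = 0; j < 20; j++)
            sum += mem[i + j];
        if (sum == 0)
            return (long)i;
    }
    return -1;
}
```

Once the RSDP is found, its RSDT/XSDT pointer leads to the list of tables, which is where you look for the "MCFG" signature.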
Xzyx987X wrote: What is the format of the "memory mapped configuration space" the entries point to?

If you get the documentation for a recent Intel chipset it will show you how it works (as long as you know the "base address" for the memory mapped PCI configuration space area/s).
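Not a substitute for the datasheet, but the address packing commonly documented for PCI Express memory mapped configuration space gives each function a 4 KiB window, packed like this (verify against your chipset's documentation before relying on it):

```c
#include <stdint.h>

/* Address of a register inside memory mapped configuration space:
 * each function gets 4 KiB, so bus/device/function/offset are packed
 * into the address. `base` and `start_bus` come from the MCFG entry
 * covering this bus. This is the commonly documented PCI Express
 * layout; confirm it against your chipset's datasheet. */
static uint64_t ecam_addr(uint64_t base, uint8_t start_bus,
                          uint8_t bus, uint8_t dev, uint8_t func,
                          uint16_t off)
{
    return base
         + ((uint64_t)(bus - start_bus) << 20)  /* 1 MiB per bus */
         + ((uint64_t)(dev  & 0x1F) << 15)      /* 32 KiB per device */
         + ((uint64_t)(func & 0x07) << 12)      /* 4 KiB per function */
         + (off & 0xFFF);
}
```

The first 256 bytes of each 4 KiB window are the same registers you'd see through mechanism #1; the rest is the extended configuration space.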
Xzyx987X wrote: It seems like in my case the easiest way to deal with this would be to write two separate "/Device/Bus" implementations, one for vanilla PCI, and one for PCI express.

I'd use indirect calls (assembly) or function pointers (C). Figure out which access mechanism/s you can use, then set the function pointer/s, then use the functions that these function pointers point to when you want to access PCI configuration space.
Note: One set of functions for "PCI mechanism #1", one set of functions for "PCI mechanism #2", and a third set of functions for "memory mapped". Maybe more if you want to support "paravirtualization".
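In C that dispatch might look something like this sketch (the stub backends here just record which one ran; real ones would do the port or memory mapped access):

```c
#include <stdint.h>

/* Dispatch table: pick the access mechanism once at boot, then the
 * rest of the kernel calls through the pointers without caring which
 * mechanism is in use. */
struct pci_cfg_ops {
    uint32_t (*read32)(uint8_t bus, uint8_t dev, uint8_t func, uint8_t off);
};

static int last_backend = 0;   /* for this demo only */

static uint32_t mech1_read32(uint8_t b, uint8_t d, uint8_t f, uint8_t o)
{
    (void)b; (void)d; (void)f; (void)o;
    last_backend = 1;          /* real code: outl 0x0CF8, inl 0x0CFC */
    return 0xFFFFFFFFu;
}

static uint32_t ecam_read32(uint8_t b, uint8_t d, uint8_t f, uint8_t o)
{
    (void)b; (void)d; (void)f; (void)o;
    last_backend = 2;          /* real code: memory mapped read */
    return 0xFFFFFFFFu;
}

static struct pci_cfg_ops pci_ops;

/* Called once during boot, after probing which mechanisms exist. */
static void pci_select_backend(int have_mmio)
{
    pci_ops.read32 = have_mmio ? ecam_read32 : mech1_read32;
}
```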
Xzyx987X wrote: I think you may have misunderstood what I was asking. What I wanted to know was how do you securely control what has access to IO ports and what doesn't outside of Ring 0.

First, set IOPL=0. Then use one of:
- "emulated I/O port access via a GPF exception handler", or
- the I/O permission bitmap in the TSS, or
- provide an API that Ring 3 code can use to ask Ring 0 code to do the I/O port access on its behalf
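A sketch of the second option, the TSS I/O permission bitmap, reduced to the bit manipulation (loading the bitmap into the TSS and setting the bitmap offset field are omitted; function names are mine):

```c
#include <stdint.h>
#include <string.h>

/* Per-task I/O permission bitmap as it appears in the TSS: one bit per
 * port, and a SET bit means access from Ring 3 (with IOPL=0) raises #GP.
 * 8 KiB covers all 65536 ports; the CPU also expects a trailing 0xFF
 * byte after the bitmap, which is not modeled here. */
#define IO_BITMAP_BYTES 8192

static void iobm_deny_all(uint8_t *bm)
{
    memset(bm, 0xFF, IO_BITMAP_BYTES);
}

/* Grant or revoke Ring 3 access to a single port. */
static void iobm_set(uint8_t *bm, uint16_t port, int allow)
{
    if (allow)
        bm[port / 8] &= (uint8_t)~(1u << (port % 8));
    else
        bm[port / 8] |= (uint8_t)(1u << (port % 8));
}

static int iobm_allowed(const uint8_t *bm, uint16_t port)
{
    return !(bm[port / 8] & (1u << (port % 8)));
}
```

So a task that should only talk to the PCI configuration ports would get a bitmap that is all ones except for the bits covering 0x0CF8-0x0CFF.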
Cheers,
Brendan
Re: Memory Mapped PCI Subsystem Access
Brendan wrote: If you get the documentation for a recent Intel chipset it will show you how it works (as long as you know the "base address" for the memory mapped PCI configuration space area/s).

Could you point me to a specific document?
Brendan wrote: I'd use indirect calls (assembly) or function pointers (C). Figure out which access mechanism/s you can use, then set the function pointer/s, then use the function that these function pointers point to when you want to access PCI configuration space.

In my current design, the idea is to use the PCI Bus interface to generate a generic PCI device object, which then has a specific implementation of the device "layered" on top of it. Since the generic PCI device doesn't do much more than act as a wrapper for the PCI configuration space, you might as well keep the implementations separate.
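A minimal sketch of that layering in C, with the generic object embedded at the start of the specific one (all names here, like `example_nic`, are illustrative, not from the actual design):

```c
#include <stdint.h>

/* Generic PCI device: little more than identity plus a handle on the
 * device's configuration space. */
struct pci_device {
    uint8_t  bus, dev, func;
    uint16_t vendor_id, device_id;
};

static void pci_device_init(struct pci_device *p,
                            uint8_t bus, uint8_t dev, uint8_t func)
{
    p->bus  = bus;
    p->dev  = dev;
    p->func = func;
    p->vendor_id = 0;   /* filled in from configuration space later */
    p->device_id = 0;
}

/* A specific device implementation "layered" on top: it embeds the
 * generic object as its first member, so a pointer to one is a valid
 * pointer to the other. */
struct example_nic {
    struct pci_device pci;
    uint32_t io_base;
};
```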
Anyway, thanks a lot for the help. This is more than enough to get me rolling.
Re: Memory Mapped PCI Subsystem Access
Hi,
Xzyx987X wrote: Could you point me to a specific document?

Yes, I could.
Any good OS developer would be able to point you to a specific document. That's because the ability to find information is an extremely important skill that must be learned in order to become a good OS developer...
Cheers,
Brendan