Hi!
My MP table output gives me eight similar entries for the PCI IRQ configuration. The I/O Interrupt Assignment entries that confuse me are:
Bus:0 Dev:8 int_A I/O INTIN:0Bh
Bus:0 Dev:9 int_A I/O INTIN:0Bh
Bus:0 Dev:10 int_A I/O INTIN:0Bh
Bus:0 Dev:11 int_A I/O INTIN:0Bh
Bus:0 Dev:12 int_A I/O INTIN:0Bh
Bus:0 Dev:13 int_A I/O INTIN:0Bh
Bus:0 Dev:14 int_A I/O INTIN:0Bh
Bus:0 Dev:15 int_A I/O INTIN:0Bh
Now I am considering two possibilities:
1. A bug in my MP parse/print routine
2. The PCI IRQs are simply not configured yet
While I can't rule out the first entirely, I doubt it, because the same routine prints the ISA IRQ entries correctly.
The devices and functions I find during the PCI scan number no more than 4-5 in total (depending on hardware configuration), and there is only bus 0 in the system.
Has anyone got similar output?
Understanding PCI IRQ MP output...
Re: Understanding PCI IRQ MP output...
OK. At first I had two ideas:
1. Load the drivers for the simple PCI devices first, record in a temporary table all the interrupts they will need, and finally load the PCI-ISA bridge driver, which routes and orders all the interrupts in the system by priority.
2. Load the PCI-ISA bridge driver first and provide functions that drivers loaded later can use to request the interrupts they need, possibly reordering the whole remapping according to the priority of each newly added interrupt.
The second would be much harder for me than the first, but it would provide an interface for later use. When a device resumes from a power-save state, do I have to configure it again? If not, I will probably choose the first option.
Also, in the virtual machine the PCI-ISA bridge exposes an APM function, and ACPI is present too, though I haven't explored it yet. Is all this power management needed in a virtualized OS? I mean, isn't all of this handled by the host system, making the virtualized APM meaningless?