
VT-d virtualization (vCPU system address map)

Posted: Fri Nov 15, 2013 5:12 am
by cianfa72
Hi folks,

I'm playing with virtualization technologies, in particular Intel VT-d (Virtualization Technology for Directed I/O).

As far as I understand, VT-d basically implements IOMMU logic in the northbridge (integrated into the processor package these days...). The IOMMU allows I/O devices (e.g. a PCIe NIC) to perform DMA directly in the guest physical address space (GPA): the guest OS's drivers program the DMA-capable device with a target guest physical address, and the IOMMU is then in charge of the GPA -> Host Physical Address (HPA) translation. The hypervisor (VMM) has to set up per-device IOMMU page tables.
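
Conceptually, something like this sketch in C (toy structures and names, nothing like the real VT-d root/context table layout):

Code:

/* Conceptual sketch only: made-up structures, not the real VT-d layout.
 * Real hardware locates a device's translation tables through per-device
 * root/context entries and walks multi-level second-level page tables. */
#include <stdint.h>

#define PAGE_SIZE     4096ULL
#define TABLE_ENTRIES 512    /* toy flat table covering the first 2 MiB of GPA */

/* Hypothetical flat remapping table for one assigned device:
 * index = GPA page number, value = HPA plus permission bits. */
static uint64_t dma_remap[TABLE_ENTRIES];

/* Map one guest-physical page to a host-physical page for device DMA. */
static int iommu_map_page(uint64_t gpa, uint64_t hpa, int writable)
{
    uint64_t idx = gpa / PAGE_SIZE;
    if (idx >= TABLE_ENTRIES)
        return -1;                           /* outside our toy table */
    dma_remap[idx] = (hpa & ~(PAGE_SIZE - 1))
                   | 1u                      /* bit 0: read allowed  */
                   | (writable ? 2u : 0);    /* bit 1: write allowed */
    return 0;
}

When the device later issues a DMA to a GPA, the hardware looks up the corresponding entry and substitutes the HPA before the access reaches memory.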

Now my doubt is about the "virtual CPU" system address map (the system address map as seen by the virtual CPU inside the VM) when a device is assigned to a VM. Devices assigned to a VM (e.g. a PCIe NIC) should map their registers into the virtual CPU's I/O or memory space (I/O ports or MMIO), I guess.

How does the hypervisor manage this? Does the device assigned to the VM continue to exist in the "host CPU" system address map, too?

thanks

Re: VT-d virtualization (vCPU system address map)

Posted: Fri Nov 15, 2013 5:24 pm
by Brendan
Hi,
cianfa72 wrote: How does the hypervisor manage this?
For a real computer, the physical address ranges (and/or I/O port ranges) used by a PCI device are determined by the values in that device's BARs (Base Address Registers) in PCI configuration space. For whole-system emulation, your hypervisor would emulate any changes that the guest makes to its virtual PCI configuration space by adjusting the IOMMU to set up and change how the device is mapped into the virtual machine.
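
As a rough sketch (all the structures and helper names below are made up, and a real VMM's trap path is more involved), the config-space intercept could look something like:

Code:

#include <stdint.h>

#define PCI_BAR0 0x10
#define PCI_BAR5 0x24

/* All structures and helpers here are hypothetical. */
typedef struct {
    uint64_t guest_bar[6];   /* where the guest programmed each BAR       */
    uint64_t host_bar[6];    /* where the device really lives on the host */
    uint64_t bar_size[6];
} vdev_t;

/* Assumed helper: re-points the guest-physical range at the device's
 * real registers (however the hypervisor chooses to do that). */
void remap_device_mmio(uint64_t guest_phys, uint64_t host_phys, uint64_t size);

/* Called on a VM exit for a guest write to virtual PCI config space. */
void handle_config_write(vdev_t *d, uint16_t offset, uint32_t value)
{
    if (offset >= PCI_BAR0 && offset <= PCI_BAR5 && !(offset & 3)) {
        int bar = (offset - PCI_BAR0) / 4;
        d->guest_bar[bar] = value & ~0xFULL;   /* strip the BAR type bits */
        remap_device_mmio(d->guest_bar[bar],
                          d->host_bar[bar],
                          d->bar_size[bar]);
    }
    /* writes to other config-space fields would be emulated here */
}

The point being that the guest can move the BAR wherever it likes, and the hypervisor just keeps the mapping in sync.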
cianfa72 wrote: Does the device assigned to the VM continue to exist in the "host CPU" system address map, too?
It won't cease to exist; but you can't expect good things to happen when both guest and host try to use/control the same device at the same time. Mostly, the host knows that the device is being used by (or was assigned to) the virtual machine, and so the host doesn't touch the device at all.

The alternative is giving the virtual machine emulated devices. For example, the hypervisor might emulate a PIT chip, an ethernet card, a hard disk controller or a video card, and might use the host OS's timing, networking, file I/O or GUI services to make the emulated devices work; the host OS in turn might use a real PIT chip, real ethernet card, real hard disk controller or real video card to provide the APIs/services that the hypervisor uses.
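
For example (hypothetical names; a real 8254 emulation would also track the reload value and mode), reading the emulated PIT counter might just be derived from the host's clock:

Code:

/* Hypothetical example: the guest reads the (emulated) PIT counter, and we
 * derive the value from host time instead of touching any real timer chip. */
#include <stdint.h>
#include <time.h>

uint32_t emulated_pit_read_counter(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);

    /* The 8254 PIT input clock is ~1.193182 MHz. */
    uint64_t ticks = (uint64_t)ts.tv_sec * 1193182u
                   + (uint64_t)ts.tv_nsec * 1193182u / 1000000000u;
    return (uint32_t)(ticks & 0xFFFF);
}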


Cheers,

Brendan

Re: VT-d virtualization (vCPU system address map)

Posted: Sat Nov 16, 2013 6:28 am
by cianfa72
Brendan wrote: For whole-system emulation, your hypervisor would emulate any changes that the guest makes to its virtual PCI configuration space by adjusting the IOMMU to set up and change how the device is mapped into the virtual machine.
Brendan
As far as I know, the IOMMU is involved only in DMA, translating between the bus-relative address issued by a DMA-capable device and the physical system address space (as said before, the DMA-capable device's driver programs the device with a bus-relative target address).

If that's correct, I believe only the CPU MMU (using shadow page tables or EPT/NPT technology) can offer the guest OS a view of the underlying devices (e.g. their memory-mapped I/O ranges) mapped differently from how they appear in the host system address space.

Does it make sense?

Re: VT-d virtualization (vCPU system address map)

Posted: Sun Nov 17, 2013 7:12 am
by Brendan
Hi,
cianfa72 wrote:
Brendan wrote: For whole-system emulation, your hypervisor would emulate any changes that the guest makes to its virtual PCI configuration space by adjusting the IOMMU to set up and change how the device is mapped into the virtual machine.
Brendan
As far as I know, the IOMMU is involved only in DMA, translating between the bus-relative address issued by a DMA-capable device and the physical system address space (as said before, the DMA-capable device's driver programs the device with a bus-relative target address).

If that's correct, I believe only the CPU MMU (using shadow page tables or EPT/NPT technology) can offer the guest OS a view of the underlying devices (e.g. their memory-mapped I/O ranges) mapped differently from how they appear in the host system address space.

Does it make sense?
Yes (it makes sense), and I think you're right. Your hypervisor would emulate any changes that the guest makes to its virtual PCI configuration space by adjusting the CPU's MMU/paging structures to set up and change how the device is mapped into the virtual machine.
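
As a rough illustration (the helper below is made up; real EPT is a 4-level structure with its own bit layout, including memory-type bits you'd want set to uncacheable for device registers), mapping the guest's chosen BAR range onto the device's real registers might look like:

Code:

#include <stdint.h>

#define PAGE_SIZE 4096ULL

/* Assumed helper: installs one guest-physical -> host-physical entry. */
void ept_set(uint64_t guest_phys, uint64_t entry);

/* Map the guest's chosen BAR range onto the device's real registers. */
void ept_map_mmio(uint64_t guest_phys, uint64_t host_phys, uint64_t size)
{
    for (uint64_t off = 0; off < size; off += PAGE_SIZE) {
        uint64_t entry = ((host_phys + off) & ~(PAGE_SIZE - 1))
                       | (1u << 0)    /* readable */
                       | (1u << 1);   /* writable */
        ept_set(guest_phys + off, entry);
    }
}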


Cheers,

Brendan