VMDirectPath allows passthrough of installed PCI devices to guest operating systems. To use VMDirectPath, the host processor (and the motherboard) needs to support an IOMMU (I/O memory management unit). Intel calls this feature VT-d; AMD calls its implementation AMD-Vi (AMD IOMMU).
It's pretty simple to configure with VMware ESXi: enable the feature in the BIOS of the physical host and then, via the vSphere Client,
- In the tree view, select your ‘Host’–>select the ‘Configuration’ tab
- Under the ‘Hardware’ pane–>select ‘Advanced Settings’
- Via the ‘Configure Passthrough’ link, enable the PCI devices you want to use as passthrough devices.
- After enabling the PCI devices for passthrough, go ahead and add a PCI device to the VM hardware.
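Before enabling passthrough, you will want to know the PCI addresses of the candidate devices; on the ESXi shell, `esxcli hardware pci list` prints one block of fields per device. The sketch below parses such a listing into tuples; note that the embedded sample output is an abbreviated approximation of the real format (which has many more fields per device), not a capture from an actual host.

```python
import re

# Abbreviated, approximate sample of `esxcli hardware pci list` output;
# a real listing contains many more fields per device.
SAMPLE = """\
0000:05:00.0
   Address: 0000:05:00.0
   Vendor Name: Intel Corporation
   Device Name: 82599EB 10-Gigabit Network Connection

0000:0a:00.0
   Address: 0000:0a:00.0
   Vendor Name: QLogic Corp.
   Device Name: ISP2532-based 8Gb Fibre Channel HBA
"""

def list_pci_devices(text):
    """Collect (address, vendor, device name) tuples from the listing."""
    devices = []
    for block in text.strip().split("\n\n"):
        # Indented "Key: value" lines within each device block.
        fields = dict(
            (m.group(1), m.group(2))
            for m in re.finditer(r"^\s+([\w ]+): (.+)$", block, re.M)
        )
        devices.append(
            (fields["Address"], fields["Vendor Name"], fields["Device Name"])
        )
    return devices

for addr, vendor, name in list_pci_devices(SAMPLE):
    print(addr, "-", vendor, "-", name)
```

The PCI address (e.g. `0000:05:00.0`) is what identifies the device you toggle on the ‘Configure Passthrough’ page.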
Important points to remember:
- After disabling passthrough for a PCI device, the ESXi host has to be rebooted for the change to take effect.
- Once a PCI device has been configured for passthrough to a VM, that device can no longer be used by the ESXi host itself.
Use cases for VMDirectPath (Directed I/O):
- Allowing use of physical PCI devices which are supported by the guest operating system but not supported by VMware ESXi for virtualization.
E.g. physical NICs, fax/GSM (PCI/USB) modems, FC HBAs
- VMware ESXi does not provide a 10-gigabit emulated interface, so PCI passthrough is a good option when you want to present a 10-gigabit interface inside a guest OS. This is especially useful if you have not installed VMware Tools or cannot use the VMXNET3 para-virtual adapter.
- A maximum of two passthrough devices per VM.
- Generally, passthrough devices cannot be shared between multiple VMs. Some PCI devices support hardware sharing features (such as SR-IOV, in combination with the IOMMU) that allow such a device to be used by multiple VMs on the same host.
- To support PCI passthrough devices, the VM hardware version has to be at least 7 (I still need to verify this), so I guess it is only supported with ESXi 4.x and above.
- Enabling PCI passthrough also requires the VM to be configured with a full memory reservation.
- A VM with a PCI passthrough device enabled cannot be vMotioned (migrated across hosts).
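The restrictions above can be summed up in a few lines of code. The sketch below is purely illustrative (the class and function names are my own, not any VMware API); it encodes the two-device limit, the full memory reservation requirement, and the vMotion restriction as they are described here.

```python
from dataclasses import dataclass, field

MAX_PASSTHROUGH_DEVICES = 2  # per-VM limit mentioned above

@dataclass
class VM:
    """Minimal stand-in for a VM's passthrough-relevant settings."""
    name: str
    memory_mb: int
    memory_reservation_mb: int
    passthrough_devices: list = field(default_factory=list)  # PCI addresses

def passthrough_config_errors(vm):
    """Return the list of rule violations for this VM's configuration."""
    errors = []
    if len(vm.passthrough_devices) > MAX_PASSTHROUGH_DEVICES:
        errors.append("more than %d passthrough devices" % MAX_PASSTHROUGH_DEVICES)
    if vm.passthrough_devices and vm.memory_reservation_mb != vm.memory_mb:
        errors.append("passthrough requires a full memory reservation")
    return errors

def can_vmotion(vm):
    """A VM with any passthrough device cannot be migrated across hosts."""
    return not vm.passthrough_devices
```

For example, a VM with a passthrough NIC but only a partial memory reservation would fail the check, and `can_vmotion` would return `False` for it even once the reservation is fixed.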
Out of the box, ESXi (4.1 and above) should do DMA and device interrupt remapping. Sometimes (again depending on the processor and on the BIOS/firmware versions of the motherboard and the physical PCI devices) interrupt remapping does not play well with the hypervisor, and this can cause an unresponsive or slow VM or hypervisor. You can try the steps noted in VMware KB article 1030265.
Some additional reading: Configuration Examples and Troubleshooting for VMDirectPath.
Instead of passing through an FC HBA, a virtual (FC) HBA can also be created and added to a VM. This is possible only if the physical FC HBA providing the fabric connectivity supports NPIV. You can generate a WWN for your VM by editing the VM settings, going to the Options tab, and selecting ‘Fibre Channel NPIV’. I have never used this, but I would expect LUN masking / zoning to work with the generated virtual WWN assigned to the VM.
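If you do go the NPIV route, switch-side zoning tools usually expect the WWPN in colon-separated form. A small illustrative helper (my own, not part of any VMware tooling) to normalize whatever form you copy out of the vSphere Client:

```python
import re

def normalize_wwn(wwn):
    """Normalize a WWN/WWPN to the colon-separated form commonly used in
    zoning configs, e.g. '20:01:00:0c:29:ab:cd:ef'.

    Accepts 16 hex digits with or without separators. Illustrative only;
    vSphere generates the actual NPIV WWNs.
    """
    digits = re.sub(r"[^0-9a-fA-F]", "", wwn).lower()
    if len(digits) != 16:
        raise ValueError("a WWN is 8 bytes (16 hex digits): %r" % wwn)
    return ":".join(digits[i:i + 2] for i in range(0, 16, 2))
```

A quick sanity check of the format before pasting a WWN into a zone definition can save a round of fabric troubleshooting.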