VMDirectPath with vSphere

VMDirectPath allows passthrough of installed PCI devices to guest operating systems. To use VMDirectPath, the host processor (and the motherboard) needs to support an IOMMU (I/O memory management unit). Intel calls this feature VT-d; AMD's equivalent is AMD-Vi, commonly referred to as the AMD IOMMU.

It's pretty simple to configure with VMware ESXi: enable the feature in the BIOS of the physical host and then, via the vSphere client,

  1. In the tree view, select your ‘Host’ –> select the ‘Configuration’ tab
  2. Under the ‘Hardware’ pane –> select ‘Advanced Settings’
  3. Via the ‘Configure Passthrough’ link, enable the PCI devices that you want to use as passthrough devices, then reboot the host so the change takes effect.
  4. After enabling the PCI devices for passthrough, go ahead and add a PCI device to the VM hardware (a scripted sketch of these steps follows the list).
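
If you prefer to script these steps rather than click through the vSphere client, below is a rough sketch using pyVmomi (the vSphere Python SDK). The host name, credentials, VM name ('testvm01') and PCI address ('0000:05:00.0') are placeholders I made up for illustration, and step 4 assumes the host has already been rebooted after enabling passthrough; treat it as a sketch of the relevant API calls, not a polished tool.

    # Sketch: enable passthrough for a PCI device and attach it to a VM (pyVmomi).
    # Host/credentials, PCI address and VM name are placeholder assumptions.
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    ctx = ssl._create_unverified_context()            # lab only: skip cert checks
    si = SmartConnect(host="esxi01.lab.local", user="root", pwd="password",
                      sslContext=ctx)
    content = si.RetrieveContent()

    # Grab the host and the target VM.
    host = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True).view[0]
    vm = next(v for v in content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True).view
        if v.name == "testvm01")

    pci_id = "0000:05:00.0"   # PCI address of the device to pass through

    # Step 3: mark the device for passthrough ('Configure Passthrough').
    host.configManager.pciPassthruSystem.UpdatePassthruConfig(
        [vim.host.PciPassthruConfig(id=pci_id, passthruEnabled=True)])
    # The host must be rebooted before the device becomes active for passthrough.

    # Step 4 (after the reboot): add the PCI device to the VM's hardware.
    # QueryConfigTarget lists the passthrough devices together with the systemId
    # needed for the device backing.
    cfg_target = vm.environmentBrowser.QueryConfigTarget(host=host)
    pt = next(p for p in cfg_target.pciPassthrough if p.pciDevice.id == pci_id)

    backing = vim.vm.device.VirtualPCIPassthrough.DeviceBackingInfo(
        id=pt.pciDevice.id,
        deviceId=format(pt.pciDevice.deviceId & 0xFFFF, "04x"),
        systemId=pt.systemId,
        vendorId=pt.pciDevice.vendorId,
        deviceName=pt.pciDevice.deviceName)
    dev_spec = vim.vm.device.VirtualDeviceSpec(
        operation=vim.vm.device.VirtualDeviceSpec.Operation.add,
        device=vim.vm.device.VirtualPCIPassthrough(backing=backing))

    # Passthrough VMs need a full memory reservation (see 'Limitations' below).
    vm.ReconfigVM_Task(spec=vim.vm.ConfigSpec(
        deviceChange=[dev_spec], memoryReservationLockedToMax=True))

    Disconnect(si)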

Important points to remember:

  1. After disabling PCI device passthrough, the ESXi host has to be rebooted for the change to take effect (a short sketch of this follows the list).
  2. Once a PCI device has been configured for passthrough to a VM, that device can no longer be used by the ESXi host itself.
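
As a concrete illustration of the first point: via the API, disabling passthrough is the same UpdatePassthruConfig call with passthruEnabled set to False, followed by a host reboot. Again just a sketch with made-up connection details and PCI address; in real code you would wait for the maintenance-mode task to finish before rebooting.

    # Sketch: disable passthrough for a device, then reboot the host.
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    ctx = ssl._create_unverified_context()
    si = SmartConnect(host="esxi01.lab.local", user="root", pwd="password",
                      sslContext=ctx)
    content = si.RetrieveContent()
    host = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True).view[0]

    # Turn passthrough off for the device (PCI address is a placeholder)...
    host.configManager.pciPassthruSystem.UpdatePassthruConfig(
        [vim.host.PciPassthruConfig(id="0000:05:00.0", passthruEnabled=False)])

    # ...and reboot so the change takes effect (evacuate / power off VMs first).
    host.EnterMaintenanceMode_Task(timeout=0)
    host.RebootHost_Task(force=False)

    Disconnect(si)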

Use cases for VMDirectPath (Directed I/O):

  1. Allowing the use of physical PCI devices that are supported by the guest operating system but not supported by VMware ESXi for virtualization.
    E.g. physical NICs, fax/GSM (PCI/USB) modems, FC HBAs
  2. VMware ESXi does not provide an emulated 10-gigabit interface, so PCI passthrough is a good option when you want to expose a 10-gigabit interface inside a guest OS. This is especially useful if you have not installed VMware Tools or cannot use the VMXNET3 paravirtual adapter.

Limitations:

  1. A maximum of two passthrough devices per VM.
  2. Generally, passthrough devices cannot be shared between multiple VMs. Some PCI devices support advanced sharing features (e.g. SR-IOV) which allow such a device to be shared by multiple VMs on the same host.
  3. To support PCI passthrough devices, the VM hardware version has to be at least 7, so it is only supported with ESX/ESXi 4.x and above.
  4. Enabling PCI passthrough also requires the VM to be configured with a full memory reservation (see the sketch after this list).
  5. A VM with a PCI passthrough device cannot be vMotioned (migrated across hosts).
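
On the memory reservation point: when you add the passthrough device through the API you can let vSphere pin the reservation for you (the memoryReservationLockedToMax flag used in the earlier sketch), or you can set the reservation explicitly to the VM's configured memory. A minimal sketch of the explicit variant, again with made-up connection details and VM name:

    # Sketch: give a VM a full memory reservation (reservation == configured memory).
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    ctx = ssl._create_unverified_context()
    si = SmartConnect(host="esxi01.lab.local", user="root", pwd="password",
                      sslContext=ctx)
    content = si.RetrieveContent()
    vm = next(v for v in content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True).view
        if v.name == "testvm01")

    # Reserve all of the VM's configured memory (value is in MB).
    spec = vim.vm.ConfigSpec(memoryAllocation=vim.ResourceAllocationInfo(
        reservation=vm.config.hardware.memoryMB))
    vm.ReconfigVM_Task(spec=spec)

    Disconnect(si)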

Gotcha:
Out of the box, ESXi (4.1 and above) should do DMA and device interrupt remapping. Sometimes (again depending on the processor and on the BIOS/firmware versions of the motherboard and the physical PCI devices) the interrupt remapping does not play well with the hypervisor, and this can cause an unresponsive or slow VM or hypervisor. You can try the steps noted in the VMware KB 1030265 article.

Some additional reading: Configuration Examples and Troubleshooting for VMDirectPath.

Instead of passing through an FC HBA, a virtual FC HBA could also be generated and added to a VM. This is possible only if the physical FC HBA providing the fabric connectivity supports NPIV. You can generate a WWN for your VM by editing the VM settings, going to the ‘Options’ tab and selecting ‘Fibre Channel NPIV’. I have never used this, but one should be able to do LUN masking / zoning using the generated virtual WWN assigned to the VM.
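
For completeness, here is what the WWN generation looks like through the API, to the best of my understanding (I have not run this against a live fabric either). It assumes you are connected to vCenter rather than directly to the host, that the VM is powered off, and I believe the VM also needs an NPIV-capable HBA/fabric (and typically an RDM disk) before this is of any practical use; the VM name and connection details are placeholders.

    # Sketch: ask vCenter to generate NPIV WWNs for a VM (equivalent of the
    # 'Fibre Channel NPIV' -> generate option). Connection details are placeholders.
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    ctx = ssl._create_unverified_context()
    si = SmartConnect(host="vcenter.lab.local", user="administrator",
                      pwd="password", sslContext=ctx)
    content = si.RetrieveContent()
    vm = next(v for v in content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True).view
        if v.name == "testvm01")

    spec = vim.vm.ConfigSpec(npivWorldWideNameOp="generate",
                             npivDesiredNodeWwns=1,
                             npivDesiredPortWwns=1)
    vm.ReconfigVM_Task(spec=spec)

    # The generated WWNs appear under vm.config.npivNodeWorldWideName and
    # vm.config.npivPortWorldWideName, which is what you would zone / mask against.
    Disconnect(si)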
