Rules of thumb for memory allocation across virtual machines

Here are some rules of thumb (best practices) for capacity planning of memory (RAM) allocation within your virtual infrastructure. These rules typically hold about 90 times out of 100, but there will be exceptions (a small worked example follows the list):

  1. If you have over-committed the available physical memory on the hypervisor, then the maximum memory configured on any single virtual machine should ideally be less than 40% of the total physical memory. In exceptional cases the memory configured on a single virtual machine may go up to 60%.
  2. Over-commitment should typically not exceed 30% of the total physical memory on an aggregate (all virtual machines) basis. Remember that only powered-on virtual machines count towards over-commitment.
  3. On a per-virtual-machine basis, never configure more memory than the physical memory available on the hypervisor.
  4. It's a good idea to reserve 30% of virtual machine memory for server virtualization projects, or to increase the reservation to cover the virtual machine's active memory.
  5. Ensure the memory assigned to a virtual machine is vNUMA balanced.
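
To make the numbers concrete, here is a minimal sketch (busybox-style shell, the same flavour used in the kickstart snippet later in this post) that applies rules 1 and 2 to a hypothetical host; the 256 GB figure is an assumption, substitute your own host's memory:

#!/bin/sh
# Hypothetical host with 256 GB (262144 MB) of physical RAM -- adjust to your environment
PHYS_MB=262144
# Rule 1: when over-committing, keep the largest single VM under ~40% of physical RAM (60% in exceptional cases)
echo "Largest single VM (40% rule): $((PHYS_MB * 40 / 100)) MB"
echo "Largest single VM (60% exception): $((PHYS_MB * 60 / 100)) MB"
# Rule 2: aggregate configured memory of powered-on VMs should stay within ~130% of physical RAM
echo "Aggregate configured-memory ceiling (30% over-commit): $((PHYS_MB * 130 / 100)) MB"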

On physical machines, it's good practice to buy all the physical RAM from a single vendor with identical specifications. Physical RAM should be purchased in a configuration that populates all the DIMM slots across all the physical sockets.

Rules of thumb for vCPU allocation

Here are some rules of thumb (best practices) for capacity planning of virtual CPU (vCPU) allocation within your virtual infrastructure. These rules typically hold about 90 times out of 100, but there will be exceptions:

  1. Don’t configure any single VM with more vCPUs than the total number of physical cores available on the host. When counting cores, do not count hyper-threads (the Hyper-Threading feature on Intel hosts).
  2. If you plan to over-commit CPU, then wherever possible do not assign more vCPUs than the number of cores found on a single physical socket (exceptions exist). For example, rather than assigning a virtual machine two virtual sockets with a single core each, it is better to assign a single virtual socket with two cores.
  3. Map virtual NUMA to physical NUMA. Avoid giving a VM a wide vNUMA topology.
  4. The hypervisor (VMkernel) also has overhead, so ensure that at any given point in time there is at least one free physical core available to schedule it. This also affects what we stated in point 1: when considering the maximum number of physical cores, reduce the number by 1 on a single-socket system and by 2 on a multi-socket system to account for the hypervisor's CPU overhead.
  5. For server virtualization projects, the vCPU-to-physical-core ratio should not exceed roughly 12-13 vCPUs per physical core. You can comfortably start with an allocation of about 7 vCPUs per physical core.
  6. For virtual desktop environment projects, the ratio should not exceed roughly 18-20 vCPUs per physical core. You can comfortably start with an allocation of about 12 vCPUs per physical core.
  7. Whenever possible, do not allocate more vCPUs than your application or virtual machine actually requires. For example, if a single vCPU is sufficient, assign one vCPU rather than two. Over-commitment at the hypervisor level is quite different from over-allocation at the virtual machine level, and the two should not be confused. (A worked sizing example follows this list.)
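
As a quick sanity check, here is a minimal sketch applying rules 1, 4 and 5 to a hypothetical host; the socket and core counts are assumptions, substitute your own hardware:

#!/bin/sh
# Hypothetical host: 2 sockets x 16 cores; hyper-threads are not counted (rule 1)
SOCKETS=2
CORES_PER_SOCKET=16
TOTAL_CORES=$((SOCKETS * CORES_PER_SOCKET))
# Rule 4: reserve 1 core for the VMkernel on a single-socket host, 2 on a multi-socket host
OVERHEAD=2
USABLE_CORES=$((TOTAL_CORES - OVERHEAD))
echo "Largest sensible single VM: $USABLE_CORES vCPUs"
# Rule 5: server workloads -- start around 7 vCPUs per core, stay under roughly 12-13
echo "Comfortable starting budget: $((USABLE_CORES * 7)) vCPUs"
echo "Upper budget: $((USABLE_CORES * 12)) vCPUs"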

When considering virtualizing CPU-intensive applications (currently running on physical hardware), there are some things to remember:

  • Assuming Hyper-Threading is enabled in both the physical and the virtual infrastructure: remember that a single vCPU corresponds to a single hyper-thread in the virtual world, whereas in the physical world one core corresponds to two hyper-threads. So if you assign the virtual machine the same number of vCPUs as the number of physical cores the application had, you are actually assigning half the CPU resources to the application/virtual machine instance, and under load it will perform at roughly 50% of the physical instance. Hence it is better to double the number of vCPUs assigned to the virtual machine (see the quick arithmetic below).
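
A minimal sketch of that arithmetic, assuming the application previously ran on 8 physical cores (the core count is purely illustrative):

#!/bin/sh
# The application previously had 8 physical cores = 16 hyper-threads
PHYS_CORES=8
PHYS_THREADS=$((PHYS_CORES * 2))
# A vCPU maps to a single hyper-thread, so matching vCPUs to cores supplies only half the threads
echo "vCPUs with a core-for-core mapping: $PHYS_CORES (roughly 50% of the physical footprint)"
echo "vCPUs needed to match the physical footprint: $PHYS_THREADS"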

What are Virtual Machine Templates?

In vSphere, Virtual Machine Templates are:

  • Virtual Machines that cannot be powered on
  • Virtual Machines that cannot be modified

Templates are like gold images: you create a template once and deploy multiple VMs from it. A template's configuration file uses the extension VMTX (a virtual machine's configuration file uses the extension VMX). There are two easy ways to create a template:

  1. Clone an existing virtual machine (VM) to a template.
    – Creates a copy of an existing VM and registers it as a template.
  2. Convert an existing VM to a template.
    – Unregisters the existing VM and registers it as a template.

Workflow for modifying a template:

  1. First, convert the template to a VM
  2. Optionally, make changes to the VM hardware (increase RAM, disk size, etc.)
  3. Power on the VM and install any new applications or updates
  4. After installing the applications and updates, shut down the VM
  5. Convert the VM back to a template

Deploying Virtual Machines from Templates and Guest Customization:

When virtual machines are deployed from a template, they are identical to the template: things such as the Windows SID and hostname will be the same, and if static IPs are used, those will be the same as well. This can create software and network conflicts.

To avoid such network or software conflicts, it's recommended to customize the guest OS during the deployment process. For guest OS customization of virtual machines deployed from templates, vCenter requires the following:

  • If the guest OS is Windows, you need VMware Tools installed within the template and the Sysprep utilities available:
    • For Windows 2000, XP and 2003 you will need to copy the Sysprep tools to vCenter. From Windows Vista onward, Sysprep is part of the base OS install. For where to copy the Sysprep utilities, read KB#1005593.
  • For Linux VMs/templates, along with VMware Tools, you will also need Perl installed within your templates.

Best practice: always create templates from powered-off virtual machines. Do not clone a template from a powered-on virtual machine.

Some KB articles about Snapshots

  • KB#1002929: Creating snapshots in a different location than default virtual machine directory
  • KB#1004343: Determining if a virtual machine is using snapshots
  • KB#1006392: Unable to use Snapshots or perform a backup on virtual machines configured with bus-sharing
  • KB#1007849: Consolidating snapshots
  • KB#1012384: Creating a snapshot for an ESX/ESXi virtual machine fails with the error: File is larger than maximum file size supported
  • KB#1015180: Understanding virtual machine snapshots in VMware ESXi and ESX
  • KB#1025279: Best practices for virtual machine snapshots in the VMware environment
  • KB#1007969: Resolving the CID mismatch error: The parent virtual disk has been modified since the child was created
  • KB#1018457: Attaching an RDM with snapshots to a virtual machine
  • KB#1026353: Recreating a missing virtual disk (VMDK) descriptor file for delta disks
  • KB#1027429: Deleting a snapshot during a virtual machine restore process using VMware Virtual Disk Development Kit fails with the error: The parent disk has been modified since the child disk has been created

Scripted ESXi Installation

I know there are many blog articles out there that deal with this subject, but this is basically for my own reference, hence I am documenting it here.

I wanted an automated way of installing ESXi quickly, as I often end up with unclean ESXi installs after testing customer environments or setups. So I decided to leverage the vSphere kickstart (scripted) installation to make the process easy.

My kickstart file looks like the following:

vmaccepteula
# The following clears all partitions on the local disks
clearpart --alldrives --overwritevmfs
install --firstdisk --overwritevmfs
# The following creates a VMFS datastore on the second local drive
partition esx1:local --ondisk=mpx.vmhba1:C0:T1:L0
network --bootproto=static --device=vmnic0 --addvmportgroup=1 --ip=192.168.1.1 --netmask=255.255.255.0 --hostname=esx1.foobirds.local --gateway=192.168.1.254 --nameserver=192.168.1.253
rootpw vmware
paranoid
reboot

# The following code gets executed only on first boot after the install
%firstboot --interpreter=busybox
# If your kickstart file is located on an NFS share, that NFS mount remains
# on the freshly installed ESXi host; the leftover mount is named
# "remote-install-location"
# The following removes the stale NFS mount after the install
# (awk grabs the volume name, the first column of the last row listed)
dsName=$(esxcli storage nfs list | awk 'END{print $1}')
esxcli storage nfs remove --volume-name="${dsName}"
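
After the freshly installed host comes up, you can verify that the stale mount is gone (assuming you have ESXi Shell or SSH access) with the same command the script uses:

esxcli storage nfs list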

If you want to do a scripted install where the kickstart file resides on an NFS share, then:

Boot from the ESXi installer CD and, at the boot prompt (press ‘Shift+O’), instead of:

runweasel

use the following command line:

mboot.c32 -c boot.cfg ks=nfs://192.168.1.3/nfs/ks/ks.cfg

If you want to create a custom ISO image of the ESXi installer with an embedded kickstart file, follow the instructions in the vSphere documentation.

Copy your kickstart file to the root of the ESXi installer image and recreate the ISO as documented in the vSphere documentation. Once done, to ensure that the ESXi install uses the specified kickstart file, edit the following file:

boot.cfg

and modify the

kernelopt

line to read:

kernelopt=runweasel ks=cdrom:/ESX_KS.CFG
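
For reference, a minimal sketch of what boot.cfg might look like after the edit; only the kernelopt line changes, and the kernel and module entries (shown here purely as illustration) vary by ESXi build and should be left exactly as shipped:

bootstate=0
title=Loading ESXi installer
kernel=/tboot.b00
kernelopt=runweasel ks=cdrom:/ESX_KS.CFG
modules=/b.b00 --- /useropt.gz --- ...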

Important: one gotcha to be aware of is that, even though you may have entered the kickstart file name correctly, the installer may still fail to find the kickstart file. This is a known issue and is documented in KB#1026373, along with a workaround.

For detailed information about kickstart file options and commands, read KB#2004582.

That’s it.