Find aggregate allocated VM CPU and memory on a per-host basis

Here are a couple of quick-and-dirty PowerCLI one-liners to find the total allocated CPU and memory on a per-ESXi-host basis. This is useful when you want to quickly work out your over-commitment ratios.

$TotalMem = 0; Get-VMHost myESXiHost1 | Get-VM | ?{$_.PowerState -eq 'PoweredOn'} | %{$TotalMem += $_.MemoryGB}; Write-Host $TotalMem
$TotalCPU = 0; Get-VMHost myESXiHost2 | Get-VM | ?{$_.PowerState -eq 'PoweredOn'} | %{$TotalCPU += $_.NumCpu}; Write-Host $TotalCPU
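
If you prefer a single pass per host, the same sums can be computed with Measure-Object. This is just a sketch: the host name myESXiHost1 is a placeholder, and it assumes an active Connect-VIServer session.

```powershell
# Sum allocated memory (GB) and vCPUs of powered-on VMs on one host
Get-VMHost myESXiHost1 | Get-VM |
    Where-Object { $_.PowerState -eq 'PoweredOn' } |
    Measure-Object -Property MemoryGB, NumCpu -Sum |
    Select-Object Property, Sum
```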

Thumb rules for memory allocation across virtual machines

Here are some thumb rules (best practices) for capacity planning of memory (RAM) allocation within your virtual infrastructure. These rules typically hold 90 times out of 100; however, there will be exceptions:

  1. If you have over-committed the available physical memory on the hypervisor, then the memory configured on any single virtual machine should ideally be less than 40% of the total physical memory; in exceptional cases it may go up to 60%.
  2. Over-commitment should typically not exceed 30% of the total physical memory on an aggregate virtual machine basis. Remember that only powered-on virtual machines count toward over-commitment.
  3. On a per-virtual-machine basis, never allocate more memory than the physical memory available on the hypervisor.
  4. It's a good idea to reserve 30% of virtual machine memory for server virtualization projects, or to increase the reservation to accommodate active memory.
  5. Ensure the memory assigned is vNUMA balanced.
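
As a sketch, rule 2 can be checked with PowerCLI by comparing the aggregate powered-on VM memory against the host's physical memory. The host name is a placeholder, and this assumes an active Connect-VIServer session.

```powershell
# Memory over-commitment ratio for one host (hypothetical host name)
$vmHost  = Get-VMHost myESXiHost1
$allocGB = ($vmHost | Get-VM |
    Where-Object { $_.PowerState -eq 'PoweredOn' } |
    Measure-Object -Property MemoryGB -Sum).Sum
"Allocated {0} GB of {1:N0} GB physical = {2:P0} over-commitment" -f `
    $allocGB, $vmHost.MemoryTotalGB, (($allocGB / $vmHost.MemoryTotalGB) - 1)
```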

On physical machines it's a good practice to buy all the physical RAM from a single vendor with identical specifications. Physical RAM should be purchased in a configuration that populates all the DIMM slots across all the physical sockets.

Thumb rules for vCPU allocation

Here are some thumb rules (best practices) for capacity planning of virtual CPU (vCPU) allocation within your virtual infrastructure. These rules typically hold 90 times out of 100; however, there will be exceptions:

  1. Don’t configure any single VM with more vCPUs than the total number of physical cores available on the machine. Also, when allocating, do not count hyper-threads (the Hyper-Threading feature on Intel hosts).
  2. If you plan to over-commit CPU, then wherever possible do not assign more vCPUs than the number of cores on a single physical socket (exceptions exist). For example, rather than assigning a virtual machine two virtual sockets with a single core each, it is better to assign a single virtual socket with two cores.
  3. Map physical NUMA to virtual NUMA. Avoid configuring a VM with a wide vNUMA topology.
  4. The hypervisor (VMkernel) also has overhead, so ensure that at any given point in time there is at least one free physical core available to schedule it. This also affects what we stated in point 1: when working out the maximum number of physical cores, reduce the count by 1 on a single-socket system and by 2 on multi-socket systems (to account for the hypervisor CPU overhead).
  5. For server virtualization projects, the vCPU ratio should not exceed 12:1 or 13:1 per physical core. You can comfortably start with an allocation of about 7 vCPUs per physical core.
  6. For virtual desktop environment projects, the vCPU ratio should not exceed 18:1 to 20:1 per physical core. You can comfortably start with an allocation of about 12 vCPUs per physical core.
  7. Whenever possible, do not allocate more vCPUs than your application or virtual machine actually requires; for example, assign a single vCPU rather than 2 vCPUs if one is enough. Over-commitment at the hypervisor level is quite different from over-allocation at the virtual machine level, and the two should not be confused.
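
The vCPU-to-physical-core ratios in rules 5 and 6 can be checked with a similar sketch (again assuming a live PowerCLI session; the host name is a placeholder):

```powershell
# vCPU-to-physical-core ratio for one host; hyper-threads are not counted,
# since NumCpu on a VMHost reports physical cores
$vmHost = Get-VMHost myESXiHost1
$vCpus  = ($vmHost | Get-VM |
    Where-Object { $_.PowerState -eq 'PoweredOn' } |
    Measure-Object -Property NumCpu -Sum).Sum
"{0} vCPUs on {1} cores = {2:N1}:1" -f $vCpus, $vmHost.NumCpu, ($vCpus / $vmHost.NumCpu)
```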

When considering virtualizing CPU-intensive applications (presently running on physical hardware), some things to remember:

  • This assumes you have Hyper-Threading enabled in both the physical and virtual infrastructure. Remember that a single vCPU corresponds to a single hyper-thread in the virtual world, whereas in the physical world one core corresponds to two hyper-threads. So if you assign your virtual machine the same number of vCPUs as the number of physical cores the application had, you are actually assigning half the CPU resources to the application/virtual machine instance, and under load it will perform at roughly 50% of the physical instance. Hence it is better to double the number of vCPUs assigned to your virtual machine.

What are Virtual Machine Templates?

In vSphere, Virtual Machine Templates are:

  • Virtual Machines that cannot be powered on
  • Virtual Machines that cannot be modified

Templates are like gold images: you create a template once and deploy multiple VMs from it. Template configuration files use the extension VMTX (virtual machine configuration files use the extension VMX). There are two easy ways to create a template:

  1. Clone an existing virtual machine (VM) to a template.
    – Creates a copy of an existing VM and registers it as a template.
  2. Convert an existing VM to a template.
    – Unregisters the existing VM and re-registers it as a template.
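
Both methods can be sketched in PowerCLI. The names 'GoldVM', 'GoldTemplate', the datastore and the folder are placeholders, and an active Connect-VIServer session is assumed.

```powershell
# 1. Clone an existing VM to a template (the source VM is left in place)
New-Template -VM (Get-VM 'GoldVM') -Name 'GoldTemplate' `
    -Datastore 'datastore1' -Location (Get-Folder 'Templates')

# 2. Convert an existing VM to a template (the VM is unregistered)
Set-VM -VM (Get-VM 'GoldVM') -ToTemplate -Confirm:$false
```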

Workflow for modifying a template:

  1. First, convert the template to a VM
  2. Optionally, make changes to the VM hardware (increase RAM, disk size, etc.)
  3. Power on the VM and install any new applications or updates
  4. After installing the applications and updates, shut down the VM
  5. Now convert the VM back to a template
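
The workflow above can be sketched in PowerCLI (the template name is a placeholder; this assumes an active Connect-VIServer session and VMware Tools in the guest):

```powershell
# 1. Convert the template back to a VM
$vm = Set-Template -Template (Get-Template 'GoldTemplate') -ToVM

# 2. Optionally change hardware, e.g. increase RAM
Set-VM -VM $vm -MemoryGB 8 -Confirm:$false

# 3. Power on and patch/update the guest
Start-VM -VM $vm

# 4. When done, shut down the guest (requires VMware Tools)
Shutdown-VMGuest -VM $vm -Confirm:$false

# 5. Once the VM is powered off, convert it back to a template
Set-VM -VM $vm -ToTemplate -Confirm:$false
```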

Deploying Virtual Machines from Templates and Guest Customization:

When virtual machines are deployed from a template, they are identical to the template: things such as the Windows SID and hostname will be the same. If static IPs are used, those will also be the same. This can create software and network conflicts.

To avoid such network or software conflicts, it's recommended to customize the VM guest during the deployment process. For guest OS customization of virtual machines deployed from templates, vCenter requires the following:

  • If the guest OS is Windows, VMware Tools and the Sysprep utilities need to be installed within the templates:
    • You will need to copy the Sysprep tools to vCenter for Windows 2000, XP and 2003. From Windows Vista onward, Sysprep is part of the base OS install. For where to copy the Sysprep utilities, read the following KB#1005593 article
  • For Linux VMs/templates, along with VMware Tools, you will also need Perl to be installed within your templates.
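
Deploying from a template with guest customization can be sketched as follows. The VM, template, host, datastore and spec names are placeholders, and the customization spec is assumed to already exist in vCenter.

```powershell
# Deploy a new VM from a template and apply a guest customization spec
New-VM -Name 'web01' `
    -Template (Get-Template 'GoldTemplate') `
    -VMHost (Get-VMHost 'myESXiHost1') `
    -Datastore 'datastore1' `
    -OSCustomizationSpec (Get-OSCustomizationSpec 'Win2012-Spec')
```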

Best practice: always create templates from powered-off virtual machines. Do not clone templates from powered-on virtual machines.

Enumerating GuestId supported by VMware ESXi

Today, while googling for the various GuestId values supported by VMware ESXi, I found a cool trick that someone posted on Server Fault. Here is a piece of PowerCLI code to do exactly that:

[System.Enum]::GetNames([VMware.Vim.VirtualMachineGuestOsIdentifier])

For reference, here is the original link where I found this:

http://serverfault.com/questions/597145/finding-guestid-in-offline-documentation


vCenter Install Issues on Windows 2008

Last week, I was installing vCenter Server 5.5 on a Windows 2008 virtual machine and it kept failing. It seems that vCenter SSO would not install correctly on a machine that has multiple NICs. (Interestingly, vCenter 5.1b installed without any issues on the same setup.)

Here is the exact scenario. My (virtual) machine in question had 3 NICs.

NIC1 10.40.40.111 (NATed Network with internet access)
NIC2 10.40.41.111 (Internal Network)
NIC3 10.40.42.111 (Internal Network)

The issue was that I wanted vCenter SSO (plus vCenter and the other components) to bind to NIC2, which connected to an internal network. My DNS server also resides on the same network, and the DNS forward and reverse lookups were correctly configured. But still, the vCenter SSO installation would always fail. During the SSO install, I selected the “hostname” instead of the IP address, and vCenter SSO would get bound to the wrong IP/NIC combination. I changed the adapter priorities, but with no success. Instead of using the hostname, I also tried using the IP address, but with no luck.

Finally, I disabled both adapters that I did not want vCenter SSO to select, and then did the entire vCenter Server installation. After installation, I checked access to vCenter via the Web Client, added a couple of hosts, and it all worked. I then re-enabled the disabled NICs, and the vCenter Server continued to play nicely.

Moral of the story: if you have a Windows machine (physical or virtual) with multiple NICs that is to be used as a vCenter Server, then during the vCenter installation disable the NICs you do not want vCenter Server to bind to. Complete the install, reboot the machine, and re-enable the disabled NICs.

What really makes me think is: during the vCenter SSO install, why can’t the install wizard pop up a dialog asking the user to select the appropriate NIC for network binding? The current installer behavior is not at all user friendly. This is a classic usability issue that you must avoid: over-engineering the auto-selection of a NIC during install could easily be replaced by a simple dialog box. Less code to write and audit means fewer bugs. Keep it simple.

This issue was observed on vCenter 5.5 build #1312299. I was using the vCenter installer ISO for the setup (VMware-VIMSetup-all-5.5.0-1312299.iso). I believe there were a few KB articles that talked about this, but I am unable to find those articles now.

And yes, I was not using a simple install; I was using a component-based install.

I also want to thank my friend and colleague Atul Bothe, who actually identified the workaround for this issue. Thanks Atul! 🙂