Find aggregate allocated VM CPU and memory on a per-host basis

Here are a couple of quick-and-dirty PowerCLI one-liners to find the total allocated CPU and memory on a per-ESXi-host basis. This is useful when you want to quickly work out your over-commitment ratios.

$TotalMem=0; Get-VMHost myESXiHost1 | Get-VM | ?{$_.PowerState -match 'PoweredOn'} | %{$TotalMem=$TotalMem+$_.MemoryGB};Write-Host $TotalMem
$TotalCPU=0; Get-VMHost myESXiHost2 | Get-VM | ?{$_.PowerState -match 'PoweredOn'} | %{$TotalCPU = $TotalCPU+$_.numCPU};Write-Host $TotalCPU
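
If you prefer something slightly tidier, the same totals can be collected with Measure-Object. A minimal sketch, assuming the same placeholder host name myESXiHost1:

$vms = Get-VMHost myESXiHost1 | Get-VM | ?{$_.PowerState -eq 'PoweredOn'}
# Sum the vCPUs and configured memory (GB) of the powered-on VMs on the host
"Total vCPU : {0}" -f ($vms | Measure-Object -Property NumCpu -Sum).Sum
"Total MemGB: {0}" -f ($vms | Measure-Object -Property MemoryGB -Sum).Sum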

Thumb rules for memory allocation across virtual machines

Here are some thumb rules (best practices) for capacity planning of memory (RAM) allocation within your virtual infrastructure. These rules typically work 90 times out of 100, though there will be exceptions:

  1. If you have over-committed the available physical memory on the hypervisor, then the memory configured on any single virtual machine should ideally be less than 40% of the total physical memory; in exceptional cases it may go up to 60%.
  2. Over-commitment should typically not exceed 30% of the total physical memory, measured across all virtual machines in aggregate. Remember that only powered-on virtual machines count towards over-commitment (see the sketch after this list).
  3. On a per-virtual-machine basis, never configure more memory than the physical memory available on the hypervisor.
  4. It's a good idea to reserve 30% of a virtual machine's memory in server virtualization projects, or to increase the reservation to accommodate its active memory.
  5. Ensure memory assigned is vNUMA balanced.
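
Rule 2 is easy to verify with PowerCLI. A minimal sketch, again using the placeholder host name myESXiHost1, that reports how far the powered-on VMs over-commit the host's physical memory:

$esx  = Get-VMHost myESXiHost1
$vmGB = (Get-VM -Location $esx | ?{$_.PowerState -eq 'PoweredOn'} | Measure-Object -Property MemoryGB -Sum).Sum
# Positive result = allocated VM memory exceeds physical memory (over-committed)
$pct  = [math]::Round((($vmGB - $esx.MemoryTotalGB) / $esx.MemoryTotalGB) * 100, 1)
"{0}: {1} GB allocated vs {2} GB physical ({3}% over-commitment)" -f $esx.Name, $vmGB, [math]::Round($esx.MemoryTotalGB, 1), $pct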

On physical machines it's good practice to buy all the physical RAM from a single vendor with identical specifications, and to purchase it in a configuration that populates all the DIMM slots across all the physical sockets.

Thumb rules for vCPU allocation

Here are some thumb rules (best practices) for capacity planning of virtual CPU (vCPU) allocation within your virtual infrastructure. These rules typically work 90 times out of 100, though there will be exceptions:

  1. Don’t configure any single VM with more vCPUs than the total number of physical cores available on the host, and don’t count hyper-threads (the Hyper-Threading feature on Intel hosts) when allocating.
  2. If you plan to over-commit CPU, then wherever possible do not assign a VM more vCPUs than the number of cores on a single physical socket (exceptions exist). For example, rather than giving a virtual machine two virtual sockets with one core each, it is better to give it a single virtual socket with two cores.
  3. Map physical NUMA to virtual NUMA. Avoid using a VM with wide vNUMA.
  4. The hypervisor (VMkernel) also has overhead, so ensure that at any given point in time there is at least one free physical core on which to schedule it. This also affects point 1: when working out the maximum number of physical cores, reduce the count by 1 on a single-socket system and by 2 on a multi-socket system to account for the hypervisor's CPU overhead.
  5. For server virtualization projects the vCPU-to-physical-core ratio should not exceed roughly 12-13 vCPUs per physical core. You can comfortably start with an allocation of about 7 vCPUs per physical core (see the sketch after this list).
  6. For virtual desktop environment projects the ratio should not exceed roughly 18-20 vCPUs per physical core. You can comfortably start with an allocation of about 12 vCPUs per physical core.
  7. Whenever possible, do not allocate more vCPUs than your application or virtual machine actually needs; for example, assign a single vCPU rather than two vCPUs if one is sufficient. Over-commitment at the hypervisor level is quite different from over-commitment at the virtual machine level, and the two should not be confused.
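
A minimal PowerCLI sketch, again with the placeholder host name myESXiHost1, to check the current vCPU-to-physical-core ratio on a host:

$esx   = Get-VMHost myESXiHost1
$vcpus = (Get-VM -Location $esx | ?{$_.PowerState -eq 'PoweredOn'} | Measure-Object -Property NumCpu -Sum).Sum
# NumCpu on the host object reports physical cores (not hyper-threads)
"{0}: {1} vCPUs on {2} cores = {3} vCPUs per core" -f $esx.Name, $vcpus, $esx.NumCpu, [math]::Round($vcpus / $esx.NumCpu, 1)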

When considering virtualizing CPU-intensive applications (currently running on physical hardware), some things to remember:

  • Assuming hyper-threading is enabled in both the physical and the virtual infrastructure, remember that a single vCPU corresponds to a single hyper-thread in the virtual world, whereas in the physical world one core corresponds to two hyper-threads. So if you assign your virtual machine the same number of vCPUs as the number of physical cores the application previously had, you are actually giving the application/virtual machine only half the CPU resources, and under load it will perform at roughly 50% of the physical instance. Hence it is better to double the number of vCPUs assigned to the virtual machine; for example, an application that used an 8-core (16-thread) physical server would get 16 vCPUs.

What are Virtual Machine Templates?

In vSphere, Virtual Machine Templates are:

  • Virtual Machines that cannot be powered on
  • Virtual Machines that cannot be modified

Templates are like gold images: you create a template once and deploy multiple VMs from it. Template configuration files use a VMTX extension (virtual machine configuration files use a VMX extension). There are two easy ways to create a template (a PowerCLI sketch follows the list):

  1. Clone an existing virtual machine (VM) to a template.
    – Creates a copy of an existing VM and registers it as a template.
  2. Convert an existing VM to a template.
    – Unregisters the existing VM and registers it as a template.
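
Both options are also available in PowerCLI. A minimal sketch, where myVM, myTemplate and the Templates folder are placeholder names:

# 1. Clone an existing VM to a template (the source VM is left untouched)
New-Template -VM (Get-VM myVM) -Name myTemplate -Location (Get-Folder Templates)
# 2. Convert an existing (powered-off) VM to a template
Set-VM -VM (Get-VM myVM) -ToTemplate -Confirm:$false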

Workflow for modifying a template (a PowerCLI sketch follows the steps):

  1. First convert the template to a VM
  2. Optionally, make changes to the VM hardware (increase RAM, disk size, etc.)
  3. Power on the VM and install any new applications or updates
  4. After installing the applications and updates, shut down the VM
  5. Now convert the VM back to a template
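
The same workflow looks roughly like this in PowerCLI (a sketch; myTemplate is a placeholder name, and the guest patching happens between powering on and shutting down):

$vm = Set-Template -Template (Get-Template myTemplate) -ToVM   # 1. template back to VM
Set-VM -VM $vm -MemoryGB 8 -Confirm:$false                     # 2. optional hardware change
Start-VM -VM $vm                                               # 3. power on, patch the guest
Stop-VMGuest -VM $vm -Confirm:$false                           # 4. graceful guest shutdown (needs VMware Tools)
# 5. once the VM reports PoweredOff, convert it back to a template
Set-VM -VM $vm -ToTemplate -Confirm:$false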

Deploying Virtual Machines from Templates and Guest Customization:

When virtual machines are deployed from a template, they are identical to the template: things such as the Windows SID and hostname are the same, and if static IPs are used, those are the same too. This can create software and network conflicts.

To avoid such network or software conflicts, it's recommended to customize the guest OS during the deployment process (a deployment sketch follows the list). For guest OS customization of virtual machines deployed from templates, vCenter requires the following:

  • If the guest OS is Windows, you need VMware Tools installed within the template and the Sysprep utilities available:
    • For Windows 2000, XP and 2003 you will need to copy the Sysprep tools to the vCenter Server; from Windows Vista onward, Sysprep is part of the base OS install. For where to copy the Sysprep utilities, read the KB#1005593 article.
  • For Linux VMs/templates, along with VMware Tools, you will also need Perl to be installed within your templates.
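
Deploying a customized VM from a template is then a one-liner in PowerCLI. A minimal sketch, where web01, myTemplate, myESXiHost1 and the existing customization specification mySpec are all placeholder names:

# Deploy a new VM from a template and apply a guest customization specification
New-VM -Name web01 -Template (Get-Template myTemplate) -VMHost (Get-VMHost myESXiHost1) -OSCustomizationSpec (Get-OSCustomizationSpec mySpec)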

Best practice: always create templates from powered-off virtual machines; do not clone a template from a powered-on virtual machine.

What is vCenter Single Sign-On?

  • It is very similar to an Active Directory Domain Architecture
  • It is an authentication broker
  • It configures a vsphere.local domain (in vSphere 5.1 the local domain was called “system-domain”)

The Single Sign-On (SSO) architecture that is part of vCenter in vSphere 5.5 has the following features:

  1. It uses a Kerberos-like (token-based) authentication mechanism
  2. It uses a Security Token Service (STS) for authentication
  3. You can create one-way trust relationships with existing Windows Active Directory Domains or OpenLDAP domains
  4. You can have multiple such trust relationships defined. Being able to define multiple trust relationships is very useful in a cloud enabled era.

One important thing to remember (and hopefully this removes a lot of confusion) is that this vCenter Single Sign-On infrastructure is only used to authenticate users and groups to the vSphere infrastructure and to applications that integrate with vCenter. It does not provide authentication services for desktops or other desktop/user applications; to put it more precisely, it is not a replacement for an Active Directory domain. In fact, it works as a complementary solution, authenticating Active Directory users and groups to the vSphere infrastructure.

In the Single Sign-On infrastructure the default administrator user is administrator@vsphere.local. This user is an administrator on both the vsphere.local Single Sign-On domain and the vCenter Server inventory.

  • On a Windows-based vCenter Server system, you set the password for this user (administrator@vsphere.local) during the installation of Single Sign-On.
  • On the vCenter Server Appliance (a Linux-based virtual appliance), the administrator@vsphere.local user gets created and its password set during the initialization/configuration of the appliance.

What are Virtual Appliances?

Virtual appliances are portable virtual machines: you can export a virtual machine as a virtual appliance, and later import the virtual appliance back as a virtual machine.

Virtual Appliances are available either as a folder of files – OVF (Open Virtualization format) or as a single (tarball) file – OVA (Open Virtualization archive).
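
In PowerCLI the export/import round-trip looks roughly like this (a sketch; myVM, myESXiHost1 and the paths are placeholders, and the VM should be powered off before export):

# Export a powered-off VM to an OVF folder (use -Format Ova for a single OVA file)
Export-VApp -VM (Get-VM myVM) -Destination C:\Exports\ -Format Ovf
# Import the appliance back as a new virtual machine
Import-VApp -Source C:\Exports\myVM\myVM.ovf -VMHost (Get-VMHost myESXiHost1) -Name myVM-copy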

Why use Virtual Appliances?

Since you already have an existing virtual infrastructure, you can use it to run virtual appliances. A virtual appliance is essentially a VM pre-installed with an operating system (OS) and an application. Virtual appliances are built in such a way that you just import the appliance and start using it with minimal network and application configuration.

Advantages of Virtual Appliances:

Both physical and virtual deployments require approval from the finance team; however, once the finance approval is in place, implementing a virtual appliance requires no further approvals:

  1. No need for approval from the data-center team for rack space
  2. No need for approval from the networking team for free ports on network switches
  3. No need for approval for power requirement
  4. No need for approval for air conditioning or cooling needs

All these approvals can easily stretch your deployment time to about 4-6 weeks, whereas with a virtual appliance it comes down to about 2 hours. Other advantages of virtual appliances include:

  1. Standard off-the-shelf server hardware is used to run virtual appliances
  2. Reduces AMC (annual maintenance contract) costs, as there is one less hardware vendor to manage
  3. Easy to standardize on a single hardware vendor
  4. Improves return on your investment in hardware infrastructure

You can find several free and paid appliances from various vendors. VMware has the VMware Virtual Appliance Marketplace, several free Linux-based open-source virtual appliances are available from Turnkey Linux, and you can build your own virtual appliances using SUSE Studio. There is also a free Linux-based L3 switch appliance from VyOS, and the excellent Monowall firewall is available as a virtual appliance for vSphere.

Overall I believe virtual appliances are here to stay, and the ease of management and deployment is what makes them a very attractive form factor.