Windows 2003 VM Guest customization no longer supported

Today I deployed a Windows 2003 VM in a test infrastructure. I did not want to spend too much time installing and configuring the OS, so I chose Windows 2003 instead of something like Windows 10 or Windows 2016.

Why do I still use and love to deploy Windows 2003 in my test environments? The reason is very simple:

  • Windows 2003 works very well in a nested environment.
  • A Windows 2003 virtual machine has a small footprint; it does not require a lot of RAM or disk space.
  • Finally, the important part: Windows 2003 was supported by VMware for guest customization under vCenter.

Now here is the twist in the story: I was trying guest customization of Windows 2003 in vCenter 6.7 and, guess what, it failed. I did a bit of investigation into why this was happening. I never expected VMware to stop supporting Windows 2003 guest customization under vCenter 6.7, but that's exactly what has happened: Windows 2003 guest customization is no longer supported starting from vCenter 6.7.

Here is the support document from VMware that states Windows 2003 guest customization is no longer supported under vCenter 6.7:

https://partnerweb.vmware.com/programs/guestOS/guest-os-customization-matrix.pdf
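
If you want to check this from the API side, here is a minimal pyVmomi sketch. The vCenter hostname and credentials are placeholders, and 'winNetStandardGuest' is the guest OS identifier for Windows Server 2003 Standard; on a vCenter release that has dropped Windows 2003 customization, I would expect this check to raise a customization fault, though I have not verified that against 6.7.

```python
# Probe vCenter's CustomizationSpecManager for a given guest OS ID.
# Hostname and credentials below are placeholders for a lab setup.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only: skip certificate validation
si = SmartConnect(host='vcenter.lab.local',
                  user='administrator@vsphere.local',
                  pwd='VMware1!', sslContext=ctx)
try:
    spec_mgr = si.RetrieveContent().customizationSpecManager
    # Raises a CustomizationFault (e.g. UncustomizableGuest) if this
    # vCenter can no longer customize the given guest OS.
    spec_mgr.CheckCustomizationResources(guestOs='winNetStandardGuest')
    print('Guest customization resources available for Windows 2003')
except vim.fault.CustomizationFault as err:
    print('Customization not supported: %s' % err.msg)
finally:
    Disconnect(si)
```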

Thumb rules for vCPU allocation

Here are some thumb rules (best practices) for capacity planning of virtual CPU (vCPU) allocation within your virtual infrastructure. These rules typically work 90 times out of 100, though there will be exceptions (a small worked example follows the list):

  1. Don't configure any single VM with more vCPUs than the total number of physical cores available on the host. While allocating, do not count hyper-threads (the hyper-threading feature on Intel hosts).
  2. If you plan to overcommit CPU, then wherever possible do not assign a VM more vCPUs than the number of cores on a single physical socket (exceptions exist). Example: rather than giving a virtual machine two virtual sockets with a single core each, it is better to assign a single virtual socket with two cores.
  3. Map physical NUMA to virtual NUMA. Avoid VMs with a wide vNUMA topology.
  4. The hypervisor (VMkernel) also has overhead, so ensure that at any given point in time there is at least one free physical core on which to schedule it. This also affects what we stated in point 1: when considering the maximum number of physical cores, reduce the number by 1 on a single-socket system and by 2 on a multi-socket system (to account for the hypervisor CPU overhead).
  5. For server virtualization projects, the vCPU-to-physical-core ratio should not exceed 12 to 13 vCPUs per physical core. You can comfortably start with an allocation of about 7 vCPUs per physical core.
  6. For virtual desktop environment projects, the ratio should not exceed 18 to 20 vCPUs per physical core. You can comfortably start with an allocation of about 12 vCPUs per physical core.
  7. Whenever possible, do not allocate more vCPUs than your application or virtual machine actually requires. For example, if a single vCPU is enough, assign one vCPU rather than two. Over-commitment at the hypervisor level is quite different from over-commitment at the virtual machine level, and the two should not be confused.
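
To make the arithmetic in points 1, 4, 5 and 6 concrete, here is a small Python sketch. The host size (2 sockets of 10 cores each) is just an example, not a recommendation:

```python
# Back-of-the-envelope sizing for an example host: 2 sockets x 10 cores.
# These host numbers are illustrative; plug in your own hardware.
sockets = 2
cores_per_socket = 10
physical_cores = sockets * cores_per_socket   # 20 cores; hyper-threads not counted (rule 1)

# Rule 4: keep cores free for the VMkernel (1 on single-socket, 2 on multi-socket hosts).
hypervisor_reserve = 1 if sockets == 1 else 2
largest_single_vm = physical_cores - hypervisor_reserve   # at most 18 vCPUs for one VM

# Rules 5 and 6: starting and ceiling consolidation ratios per physical core.
server_start, server_cap = 7, 13   # server workloads
vdi_start, vdi_cap = 12, 20        # virtual desktop workloads

print('Largest single VM: %d vCPUs' % largest_single_vm)
print('Server VMs: start around %d vCPUs total, cap at %d'
      % (physical_cores * server_start, physical_cores * server_cap))
print('Desktop VMs: start around %d vCPUs total, cap at %d'
      % (physical_cores * vdi_start, physical_cores * vdi_cap))
```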

When considering virtualizing CPU-intensive applications (currently running on physical hardware), keep the following in mind:

  • Assume hyper-threading is enabled in both the physical and the virtual infrastructure. Remember that a single vCPU corresponds to a single hyper-thread in the virtual world, whereas in the physical world one core corresponds to two hyper-threads. So if you assign your virtual machine the same number of vCPUs as the number of physical cores the application had, you are actually giving the application half the CPU resources it had in the physical world, and under load it will perform at roughly 50% of the physical instance. Hence it is better to double the number of vCPUs assigned to the virtual machine, as in the sketch below.
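
As a quick sanity check on that doubling rule, here is a tiny sketch; the core count is just an example:

```python
# Translate a physical CPU footprint into vCPUs under the doubling rule above.
# Assumes hyper-threading is enabled on both sides: one physical core exposes
# two hardware threads, while one vCPU maps to a single thread.
physical_cores_used = 8        # cores the app consumed on bare metal (example)
threads_per_core = 2           # with hyper-threading enabled
vcpus_needed = physical_cores_used * threads_per_core

print('Assign %d vCPUs to match %d physical cores with hyper-threading'
      % (vcpus_needed, physical_cores_used))   # -> 16 vCPUs
```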