vCenter Install Issues on Windows 2008

Last week, I was installing vCenter Server 5.5 on a Windows 2008 virtual machine and the installation kept failing. It turns out that vCenter SSO would not install correctly on a machine with multiple NICs. (Interestingly, vCenter 5.1b installed without any issues on the same setup.)

Here is the exact scenario. My (virtual) machine in question had 3 NICs.

NIC1 10.40.40.111 (NATed Network with internet access)
NIC2 10.40.41.111 (Internal Network)
NIC3 10.40.42.111 (Internal Network)

The issue was that I wanted vCenter SSO (along with vCenter and the other components) to bind to NIC2, which connected to an internal network. My DNS server also resides on the same network, and the DNS forward and reverse lookups were correctly configured. But still, the vCenter SSO installation would always fail. During the SSO install I selected the “hostname” instead of the IP address, and vCenter SSO would get bound to the wrong IP/NIC combination. I changed the adapter priorities, but with no success. Instead of using the hostname, I also tried using the IP address, but with no luck.
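
Before kicking off the install, it is worth double-checking name resolution from the Windows machine itself. A quick sanity check from a command prompt could look like this (the FQDN below is only an illustrative example; use your own vCenter machine's name):

    rem Forward lookup of the vCenter machine's FQDN
    nslookup vcenter01.lab.local
    rem Reverse lookup of the NIC2 address the services should bind to
    nslookup 10.40.41.111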

Finally, I disabled both of the adapters that I did not want vCenter SSO to select, and then did the entire vCenter Server installation. After installation, I checked access to vCenter via the Web Client, added a couple of hosts, and it all worked. I then re-enabled the disabled NICs, and vCenter Server continued to play nicely.

Moral of the story: if you have a Windows machine (physical or virtual) with multiple NICs that is to be used as a vCenter Server, then during the vCenter installation disable the NICs that you do not want vCenter Server to bind to. Complete the install, reboot the machine, and re-enable the disabled NICs.
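
If you prefer the command line over the Network Connections UI, the adapters can be toggled with netsh. The interface names below are only examples; use the names that netsh reports on your machine:

    rem List the adapters and their current state
    netsh interface show interface
    rem Disable the NICs you do not want vCenter Server to bind to
    netsh interface set interface name="Local Area Connection" admin=disabled
    netsh interface set interface name="Local Area Connection 3" admin=disabled
    rem After the install completes and the machine has been rebooted, re-enable them
    netsh interface set interface name="Local Area Connection" admin=enabled
    netsh interface set interface name="Local Area Connection 3" admin=enabled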

What really makes me wonder is why, during the vCenter SSO install, the wizard cannot pop up a dialog asking the user to select the appropriate NIC for network binding. The current installer behavior is not at all user friendly. This is a classic usability issue that should be avoided. Over-engineering the auto-selection of a NIC during install could easily be replaced by a simple dialog box. Less code to write and audit means fewer bugs. Keep it simple.

This issue was observed on vCenter 5.5 build #1312299; I was using the vCenter installer ISO (VMware-VIMSetup-all-5.5.0-1312299.iso) for the setup. I believe there were a few KB articles that talked about this, but I am unable to find those articles now.

And yes, I was not using the simple install; I was using a component-based install.

I also want to thank my friend and colleague Atul Bothe, who actually identified the workaround to this issue. Thanks Atul! 🙂

Enabling Intel NIC (82579LM) on Intel S1200BT under ESXi 5.1

For my home lab I recently bought an Intel S1200BTS motherboard with an Intel Xeon E3-1220V2 processor. The board and processor work perfectly with vSphere 5.1; however, there is a small problem with the default install of vSphere. The board comes with 2 on-board NICs: one is an 82574L and the second is an 82579LM. The 82574L is correctly detected by ESXi, but the 82579LM is not, so this NIC cannot be used out of the box in vSphere 5.1.
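
A quick way to confirm the situation from the ESXi shell (a rough sketch; the exact output will vary with your hardware) is to compare the PCI device list with the NICs ESXi has actually claimed:

    # Lists PCI network devices, including the unclaimed 82579LM
    lspci | grep -i network
    # Lists only the NICs ESXi has loaded a driver for (just the 82574L here)
    esxcfg-nics -l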

Here is a quick way to enable the Intel 82579LM NIC on vSphere 5.1. To enable it you will need to download a custom-compiled driver with support for this card. I could have compiled the driver myself, but I thought why not Google to see whether someone had already compiled it for this board 😉 and lo, I found such a driver for vSphere 5.1.

Important: the following steps replace the default VMware driver for Intel NICs. If you don’t want to do that, stop here. Again, this should not be used on production ESXi servers; it is useful only for home lab environments.

Here are the steps to enable the Intel NIC 82579LM:

  1. Download the alternate driver for the Intel e1000e NIC (82579LM) here. (http://shell.peach.ne.jp/~aoyama/wordpress/download/net-e1000e-2.1.4.x86_64.vib)
  2. Connect to your ESXi box using the vSphere client.
  3. Using the datastore browser on your vSphere client, upload the alternate Intel NIC Driver to your datastore.
  4. Enable SSH on your ESXi box.
  5. Connect to your ESXi box using an SSH client (e.g. PuTTY).
  6. Move the custom Intel driver package to the /tmp directory:
    cd /tmp
    mv /vmfs/volumes/my-datastore/net-e1000e-2.1.4.x86_64.vib /tmp
    
  7. Before installing the VIB package, put the ESXi host into maintenance mode:
    esxcli system maintenanceMode set -e true -t 0
    
  8. Set the host acceptance level to CommunitySupported:
    esxcli software acceptance set --level=CommunitySupported
    
  9. Install the VIB package:
    esxcli software vib install -v /tmp/net-e1000e-2.1.4.x86_64.vib
    
  10. Take the ESXi host out of maintenance mode:
    esxcli system maintenanceMode set -e false -t 0
    
  11. That’s all. After a reboot, verify that your NIC was detected (see the command below).
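
A quick way to confirm the result after the reboot (assuming SSH is still enabled) is to list the NICs from the ESXi shell; the 82579LM should now appear alongside the 82574L:

    # Lists all NICs ESXi has detected, with their drivers and link state
    esxcli network nic list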

PS: I would like to mention that the real hard work of compiling a new NIC driver for ESXi was done by Daisuke Aoyama.

Building a vSphere 5.0 Home Lab – Part 2

In the previous post we discussed the hardware requirements. Based on those discussions, I have from time to time been evaluating the best and cheapest options for building a home lab. In this post we are going to discuss the hardware that will allow us to build a vSphere lab. Here are some of my recommendations:

Motherboard:  Intel Desktop Board DH67CL
We selected this motherboard because:

  • Uses cheap 1333MHz DDR3 RAM, with support for up to 32GB across 4 memory DIMM slots.
  • Has out-of-box vSphere5 support for the on-board Intel SATA controller.
  • Comes with 3 PCI slots, allowing easy expansion with a RAID card and/or an additional NIC.
  • Comes with an on-board Intel NIC based on the 82579V chipset. Although this is currently unusable under vSphere 5 since it is unsupported by VMware, I am sure VMware will support this NIC in upcoming updates.
  • Comes with a DVI/HDMI interface, so it saves you the cost of a graphics card.

Processor: Intel Core i5-2400 Processor (6M Cache, 3.10 GHz)
This processor supports Intel VT and comes with four cores, which meets all our requirements. The processor is also compatible with our selected motherboard (DH67CL).

RAM: As already discussed, we want to have loads of RAM. Buy 8GB 1333MHz DDR3 DIMMs from Transcend, Kingston, or Strontium. If you want 16GB of RAM you will need to buy 2 DIMMs of 8GB each; if you can afford it, buy 4 DIMMs.

Network Card: The on-board NIC that comes with the Intel DH67CL motherboard is an Intel card with the 82579V chipset. However, vSphere ESXi 5.0 does not presently support this chipset. Hence, to be able to successfully install vSphere ESXi 5.0, you will need to install a supported NIC in the motherboard. The motherboard comes with 3 PCI slots, so install the card in any one of these and you are good to go. I recommend using a standard Intel PRO/1000 NIC. This NIC should be available anywhere in India and typically costs ~Rs.700/-.

Hard Disk: Any SATA2/SATA3 disks are OK for us; pick your own brand. To keep costs low, pick 500GB disks, since we will need to buy 3 of them. BTW, if you are tight on budget a single disk will also do.

DVD Writer: Buy a SATA DVD writer; again, any brand will do. We will need this to install ESXi and also, sometimes, to burn DVDs/CDs from within the VMs.

USB Stick: Buy a 4GB USB stick, something small. The idea is that if it is small it won’t protrude too much from the back of our cabinet. I prefer to use the “SanDisk Cruzer Fit 4 GB Pen Drive”.

Cabinet & Power Supply: When choosing a cabinet, accessibility is key. Also, it gets pretty hot in India, and unless you have an AC installed, you will want something big, roomy, and airy with good cooling, something like a Chieftec. I would also suggest buying a good power supply that is silent. Personally I prefer the “Antec 450W Power Supply (VP450P)”; Antec power supplies are super silent.

Gigabit Network Switch: You will need a gigabit Ethernet switch to connect your ESXi host and your laptop or desktop. Though pricey, I prefer the ASUS RT-N16. It’s a gigabit wireless router with the ability to run custom firmware such as “Tomato USB”. Installing a custom firmware will allow you to test tagged-VLAN scenarios within your home lab. Again, if you are on a budget you may buy a cheaper unmanaged switch.
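
If you do go the custom-firmware route, the ESXi side of tagged-VLAN testing is simple. As a rough sketch (the port group name and VLAN ID below are only examples), you would assign a VLAN ID to a standard vSwitch port group like this:

    # Tag the "VM Network" port group with VLAN 100 on the ESXi host
    esxcli network vswitch standard portgroup set -p "VM Network" -v 100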

All of the above is available on Flipkart. However, for RAM I would suggest you try your local vendor; Flipkart does not stock 8GB DDR3 DIMMs from Transcend or Kingston :-).

Setting up the Home Lab:

  1. Once you have assembled the box, power it on and go to the BIOS settings. Ensure you can see all the installed RAM, and verify that “Intel VT” is enabled. I believe the “Intel VT” option is somewhere under the Security options in the BIOS settings.
  2. Download the “vSphere ESXi 5.0” ISO from the VMware site and register for a free hypervisor license.
  3. Plug the USB stick into your brand new server and install ESXi on it. Configure it with an appropriate IP address (a few commands to sanity-check the install from the ESXi shell are shown after this list).
  4. After the install is complete, reboot the box and go back to the BIOS; this time configure your box to boot from the USB stick. Verify that it boots from the USB successfully.
  5. Now download the vSphere Client from the VMware website and install it on your desktop or laptop.
  6. Connect to your ESXi host using the vSphere Client. Once you have reached this stage, everything else can be done using the GUI.
  7. You may also want to download the VMware vCenter Server Appliance.
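
Once ESXi is up on the network, a few shell commands help confirm the basics before you move on. This is just a sanity-check sketch; it assumes you have enabled SSH (or use the local console), and the gateway IP below is only an example:

    # Confirm the ESXi version and build you just installed
    esxcli system version get
    # Check the management network IP configuration
    esxcli network ip interface ipv4 get
    # Test connectivity to your gateway (replace with your gateway's IP)
    vmkping 192.168.1.1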

Building a vSphere 5.0 Home Lab – Part 1

Being a certified vSphere instructor, I have been delivering vSphere trainings for some time now. One question participants generally ask is “how do we gain expertise on the vSphere platform?” Participants who attend these trainings are typically seeking a career enhancement or a role change. I always end up saying: practice to gain confidence and build expertise. It’s easier said than done. Practicing vSphere requires a large lab setup, and many do not have access to lab setups at work. That’s where a home lab comes to your rescue. Here’s NOT a quick guide to building your home lab. 🙂

Though not rocket science, the challenge is in understanding the hardware compatibility requirements for vSphere; failure to do so will leave you with the wrong hardware. That is the focus of this blog post, i.e. how to keep costs low and still build a fully working home lab for practicing vSphere 5.0 scenarios.

First things first, let us identify requirements for our home lab:

    • vSphere has supported SATA drives as datastores for some time now, so most motherboards that support SATA are a good starting point.
    • Motherboards nowadays also come with an on-board NIC. However, vSphere does not support every network card (NIC) out there; vSphere is built for reliability and hence supports only a small subset of the available NICs. Many Intel and Broadcom chipset based NICs are supported, but not all. If you have a NIC other than Intel or Broadcom, just check whether that card is listed on the VMware HCL. Of late, VMware has also started supporting NICs from other vendors. If you have a supported NIC then you are in luck and may not need to do anything further; just go ahead and install vSphere. I prefer to use Intel PRO/1000 NICs since they are cheap, easily available, and well supported.
    • When we set up a home lab, we are actually going to run virtual ESXi servers under a physical ESXi host (i.e. ESXi-on-ESXi). Such a setup is often known as a vPOD. We would also like to run VMs (nested VMs) on these virtual ESXi hosts (a small host-side setting needed for 64-bit nested VMs is sketched after this list). When we consider these requirements, it turns out that disk IO bandwidth is often the bottleneck in such setups. Although it is not necessary, having multiple disks (spindles) reduces the bottleneck and makes a vPOD setup more usable at home. I typically recommend having at least three 500GB SATA disks, each connected to a separate SATA port and mounted as a separate VMFS datastore on the physical ESXi host. At a bare minimum two are recommended; if you can afford it, buy three disks. Again, not strictly necessary, but if you can afford it, buy a supported RAID controller and connect the disks in RAID0/RAID5 mode. This would save you some configuration trouble and improve performance.
    • A vPOD setup also requires a substantial amount of RAM; I would recommend anything between 16GB and 32GB. The more RAM, the better the overall performance of the vPOD. More RAM would also allow you to run additional vSphere setups, VMs, and other complementary appliances such as the vMA or a Windows VM for learning PowerShell.
    • Again, from a vPOD perspective, the more CPU cores on your physical setup, the better the performance of your vPOD. I would recommend going with a physical CPU with at least 4 cores.
    • USB stick: Instead of installing ESXi on a physical disk, we will install ESXi on a USB stick; that will save us some disk space and also allow us to upgrade easily to the next version of vSphere, whenever that is released.
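
As referenced in the vPOD point above, one small host-side tweak is usually needed on ESXi 5.0 to run 64-bit nested VMs inside the virtual ESXi hosts. A minimal sketch, assuming you run this from the physical host's shell and reboot the host afterwards:

    # Enable nested hardware virtualization on the physical ESXi 5.0 host
    echo 'vhv.allow = "TRUE"' >> /etc/vmware/config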

That’s all for now, folks. If you have been reading until now, let’s follow up on this in the next post.