Building a vSphere 5.0 Home Lab – Part 2

In the previous post we discussed hardware requirements. Based on those discussions, I have from time to time been evaluating the best and cheapest options for building a home lab. In this post we are going to look at the specific hardware that will let us build a vSphere lab. Here are my recommendations:

Motherboard:  Intel Desktop Board DH67CL
We selected this motherboard because:

  • Uses cheap 1333 MHz DDR3 RAM and supports up to 32 GB across its 4 DIMM slots.
  • Has out-of-the-box vSphere 5 support for the on-board Intel SATA controller.
  • Comes with 3 PCI slots, allowing easy expansion with a RAID card and/or an additional NIC.
  • Comes with an on-board Intel NIC based on the 82579V chipset. This is currently unusable under vSphere 5 since VMware does not support it, but I am sure VMware will add support in an upcoming update.
  • Comes with DVI/HDMI interfaces, so it also saves you the cost of a graphics card.

Processor: Intel Core i5-2400 Processor (6M Cache, 3.10 GHz)
This processor supports Intel VT and comes with four cores, meeting all our requirements. It is also compatible with our selected motherboard (DH67CL).

RAM: As already discussed, we want loads of RAM. Buy 8 GB 1333 MHz DDR3 DIMMs from Transcend, Kingston, or Strontium. For 16 GB you will need two 8 GB DIMMs; if you can afford it, buy four.

Network Card: The on-board NIC on the Intel DH67CL is an Intel card with the 82579V chipset, which vSphere ESXi 5.0 presently does not support. To install ESXi 5.0 successfully, you will need to add a supported NIC. The motherboard already has 3 PCI slots, so install the card in any one of these and you are good to go. I recommend a standard Intel PRO/1000 NIC; it should be available anywhere in India and typically costs around Rs. 700.

Hard Disk: Any SATA2/SATA3 disks will do; pick your own brand. Stick to 500 GB disks, since we will be buying three of them and want to keep costs low. If you are tight on budget, a single disk will also do.

DVD Writer: Buy a SATA DVD writer; again, any brand will do. We will need it to install ESXi and occasionally to burn DVDs/CDs from within the VMs.

USB Stick: Buy a small 4 GB USB stick. The idea is that a small stick won't protrude too much from the back of our cabinet. I prefer the "SanDisk Cruzer Fit 4 GB Pen Drive".

Cabinet & Power Supply: For the cabinet, accessibility is key. It also gets pretty hot in India, and unless you have an AC installed you will want something big, roomy, and airy with good cooling, such as a Chieftec. I would also suggest buying a good, silent power supply. Personally I prefer the "Antec 450W Power Supply (VP450P)"; Antec power supplies are super silent.

Gigabit Network Switch: You will need a gigabit Ethernet switch to connect your ESXi host and your laptop or desktop. Though pricey, I prefer the ASUS RT-N16. It is a gigabit wireless router with the ability to run a custom firmware such as "Tomato USB", which will let you test tagged-VLAN scenarios within your home lab. Again, if you are on a budget you may buy a cheaper unmanaged switch.
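On the ESXi side, a tagged-VLAN port group is only a couple of commands away. A sketch, run from the ESXi shell on the host itself; the port group name and VLAN ID here are just examples:

```
# Run from the ESXi 5.0 shell; "VLAN100-Test" and VLAN ID 100 are examples.
esxcfg-vswitch -A "VLAN100-Test" vSwitch0        # add a port group to vSwitch0
esxcfg-vswitch -v 100 -p "VLAN100-Test" vSwitch0 # tag the port group with VLAN 100
esxcfg-vswitch -l                                # list vSwitches to verify the result
```

Any VM connected to this port group will then send and receive traffic tagged with VLAN 100, which is what the custom-firmware switch lets you exercise.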

All of the above is available on Flipkart. For the RAM, however, I would suggest trying your local vendor; Flipkart does not stock 8GB DDR3 DIMMs from Transcend or Kingston :-).

Setting up the Home Lab:

  1. Before we begin, power on your box and go to the BIOS settings. Ensure you can see all the installed RAM, and verify that "Intel VT" has been enabled. I believe the "Intel VT" option is somewhere under the Security options in the BIOS settings.
  2. Once you have assembled the box, download the "vSphere ESXi 5.0" ISO from the VMware site and register for a free hypervisor license.
  3. Plug the USB stick into your brand-new server and install ESXi on it. Configure it with an appropriate IP address.
  4. After the install is complete, reboot the box, go back to the BIOS, and this time configure your box to boot from the USB stick. Verify that it boots successfully.
  5. Now download the vSphere Client from the VMware website and install it on your desktop or laptop.
  6. Connect to your ESXi host using the vSphere Client. Once you have reached this level, everything else can be done using the GUI.
  7. You may also want to download the VMware vCenter Server Appliance.
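If you prefer the command line over the GUI, a few ESXi shell commands are handy for verifying the install. A sketch; these run on the ESXi host itself (enable the ESXi Shell or SSH first), not on your desktop:

```
# Run from the ESXi 5.0 shell (Tech Support Mode).
esxcli system version get        # confirm the ESXi version and build you installed
esxcli network nic list          # the PRO/1000 NIC should show up here as vmnicN
esxcli storage filesystem list   # your VMFS datastores, one per SATA disk
esxcfg-route -l                  # check the default gateway you configured
```

If the NIC list comes back empty, ESXi did not recognize your network card; double-check it against the VMware HCL.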

Building a vSphere 5.0 Home Lab – Part 1

Being a certified vSphere instructor, I have been delivering vSphere trainings for some time now. One question participants generally have is "how do we gain expertise on the vSphere platform?" Participants who attend these trainings are typically seeking career enhancement or a role change. I always end up saying: practice to gain confidence and build expertise. That is easier said than done. Practicing vSphere requires a large lab setup, and many do not have access to lab setups at work. That's where a home lab comes to your rescue. Here's NOT a quick guide to building your home lab. 🙂

Though not rocket science, the challenge is in understanding vSphere's hardware compatibility requirements; fail to do so and you will end up selecting the wrong hardware. That is the focus of this blog post: how to keep costs low and still build a fully working home lab for practicing vSphere 5.0 scenarios.

First things first, let us identify requirements for our home lab:

    • vSphere has supported SATA drives as datastores for some time now, so most motherboards that support SATA are a good starting point.
    • Motherboards nowadays also come with an on-board NIC. However, vSphere does not support every network card (NIC) out there; it is built for reliability and supports only a small subset of available NICs. Many Intel and Broadcom chipset based NICs are supported, but not all, and of late VMware has also started supporting NICs from other vendors. If you have a NIC from another vendor, check whether it is on the VMware HCL. If your NIC is supported, you are in luck and need do nothing further; just go ahead and install vSphere. I prefer Intel PRO/1000 NICs since they are cheap, easily available, and well supported.
    • When we set up a home lab, we are actually going to run virtual ESXi servers under a physical ESXi host (i.e. ESXi-on-ESXi). Such a setup is often known as a vPOD. We would also like to run VMs (nested VMs) on these virtual ESXi hosts. Given these requirements, disk I/O bandwidth often turns out to be the bottleneck. Although not strictly necessary, having multiple disks (spindles) reduces this bottleneck and makes a vPOD setup much more usable at home. I typically recommend at least three 500 GB SATA disks, each connected to a separate SATA port and mounted as a separate VMFS datastore on the physical ESXi host. At a bare minimum use two; if you can afford it, buy three. Again not necessary, but if you can afford it, buy a supported RAID controller and connect the disks in RAID0/RAID5 mode; this will save you some configuration trouble and improve performance.
    • A vPOD setup also requires a substantial amount of RAM; I would recommend anything between 16 GB and 32 GB. The more RAM, the better the overall performance of the vPOD. More RAM also lets you run additional vSphere setups, VMs, and complementary appliances such as the vMA or a Windows VM for learning PowerShell.
    • Likewise, from a vPOD perspective, the more CPU cores on your physical box, the better your vPOD will perform. I recommend going with a physical CPU with at least 4 cores.
    • USB stick: Instead of installing ESXi on a physical disk, we will install it on a USB stick. This saves us some disk space and also makes it easy to upgrade to the next version of vSphere, whenever that is released.
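The RAM guidance above is easy to sanity-check with a back-of-the-envelope budget. A minimal sketch; every figure here is an assumption of mine (host overhead, management VM size, RAM per nested host), so adjust to your own sizing:

```shell
#!/bin/sh
# Rough vPOD RAM budget. All numbers below are assumptions; tune to taste.
total_mb=$((32 * 1024))      # 32 GB of physical RAM
host_overhead_mb=2048        # the physical ESXi host itself
mgmt_mb=4096                 # vCenter appliance / management VMs
per_nested_mb=4096           # RAM given to each virtual ESXi host

avail_mb=$((total_mb - host_overhead_mb - mgmt_mb))
nested_hosts=$((avail_mb / per_nested_mb))
echo "${avail_mb} MB left for nested ESXi -> ${nested_hosts} hosts"
```

With 16 GB instead of 32 GB the same arithmetic leaves room for only two or three nested hosts, which is why I lean towards 32 GB if the budget allows.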

That’s all for now, folks. If you have read this far, let’s follow up on this in the next post.