ESX(i) as a VM – vESX

Posted: 1st May 2015 by Bert in Uncategorized

Either ESX or ESXi Installable can be installed within a VM on ESXi, but there are a few points to note:

  • Some tweaks to the vmx files are needed to be able to start VMs within vESX
  • To demonstrate clustering & HA, each VM will need 4GB RAM and 4 vNICs
  • Two vSwitches are needed on the host specifically for the lab. These don’t need any pNICs, as connectivity will be through the virtual router, but they must be configured to accept promiscuous mode (see the note after this list)
  • Each vESX needs some disk space for a local datastore
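
Promiscuous mode is a per-vSwitch setting in the vSphere Client (vSwitch Properties, Security tab, Promiscuous Mode: Accept). For what it’s worth, on later ESXi releases (5.x onwards) the same thing can also be scripted from the host shell – a sketch only, using whatever you named your lab vSwitch:

  # allow promiscuous mode on the lab vSwitch (repeat for the iSCSI vSwitch)
  esxcli network vswitch standard policy security set --vswitch-name=vSwitch1 --allow-promiscuous=true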

Spreading things over different physical disks will improve disk-intensive tasks like cloning VMs, which can get tiresome when thrashing a single drive.

The actual installation is covered below – first we need the virtual infrastructure running.

OpenFiler

OpenFiler provides iSCSI storage that can be accessed by both the vESXs (for vMotion and HA) and the pESX. VM settings are:

  • OS type Other Linux (64-bit)
  • 1 vCPU, 512MB
  • 2 E1000 vNICs
  • Whatever storage you want to give it. I went for 2x 250GB thin provisioned volumes on different physical disks (to speed up cloning operations).

TechHead has an excellent article on getting OpenFiler up and running as an iSCSI target. The NICs will need an IP address from each subnet – 192.168.10.15 and 192.168.20.15 in my lab.

Domain Controller

A Windows DC isn’t really needed, but I’ve added one as the course covers the Windows AD integration that provides access rights (roles). vCenter Server also can’t be installed on a domain controller, so I needed two Windows Server VMs. Minimum spec is 1 vCPU, 512MB RAM and 1 vNIC (on the vSphereLab LAN). I tend to use 16GB thin-provisioned volumes for Server 2003 system volumes.

The DC can be used to provide DHCP and DNS services on the Lab LAN – my subnet configs are given above. Once DNS is running for the LAB domain (vspherelab.co.uk in my case), a forwarder can be added to DNS on the non-Lab LAN for the domain vspherelab.co.uk via 192.168.10.20 (the lab DC), and similarly the other way around, so everything can find and talk to everything else.
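
If you’d rather script the forwarders than click through the DNS console, dnscmd on Server 2003 can create a domain-specific (conditional) forwarder – a sketch assuming the non-lab DNS server is also a Windows box; swap the domain name and target IP around for the lab-side forwarder:

  rem on the non-lab DNS server: send vspherelab.co.uk queries to the lab DC
  dnscmd . /ZoneAdd vspherelab.co.uk /Forwarder 192.168.10.20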

No DHCP is required on the iSCSI LAN as there will only be a few devices on it anyway.

A lot of effort can be spared by disabling the Windows Firewall and Internet Explorer’s Enhanced Security Configuration (hidden in add/remove programs) on every Windows VM.
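
The firewall half of that can be scripted with one command per Server 2003 VM (IE’s Enhanced Security Configuration still has to come out via Add/Remove Windows Components):

  rem disable the Windows Firewall on a Server 2003 lab VM
  netsh firewall set opmode disable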

vCenter Server

VMware doesn’t seem to offer a ‘student’ license, so getting vSphere means registering for the 60-day evaluation license and grabbing what you can. Annoyingly, certain parts of the software covered in the course aren’t available either, like the new Data Recovery Appliance.

The minimum spec for the Windows 2003 Server VM for vCenter Server is 1 vCPU, 2GB RAM and 1 vNIC (on the vSphereLab LAN). The installation includes SQL Server Express, hence the additional RAM requirement.

Installation of vCenter Server itself is straightforward and wizard-driven. Guided Consolidation, Update Manager and Converter should also be installed.

[Image: VMware vSphere ESX Test Lab]

Installing ESX in a VM (vESX)

This is straightforward – create the VM, make the setting adjustments, connect the ISO to the DVD drive and follow the instructions. I installed ESXi for simplicity and because it will run with 1 vCPU. Once installed, add the IP address, DNS server (the Lab DC) and default gateway – that’s it.

VM settings are:

  • OS type: Other 64-bit
  • 1 vCPU (can use 2 if you prefer, but it can kill the performance of other VMs)
  • 4GB RAM
  • 4 E1000 vNICs (two on iSCSI vSwitch and two on Lab vSwitch)
  • At least 5GB HDD – plus whatever you want for a local datastore within each vESX
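
For reference, those choices end up as rows in the vESX’s vmx file along these lines – a sketch only, with the port group names (‘iSCSI’ and ‘vSphereLab’) taken from my lab; the vSphere Client writes all of this for you when the VM is created:

  guestOS = "other-64"
  numvcpus = "1"
  memsize = "4096"
  ethernet0.present = "TRUE"
  ethernet0.virtualDev = "e1000"
  ethernet0.networkName = "iSCSI"
  ethernet2.present = "TRUE"
  ethernet2.virtualDev = "e1000"
  ethernet2.networkName = "vSphereLab"

ethernet1 and ethernet3 follow the same pattern as ethernet0 and ethernet2 respectively.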

[Image: VMware vSphere ESX Test Lab]

A few settings need to be adjusted:

  • Expose the NX/XD flag
  • In Configuration Parameters (see image), add a row with name: monitor_control.restrict_backdoor and value: true
  • Set CPU/MMU Virtualisation to Hardware (CPU & MMU)

[Image: VMware vSphere ESX Test Lab]

In case you were wondering, Jim Mattson has posted an excellent description of the restrict_backdoor option on the VMware community forums. Without this line, VMs within vESX (nested VMs) can’t be started.

If configuring a 2 vCPU vESX, add a row so that the processor is detected as one dual-core socket, with name: cpuid.coresPerSocket and value: 2.
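
Putting the vmx tweaks together, the extra rows in the vESX’s configuration end up something like the below. The cpuid.coresPerSocket row only applies to a 2 vCPU vESX, and the monitor.virtual_exec/virtual_mmu pair is, as far as I can tell, what the ‘Hardware (CPU & MMU)’ option writes – if in doubt, set that (and the NX/XD flag) through the GUI instead:

  monitor_control.restrict_backdoor = "true"
  cpuid.coresPerSocket = "2"
  monitor.virtual_exec = "hardware"
  monitor.virtual_mmu = "hardware"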

Connecting the vESX to OpenFiler

TechHead has this covered here. If your ESX VM is not finding the iSCSI targets, check:

  • that both the OpenFiler and vmkernel IP addresses on the iSCSI LAN can be pinged
  • that promiscuous mode is enabled on the iSCSI and Lab vSwitches within pESXi
  • that the OpenFiler has an ‘allow’ Network Access Rule for the iSCSI subnet.
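
For the first check, vmkping from the vESX console (Tech Support Mode on ESXi) tests the path from the vmkernel interface itself rather than the management network – assuming, as in my lab, that 192.168.20.x is the iSCSI subnet and the OpenFiler is .15:

  # from the vESX, ping the OpenFiler's iSCSI address via the vmkernel stack
  vmkping 192.168.20.15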

Controlling the Workloads

Performance of nested VMs is fine for test purposes, but naturally the box overall will start to struggle when pushed too far. My machine typically has 10 to 13 VMs running (three or four being nested) and the physical CPU load when all are idle is around 1.3GHz with 7GB RAM used. The biggest limitation is the RAM capacity – once into the last gigabyte, ESX will activate the balloon driver and performance dives, mainly due to the resulting disk contention.

Fortunately, even the free ESXi provides resource pools, which can help protect the more important VMs running on the pESXi from being bogged down by the weight of the test environment. For those important VMs, a RAM reservation of 35% of the allocated RAM will also avoid vswapping, which absolutely destroys performance (at least on SATA disks).

In Summary

Almost everything covered in the official ICM course can be replicated at home with one quad-core ML115G5, the only real exceptions being Fault Tolerance (which starts to work but continually restarts the guests) and the components that VMware won’t let you have without paying for them, like Data Recovery.

Performance of nested VMs is surprisingly good for lab purposes – each layer of ESX seems to add about 10% overhead in raw CPU terms – and running virtualised shared storage with OpenFiler means a virtual high-availability (HA) cluster can be built.

In order to test HA with a reasonable number of slots, each vESX needs around 4GB of RAM. The weight of the overall test environment can be controlled in pESXi with a resource pool, but the 8GB capacity of the ML115G5 can be limiting. On the plus side, this makes some good heavy-stress performance analysis possible, which couldn’t be demonstrated in the real course lab because the machines couldn’t be loaded heavily enough.

Notes

  • Crucial currently have 2GB ECC modules for the ML115G5 at about £24 each
  • To experiment with FT, set Binary Translation mode by adding a row to the Configuration Parameters of the actual VM you want to test (i.e. a Windows VM running in a vESX) with name: replay.allowBTOnly and value: true
  • The VCP410 exam is available now, but delegates must have been on an accredited ICM course to be eligible to gain VCP status.
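
For clarity, the replay.allowBTOnly row from the FT note goes into the Configuration Parameters of the nested guest itself (not the vESX), ending up in that VM’s vmx as:

  replay.allowBTOnly = "true"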
