Nutanix Configuration with vSphere 5.5

Over the past few weeks, I’ve been involved in a fast paced VDI project that is planned to scale up to 10K seats.

Of late, I've not had much involvement with VDI projects and have been focussing more on private cloud projects; however, this project quickly got my attention as Nutanix were the chosen vendor for the solution.
With the vendor already decided, the design process was easy, made even easier by the first-phase use case.
For those not already aware of Nutanix, here is a great video explaining How Nutanix Works, which gives a good insight into the offering.
I wanted to share some of the configuration changes we made during the build phase and to the vSphere 5.5 platform.

Nutanix Build

First off, Nutanix have recently changed the way they ship their blocks to site. Blocks used to ship with a specific flavour of VMware ESXi, but with added support for KVM and Hyper-V, and a number of different ESXi versions in use in the workplace, they found customers always wanted to change the out-of-the-box software. Nutanix now installs the Nutanix Operating System (NOS) controller virtual machine (CVM) and a KVM hypervisor at the factory before shipping. If you want to use a different hypervisor (such as VMware ESXi), the nodes must be re-imaged on site and the NOS CVM reinstalled. Sound daunting? Well, it’s not really.
Nutanix will provide you with two tools named Orchestrator (the node imaging tool) and Phoenix (the Nutanix installer ISO). Once you have these files and your chosen hypervisor ISO, you’ll need to download and install Oracle VM VirtualBox to get underway. This process is very well documented so I’m not going to replay it here, however I would suggest:
  1. Ensure the laptop/desktop you are configuring the Nutanix block from has IPv6 enabled. IPv6 is used for the initial cluster initialization process.
  2. If configuring multiple Nutanix blocks, only image four nodes at a time. We attempted eight at a time, but imaging that many nodes at once proved troublesome for the installer.
  3. Keep your ESXi VMkernel interfaces and CVMs on the same subnet. This is well documented; however, for security reasons we had to attempt to split them onto different VLANs, which caused some issues with the auto-pathing feature.

I’ll point out here that after we configured our first two-block, eight-node cluster, we decided to install the second two-block, eight-node cluster manually and skip the automated imaging process. This process is again very well documented, and it took less than an hour to have eight VMware ESXi nodes up and running with storage presented to all of them. Compare that to the traditional way and this is still a very impressive setup time.

When you’ve built your nodes, boot them into the BIOS and change the Power Technology setting from Energy Efficient to Max Performance to ensure power-saving features don’t dampen performance.

vSphere Configuration

In this particular environment, we were making use of Nutanix de-duplication, which increases the overhead on each CVM. We therefore increased the RAM on each CVM from the default of 16GB to 32GB and set a vSphere reservation to ensure it always has this physical RAM available.
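
If you’d rather script that change than click through the vSphere client, here’s a rough pyVmomi sketch of the idea. The vCenter address, credentials and CVM name below are placeholders of my own, and the CVM would typically need to be powered off (one at a time) before its memory can be changed.

```python
# Minimal pyVmomi sketch (assumptions: placeholder vCenter details and CVM name).
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Lab-style connection; skip certificate verification for brevity only.
si = SmartConnect(host="vcenter.example.local",
                  user="administrator@vsphere.local",
                  pwd="password",
                  sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

# Locate the CVM by name (hypothetical name used here).
view = content.viewManager.CreateContainerView(content.rootFolder,
                                               [vim.VirtualMachine], True)
cvm = next(vm for vm in view.view if vm.name == "NTNX-CVM-01")
view.Destroy()

# 32GB of RAM, fully reserved so the CVM never contends for physical memory.
# The CVM normally has to be powered off before this reconfigure will succeed.
spec = vim.vm.ConfigSpec(
    memoryMB=32768,
    memoryAllocation=vim.ResourceAllocationInfo(reservation=32768),
)
cvm.ReconfigVM_Task(spec)

Disconnect(si)
```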

Nutanix have done a very good job with their documentation, which details recommended configurations per hypervisor for things such as HA and DRS in vSphere. After reading an Example Architecture blog by Josh Odgers, I decided to add an advanced parameter to my HA configuration to change the default isolation address: “das.isolationaddress1”, set to the Nutanix cluster IP address. I chose the cluster IP address over a CVM IP address for a simple reason: if the CVM hosting the cluster IP address fails, the cluster IP is automatically moved to another CVM in the cluster. The cluster IP is a new configuration option that was released for Hyper-V support, but we can make good use of it in the VMware world.
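
The same advanced option can be pushed through the API rather than the Web Client. This is only a hedged pyVmomi sketch: the function name is my own, the cluster object is assumed to have been looked up already (for example via a container view as in the earlier snippet), and the IP address is a placeholder.

```python
# Hedged sketch: set das.isolationaddress1 as an HA advanced option on a cluster.
from pyVmomi import vim

def set_isolation_address(cluster, nutanix_cluster_ip):
    """Point the HA isolation check at the Nutanix cluster IP (placeholder value)."""
    das = vim.cluster.DasConfigInfo(
        option=[vim.OptionValue(key="das.isolationaddress1",
                                value=nutanix_cluster_ip)]
    )
    spec = vim.cluster.ConfigSpecEx(dasConfig=das)
    # modify=True merges this change with the cluster's existing configuration.
    return cluster.ReconfigureComputeResource_Task(spec, modify=True)

# Example (placeholder IP): set_isolation_address(cluster, "192.168.10.50")
```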

Each CVM resides on local SSD storage in the form of a 60GB SATA DOM. When you log on to the vSphere client and try to deploy a new workload, you will have the option of deploying to this 60GB SATA DOM SSD storage. This deployment was solely for a VDI project, so all workloads would be provisioned directly from a broker, meaning we can control in the broker which datastores the workloads reside on. So, to avoid any confusion and to stop an over-eager admin deploying to the SATA DOM disk, I created a Datastore Cluster with all Storage DRS and I/O options disabled and named it along the lines of “Nutanix NOS Datastore – Do Not Use”.
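
Creating that “Do Not Use” datastore cluster can also be scripted. Again, this is just a rough pyVmomi sketch under my own assumptions: the function and variable names are illustrative, and you would pass in the datacenter object and the local SATA DOM datastore objects you have already looked up.

```python
# Hedged sketch: create a datastore cluster for the SATA DOM datastores
# and explicitly leave Storage DRS disabled.
from pyVmomi import vim

def create_do_not_use_pod(content, datacenter, sata_dom_datastores):
    # Create the StoragePod (datastore cluster) under the datacenter's datastore folder.
    pod = datacenter.datastoreFolder.CreateStoragePod(
        name="Nutanix NOS Datastore - Do Not Use")

    # Move the local SATA DOM datastores into the new pod.
    pod.MoveIntoFolder_Task(sata_dom_datastores)

    # Disable Storage DRS on the pod so nothing is ever migrated onto or off it.
    sdrs_spec = vim.storageDrs.ConfigSpec(
        podConfigSpec=vim.storageDrs.PodConfigSpec(enabled=False)
    )
    return content.storageResourceManager.ConfigureStorageDrsForPod_Task(
        pod=pod, spec=sdrs_spec, modify=True)
```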

Overall, the Nutanix devices are very easy to deploy and you can be up and running in next to no time. Now that I’ve managed to get all this down in a post, I can do some performance tests for my next one!