Over the past few weeks, I’ve been involved in a fast-paced VDI project that is planned to scale up to 10K seats.
- Ensure the laptop/desktop you are configuring the Nutanix block from has IPv6 enabled, as IPv6 is used for the initial cluster initialization process.
- If configuring multiple Nutanix blocks, only image 4 nodes at a time. We attempted 8 at a time, but imaging that many nodes simultaneously proved troublesome for the installer.
- Keep your ESXi VMkernel interfaces and CVMs on the same subnet. This is well documented, but for security reasons we had to attempt to split them across different VLANs, which caused some issues with the auto-pathing feature.
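The first and third bullets are easy to sanity-check before you start. Here’s a minimal POSIX-shell sketch, assuming a Linux workstation; the addresses and netmask at the bottom are placeholders, so substitute your own VMkernel and CVM IPs:

```shell
#!/bin/sh

# 1) IPv6 check: the Linux kernel exposes /proc/net/if_inet6 only
#    when IPv6 is enabled, which the initial node discovery needs.
if [ -f /proc/net/if_inet6 ]; then
    echo "IPv6 enabled"
else
    echo "IPv6 disabled - enable it before running discovery"
fi

# 2) Subnet check: AND each octet with the netmask and compare the
#    resulting network addresses of a VMkernel IP and a CVM IP.
apply_mask() {
    IFS=. read -r a1 a2 a3 a4 <<EOF
$1
EOF
    IFS=. read -r m1 m2 m3 m4 <<EOF
$2
EOF
    echo "$((a1 & m1)).$((a2 & m2)).$((a3 & m3)).$((a4 & m4))"
}

same_subnet() {
    [ "$(apply_mask "$1" "$3")" = "$(apply_mask "$2" "$3")" ]
}

# Placeholder addresses - substitute your own
if same_subnet 10.0.10.21 10.0.10.31 255.255.255.0; then
    echo "VMkernel and CVM are on the same subnet"
else
    echo "WARNING: different subnets - auto-pathing may misbehave"
fi
```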
I’ll point out here that after we configured our first 2-block, 8-node cluster, we decided to install the second 2-block, 8-node cluster manually and skip the automated imaging process. This process is also very well documented, and it took less than 1 hour to have 8 VMware ESXi nodes up and running with storage presented to all of them. Compared to the traditional way, that is still a very impressive setup time.
When you’ve built your nodes, boot each one into the BIOS and change the Power Technology setting from Energy Efficient to Max Performance to ensure power-saving features don’t dampen performance.
In this particular environment we were making use of Nutanix de-duplication, which increases the overhead on each CVM. We therefore increased the RAM on each CVM from the default of 16GB to 32GB and set a vSphere memory reservation to ensure each CVM always has that physical RAM available.
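For the RAM change itself, here’s a hedged sketch using VMware’s govc CLI rather than the vSphere client; the CVM name and vCenter connection details are placeholders for your environment, and you should only reconfigure one CVM at a time with the cluster healthy:

```shell
# Placeholder connection details for your vCenter
export GOVC_URL='vcenter.example.local'
export GOVC_USERNAME='administrator@vsphere.local'
export GOVC_PASSWORD='********'

# Shut the CVM down first (one at a time!), then raise its memory
# from the 16GB default to 32GB (govc takes values in MB)
govc vm.change -vm NTNX-CVM-01 -m 32768

# Reserve the full 32GB so vSphere never reclaims it from the CVM
govc vm.change -vm NTNX-CVM-01 -mem.reservation 32768
```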
Nutanix has done a very good job with its documentation, which details recommended configurations per vendor for things such as HA and DRS in vSphere. After reading an Example Architecture blog by Josh Odgers, I decided to add an advanced parameter to my HA configuration to change the default isolation address: “das.isolationaddress1 | Nutanix Cluster IP address”. I chose the cluster IP address over a CVM IP address for a simple reason: if the CVM hosting the cluster IP fails, the cluster IP automatically moves to another CVM in the cluster. The cluster IP is a new configuration option that was released for Hyper-V support, but we can make good use of it in the VMware world.
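The advanced option ends up looking like this in the cluster’s HA settings (a config fragment; the address shown is a placeholder for your Nutanix cluster IP):

```
das.isolationaddress1 = 10.0.10.50
```

Because the cluster IP floats between CVMs, it survives the failure of any single CVM, which is exactly the behaviour you want from an isolation address.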
Each CVM resides on local SSD storage in the form of a 60GB SATA DOM. When you log on to the vSphere client and try to deploy a new workload, you will be given the option of deploying to this 60GB SATA DOM storage. This deployment was solely for a VDI project, so all workloads would be provisioned directly from a broker, meaning we can control in the broker which datastores the workloads reside in. So, to avoid any confusion and to stop an over-eager admin deploying to the SATA DOM disk, I created a Datastore Cluster with all Storage DRS and I/O options disabled and named it along the lines of “Nutanix NOS Datastore – Do Not Use”.
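If you’d rather script that guard-rail than click through the client, here’s a sketch with govc; the datacenter, pod, and datastore names are all placeholders, and you should verify in the client that Storage DRS stays disabled on the new pod:

```shell
# Create an empty datastore cluster (storage pod) with a warning name
govc folder.create -pod '/DC1/datastore/Nutanix NOS Datastore - Do Not Use'

# Move each local SATA DOM datastore into the pod
govc object.mv '/DC1/datastore/NTNX-local-ds-01' \
    '/DC1/datastore/Nutanix NOS Datastore - Do Not Use'
```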
Overall, the Nutanix devices are very easy to deploy, and you can be up and running in next to no time. Now that I’ve got all of this down in a post, I can run some performance tests for my next one!