
Nutanix and VMware APIs for Array Integration (VAAI) – Quick Tip



In the second of my series of Quick Tips with Nutanix, I wanted to cover VMware APIs for Array Integration (VAAI).

The Nutanix platform supports VAAI, which allows the hypervisor to offload certain tasks to the array. This vSphere feature has been around for a while now and is much more efficient, as the hypervisor doesn’t need to be the “man in the middle” slowing down certain storage-related tasks.

Nutanix supports all of the VAAI primitives for NAS:

  • Full File Clone
  • Fast File Clone
  • Reserve Space
  • Extended Statistics

If you are not aware of what these primitives mean, I’d suggest reading the VMware VAAI Techpaper.
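
You can also verify what the hypervisor sees for yourself: the hardware acceleration (vStorage) status of each datastore is exposed through the vSphere API. Below is a minimal pyVmomi sketch, with the vCenter hostname and credentials as placeholders for your own lab, that lists every datastore mounted on each host along with its VAAI support status.

    # Minimal pyVmomi sketch: list each mounted datastore with its
    # vStorage (VAAI hardware acceleration) support status.
    # Hostname and credentials are placeholders for your own environment.
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    ctx = ssl._create_unverified_context()  # lab only - validate certs in production
    si = SmartConnect(host="vcenter.lab.local",
                      user="administrator@vsphere.local",
                      pwd="password", sslContext=ctx)
    try:
        content = si.RetrieveContent()
        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vim.HostSystem], True)
        for host in view.view:
            for mount in host.config.fileSystemVolume.mountInfo:
                vol = mount.volume
                # vStorageSupport reads "vStorageSupported" when VAAI is available
                print(host.name, vol.name, vol.type, mount.vStorageSupport)
        view.DestroyView()
    finally:
        Disconnect(si)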

For both full and fast file clones, an NDFS “fast clone” is performed, meaning a writable snapshot (using redirect-on-write) is created for each clone. Each of these clones has its own block map, meaning that chain depth isn’t anything to worry about.
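
To illustrate why chain depth isn’t a concern, here’s a toy Python sketch of a redirect-on-write clone. This is purely conceptual and has nothing to do with the actual NDFS internals: the point is that a clone starts with its own copy of the parent’s block map, so reads are always a single lookup and never traverse a snapshot chain.

    # Toy model of a redirect-on-write "fast clone" - conceptual only,
    # not the real NDFS implementation.

    class Vdisk:
        def __init__(self, block_map):
            self.block_map = block_map        # logical block -> physical extent

        def clone(self):
            # Metadata-only copy: no data blocks are duplicated.
            return Vdisk(dict(self.block_map))

        def write(self, lba, extent):
            # Redirect-on-write: new data lands in a fresh extent and
            # only this vdisk's own map is updated; the parent is untouched.
            self.block_map[lba] = extent

        def read(self, lba):
            # A single map lookup - effective chain depth is always one.
            return self.block_map[lba]

    master = Vdisk({0: "extent-A", 1: "extent-B"})
    desktop = master.clone()
    desktop.write(1, "extent-C")
    assert master.read(1) == "extent-B"   # parent unchanged
    assert desktop.read(1) == "extent-C"  # clone sees its own write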

I’ve taken the following from Steven Poitras’s Nutanix Bible.

The following will determine whether or not VAAI will be used for specific scenarios:

  • Clone VM with Snapshot –> VAAI will NOT be used
  • Clone VM without Snapshot which is Powered Off –> VAAI WILL be used
  • Clone VM to a different Datastore/Container –> VAAI will NOT be used
  • Clone VM which is Powered On –> VAAI will NOT be used

These scenarios apply to VMware View:

  • View Full Clone (Template with Snapshot) –> VAAI will NOT be used
  • View Full Clone (Template w/o Snapshot) –> VAAI WILL be used
  • View Linked Clone (VCAI) –> VAAI WILL be used

What I haven’t seen made clear in any documentation thus far (and I’m not saying it isn’t there, I’m simply saying I haven’t seen it!) is that VAAI will only be used when the source and destination reside in the same container. This means consideration needs to be given to the placement of ‘Master’ VDI images, or to automated workloads from vCD or vCAC.

For example, if I have two containers on my Nutanix cluster (“Master Images” and “Desktops”) with my master image residing in the Master Images container, yet I want to deploy desktops to the Desktops container, VAAI will NOT be used.
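
Pulling the scenarios above together with the same-container gotcha, the whole decision boils down to a simple predicate. The following Python helper is just an aide-mémoire encoding those rules, not an official VMware or Nutanix support statement.

    # The VAAI-offload scenarios above, expressed as a predicate.

    def vaai_will_be_used(has_snapshot: bool,
                          powered_on: bool,
                          same_container: bool) -> bool:
        if has_snapshot:        # cloning from a snapshot -> no offload
            return False
        if powered_on:          # hot clone -> no offload
            return False
        if not same_container:  # cross-container/datastore -> no offload
            return False
        return True

    # Master image in "Master Images", desktops deployed to "Desktops":
    print(vaai_will_be_used(has_snapshot=False, powered_on=False,
                            same_container=False))   # False - no offload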

I don’t see this as an issue, but more of a ‘gotcha’ which needs to be considered at the design stage.

Nutanix Networking – Quick Tip

I’ve spent the past four months on a fast-paced VDI project built upon Nutanix infrastructure, hence the number of posts on this technology recently. The project is now drawing to a close and moving from ‘Project’ status to ‘BAU’. As this transition takes place, I’m tidying up notes and updating documentation. From this, you may see a few blog posts with some quick tips around Nutanix, specifically with VMware vSphere architecture.

As you may or may not know, a Nutanix block ships with up to four nodes. The nodes are standalone in terms of components and share only the dual power supplies in each block. Each node comes with a total of five network ports, as shown in the picture below.

[Image: rear view of a Nutanix block – image courtesy of Nutanix]

The IPMI port is a 10/100 Ethernet port used for lights-out management.

There are also 2 x 1GigE ports and 2 x 10GigE ports. Both the 1GigE and 10GigE ports can be added to Virtual Standard Switches or Virtual Distributed Switches in VMware. From what I have seen, people tend to add the 10GigE NICs to a vSwitch (of either flavour) and configure them in an Active/Active fashion, with the 2 x 1GigE ports remaining unused.

This seems resilient, however I discovered (whilst reading documentation, not through hardware failure) that the 2 x 10GigE ports actually reside on the same physical card, so this could be considered a single point of failure. To work around it, I would suggest incorporating the 2 x 1GigE network ports into your vSwitch and leaving them in Standby.

With this configuration, if the 10GigE card were to fail, the 1GigE ports would become active and you would not be impacted by VMware HA restarting machines on the remaining nodes in the cluster (Admission Control dependent).
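
If you’re using standard vSwitches, this active/standby ordering can be scripted as well as set in the client. Here’s a rough pyVmomi sketch; the vSwitch name, vmnic numbering, and port count are assumptions for a typical node, so adjust them for your own hosts.

    # Rough pyVmomi sketch: rebuild a vSwitch with the 2 x 10GigE uplinks
    # active and the 2 x 1GigE uplinks in standby. The vSwitch/vmnic names
    # and port count below are assumptions - check your own hosts.
    from pyVmomi import vim

    def set_active_standby(host, vswitch_name="vSwitch0",
                           active=("vmnic2", "vmnic3"),     # assumed 10GigE
                           standby=("vmnic0", "vmnic1")):   # assumed 1GigE
        netsys = host.configManager.networkSystem
        spec = vim.host.VirtualSwitch.Specification()
        spec.numPorts = 128
        # Bond all four uplinks to the switch...
        spec.bridge = vim.host.VirtualSwitch.BondBridge(
            nicDevice=list(active) + list(standby))
        # ...then order them so the 1GigE ports only carry traffic on failure.
        teaming = vim.host.NetworkPolicy.NicTeamingPolicy()
        teaming.policy = "loadbalance_srcid"
        teaming.nicOrder = vim.host.NetworkPolicy.NicOrderPolicy(
            activeNic=list(active), standbyNic=list(standby))
        spec.policy = vim.host.NetworkPolicy(nicTeaming=teaming)
        netsys.UpdateVirtualSwitch(vswitchName=vswitch_name, spec=spec)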

Yes, performance may well be impacted, however I’d strongly suggest alarms and monitoring be configured to scream if this were to happen. I would rather manually place a host into maintenance mode and evict my workloads in a controlled manner than have them restarted.

vCAC 5.2 Distributed Execution Manager (DEM) Install Error

In preparation for an upcoming project, I’m installing vCAC 5.2 in my home lab. Anyone who has installed vCAC will have used the vCAC prereq checker tool, which is simply fantastic. vCAC has a huge number of prerequisites that need to be configured, and this tool does a great job of capturing everything. I like that it provides instructions on how to resolve issues when components need attention, and there is also a ‘Fix Issue’ button which allows for an automated fix of a handful of the requirements.

[Image: vCAC Prereq Checker]

With the checker reporting I was good to go, I proceeded with the install. All was going well until I came to install the DEM Worker, where I was met with the following error.

[Image: vCAC DEM install error]

I did a few basic checks to ensure DNS was all good in the lab and found no issues there. Upon further investigation, I looked to see what services the vCAC server setup had installed previously.

There is only one service, the “VMware vCloud Automation Center Service”, and it wasn’t started.

[Image: the VMware vCloud Automation Center Service in the Services console]

The vCAC server setup allows you to specify a service account to assign to this service, which I had ensured was a local admin on the server. When I tried to start the service manually, I got a permission error.

After granting the account the ‘Log on as a service’ right, the service started and I was able to finish the installation of the DEM Worker.
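
For anyone hitting the same thing, the right lives under secpol.msc > Local Policies > User Rights Assignment > ‘Log on as a service’, or it can be scripted. Here’s a hedged Python sketch using pywin32; the account name is a placeholder for your own service account, and it needs to run elevated.

    # Sketch: grant "Log on as a service" (SeServiceLogonRight) to the
    # vCAC service account via pywin32. Run from an elevated prompt;
    # the account name below is a placeholder.
    import win32security

    account = r"LAB\svc_vcac"  # hypothetical vCAC service account

    # Resolve the account to a SID, then add the right via the LSA policy.
    sid, _domain, _type = win32security.LookupAccountName(None, account)
    policy = win32security.LsaOpenPolicy(None, win32security.POLICY_ALL_ACCESS)
    win32security.LsaAddAccountRights(policy, sid, ("SeServiceLogonRight",))
    win32security.LsaClose(policy)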

It seems odd to me that the prereq checker doesn’t check for this, given what a comprehensive tool it seems to be otherwise.