
Nutanix and VMware APIs for Array Integration (VAAI) – Quick Tip



In the second of my series of Quick Tips with Nutanix, I wanted to cover VMware APIs for Array Integration (VAAI).

The Nutanix platform supports VAAI, which allows the hypervisor to offload certain tasks to the array. This vSphere feature has been around for a while now and is far more efficient, as the hypervisor no longer has to act as the “man in the middle” slowing down certain storage-related tasks.

Nutanix supports all of the VAAI primitives for NAS:

  • Full File Clone
  • Fast File Clone
  • Reserve Space
  • Extended Statistics

If you are not aware of what these primitives mean, I’d suggest reading the VMware VAAI Techpaper.

For both full and fast file clones, an NDFS “fast clone” is performed, meaning a writable snapshot (using redirect-on-write) is created for each clone. Each of these clones has its own block map, so chain depth isn’t anything to worry about.

I’ve taken the following from Steven Poitras’s Nutanix Bible:

The following will determine whether or not VAAI will be used for specific scenarios:

  • Clone VM with Snapshot –> VAAI will NOT be used
  • Clone VM without Snapshot which is Powered Off –> VAAI WILL be used
  • Clone VM to a different Datastore/Container –> VAAI will NOT be used
  • Clone VM which is Powered On –> VAAI will NOT be used

These scenarios apply to VMware View:

  • View Full Clone (Template with Snapshot) –> VAAI will NOT be used
  • View Full Clone (Template w/o Snapshot) –> VAAI WILL be used
  • View Linked Clone (VCAI) –> VAAI WILL be used

What I haven’t seen made clear in any documentation thus far (and I’m not saying it isn’t there, I’m simply saying I haven’t seen it!) is that VAAI WILL only work when the source and destination reside in the same container. This means consideration needs to be given to the placement of ‘Master’ VDI images, or to automated workloads deployed from vCD or vCAC.

For example, if I have two containers on my Nutanix cluster (Master Images and Desktops), with my master image residing in the Master Images container, yet I want to deploy desktops to the Desktops container, VAAI will NOT be used.
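To make that concrete, below is a minimal pyVmomi sketch of a clone that keeps the source and destination in the same container by pinning the clone to the source VM’s datastore. The vCenter address, credentials and VM names are placeholders, and the power-state/snapshot checks simply mirror the scenarios listed above; treat it as an illustration rather than a definitive implementation.

    # Minimal sketch: clone a VM to the SAME datastore/container as the source
    # so the NAS VAAI offload can be used. All names/credentials are placeholders.
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    si = SmartConnect(host="vcenter.lab.local",
                      user="administrator@vsphere.local",
                      pwd="VMware1!",
                      sslContext=ssl._create_unverified_context())
    content = si.RetrieveContent()

    def find_vm(name):
        """Walk the inventory for a VM by name."""
        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vim.VirtualMachine], True)
        try:
            return next(vm for vm in view.view if vm.name == name)
        finally:
            view.DestroyView()

    source = find_vm("Win7-Master")  # hypothetical master image

    # Per the scenarios above: powered off and no snapshots, or VAAI won't be used.
    assert source.runtime.powerState == vim.VirtualMachinePowerState.poweredOff
    assert source.snapshot is None

    # Pin the clone to the source VM's datastore, i.e. the same Nutanix container.
    relocate = vim.vm.RelocateSpec(datastore=source.datastore[0])
    clone_spec = vim.vm.CloneSpec(location=relocate, powerOn=False, template=False)

    task = source.CloneVM_Task(folder=source.parent, name="Win7-Clone-01",
                               spec=clone_spec)
    # ...wait on the task as you normally would, then Disconnect(si).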

I don’t see this as an issue, more of a ‘gotcha’ that needs to be considered at the design stage.

Nutanix Networking – Quick Tip

I’ve spent the past four months on a fast-paced VDI project built upon Nutanix infrastructure, hence the number of posts on this technology recently. The project is now drawing to a close and moving from ‘Project’ status to ‘BAU’. As this transition takes place, I’m tidying up notes and updating documentation. From this, you may see a few blog posts with some quick tips around Nutanix, specifically with VMware vSphere architecture.

As you may or may not know, a Nutanix block ships with up to four nodes. The nodes are standalone in terms of components and share only the dual power supplies in each block. Each node comes with a total of five network ports, as shown in the picture below.

[Image: rear of a Nutanix block showing the network ports. Image courtesy of Nutanix]

The IPMI port is a 10/100 Ethernet port used for lights-out management.

There are 2 x 1GigE ports and 2 x 10GigE ports. Both the 1GigE and 10GigE ports can be added to Virtual Standard Switches or Virtual Distributed Switches in VMware. From what I have seen, people tend to add the 10GigE NICs to a vSwitch (of either flavour) and configure them in an Active/Active fashion, with the 2 x 1GigE ports remaining unused.

This seems resilient; however, I discovered (whilst reading documentation, not through hardware failure) that the 2 x 10GigE ports actually reside on the same physical card, so this could be considered a single point of failure. To work around it, I would suggest incorporating the 2 x 1GigE network ports into your vSwitch and leaving them in Standby.

With this configuration, if the 10GigE card were to fail, the 1GigE ports would become active and you would not be impacted by VMware HA restarting machines on the remaining nodes in the cluster (Admission Control dependent).
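For reference, here is a rough pyVmomi sketch of that failover order on a standard vSwitch. The host name, vSwitch name and vmnic numbering (vmnic2/vmnic3 as the 10GigE pair, vmnic0/vmnic1 as the 1GigE pair) are assumptions for illustration; on a distributed switch you would set the equivalent teaming policy on the uplinks/port groups instead.

    # Rough sketch: 10GigE uplinks active, 1GigE uplinks standby on vSwitch0.
    # The vmnic numbering below is an assumption - confirm against your hosts.
    import ssl
    from pyVim.connect import SmartConnect
    from pyVmomi import vim

    si = SmartConnect(host="vcenter.lab.local",
                      user="administrator@vsphere.local",
                      pwd="VMware1!",
                      sslContext=ssl._create_unverified_context())
    content = si.RetrieveContent()

    host_view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    host = next(h for h in host_view.view if h.name == "esxi01.lab.local")
    host_view.DestroyView()

    net_sys = host.configManager.networkSystem
    vswitch = next(v for v in net_sys.networkInfo.vswitch if v.name == "vSwitch0")

    spec = vswitch.spec
    # Make sure all four uplinks are attached to the vSwitch...
    spec.bridge = vim.host.VirtualSwitch.BondBridge(
        nicDevice=["vmnic0", "vmnic1", "vmnic2", "vmnic3"])
    # ...then set the failover order: 10GigE active, 1GigE standby.
    spec.policy.nicTeaming.nicOrder = vim.host.NetworkPolicy.NicOrderPolicy(
        activeNic=["vmnic2", "vmnic3"],    # assumed 10GigE pair
        standbyNic=["vmnic0", "vmnic1"])   # assumed 1GigE pair

    net_sys.UpdateVirtualSwitch(vswitchName="vSwitch0", spec=spec)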

Yes, performance may well be impacted; however, I’d strongly suggest alarms and monitoring be configured to scream if this were to happen. I would rather manually place a host into maintenance mode and evacuate my workloads in a controlled manner than have them restarted.

Backup Options for vCNS Manager | vSphere Design

I was asked the following question recently:

“Why do I need to bother backing up the config file of my vCNS Manager, can’t I just snapshot it?”

It’s a good question, and one that involved a little lab testing to play around with.

If you were to snapshot your vCNS Manager, which does work based on testing in my lab (albeit limited functional testing), then you are able to restore the vCNS Manager from snapshot fairly quickly and efficiently.

The questions I then thought of were:

  • When is the backup window? (if there is one)
  • How often would a vCNS snapshot be taken?
  • How busy is the vCNS manager?
  • Does a backup restore involve change control or other teams?

The reasoning behind these questions was simple.

If a vCNS Manager were in a relatively busy vCloud environment deploying a number of Edge devices daily, then yes, those Edges would continue to run if the manager were to fail. However, if the vCNS Manager were only snapshotted once daily during a nightly backup window, the restored manager could be unaware of any Edge devices deployed since that snapshot was taken.

The officially supported method of backing up vCNS Manager is to schedule a backup from the manager itself, which saves the configuration to an FTP/SFTP site.

If the vCNS Manager were to fail, you would simply deploy a new vCNS Manager (normally within minutes), re-apply the last saved configuration, and be back up and running fairly quickly. Yes, you could argue that if only a single backup were taken daily we would be in the same boat as with a snapshot; however, it’s much easier and more manageable, in my opinion, to set perhaps an hourly backup (in busy environments) and keep only a day’s worth of backup files.
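As a trivial illustration of that retention idea, here is a housekeeping sketch (nothing vCNS-specific) that prunes an FTP backup target down to the most recent 24 files, i.e. a day’s worth of hourly backups. The host, credentials, directory and the assumption that the backup file names sort chronologically are all placeholders.

    from ftplib import FTP

    # Keep only the most recent day's worth of hourly backup files on the FTP
    # target. Host, credentials and directory below are placeholders.
    FTP_HOST = "ftp.lab.local"
    FTP_USER = "backupuser"
    FTP_PASS = "changeme"
    BACKUP_DIR = "/vcns-backups"
    KEEP = 24  # 24 hourly backups = one day

    with FTP(FTP_HOST, FTP_USER, FTP_PASS) as ftp:
        ftp.cwd(BACKUP_DIR)
        files = sorted(ftp.nlst())  # assumes timestamped names sort oldest-first
        for old_file in files[:-KEEP]:
            print("Deleting", old_file)
            ftp.delete(old_file)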

After some debate with my client, my recommendation was to ‘keep it simple’. This meant staying within the realms of vendor recommendation and support: configure an hourly backup and keep a single day’s worth of backups. In the case of a failed and unrecoverable vCNS Manager, deploy a new appliance and restore the configuration.

I’d be interested to hear any feedback from others as to what they do in their environments or in fact recommend to others.

VMware vCloud Director | VMRC Plugin Browser Compatibility

I was trying to access the VMRC of a VM residing in a vApp in a client’s vCD setup and kept being prompted with the following error:

[Screenshot: missing VMRC plug-in error]

Now, I’m used to seeing the following warning when trying to access vCD using Chrome or Safari, but I was using IE11 at the time…

[Screenshot: unsupported browser warning]

I did some snooping around and saw that a few people had posted about receiving the same error message, with various fixes, none of which worked in my case. So I tweeted out a statement to see if IE11 was actually supported. The folks who look after the VMware KB Twitter account were quick to respond, pointing me to published KBs listing the officially supported browsers.

To my surprise, vCD 5.1 only supports up to IE9, hence the error message. vCD 5.5 adds support for further browsers.

I wasn’t aware of the restrictions, but it’s good to know. Check out the KB articles (links below) to see the latest supported browsers.

Supported Browsers in vCD 5.1 (KB2034554)

Supported Browsers in vCD 5.5 (KB2058296)

Thanks to the VMware KB team for the quick response and for pointing me in the right direction. Once again, the VMware community shows why it’s Number 1!