Upgrading vSphere from Nutanix Prism

Nutanix customers love the fact that we give them their weekends back with 1-click upgrades for the Acropolis operating system, BIOS, BMC, firmware and the hypervisor. Yet when speaking to customers, I find some still go through a multi-step process along the lines of the following (a rough scripted sketch of the per-host portion follows the list):

  • Download updates in VUM
  • Create a new baseline
  • Attach hosts to the baseline and scan hosts to validate
  • Set DRS to manual and evacuate guests from the host
  • Issue the shutdown command to the CVM
  • Place the host into maintenance mode
  • Proceed with the remediation wizard
  • Complete the upgrade
  • Reboot the host
  • Power on the CVM
  • Validate RF in Prism and move on
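
If you’re scripting that manual flow today, the snippet below is a rough pyVmomi sketch of the per-host portion (shut down the CVM, then enter maintenance mode ahead of remediation). The vCenter address, credentials, host name and the NTNX-*-CVM naming pattern are placeholders for illustration only, not an official Nutanix procedure.

```python
# Rough pyVmomi sketch of the manual per-host steps above.
# Assumptions: vCenter address/credentials, the host name, and the
# "NTNX-*-CVM" naming convention for the Controller VM. VUM remediation
# itself is still driven from the vSphere client.
import ssl
import time

from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.local", user="administrator@vsphere.local",
                  pwd="password", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
host = next(h for h in view.view if h.name == "esxi01.example.local")
view.Destroy()

# Shut down the CVM on this host before maintenance mode (clean guest shutdown via VMware Tools).
cvm = next(v for v in host.vm if v.name.startswith("NTNX") and v.name.endswith("CVM"))
if cvm.runtime.powerState == vim.VirtualMachinePowerState.poweredOn:
    cvm.ShutdownGuest()
    while cvm.runtime.powerState != vim.VirtualMachinePowerState.poweredOff:
        time.sleep(5)

# Enter maintenance mode; guests should already have been evacuated per the list above.
task = host.EnterMaintenanceMode_Task(timeout=0)
while task.info.state not in (vim.TaskInfo.State.success, vim.TaskInfo.State.error):
    time.sleep(5)

# ... remediate via VUM, reboot the host, exit maintenance mode, power the CVM back on ...
Disconnect(si)
```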

Yes, only a couple of these steps are additions compared to a non-Nutanix environment, but that still leaves a lot of steps to work through manually.

With Prism, as long as the cluster is managed by vCenter, we can manage the entire process for you: simply open the upgrade tab, upload the offline upgrade package along with its JSON metadata file from the Nutanix support portal, and off you go. It’s as simple as that, and here’s another video to show the process.

Building a Nutanix cluster

I’ve been at Nutanix for 30 months now, and what a journey it has been. The software has gained many new features, some of which I would never have thought possible; there are hardware models to serve every application requirement, multiple hardware vendors in Supermicro, Dell, Lenovo, Cisco and now HPE, and even multiple hypervisor options with our own AHV, VMware vSphere, Microsoft Hyper-V and Citrix XenServer.

With such diversity, you need a strong story around installation and configuration. Going back to when I joined, I think it’s fair to say the automated installation process was still in its infancy, but today, with CVM-based Foundation, it’s a breeze: you can have a multi-node cluster up and running guest VMs within an hour of unboxing, even quicker if you choose AHV for the fully native experience.

Whilst the Nutanix documentation is solid, I was asked by a customer for a video guide to help them with internal training. So I’m starting a simple video series, beginning with the cluster build process, posted here:

The video uses the Java applet for the build and installs a VMware vSphere cluster. There is no narration, just a simple run-through. Yes, it has been sped up in parts, else it would be a little boring.

Stay posted for the rest of the series.

Hyper-V Terminology in VMware speak

I’ve never had much to do with Hyper-V, and my knowledge of it is nowhere near as strong as my VMware knowledge; however, since joining Nutanix I find the topic comes up more and more in conversations with clients. It’s great that Nutanix is hypervisor agnostic and supports multiple platforms, but it does mean I need to get up to speed with the Hyper-V lingo!

I’ve come up with the list below to use as my mini translator, so when in conversation I have a quick reference point (there’s also a small scriptable version after the list).

VMware vSphere term –> Microsoft Hyper-V equivalent

  • Service Console –> Parent Partition
  • VMDK –> VHD
  • VMware HA –> Failover Clustering
  • VMotion –> Live Migration
  • Primary Node –> Coordinator Node
  • VMFS –> Cluster Shared Volume (CSV)
  • VM Affinity –> VM Affinity
  • Raw Device Mapping (RDM) –> Pass-Through Disks
  • Distributed Power Management (DPM) –> Core Parking
  • VI Client –> Hyper-V Manager
  • vCenter –> SCVMM
  • Thin Provisioning –> Dynamic Disk
  • VM SCSI Boot –> VM IDE Boot
  • VMware Tools –> Integration Components
  • Standard/Distributed Switch –> Virtual Switch
  • DRS –> PRO/DO (Performance and Resource Optimisation / Dynamic Optimisation)
  • Web Access –> Self Service Portal
  • Storage VMotion –> Quick Storage Migration
  • Full Clones –> Clones
  • Snapshot –> Checkpoint
  • Update Manager –> VMST (Virtual Machine Servicing Tool)
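
And for anyone who prefers something scriptable, here’s a trivial Python lookup built from a handful of the mappings above; the dictionary simply mirrors the list, nothing more.

```python
# A tiny "translator" mirroring the list above: look up a VMware term
# and get the rough Hyper-V equivalent back.
VMWARE_TO_HYPERV = {
    "Service Console": "Parent Partition",
    "VMDK": "VHD",
    "VMware HA": "Failover Clustering",
    "VMotion": "Live Migration",
    "VMFS": "Cluster Shared Volume (CSV)",
    "Raw Device Mapping (RDM)": "Pass-Through Disks",
    "vCenter": "SCVMM",
    "Thin Provisioning": "Dynamic Disk",
    "VMware Tools": "Integration Components",
    "Standard/Distributed Switch": "Virtual Switch",
    "Storage VMotion": "Quick Storage Migration",
    "Snapshot": "Checkpoint",
    "Update Manager": "VMST (Virtual Machine Servicing Tool)",
}

def translate(vmware_term: str) -> str:
    """Return the Hyper-V equivalent, or a note if the term isn't in the list."""
    return VMWARE_TO_HYPERV.get(vmware_term, "no mapping listed")

print(translate("VMotion"))   # Live Migration
print(translate("Snapshot"))  # Checkpoint
```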

“The Times They Are a-Changin’”

No, I’ve not gone crazy and decided to write a blog about Bob Dylan; this post is still a tech-focussed one.

After leaving university in 2003, I found my first job at a mortgage company, working in a team supporting the mortgage application software. After a few months, I managed to get promoted/transferred into the Server Support team. I was lucky in a sense to skip the usual progression from Helpdesk to Desktop Support to Server Support, but I grabbed this opportunity with both hands and started learning about Microsoft Server OSes (primarily 2000 and some 2003 at the time, if you are wondering), AD, Exchange, Citrix PS4, firewalls, switches and a little-known software product called VMware ESX 2.5. I’d like to say I was one of the early users of ESX 2.5, but I’d be exaggerating the truth; in fact I remember the more senior members of the team evaluating the software and discussing its benefits, of which I had little understanding at the time. This was my first introduction to virtualisation.

The company was eventually bought out by Lehman Brothers, and rather than make the ‘Big Move to the City’ I took voluntary redundancy (a huge payday for me in those days) and spent a month having fun in Las Vegas, Amsterdam and Barcelona, before starting work the following month for a local reseller who were a big HP shop, Microsoft Gold Partner and VMware VAC (Authorised Consulting Partner). I started life there on the Helpdesk, which wasn’t the backward step you may think, as it provided 3rd line support for the reseller’s contractual customers, involving support and administration of a wide variety of platforms including VMware. From here, over five years, I progressed to a Technical Specialist focussed on virtualisation and shared storage technologies (primarily VMware and Dell EqualLogic). I spent many hours speaking to customers about the benefits of virtualisation and countless hours in front of VMware Converter progress bars. During this time, I started to really grasp the benefits virtualisation brings to the business world and evangelised the technology to many first-timers. And at that time there were many.

As my knowledge of and belief in virtualisation grew, I decided I wanted to move on to bigger and better projects at an enterprise level and to start looking at benefits beyond server virtualisation, such as private/public cloud, VDI and automation. Almost three years ago, I joined the consulting team at Xtravirt and was thrown straight into a 4,000-seat VDI deployment spanning EMEA. The past three years have seen me involved in some large VDI deployments as well as some big deployments with the vCloud stack.

It’s been an interesting journey to see how the term ‘software-defined’ has become the focus, extending virtualisation concepts across the technology stack, not just compute. Vendors are now concentrating on making the entire datacenter software-defined, so IT can be delivered as a service.

So, if you’re still reading this (which I hope you are), the reason for writing this article is to announce that I’ll be leaving Xtravirt and joining Nutanix as a Systems Engineer on 21st October 2014. I’ve had the pleasure of working on a large-scale VDI deployment hosted on a Nutanix platform over the last seven months, and the web-scale architecture and SDS (software-defined storage) approach Nutanix bring to the market genuinely excite me. This is a change of role for me, moving into a product-focussed pre-sales position, but when you truly believe in something, the decision becomes a much easier one.

My blog will remain heavily focussed on the VMware side of things; however, as I start to learn about Microsoft Hyper-V and KVM, you may see some of that sneaking in too!

I’d like to thank Xtravirt for the past three years, and look forward to the future with Nutanix!

#SWUKVMUG | VDI Made Easy with Nutanix

Last week, I was asked to present at the South West UK VMUG alongside Nutanix, sharing a real-world deployment story. I talked at a high level about how using Nutanix in a 6,000-seat VDI deployment not only made my life easier as one of the architects on the solution, but also helped my client meet their requirements with ease.

I’ve linked to my slides below, for those who are interested.

Big thanks to Michael Poore, Simon Eady, Barry Coombs and Jeremy Bowman for the invite to come and speak, and also big thanks to Nutanix for allowing me to take half of their presentation time!

If you would like to view the presentation, you can download it by clicking the link.

Nutanix and vStorage APIs for Array Integration (VAAI) – Quick Tip

In the second of my series of Quick Tips with Nutanix, I wanted to cover VMware’s vStorage APIs for Array Integration (VAAI).

The Nutanix platform supports VAAI, which allows the hypervisor to offload certain tasks to the storage layer. This vSphere feature has been around a while now and is much more efficient, as the hypervisor no longer needs to be the “man in the middle” slowing down certain storage-related tasks.

Nutanix supports all of the VAAI primitives for NAS:

  • Full File Clone
  • Fast File Clone
  • Reserve Space
  • Extended Statistics

If you are not aware of what these primitives mean, I’d suggest reading the VMware VAAI Techpaper.

For both full and fast file clones, an NDFS “fast clone” is performed, meaning a writable snapshot (using redirect-on-write) is created for each clone. Each of these clones has its own block map, meaning that chain depth isn’t anything to worry about.

I’ve taken the following from Steven Poitras’s Nutanix Bible.

The following will determine whether or not VAAI will be used for specific scenarios:

  • Clone VM with Snapshot –> VAAI will NOT be used
  • Clone VM without Snapshot which is Powered Off –> VAAI WILL be used
  • Clone VM to a different Datastore/Container –> VAAI will NOT be used
  • Clone VM which is Powered On –> VAAI will NOT be used

These scenarios apply to VMware View:

  • View Full Clone (Template with Snapshot) –> VAAI will NOT be used
  • View Full Clone (Template w/o Snapshot) –> VAAI WILL be used
  • View Linked Clone (VCAI) –> VAAI WILL be used

What I haven’t seen made clear in any documentation thus far (and I’m not saying it isn’t there, I’m simply saying I haven’t seen it!) is that VAAI will only be used when the source and destination reside in the same container. This means consideration needs to be given to the placement of ‘Master’ VDI images, and to automated workloads from vCD or vCAC.

For example, if I have two containers on my Nutanix cluster (Master Images and Desktops), with my master image residing in the Master Images container, and I want to deploy desktops into the Desktops container, VAAI will NOT be used.
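
As a quick sanity check before kicking off a big cloning or pool-build job, something like the pyVmomi sketch below can confirm the conditions listed above: the source VM is powered off, has no snapshots, and the target datastore matches the source VM’s container. The vCenter address, credentials and the VM name are placeholders for illustration.

```python
# Sketch: will a clone of this VM to the given container qualify for VAAI offload,
# based on the conditions above (powered off, no snapshots, same datastore/container)?
import ssl

from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

def vaai_clone_expected(vm, target_datastore_name):
    powered_off = vm.runtime.powerState == vim.VirtualMachinePowerState.poweredOff
    no_snapshot = vm.snapshot is None
    same_container = target_datastore_name in {ds.name for ds in vm.datastore}
    return powered_off and no_snapshot and same_container

si = SmartConnect(host="vcenter.example.local", user="administrator@vsphere.local",
                  pwd="password", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(content.rootFolder, [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == "Win7-Master")  # hypothetical master image VM
view.Destroy()

print(vaai_clone_expected(vm, "Desktops"))       # False here: different container to the source
print(vaai_clone_expected(vm, "Master Images"))  # True, provided it is powered off with no snapshots
Disconnect(si)
```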

I don’t see this as an issue, more of a ‘gotcha’ which needs to be considered at the design stage.

Nutanix Networking – Quick Tip

I’ve spent the past four months on a fast-paced VDI project built upon Nutanix infrastructure, hence the number of posts on this technology recently. The project is now drawing to a close and moving from ‘project’ status to ‘BAU’. As this transition takes place, I’m tidying up notes and updating documentation, so you may see a few blog posts with some quick tips around Nutanix, specifically with VMware vSphere architecture.

As you may or may not know, a Nutanix block ships with up to four nodes. The nodes are standalone in terms of components and share only the dual power supplies in each block. Each node comes with a total of five network ports, as shown in the picture below.

[Image: rear view of a Nutanix block, showing each node’s network ports]

Image courtesy of Nutanix

The IPMI port is a 10/100 Ethernet port used for lights-out management.

There are also 2 x 1GigE ports and 2 x 10GigE ports. Both the 1GigE and 10GigE ports can be added to Standard or Distributed Virtual Switches in VMware. From what I have seen, people tend to add the 10GigE NICs to a vSwitch (of either flavour) and configure them in an Active/Active fashion, with the 2 x 1GigE ports remaining unused.

This seems resilient; however, I discovered (whilst reading documentation, not through hardware failure) that the 2 x 10GigE ports actually reside on the same physical card, so they could be considered a single point of failure. To work around this, I would suggest incorporating the 2 x 1GigE network ports into your vSwitch and leaving them in Standby, as sketched below.
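
For reference, here’s a rough pyVmomi sketch of that teaming policy on a standard vSwitch, with the 10GigE uplinks Active and the 1GigE uplinks Standby. The vCenter address, credentials, host name, vSwitch name and vmnic numbering are assumptions; check which vmnics map to which physical ports on your own nodes before applying anything like this.

```python
# Sketch: make the 10GigE uplinks Active and the 1GigE uplinks Standby on vSwitch0.
# The vmnic numbering is an assumption; verify your own port-to-vmnic mapping first.
import ssl

from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.local", user="administrator@vsphere.local",
                  pwd="password", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
host = next(h for h in view.view if h.name == "esxi01.example.local")
view.Destroy()

net_sys = host.configManager.networkSystem
vswitch = next(s for s in net_sys.networkInfo.vswitch if s.name == "vSwitch0")

spec = vswitch.spec  # start from the existing vSwitch configuration
spec.policy.nicTeaming.nicOrder = vim.host.NetworkPolicy.NicOrderPolicy(
    activeNic=["vmnic2", "vmnic3"],   # assumed 10GigE uplinks
    standbyNic=["vmnic0", "vmnic1"],  # assumed 1GigE uplinks
)
net_sys.UpdateVirtualSwitch(vswitchName="vSwitch0", spec=spec)
Disconnect(si)
```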

With this configuration, if the 10GigE card were to fail, the 1GigE ports would become active and you would not be impacted by VMware HA restarting machines on the remaining nodes in the cluster (Admission Control permitting).

Yes, performance may well be impacted, so I’d strongly suggest alarms and monitoring be configured to scream if this were to happen. I would rather manually place a host into maintenance mode and evict my workloads in a controlled manner than have them restarted.