Home Lab Refresh

I’ve had my old lab for a while now; it consists of the following:

2 x Mac Mini w/16GB RAM and 256GB SSD upgrades

1 x Cisco SG300 24-port gigabit switch

1 x Synology DS412+ w/ 2 x 256GB Samsung Pro SSDs and 2 x 2TB WD Red HDDs

I also have an iMac with a Core i7, 32GB RAM and a 1TB Fusion Drive, which allows me to spin up some workloads to ease congestion on the Mac Minis.

This has served me well; however, the 16GB of RAM in the Mac Minis was a real limitation, especially when I wanted to spin up and down different vApps consisting of a number of VMs, which could easily consume the combined 32GB of RAM available.

Luckily, there are a number of people in the community who had done the hard work for me by testing and blogging about various builds, which meant I could be lazy and simply look to the feedback of others. The two stand-out sites I looked at were those of Frank Denneman and Fred Hoffer.

I had a few requirements that I needed to meet which were:

R01: Be quiet, be very quiet

R02: Have IPMI/WoL capability (IPMI preferred)

R03: Be able to spin up vApps containing multiple VMs (e.g. a Horizon View environment) without delays caused by resource constraints

After some design workshops in my head, and having avoided talking to the CFO (in this case my wife-to-be), I decided to keep a single Mac Mini as a management server running ESXi 5.5, hosting the vCenter appliance, a DC and a jump box I can connect to remotely, all installed on a local 1TB SATA drive. The Mac Mini is silent and has very little power draw, so I don’t mind leaving it powered on 24/7.

I already have a NAS which is more than adequate for my needs and supports WoL, so I can power it on remotely only when needed. The switch I have, whilst not 10GbE, is in reality much more than I need.
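As an aside, waking the NAS remotely is just a case of sending a Wake-on-LAN magic packet to its MAC address. Here’s a minimal Python sketch of that; the MAC and broadcast addresses are placeholders for my own network, not anything specific to the Synology:

```python
# Minimal Wake-on-LAN sender. The MAC and broadcast address below are placeholders.
import socket

def wake_on_lan(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
    """Send a WoL magic packet: 6 x 0xFF followed by the MAC repeated 16 times."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    packet = b"\xff" * 6 + mac_bytes * 16
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(packet, (broadcast, port))

wake_on_lan("00:11:32:aa:bb:cc")   # placeholder MAC for the NAS
```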

Due to the expense of the system I went for, I have only purchased one to start with, but I plan on adding a second further down the line. For now, though, this should keep me busy.

So, the all-important kit list (I’ll link to the manufacturer sites for the parts). I was lucky enough to be in the US twice already this year, so I picked some parts up over there! (Note: this setup may look familiar if you’ve read the blogs above.)

Case: Fractal Design R4 

Motherboard: SuperMicro X9SRH-7TF

CPU: Intel Xeon E5-2620 v2

CPU Cooler: Noctua NH-U9DX i4

Power Supply: Corsair RM550

SSD: Samsung 840 Pro 256GB (x2)

This platform rocks and is almost silent. I’m going to upgrade to vSphere 6 shortly, so I’ll have a further post about the configuration at a later date. Thanks to the vExpert programme for giving me the ability to license my lab!

Hyper-V Terminology in VMware speak

 

I’ve never had much to do with Hyper-V and my knowledge of it is nowhere near as strong as my VMware knowledge; however, since joining Nutanix I find the topic comes up more and more in conversations with clients. It’s great that Nutanix is hypervisor agnostic and supports multiple platforms, but it means I need to get up to speed with the Hyper-V lingo!

I’ve come up with the table below to use as my mini translator, so that I have a quick reference point when in conversation.

VMware vSphere | Microsoft Hyper-V
Service Console | Parent Partition
VMDK | VHD
VMware HA | Failover Clustering
vMotion | Live Migration
Primary Node | Coordinator Node
VMFS | Cluster Shared Volume
VM Affinity | VM Affinity
Raw Device Mapping (RDM) | Pass-Through Disks
Distributed Power Management (DPM) | Core Parking
VI Client | Hyper-V Manager
vCenter | SCVMM
Thin Provisioning | Dynamic Disk
VM SCSI Boot | VM IDE Boot
VMware Tools | Integration Components
Standard/Distributed Switch | Virtual Switch
DRS | PRO/DO (Performance and Resource Optimisation/Dynamic Optimisation)
Web Access | Self Service Portal
Storage vMotion | Quick Storage Migration
Full Clones | Clones
Snapshot | Checkpoint
Update Manager | VMST (Virtual Machine Servicing Tool)

Disable vSphere SSO – Reg Hack

DISCLAIMER: This ‘hack’ is UNSUPPORTED and should never be used in a production environment; for home labs, though, it may be useful!

Whilst experiencing an issue with a corrupt SSO installation in vSphere 5.5, I discovered a hack that allowed me to continue to log in to the vSphere environment with domain credentials until I could resolve the underlying SSO issue. This is a simple change and involves editing the vpxd.cfg file.

  • On the vCenter Server, navigate to the vpxd.cfg file:
    • For Windows 2003: C:\Documents and Settings\All Users\Application Data\VMware\VMware VirtualCenter\vpxd.cfg
    • For Windows 2008: C:\ProgramData\VMware\VMware VirtualCenter\vpxd.cfg
    • For the Appliance: /etc/vmware-vpx/vpxd.cfg
  • Open vpxd.cfg in a text editor
  • Search for the following:

[Screenshot: the relevant SSO section of vpxd.cfg]

  • Change ‘true’ to ‘false’ between the highlighted tags

[Screenshot: the value changed from ‘true’ to ‘false’ between the highlighted tags]

  • Restart the vCenter server
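If you’d rather script the change than edit the file by hand, something like the following works. This is only a rough sketch: the exact element path the flag lives under is an assumption here (check the highlighted tags in your own vpxd.cfg and adjust accordingly), and as per the disclaimer above, this is for lab use only.

```python
# Rough sketch: flip the SSO flag in vpxd.cfg using the Python standard library.
# The "./sso/enabled" element path is an assumption - verify it against the
# highlighted tags in your own vpxd.cfg before running. Lab use only!
import xml.etree.ElementTree as ET

cfg_path = "/etc/vmware-vpx/vpxd.cfg"   # appliance path; use the Windows path listed above if applicable

tree = ET.parse(cfg_path)
node = tree.getroot().find("./sso/enabled")   # hypothetical path to the flag
if node is not None and (node.text or "").strip().lower() == "true":
    node.text = "false"
    tree.write(cfg_path)
    print("SSO flag set to false - now restart the vCenter Server service")
else:
    print("Flag not found or already false - check the element path")
```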

“The Times They are a Changin”

No, I haven’t gone crazy and decided to write a blog post about Bob Dylan; this is still a tech-focussed post.

After leaving university in 2003, I found my first job working for a mortgage company, in a team supporting the mortgage application software. After a few months, I managed to get promoted/transferred into the Server Support team. I was lucky in a sense to skip the usual progression of Helpdesk to Desktop Support to Server Support, but I grabbed this opportunity with both hands and started learning about Microsoft server OSes (primarily 2000 and some 2003 at the time, if you are wondering), AD, Exchange, Citrix PS4, firewalls, switches and a little-known software product called VMware ESX 2.5. I’d like to say I was one of the early users of ESX 2.5, but I’d be exaggerating the truth; in fact, I remember the more senior members of the team evaluating the software and discussing its benefits, of which I had little understanding at the time. This was my first introduction to virtualisation.

The company was eventually bought out by Lehman Brothers, and rather than make the ‘big move to the City’ I took voluntary redundancy (a huge payday for me in those days) and spent a month having fun in Las Vegas, Amsterdam and Barcelona, before starting work the following month for a local reseller who were a big HP shop, Microsoft Gold Partner and VMware VAC (Authorised Consulting Partner). I started life there on the helpdesk, which wasn’t the backward step you may think, as it provided 3rd-line support for the reseller’s contractual customers and involved support and administration of a wide variety of platforms, including VMware. From there, over five years, I progressed to a Technical Specialist focussed on virtualisation and shared storage technologies (primarily VMware and Dell EqualLogic). I spent many hours speaking to customers about the benefits of virtualisation and countless hours in front of VMware Converter progress bars. During this time, I started to really grasp the benefits virtualisation brings to the business world and evangelised the technology to many first-timers. And at that time there were many.

As my knowledge of and belief in virtualisation grew, I decided I wanted to move on to bigger and better projects at an enterprise level and start looking at benefits beyond server virtualisation, such as private/public cloud, VDI and automation. Almost three years ago, I joined the consulting team at Xtravirt and was thrown straight into a 4K-seat VDI deployment spanning EMEA. The past three years have seen me involved in some large VDI deployments as well as some big deployments with the vCloud stack.

It’s been an interesting journey to see how the term ‘software-defined’ has become the focus, extending virtualisation concepts across the technology stack, not just compute. Vendors are now concentrating on making the entire datacenter software-defined, so that IT can be delivered as a service.

So, if you’re still reading this (which I hope you are), the reason for writing this article is to announce that I’ll be leaving Xtravirt and joining Nutanix as a Systems Engineer on 21st October 2014. I’ve had the pleasure of working on a large-scale VDI deployment hosted on a Nutanix platform over the last 7 months, and the web-scale architecture and SDS (software-defined storage) approach Nutanix brings to the market genuinely excites me. This is a change of role for me, moving into a product-focussed pre-sales role, but when you truly believe in something, it makes the decision a much easier one.

My blog will remain heavily focussed on the VMware side of things; however, as I start to learn about Microsoft Hyper-V and KVM, you may see some of that sneaking in too!

I’d like to thank Xtravirt for the past three years, and look forward to the future with Nutanix!

 

VMworld EMEA | See you there?


With just two weeks to go, VMworld EMEA is fast approaching and I’m lucky enough to be one of a handful of the Xtravirt team in attendance. Xtravirt are sending a large group across to Barcelona, not only to liaise with our partners and attend sessions for personal development, but also to network. This will be my fourth VMworld (all EMEA based) and I can honestly say these events are fantastic not only for content (both technical and business) but also for networking. I’ve lost count of the number of people I’ve met, and stayed in contact with, be it at VMUGs or via LinkedIn and Twitter, over the years.

I’m not going to be posting a list of the sessions I plan to attend as this is likely to change, however you can be certain to find me in the bloggers/hang space area at some point most days, so be sure to come and say hello! Also, I’ll be sure to be watching a number of the vBrownBag sessions as these guys never fail to provide awesome community content.

There are a number of vendors in the Solutions Exchange I’ll be stopping by to say hello to:

Nutanix

PernixData

VMUG

VMware (for EVO:Rail)

VMware (for vCloud Air)

Pure Storage

SimpliVity

CloudPhysics

As I’ve already mentioned, there will be a number of the Xtravirt team in attendance, so if you want to speak to any of us about the services Xtravirt offer, or just want a chat, be sure to come and say hello. Alternatively, if you would like to arrange an official meeting, please click here.


 

#SWUKVMUG | VDI Made Easy with Nutanix

Last week, I was asked to present at the South West UK VMUG alongside Nutanix, to tell a real-world deployment story. I talked at a high level about how using Nutanix in a 6K-seat VDI deployment not only made my life easier as one of the architects on the solution, but also helped my client meet their requirements easily.

I’ve linked to my slides below, for those who are interested.

Big thanks to Michael Poore, Simon Eady, Barry Coombs and Jeremy Bowman for the invite to come and speak, also big thanks to Nutanix for allowing me to take half of their presentation time!

If you would like to view the presentation, you can download it by clicking the link.

 

Hey 10Zig – You Guys Rock!

This all started back in November of last year, when I wrote the post Calling all Thin Client Vendors. If you haven’t read it, I was basically calling on vendors to lend me some PCoIP-compatible kit for testing I wanted to do at home, in my own time and at my own pace. Cheeky, I know, but hey, sometimes if you don’t ask you don’t get. Luckily for me, the ‘first’ lady of the virtualisation world (Jane Rimmer, if you hadn’t guessed already!) read my post and got the attention of the guys over at 10Zig.

After some good conversations with James Broughton and Tom Dodds, I reiterated that this was for personal use only, it wasn’t linked to a client, and I doubted they would get any sales from it. They were still happy to work with me and lend me some units; how cool is that?

Not long after the conversations got flowing, I got a completion date to move into my first ‘owned’ house, lined up a couple of VCAP exams, got engaged and was then placed on a long-term, fast-paced project. I just didn’t see myself having the time to carry out any of the testing I had planned, so I reluctantly told the guys to keep hold of the units they had planned to lend me.

A few weeks ago, Tom got back in touch to see if I was in a position to look at the units again; this time I was. Within a few days a large package arrived and I was amazed to see the contents! They have sent me two (yes, two!) V1200 series zero clients: the dual-screen V1200-P TERA2 (click here for info) and the quad-screen V1200-QP TERA2 (click here for info).

Also, and unexpectedly, they threw in a 5118v thin client (click here for info) with Windows 8 Embedded to allow me to compare thin vs zero devices.

To top it all off, they have also arranged for me to have direct contact with their support team and a WebEx session to introduce me to the tech and get me on my way!

I’d better spin up my lab and make sure VMware View is ready to get put through its paces with these devices!

Stay tuned for some further posts on the performance of these units.

Finally Jane, thanks for putting me in contact with these guys! Tom, James thanks again for the generosity.

 

Nutanix and VMware APIs for Array Integration (VAAI) – Quick Tip

In the second of my series of quick tips with Nutanix, I wanted to cover off VMware APIs for Array Integration (VAAI).

The Nutanix platform supports VAAI, which allows the hypervisor to offload certain tasks to the array. This vSphere feature has been around a while now and is much more efficient, as the hypervisor doesn’t need to be the “man in the middle” slowing down certain storage-related tasks.

Nutanix supports all of the VAAI primitives for NAS:

  • Full File Clone
  • Fast File Clone
  • Reserve Space
  • Extended Statistics

If you are not aware of what these primitives mean, I’d suggest reading the VMware VAAI Techpaper.

For both full and fast file clones, an NDFS “fast clone” is performed, meaning a writable snapshot (using redirect-on-write) is created for each clone. Each of these clones has its own block map, meaning chain depth isn’t anything to worry about.

I’ve taken the following from Steven Poitras’s Nutanix Bible.

The following will determine whether or not VAAI will be used for specific scenarios:

  • Clone VM with Snapshot -> VAAI will NOT be used
  • Clone VM without Snapshot which is Powered Off -> VAAI WILL be used
  • Clone VM to a different Datastore/Container -> VAAI will NOT be used
  • Clone VM which is Powered On -> VAAI will NOT be used

These scenarios apply to VMware View:

  • View Full Clone (Template with Snapshot) -> VAAI will NOT be used
  • View Full Clone (Template w/o Snapshot) -> VAAI WILL be used
  • View Linked Clone (VCAI) -> VAAI WILL be used

What I haven’t seen made clear in any documentation thus far (and I’m not saying it isn’t there, I’m simply saying I haven’t seen it!) is that VAAI will only be used when the source and destination reside in the same container. This means consideration needs to be given to the placement of ‘master’ VDI images, or to automated workloads from vCD or vCAC.

For example, if I have two containers on my Nutanix cluster (Master Images and Desktops), with my master image residing in the Master Images container, yet I want to deploy desktops to the Desktops container, VAAI will NOT be used.
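To make the full-clone rules above a little easier to reason about, here’s a tiny Python sketch of the same logic. It’s purely my own summary of the bullet points, not anything from the Nutanix or VMware tooling, and it deliberately ignores the VCAI linked-clone path, which is offloaded separately:

```python
# My own summary of the full-clone offload rules above - illustrative only,
# not an official Nutanix or VMware check. VCAI linked clones are a separate path.
def full_clone_vaai_offload(powered_on: bool, has_snapshot: bool,
                            source_container: str, dest_container: str) -> bool:
    """Return True if the NAS VAAI offload applies to a full-clone operation."""
    if has_snapshot:
        return False   # clone of a VM/template with a snapshot: no offload
    if powered_on:
        return False   # clone of a powered-on VM: no offload
    if source_container != dest_container:
        return False   # cross-container clone: no offload, the host copies the data
    return True        # powered off, no snapshot, same container: offloaded

# The example above: master image in 'Master Images', desktops deployed to 'Desktops'
print(full_clone_vaai_offload(False, False, "Master Images", "Desktops"))   # False
print(full_clone_vaai_offload(False, False, "Desktops", "Desktops"))        # True
```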

I don’t see this as an issue, more of a ‘gotcha’ which needs to be considered at the design stage.

VMware ESXi Cookbook | Book Review

Disclaimer: I was recently approached by a representative of Packt Publishing and was asked to review a copy of this book. I therefore received an ebook for review.

I was a bit dubious about this book when I read the overview on the Packt Publishing website, which quotes:

  • Understand the concepts of virtualization by deploying vSphere web client to perform vSphere Administration
  • Learn important aspects of vSphere including administration, security, performance, and configuring vSphere Management Assistant (VMA) to run commands and scripts without the need to authenticate every attempt
  • VMware ESXi 5.1 Cookbook is a recipe-based guide to the administration of VMware vSphere

I’ve been working with VMware products for a number of years now and this book looked like a beginner’s guide. I was also a little disappointed that the book was based on vSphere 5.1 and not the most current release, vSphere 5.5, even though that release had been out for six months before the book.

Who is the book for?

The book is primarily written for technical professionals with system administration skills and basic knowledge of virtualization who wish to learn installation, configuration, and administration of vSphere 5.1. Essential virtualization and ESX or ESXi knowledge is advantageous.

I personally would say it is for people who are new to virtualisation or deploying VMware vSphere products for the first time. It is perhaps even a useful resource for managers or project managers who want to delve a little deeper into the technology. Knowledge of virtualisation concepts would be advantageous; however, the book covers each step of a basic installation in good detail.

Areas Covered

The book is split into 9 chapters, aimed at covering a cradle-to-grave ‘basic’ vSphere installation.

  1. Installing and Configuring ESXi
  2. Installing and Using vCenter
  3. Networking
  4. Storage
  5. Resource Management and High Availability
  6. Managing Virtual Machines
  7. Securing the ESXi Server and Virtual Machines
  8. Performance Monitoring and Alerts
  9. vSphere Update Manager

The book reads and flows well, with the explanations clear and concise. The author does a good job explaining all concepts covered in the book.

Final Thoughts

If you are a seasoned vSphere administrator/architect, this book probably isn’t for you. That said, it does act as a handy reference for any areas of vSphere that you aren’t familiar with and need to review. One thing I do like about this book is that all screenshots (where possible) are taken from the vSphere Web Client. As many of us know, the Web Client will be the only way to manage VMware infrastructure in the not-too-distant future, so for old-school folk like myself it also acts as a handy reference for completing tasks in this manner.

Overall, I would say the author has done a great job at what they set out to do: create a quick-fire reference for vSphere administration tasks.

 

 

Nutanix Networking – Quick Tip

I’ve spent the past four months on a fast-paced VDI project built on Nutanix infrastructure, hence the number of posts on this technology recently. The project is now drawing to a close and moving from ‘Project’ status to ‘BAU’. As this transition takes place, I’m tidying up notes and updating documentation, so you may see a few blog posts with quick tips around Nutanix, specifically with VMware vSphere architecture.

As you may or may not know, a Nutanix block ships with up to 4 nodes. The nodes are standalone in terms of components and share only the dual power supplies in each block. Each node comes with a total of 5 network ports, as shown in the picture below.

[Image: rear view of a Nutanix block showing the network ports]

Image courtesy of Nutanix

The IPMI port is a 10/100 Ethernet port used for lights-out management.

There are 2 x 1GigE ports and 2 x 10GigE ports. Both the 1GigE and 10GigE ports can be added to Virtual Standard Switches or Virtual Distributed Switches in VMware. From what I have seen, people tend to add the 10GigE NICs to a vSwitch (of either flavour) and configure them Active/Active, with the 2 x 1GigE ports remaining unused.

This seems resilient; however, I discovered (whilst reading documentation, not through hardware failure) that the 2 x 10GigE ports actually reside on the same physical card, so this could be considered a single point of failure. To work around it, I would suggest incorporating the 2 x 1GigE network ports into your vSwitch and leaving them in standby.

With this configuration, if the 10GigE card were to fail, the 1GigE ports would become active and you would not be impacted by VMware HA restarting machines on the remaining nodes in the cluster (Admission Control dependent).
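If you want to script that teaming change rather than click through the client, a pyVmomi sketch along these lines should do it. Treat it as a rough example under assumptions: the host address, credentials, vSwitch name (‘vSwitch0’) and vmnic numbering are placeholders for my lab and will differ in yours, and it assumes the existing vSwitch already has a NIC teaming policy to modify.

```python
# Rough pyVmomi sketch: make the 10GigE uplinks active and the 1GigE uplinks standby
# on a standard vSwitch. Host, credentials, vSwitch name and vmnic names are
# placeholders (assumptions) - adjust for your own environment. Lab use only.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()          # self-signed certs in the lab
si = SmartConnect(host="esxi01.lab.local", user="root", pwd="password", sslContext=ctx)
try:
    # Grab the first host visible to this connection (fine for a single-host session)
    view = si.content.viewManager.CreateContainerView(si.content.rootFolder, [vim.HostSystem], True)
    host = view.view[0]
    net_sys = host.configManager.networkSystem

    # Reuse the existing vSwitch spec so only the NIC ordering changes
    vswitch = next(v for v in net_sys.networkInfo.vswitch if v.name == "vSwitch0")
    spec = vswitch.spec
    spec.policy.nicTeaming.nicOrder = vim.host.NetworkPolicy.NicOrderPolicy(
        activeNic=["vmnic2", "vmnic3"],    # the 2 x 10GigE uplinks stay active
        standbyNic=["vmnic0", "vmnic1"],   # the 2 x 1GigE uplinks sit in standby
    )
    net_sys.UpdateVirtualSwitch(vswitchName="vSwitch0", spec=spec)
finally:
    Disconnect(si)
```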

Yes, performance may well be impacted; however, I’d strongly suggest alarms and monitoring be configured to scream if this were to happen. I would rather manually place the host into maintenance mode and evict my workloads in a controlled manner than have them restarted.