VCAP-DTD | Objective 4.1 | Integrate a View Design with vSphere

Determine the appropriate vSphere infrastructure requirements for the View design

When designing a View infrastructure, always try to stick to the pod and block architecture, as this is VMware’s preferred deployment method. Within the pod, create separate blocks for management and desktop clusters. The management cluster will include:

  • vSphere clusters that support server workloads for View
  • vCenter Servers
  • View Connection Servers
  • File Servers
  • Profile Management Servers
  • Database Servers
Desktop blocks should only run two types of workloads:
  • View Desktops
  • View Transfer Servers
Blocks are designed to support a maximum of 2,000 desktops, with a maximum of five blocks per pod; that’s 10,000 desktops. (This obviously depends on host specification.)
Within our pod we should look to make use of the following vSphere components:
  • ESXi
  • vCenter
  • HA
  • DRS
  • Storage I/O Control
  • Network I/O Control
  • Distributed Virtual Switches
  • VAAI
  • VASA

Determine the host/cluster/resource pool requirements for the design

Host

There are two main considerations when sizing ESXi hosts for View: per-server virtual machine density, and virtual machine evacuation time frames. VM density is calculated by totalling the physical resources in a host, then subtracting the total resource load of the running VMs. When determining the appropriate host model to use, start with capacity planning results for resource load values across CPU, memory, network and disk throughput. Using these values, you are able to determine the maximum number of VMs for the most limiting ESXi resource.
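To illustrate, here’s a rough sketch of that “most limiting resource” calculation in Python. Every capacity and per-VM figure below is made up for the example; plug in the numbers from your own capacity planning reports:

```python
# Hypothetical sketch of the 'most limiting resource' calculation.
# All capacity and per-VM load figures are illustrative -- substitute
# the values from your own capacity planning reports.

host_capacity = {
    "cpu_mhz":   2 * 8 * 2600,   # 2 sockets x 8 cores x 2.6GHz
    "memory_mb": 196608,         # 192GB RAM
    "net_mbps":  2 * 1000,       # 2x 1GbE available for VM traffic
    "disk_iops": 5000,           # what the backing storage can sustain
}

per_vm_load = {
    "cpu_mhz":   300,
    "memory_mb": 1536,
    "net_mbps":  1.5,
    "disk_iops": 12,
}

# Maximum VMs each resource could support on its own
per_resource_max = {
    res: int(host_capacity[res] / per_vm_load[res]) for res in host_capacity
}

limiting = min(per_resource_max, key=per_resource_max.get)
print(per_resource_max)
print(f"Most limiting resource: {limiting} -> "
      f"max {per_resource_max[limiting]} VMs per host")
```

With these example numbers, memory is the limiting resource at 128 VMs, even though CPU, network and disk could each carry more.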

Consideration should also be given to the amount of time it takes to evacuate VMs from one host to others in the cluster. This value can vary greatly depending on the specifics of your environment, including the amount of unique memory consumed by each VM and the relative utilisation and bandwidth of the vMotion network in the cluster.
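A back-of-the-envelope estimate might look like the sketch below. It assumes vMotion only has to copy each VM’s unique (non-shared) memory and that the vMotion network sustains the stated throughput; both figures are illustrative:

```python
# Back-of-the-envelope evacuation estimate -- every figure here is
# illustrative. Assumes vMotion copies roughly the unique (non-shared)
# memory of each VM and sustains the stated network throughput.

vms_per_host         = 128
unique_mem_gb_per_vm = 0.75   # consumed memory not shared via page sharing
vmotion_gbps         = 8      # usable throughput on a 10GbE vMotion network

total_gb = vms_per_host * unique_mem_gb_per_vm
seconds  = (total_gb * 8) / vmotion_gbps   # GB -> Gb, then divide by Gbps

print(f"~{total_gb:.0f}GB to move, roughly {seconds / 60:.1f} minutes "
      "(ignores vMotion overheads and concurrency limits)")
```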

We now need to use the collected data to make an estimate of the number of VMs that can be hosted on a single ESXi host. By focussing on the capacity of a single host, we can define building blocks that can be put in place as the infrastructure grows and scales out (more blocks and pods).

Will a host-centric model or a workload-centric model be used?

A host-centric approach requires you to decide on an optimal VM density and then compute whether a host can accommodate the workload at that density. Host resources are then adjusted to accommodate this density.

A workload-centric approach involves looking at measured workloads and computing how many of them can be hosted on a given host.

A host-centric model is a popular approach; densities of 64-128 VMs per host are not uncommon in View 5.0. Remember though, a host failure with 128 desktops on it could have a massive impact on the business. What if all 128 desktops belonged to the same pool of call-centre users? That could take out an entire call centre while the machines restart (if HA is enabled) on the remaining hosts in the cluster.

Determine utilisation requirements. For the typical server estate, 80% utilisation is the norm; with desktop clusters, however, you may look to increase this in real-world scenarios. For the exam though, I’d suggest using 80%.
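Putting the host-centric approach and the 80% target together, a quick check (again, with hypothetical figures) could look like this:

```python
# Host-centric sanity check with hypothetical figures: pick a target
# density, then verify the host stays under the 80% utilisation target.
# Shown for memory only; repeat for CPU, network and disk throughput.

target_density   = 128
utilisation_cap  = 0.80

host_memory_mb   = 196608   # 192GB host
per_vm_memory_mb = 1536     # average consumed memory per desktop

required_mb = target_density * per_vm_memory_mb
usable_mb   = host_memory_mb * utilisation_cap

if required_mb <= usable_mb:
    print(f"OK: {required_mb}MB required vs {usable_mb:.0f}MB usable at 80%")
else:
    shortfall_gb = (required_mb - usable_mb) / 1024
    print(f"Short by ~{shortfall_gb:.0f}GB: add RAM or lower the density")
```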

Cluster

  • Cluster Size is limited to 8 hosts when deploying linked clones
  • DRS should be enabled on the cluster(s) to automate the distribution of desktops. To begin with, monitor DRS and approve recommendations manually until you find the correct automation level; depending on the workloads, you may see a different pattern to that of a server infrastructure
  • ESXi failover requirements should be considered and HA should be deployed.
  • Will N+1 be required?

Resource Pool

Personally, I’m a big believer that if you have done your homework and sized your infrastructure correctly, resource pools should not be required in a View deployment. The key point to remember here, and not just with View deployments, is that resource pools should not be used to group workloads together to make them easier to view in the client (that is what the VMs and Templates view is for); they should only be used when there is a requirement to silo some resource away from the rest of the cluster. Remember, remember, remember: resource pools only start working when the cluster is resource constrained.

Ensure vSphere infrastructure will meet established performance requirements for the design

Refer back to Objective 1.2 and revisit how to design for CPU, memory and storage. The key thing here is to revisit the requirements and capacity planning reports; both in real life and (hopefully) in the exam, this information will provide you with the detail required to ensure performance expectations will be met.

Check back and document the utilisation percentage we have already discussed. Working to 80% will ensure that you have the ability to handle any spikes in workload.

Ensure the design incorporates inbuilt vSphere performance enhancers, such as VAAI, VASA, Storage I/O Control and Network I/O control.

Determine appropriate storage sizing for the design

Again, refer back to Objective 1.2 for my detailed description of sizing for storage. As we all (should) know, storage is key for VDI and can make or break the project. Planning is crucial, as is analysis of your current environment to help project capacity and I/O requirements.

Remember that View allows you to be smart with your storage design: place disposable disks on cheaper, lower-end storage and replicas on separate high-performance disk such as flash. Doing this allows for savings in storage cost and faster read operations. vSwap placement can also be looked at; however, I would personally suggest keeping it in the same location as the VM. Do remember, OS disks and persistent disks should be placed on separate datastores.

Figure out how many desktops can reside on a single datastore (by both I/O and capacity) and ensure this is adhered to. Create a deployment register that lets you record desktop placements, so you do not breach your own guidelines!
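The desktops-per-datastore figure is simply the lower of the capacity-limited and I/O-limited numbers. A sketch with illustrative values:

```python
# Desktops per datastore is the lower of the capacity-limited and
# I/O-limited figures. All numbers are illustrative.

datastore_capacity_gb = 2048   # 2TB LUN
datastore_iops        = 2600   # sustainable IOPS for the backing disk

per_desktop_gb   = 24          # linked-clone growth + vSwap + overhead
per_desktop_iops = 15          # steady-state average per desktop

by_capacity = datastore_capacity_gb // per_desktop_gb
by_iops     = datastore_iops // per_desktop_iops

print(f"Capacity allows {by_capacity}, I/O allows {by_iops} -> "
      f"plan for {min(by_capacity, by_iops)} desktops per datastore")
```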

We’ve already talked about separating management and desktop clusters. At the same time, we should ensure that these clusters are using separate LUNs so that we do not get I/O contention between different workloads.

Finally, it goes without saying: to achieve the best consumption statistics, where possible look to utilise View Composer for linked-clone VMs.

Ensure vSphere high availability configuration meets View design requirements

Using vSphere allows for high numbers of VMs to reside on a single host in a View deployment; however, this does mean that more users are affected if that host fails. Requirements for HA can differ depending on the purpose of the desktop pool. For example, a floating desktop pool will more than likely have a different RPO requirement to that of a dedicated pool. With floating pools, an accepted solution may be for users to log on to a different desktop if the desktop they are using becomes unavailable through host failure.

In cases where availability requirements are high (such as dedicated desktops), proper configuration of VMware HA is critical. If, for example, HA is deployed and you plan to use a fixed number of desktops per host, each host should run at a reduced capacity so that, if a host fails, the capacity of desktops per server is not exceeded when the failed desktops are restarted on the remaining hosts in the cluster. For example, say we have an 8-host cluster running 128 desktops per host, and the design goal is to tolerate a single host failure. To achieve this, we need to ensure that no more than 896 desktops are running within the cluster (128 × (8 − 1)). DRS should also be utilised here to balance desktops among the 8 hosts so the +1 server is not sitting idle, and DRS will also help rebalance the cluster after a failed server is added back into service.
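The N+1 headroom maths from that example, for reference:

```python
# The N+1 headroom maths from the example above.

hosts             = 8
desktops_per_host = 128
host_failures     = 1    # tolerate a single host failure

max_desktops = desktops_per_host * (hosts - host_failures)
print(f"Cap the cluster at {max_desktops} desktops")   # 128 * 7 = 896
```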

Storage again is key here. You must ensure that your storage is able to support the I/O load that results when many VMs are restarted at once because of a host failure. Storage I/O is the key contributor to how quickly desktops will restart after a server failure.
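To put a rough number on it, here is an illustrative boot-storm estimate; the per-desktop boot IOPS figure is purely an assumption, and you should replace it with measurements from your PoC:

```python
# Illustrative boot-storm estimate: one failed host's desktops restart
# at once. The per-desktop boot IOPS figure is an assumption -- measure
# the real value during your PoC.

desktops_restarting   = 128
boot_iops_per_desktop = 26   # assumed; boot I/O sits far above steady state

burst_iops = desktops_restarting * boot_iops_per_desktop
print(f"Storage must absorb a ~{burst_iops} IOPS burst on top of "
      "the surviving desktops' steady-state load")
```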

Ensure vSphere network configuration meets View display protocol requirements

Let’s start with a few basics for vSphere networking for the design:

  • Use separate networks for management, VMs, and vMotion
  • Use a separate network for IP storage that stores:
    • VMDK files
    • VM templates
    • ThinApp Applications
    • ISO files
  • Use redundant switches with at least 2 active physical adapter ports
  • Ensure redundancy across physical adapters to protect against NIC and PCI slot failure
  • Provide redundancy at the physical switch level
  • Consider the vDS to provide a cluster-wide switch, which will reduce management overhead and provide improved functionality
When designing desktops for multimedia use, consider using multiple physical network interfaces to separate the remote display traffic from the data traffic:
  • Remote display protocol (PCoIP or RDP)
  • ESXi host management
  • vMotion
  • User data access
VLANs are great for segmenting traffic, but remember, all VLAN traffic shares the bandwidth of the physical backbone. Ideally, where possible, use physical networks to provide the needed bandwidth.

VMware recommends providing about 1Gbps of network bandwidth per 100 virtual desktops on an ESXi host. This covers typical data traffic, including the remote display protocol.

A good thing to remember, and something I’m sure will appear in the exam, is that vSwitches default to 120 ports. This is normally fine in a server environment; however, when we can accommodate 120+ desktops per host, we will need to increase this.

During the PoC phase of your implementation, it is highly recommended to measure both the PCoIP/RDP session bandwidth and the desktop data traffic to validate all assumptions.

Here are some estimates of bandwidth usage for both the RDP and PCoIP protocols. Remember, these are just averages (there’s a worked sketch after the PCoIP list)…
RDP
  • The minimum bandwidth for a usable session is ~30Kbps
  • The bandwidth for streaming multimedia content increases to 100+Kbps
  • The average session bandwidth is 100-150Kbps
  • The average for heavy users is 200-250Kbps
  • If 30% of users are heavy users, the overall becomes 130-180Kbps
PCoIP
  • Plan for an average active session bandwidth of 250-300Kbps
  • Peak bandwidth can burst to 500Kbps-1Mbps
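To tie the figures together, here’s a quick sketch that reproduces the 130-180Kbps blended RDP average and compares a host’s aggregate display traffic against the ~1Gbps per 100 desktops guideline. The protocol figures are the averages quoted above; the desktop count is an assumed example value:

```python
# Quick sketch tying the averages together. Protocol figures are the
# averages quoted above; desktops_per_host is an assumed example value.

desktops_per_host = 100
heavy_ratio       = 0.30
typical_kbps      = (100, 150)   # RDP average session range
heavy_kbps        = (200, 250)   # RDP heavy-user range

# Weighted blend of typical and heavy users -- reproduces 130-180Kbps
blended = tuple(
    (1 - heavy_ratio) * t + heavy_ratio * h
    for t, h in zip(typical_kbps, heavy_kbps)
)
print(f"Blended RDP average: {blended[0]:.0f}-{blended[1]:.0f}Kbps per session")

# Aggregate display traffic, versus the ~1Gbps per 100 desktops guideline
aggregate_mbps = desktops_per_host * blended[1] / 1000
print(f"~{aggregate_mbps:.0f}Mbps of display traffic for "
      f"{desktops_per_host} desktops -- the display protocol is only part "
      "of the ~1Gbps/100 desktops figure")
```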
