Identify Current Workflow
In order to design an effective VMware View solution, we need to understand how users use their existing desktops. An essential first step for a design is to determine the use cases that can be supported by the end solution.
- Try to correlate hardware/software and user requirements to use cases
- Determine use cases and map to View instances
- Assign user groups, counts and locations to each use case
The number of use cases is a key factor in the complexity of the design. The number of users in each use case affects the scalability of the design.
- Current and projected workload details – Type of computer used, hardware configuration(s), operating system. Are specific peripherals in use, such as local backup drives, USB devices, licence dongles? Are there unique software requirements such as device drivers? NOTE: Device drivers cannot be virtualised.
- Access scenario – How does the user access their desktop? How does the user access applications such as central DB’s, internal web sites, other core applications? Do they access via LAN, remote access over VPN, SSL tunnels?
- Key applications – What is the suite of desktop tools? Is Microsoft Office used? What is used for email? Are there business specific/critical in house tools? Are there any special audio/video requirements?
- User Activity – What can the user do locally? Do they have Administrative privileges? Are the machines domain members?
- Category of User – What type of user are they? Common categories include Knowledge Worker, Power User, Task Worker and Mobile User
Evaluate additional peripheral needs
What peripherals are in use now?
- Storage Devices?
- Web Cams?
- Smart Card Readers?
- License Dongles?
Are these peripherals used for business purpose? (A user charging their iPhone from their computer shouldn’t be classed as business use)
Assessment tools can be used to help identify these devices.
Evaluate current network bandwidth and latency and determine network I/O requirements
Design factors for network include but are not limited to:
- Bandwidth – The actual connection capacity
- Latency – A critical aspect for multimedia is end-to-end transit times
- Load Balancing – Needs to be considered in an environment with multiple View Connection and Security servers to enhance performance, scalability and availability.
- Total and Concurrent anticipated sessions – Certain access components have concurrent and total session limits
- Security – Control the point of ingress
Evaluate current storage environment and determine storage I/O requirements
IOPS come in 2 forms:
- front end, which is what the workloads need (normally people refer to front end IOPS)
- back end, which is the amount of physical IOPS (IO operations) that the disk spindles need to perform
It is worth noting that IOPS are split by READ and WRITE.
1 front end READ IOPS = 1 back end READ IOPS (making reads easy to ‘calculate’)
WRITES have a RAID penalty. For example, in RAID 5, 1 front end write IOPS = 4 back end write IOPS.
Here are the RAID write penalties (multipliers)
- RAID 0 = 1
- RAID 1 = 2
- RAID 5 = 4
- RAID 6 = 6
- RAID DP = 2
- RAID 10 = 2
With this information we are able to calculate IOPS.
100 VDI workloads @ 15 IOPS with 50/50 R/W split (with storage configured @ RAID 5)
Front End IOPS
- 100 x 15 = 1500
Back End IOPS
- READ = 1500 / 2 (50% READ) = 750
- WRITE = 1500 / 2 (50% WRITES) x 4 (RAID Penalty) = 3000
Total backend IOPS required = 3750
We now have the total number of IOPS for our example; now we need to identify the number of spindles required. For this example I will assume 180 IOPS per SAS disk.
- 3750 (IOPS from previous calculation) /180 (IOPS per SAS disk) = 21 (20.8 rounded UP to 21)
To sum up, in order to calculate the IOPS and disks required, we need to have the following information:
- Number of front end IOPS (workloads and their IOPS requirement)
- IOPS per disk
- RAID Level
- R/W Split
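The whole calculation can be sketched in a few lines of Python. The RAID penalty table and the 180 IOPS-per-SAS-disk figure are the assumptions from the example above, not fixed constants:

```python
import math

# RAID write penalties (multipliers) from the table above
RAID_WRITE_PENALTY = {"RAID 0": 1, "RAID 1": 2, "RAID 5": 4,
                      "RAID 6": 6, "RAID DP": 2, "RAID 10": 2}

def backend_iops(workloads, iops_each, read_ratio, raid_level):
    """Convert front end IOPS into back end IOPS for a given RAID level."""
    front_end = workloads * iops_each
    reads = front_end * read_ratio           # 1 front end read = 1 back end read
    writes = front_end * (1 - read_ratio)    # writes carry the RAID penalty
    return reads + writes * RAID_WRITE_PENALTY[raid_level]

def spindles_required(total_backend, iops_per_disk):
    """Always round up: you cannot buy a fraction of a disk."""
    return math.ceil(total_backend / iops_per_disk)

# 100 VDI workloads @ 15 IOPS, 50/50 R/W split, RAID 5
total = backend_iops(100, 15, 0.5, "RAID 5")  # 750 reads + 3000 writes = 3750
print(total, spindles_required(total, 180))   # 3750.0 21
```

Swapping the RAID level or disk type is then just a change of argument.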
Based on current environment, determine CPU/Memory/Storage sizing requirements
Evaluating an existing environment should be done with an assessment tool such as Liquidware Labs Stratusphere FIT or Lakeside SysTrack, never by guesswork, to help you determine the correct sizing requirements going forward.
In both real world and the exam scenario you should always look to size VMware View environments to support the pod and block scenario (where possible). If you are not sure what I am talking about then click here and read this first.
To determine the number of ESXi hosts required to accommodate desktop users, you absolutely need accurate performance and utilisation data.
VMware testing has shown that vSphere can successfully run 14 Windows 7 desktops per core with good performance. Note, however, that this density is for light to medium Microsoft Office workloads. It is also simply a guideline; your own analysis should be done against your own workloads.
Whilst VMware say that you can run up to 16 desktops per core, official guides still state that you should design for 8 desktops per core on Xeon processors. This does depend, however, on the desktops' average workload.
Consideration also needs to be given to the implications of high VM per core ratios. For example, if hosting 128 or more VMs per host, a host outage could affect a large number of users. Although vSphere 5 has a maximum of 512 VMs per host, the number deployed in a VMware View scenario should be much lower.
By monitoring usage and workloads throughout pilot and pre-production rollouts, the number of VMs per core can be increased where capacity permits. VMware recommends not exceeding 80% of CPU or memory capacity in a host. This calculation does not factor in the resources required if the host is acting as a backup for other hosts in the cluster.
Remember, in a real world scenario we may look to increase that 80% usage to perhaps 90% or even 95% depending on the scenario, but that's not the exam scenario!!
Perfmon data provides CPU requirements from the physical desktop. ‘% Usage’ is a percentage of the total CPU resource. For example, 5% of 2GHz equals 100MHz.
Let's look at our earlier scenario from the IOPS calculation: 100 desktop workloads.
We have 100 desktops that require 100MHz each.
- 100 (desktops) x 100 (MHz) = 10GHz
To determine the number of cores required, divide the total by the core speed.
- If core speed is 3GHz then we need a minimum of 4 cores (10GHz / 3GHz = 3.33, rounded up).
This does not take into account the additional CPU resources required for overhead, peak and protocol.
You can calculate the amount of processing capacity needed to handle the operating system and applications as observed on a physical desktop, but that estimate alone is not accurate. Moving the workload from a physical desktop to a virtual environment adds a measurable amount of overhead. Consideration needs to be given to the additional processing required to virtualise the environment and to handle the display protocol, and spikes also need to be catered for. By adding a buffer to the observed ranges, you ensure your design can handle fluctuations in processor load. The VMware recommended average buffer is 20%.
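The core sizing step above can be sketched as follows. The 100MHz-per-desktop figure, the 3GHz core speed and the 20% buffer are the assumptions from this example:

```python
import math

def cores_required(desktops, mhz_per_desktop, core_mhz, buffer=0.20):
    """Cores needed once overhead, peaks and display protocol are buffered in."""
    demand_mhz = desktops * mhz_per_desktop * (1 + buffer)
    return math.ceil(demand_mhz / core_mhz)

# 100 desktops @ 100MHz each on 3GHz cores, with a 20% buffer
print(cores_required(100, 100, 3000))  # 12000MHz / 3000MHz = 4 cores
```

Here the buffered demand still lands on 4 cores; with heavier per-desktop figures the buffer would tip the result up a core.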
For accurate memory consumption statistics, comparison needs to be done between ‘high water mark’ and ‘low water mark’ thresholds. High water mark is with no page sharing, low is with page use optimisation.
It is possible to configure a memory size for VMs that is smaller than that of the physical system they are replacing. If users have 4GB of RAM in their physical desktops it doesn't mean they require 4GB of RAM in a virtual desktop. A recommended practice from VMware is to start at 1GB, then monitor and size accordingly. This may seem very conservative, and perhaps it is; however, it is a VMware recommendation and therefore should be used in the exam. Real world scenarios may well differ.
There is no direct connection between the amount of physical RAM and the amount of virtual RAM that should be assigned. Unlike with CPU calculations, where a virtual environment adds overhead to the processing power needed, memory requirements decrease when the desktop is virtualised. The reduction is thanks to TPS and memory compression in ESXi.
If possible, it’s a good idea to use a per session memory setting that discourages guest OS swapping. Windows guest page swapping uses the Windows page file which is located within the system disk. When using linked clones the increase in the OS disk or the disposable disk will affect the number of linked clones per datastore.
VMware testing and documentation shows that you are able to achieve at least a 40% saving from TPS and a 15-25% saving from memory compression. Keep this in mind when calculating required memory.
So, back to our scenario of 100 workloads.
- Each desktop is assigned 1GB.
- Average use per desktop is 768MB.
- Average Peak utilisation is 1GB.
- Total RAM with no savings is 100GB.
- Anticipated saving benefit is a conservative 30%.
- Therefore total RAM for our workloads is 70GB.
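That memory calculation can be sketched in the same way. The 1GB assignment and the conservative 30% saving are the assumptions listed above:

```python
def ram_required_gb(desktops, assigned_gb_each, saving=0.30):
    """Total host RAM once TPS and memory compression savings are applied."""
    return desktops * assigned_gb_each * (1 - saving)

# 100 desktops @ 1GB each with a conservative 30% saving
print(ram_required_gb(100, 1))  # 70.0 GB
```

Note this sizes against the assigned memory, not the 768MB average, which keeps headroom for the 1GB peaks.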
I’ve already covered how to work out IOPS requirements. Full clones are easy to calculate, so I’ll be looking at linked clone calculations here. Making some further assumptions of requirements…
- Replica will be 22GB (Windows 7 Parent VM)
- VSWAP file will be 1GB
- Windows Page file will be 1GB
- VMware log files will be 100MB
- VMDK (Delta disk) will be 4GB (estimate)
- Free space cushion of 10%
Therefore 1 (vswap) + 1 (Windows page file) + 0.1 (VMware log files) + 4 (VMDK delta) = 6.1GB, plus the 22GB replica, giving 28.1GB of space for a single linked clone. Each linked clone after this requires a further 6.1GB of space, as only one replica is required per datastore.
Note: A maximum of 64 linked clones per VMFS datastore and 250 linked clones per NFS datastore.
For a VMFS datastore our calculations will be as follows.
- Each linked clone (6.1GB)
- 64 Linked Clones (64×6.1=390.4)
- One Replica (22GB)
- Minimum Space Required = 412.4GB, rounded to 500GB.
10% of a 500GB datastore is 50GB. We have 87.6GB of free space within our datastore, which gives us our 10% buffer comfortably.
We will therefore require 2 VMFS datastores at 500GB in size each.
Using the same figures above for an NFS datastore our calculation will be:
100 x 6.1 = 610 + 22 = 632GB required. Rounding up to 750GB allows for our 10% buffer. As we can store more linked clones on an NFS datastore, only 1 datastore is required.
Remember the IOPS requirement can also dictate number of datastores required!
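A minimal capacity sketch of the datastore sizing above. The 6.1GB per clone, 22GB replica and the 64/250 clones-per-datastore limits are the figures from this example:

```python
import math

def datastore_plan(clones, per_clone_gb, replica_gb, max_clones_per_ds):
    """Datastores needed, and raw capacity of the fullest one (one replica each)."""
    n_ds = math.ceil(clones / max_clones_per_ds)
    fullest = min(clones, max_clones_per_ds)
    raw_gb = fullest * per_clone_gb + replica_gb
    return n_ds, raw_gb

# VMFS (max 64 linked clones per datastore): 2 datastores, fullest ~412.4GB
print(datastore_plan(100, 6.1, 22, 64))
# NFS (max 250 linked clones per datastore): 1 datastore, ~632GB
print(datastore_plan(100, 6.1, 22, 250))
```

The raw figures then get rounded up (500GB and 750GB in the example) to provide the 10% free space cushion.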
Consideration should be given to the placement of VM swap files. Placing the VM swap files on local storage gives large reductions in shared storage requirements. (OK, for my working example the savings aren't huge: 64GB for each VMFS datastore and 100GB for NFS. Larger scenarios will reap larger benefits.) Placing the vswap on local storage:
- Reduces the size of the shared datastores.
- Removes the swap read/write I/O from the shared storage device.
- Slightly impacts performance of vMotion and vSphere HA operations.
- Slightly increases CPU and memory requirements as the hosts will need to handle swap files.
- Increases the local storage requirement to house the swap files.
Now to throw a spanner in the works: what happens when we use tiered storage and linked clones? As we know, View Composer supports the use of shared storage. Using tiered storage:
- Allows us to place the replica on a separate high performance datastore such as EFD.
- Placing on EFD provides savings in storage and gives much faster read operations.
- Replica storage must be shared so all ESXi hosts running linked clones in a pool can access the replica’s disks.
- If the datastore with the replica fails, all linked clones in the pool will be unavailable.
- If the replica is placed on local storage, all linked clones in the pool must be placed on the same local storage. (This is not advisable for large View deployments)
Moving the replica to a separate datastore reduces each linked clone datastore by the size of the replica (22GB in my scenario). A replica is required for each pool, so if you have multiple pools allocated to the same datastore, the saving is equal to the sum of all the replicas.
When using linked clones do not store any user data on the operating system disk, ensure this is redirected.
- Redirecting user profiles and data outside of the OS disk minimises the size of the linked clones.
- It allows disk I/O to be directed to another datastore.
- No user data is lost during a View Composer operation.
In most scenarios user profiles and data are redirected to a network share utilising a lower tier of storage. This is not always possible, for example in situations where local mode desktops are required. In such cases, the user profile must be stored locally on the virtual desktop. You could consider redirecting the profile and data to a persistent data disk. Separating this data from the OS disk provides more flexibility for what can be checked in or replicated from a local mode desktop.
As well as redirecting profiles and user data, it is worth considering redirecting the paging file and temp files to a disposable disk. The disposable disk is thin provisioned and therefore does not inflate to full size when Windows zeros out the paging file. Doing this also reduces the amount of data that needs to be transferred during check-out, check-in and replication of local mode desktops.
Evaluate application requirements and dependencies
Utilise the assessment tools already mentioned (more than once) to gain an understanding of which applications are in use, by whom and how often.
Ensure you don't waste time on applications not used in the last x number of days; similarly, check that applications used by only a single user or a handful of users are actually required.
Ensure compatibility with the choice of virtualised OS.
Package the applications, ensure the process is documented, tested, agreed.
Once applications are packaged then virtualise with ThinApp, ensure UAT and sign-off.
Can all applications be virtualised? Are there applications better placed in the gold image?
Do any applications have specific requirements that mean they cannot be virtualised?
Will the applications be supported by the vendor if ThinApp’d?
Are multiple versions of the same application required by any one user? Internet Explorer? Office?
Evaluate access requirements
Is two factor authentication required?
Are there any security requirements that mean different desktops should be provided based on location? For example, a reduced desktop for tablet devices/kiosks?
Should desktops be available from the internet?
Will desktop pools require tags?
Evaluate management and administrative needs and determine user groups and access requirements
How many gold images will be required?
How many pools will be required?
Will we make use of linked clones or full clones?
Persistent or non-persistent desktops? Perhaps a mixture to meet all requirements?
Who will administer the desktops and what permissions will they require in both vSphere and View Administration?