VMware vSphere 6.0 Part 4 - Clusters, Patching, Performance
4.4 (56 ratings)
1,169 students enrolled

Learn about load balanced DRS clusters, High Availability failure recovery clusters, Fault Tolerance, and VM/host performance
Created by Larry Karnis
Last updated 6/2017
30-Day Money-Back Guarantee
  • 8 hours on-demand video
  • Full lifetime access
  • Access on mobile and TV
  • Certificate of Completion
What Will I Learn?
  • Understand ESXi host requirements for DRS/HA clusters
  • Create and edit Distributed Resource Scheduling (DRS) load balanced clusters
  • Understand and adjust DRS automation level settings
  • Understand and apply DRS placement and migration recommendations
  • Use Enhanced VMotion Compatibility (EVC) to safely grow existing DRS clusters
  • Deliver high VM service availability using VMware High Availability clusters
  • Understand and configure HA cluster settings such as Admission Control, All Paths Down and Permanent Device Loss policies
  • Design HA clusters for continued management and correct behavior using network redundancy and Heartbeat datastore redundancy
  • Configure DRS and HA clusters according to VMware's best practices for clusters
  • Understand the features and capabilities of vSphere Fault Tolerance
  • Configure ESXi hosts for FT network logging
  • Enable Fault Tolerance protection on individual VMs
  • Understand the purpose and use of VMware Update Manager
  • Configure VUM for correct behavior according to your vSphere environment
  • Create and update ESXi host patch baselines
  • Apply baselines to ESXi hosts or clusters and check for compliance
  • Patch and update non-compliant ESXi hosts
  • Understand ESXi use of physical CPU resources
  • Understand the five techniques ESXi uses to manage memory efficiently
  • Use Overview and Advanced performance charts to monitor resource use
  • Identify and correct common performance issues
View Curriculum
  • We assume that you are familiar with ESXi host and vCenter Server management, and that you can create and use VMs, connect to shared storage, and perform day-to-day management tasks in your vSphere environment
  • One way to acquire these skills is to take our VMware vSphere 6.0 Part 1, Part 2 and/or Part 3 classes on Udemy

VMware vSphere 6.0 is the platform businesses depend on to deploy, manage and run their virtualized Windows and Linux workloads.

In this course you will learn how to effectively manage host CPU/memory resources with DRS clusters, minimize VM downtime caused by ESXi host failures with HA clusters, eliminate unplanned VM downtime with Fault Tolerance, patch and update ESXi hosts with VUM, and maximize ESXi host and VM performance.

Learn DRS & HA Clusters, Fault Tolerance, VMware Update Manager and Performance

This course covers five major topics that all vSphere 6 vCenter administrators must know:

  • We start with a thorough presentation of VMware Distributed Resource Scheduling (DRS) clusters. DRS clusters dynamically balance VM CPU and memory demands by automatically migrating (via VMotion) VMs experiencing CPU/memory resource stress. We also look at Enhanced VMotion Compatibility (EVC) - a feature that lets you safely mix newer and older ESXi hosts in a DRS cluster
  • Next, we will learn how to minimize VM downtime due to unplanned ESXi host failures by implementing High Availability clusters (HA). HA clusters monitor ESXi host health, detect ESXi host failures and re-assign VM ownership from failed ESXi hosts to healthy ESXi host peers. We will also learn about key HA policies like All Paths Down and Permanent Device Loss handling - new to vSphere 6
  • We move on to look at how to completely eliminate unplanned VM downtime (even if an ESXi host fails) through VMware Fault Tolerance (FT). FT hot replicates a running VM to a peer ESXi host. If the ESXi host running the primary copy of a FT protected VM fails, FT automatically places the replicated copy into service. We'll see how to configure, run and test FT protected VMs.
  • Next we will see how to use VMware Update Manager to safely and efficiently patch and update ESXi hosts. We will learn about Patch Baselines (patch sets), how to attach Baselines to an ESXi host or cluster, how to check for patch compliance (all needed patches present on a host) and how to patch ESXi hosts.
  • Finally, we will take a close look at ESXi host and VM performance. We will see what the VMkernel does to efficiently utilize physical CPU and how we can right size vCPU in VMs. We will see the five memory management techniques used by the VMkernel to efficiently manage memory and how to turn on Transparent Page Sharing to maximize memory use. We will see how to configure Storage I/O Control and how to identify and fix host and VM performance bottlenecks.

The skills you will acquire in this course will help make you a more effective vSphere 6 administrator.

Who is the target audience?
  • This course is intended for vSphere Administrators who wish to add DRS cluster, HA cluster, Fault Tolerance or VMware Update Manager capabilities to their existing vSphere environment
  • This course will also benefit vSphere Administrators who want to learn how to improve the scalability and performance of their vSphere environments
Curriculum For This Course
185 Lectures
VMware Distributed Resource Scheduling (DRS) Load Balanced Clusters
29 Lectures 01:19:33

vCenter can organize two or more ESXi hosts into a load balancing cluster called a Distributed Resource Scheduling cluster. DRS clusters dynamically monitor ESXi host load and VM resource demands. Because VMs' resource demands change over time, a VM that was previously receiving all of the resources it needed can become resource starved as the demands of other VMs on the same host change.

DRS looks for this very situation and will take action by either recommending VMotion migrations or initiating VMotion migrations to rebalance VMs across the cluster. In this way, your DRS cluster always runs VMs in the most resource efficient manner. DRS delivers the following benefits:

  • Resource contention is immediately addressed
  • ESXi hosts are always approximately equally loaded
  • As VM resource demands change, DRS responds with re-balancing decisions
  • All VMs receive the same resource availability (subject to resource settings). As VM resource demands increase over time, the load is rebalanced so that all VMs continue to receive the same level of service
  • New ESXi servers can be provisioned based on demonstrable resource demands across all systems rather than the need of one or a few VMs
  • Adding a new ESXi host to a DRS cluster causes the cluster to rebalance VMs across the new host thereby reducing resource stress across all VMs
Preview 02:06

DRS clusters are vCenter objects that you can create only in the Hosts and Clusters view. To create a new cluster, select either a datacenter or a folder, right click and select New Cluster. This creates the new inventory object and lets you set the cluster's name. You would then right click the new cluster and select Edit Settings... to enable and configure DRS.

A DRS cluster needs one or more ESXi hosts in order to function. You can add ESXi hosts to a DRS cluster by dragging an ESXi host onto the DRS cluster inventory object. You can add a new ESXi host to a DRS cluster at any time. Upon receiving a new host, the cluster will reassess VM placements and migrate VMs onto the new host to even out resource consumption across all hosts.

It is the ability to hot-add ESXi hosts to DRS clusters that gives IT managers the ability to provision new servers in response to demonstrable increases in PC server workloads. In the past, IT managers had to provision new PC servers for every new workload (OS + application). Now IT managers can simply create new VMs on a DRS cluster and let the cluster load balance the VM population. If the cluster's overall resource utilization rate climbs too high (say over 80%), you can simply add a new ESXi host to the cluster. DRS will rebalance across all hosts so overall resource load goes down and VM performance goes up.

When you add a host to a DRS cluster – everybody benefits!

Preview 01:03

Initial Placement is the act of selecting a suitable ESXi host for VM placement and power on. When a user powers on a VM, DRS will:

  • Query the current resource load across ESXi hosts
  • Look at the resource demands of the VM (# of vCPU cores, RAM declaration)
  • Cold migrate the VM (reassign VM ownership) to the ESXi host that has the most free resources
  • Tell that ESXi host to boot the VM

Because DRS cluster resident VMs should all be VMotion compatible, any ESXi host can act as a power on host for the VM. By finding the least busy ESXi host at VM power on time, DRS attempts to place the VM on the host that will provide the VM with the best overall access to resources.
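The placement steps above can be sketched as a small Python model. This is illustrative only: the host names, resource numbers and ranking heuristic are invented, not VMware's actual placement algorithm.

```python
# Toy model of DRS Initial Placement: choose the host with the most free
# headroom for the VM. Real DRS weighs many more factors than this.

def pick_power_on_host(hosts, vm_cpu_mhz, vm_ram_mb):
    """Return the name of the host with the most free headroom for this VM."""
    candidates = [
        h for h in hosts
        if h["free_cpu_mhz"] >= vm_cpu_mhz and h["free_ram_mb"] >= vm_ram_mb
    ]
    if not candidates:
        raise RuntimeError("No host can satisfy the VM's resource demands")
    # Rank by combined free CPU + RAM remaining after placement (a simple heuristic).
    best = max(
        candidates,
        key=lambda h: (h["free_cpu_mhz"] - vm_cpu_mhz) + (h["free_ram_mb"] - vm_ram_mb),
    )
    return best["name"]

hosts = [
    {"name": "esxi01", "free_cpu_mhz": 4000, "free_ram_mb": 8192},
    {"name": "esxi02", "free_cpu_mhz": 12000, "free_ram_mb": 32768},
    {"name": "esxi03", "free_cpu_mhz": 6000, "free_ram_mb": 16384},
]
print(pick_power_on_host(hosts, vm_cpu_mhz=2000, vm_ram_mb=4096))  # esxi02
```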

Dynamic Balancing continuously monitors ESXi host load and VM resource demands. If DRS determines that a VM is being resource starved, it will look to see if another ESXi host has free resources of the type the VM needs (CPU, RAM). If yes, DRS will either migrate the VM or recommend migrating it to that host. In this way, DRS can respond to changes in VM resource demands as VMs run, without the need for human intervention.

Preview 02:24

DRS has three modes of operation...

In Manual mode, DRS makes Initial Placement and Migration recommendations only; DRS takes no action on its own.

DRS clusters in Partially Automated mode will make Initial Placement decisions at VM boot time thereby freeing users from having to deal with this task. But, DRS will only make VMotion migration recommendations when it detects signs of resource stress.

DRS clusters in Fully Automated mode make both Initial Placement and Dynamic Balancing decisions automatically. In this way, they address both power on resource contention and running VM resource contention. Fully Automated DRS clusters have additional tunables that let the ESXi administrator fine tune DRS behavior.

Preview 04:48

If you set your DRS cluster to fully automated, you must further tune the cluster to trade off load balancing vs. VMotion overhead.

VMotion takes CPU and memory resources to complete – so there is a resource penalty to pay when a VM is hot migrated to a new host. Aggressive DRS clusters move VMs when the potential VM performance improvement is just slightly more than the cost of the VMotion. This could result in unnecessary VMotions if the VM is experiencing a very short spike in resource demands.

As a DRS cluster administrator, your job is to find the setting that provides good overall load balancing while minimizing the number of VMotions. Typically this setting is either in the middle (default) or just to the right or left of the middle.

Experiment to find the best setting for your organization.

Setting DRS Automation Level
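The trade-off the migration threshold slider controls can be sketched as a toy cost/benefit test. The margin values and level names below are invented to illustrate "aggressive" vs. "conservative" behavior; they are not DRS's real thresholds.

```python
# Toy migration-threshold model: a VMotion is recommended only when the
# predicted benefit exceeds its resource cost by a slider-dependent margin.
# The multipliers are invented for illustration.

MARGIN = {"aggressive": 1.05, "default": 1.5, "conservative": 3.0}

def should_migrate(benefit, cost, threshold="default"):
    """Recommend a VMotion only if benefit clears cost by the chosen margin."""
    return benefit > cost * MARGIN[threshold]

# A small gain justifies a move only at the aggressive setting.
print(should_migrate(benefit=120, cost=100, threshold="aggressive"))  # True
print(should_migrate(benefit=120, cost=100, threshold="default"))     # False
```

An aggressive setting moves VMs for marginal gains (risking churn on short demand spikes); a conservative one tolerates more imbalance in exchange for fewer VMotions.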

A DRS cluster in Fully Automated mode can be tuned for resource contention sensitivity. This allows ESXi administrators to set the level of resource contention that must be present before DRS will intervene.

The relative benefit a VM will receive through VMotion migration is represented in DRS by one to five stars. The more stars a recommendation receives, the more DRS believes the VM will benefit if a recommendation is accepted.

DRS recommendations are based on two factors:

  • How much resource starvation the VM is experiencing and
  • How long the VM has been starved for resources

In a nutshell, the length and severity of resource starvation, along with the target ESXi host's ability to satisfy the VM's resource needs, determines the number of stars a recommendation receives.

A 5-star recommendation is a special case. DRS includes VM affinity and anti-affinity settings (see slide 16). A 5-star recommendation is made only when a VM's current ESXi placement violates an affinity or anti-affinity rule.

DRS Migration Threshold
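The two factors above (severity and duration of starvation) can be combined into a toy scoring function. The weights and cutoffs are invented for illustration; DRS's real scoring is internal to vCenter.

```python
# Illustrative scoring of a DRS migration recommendation (not VMware's real
# math): stars grow with how severely and how long a VM has been starved.
# A rule violation always yields 5 stars, matching the special case above.

def recommendation_stars(starvation_pct, starved_minutes, rule_violation=False):
    if rule_violation:  # (anti-)affinity violation: mandatory 5-star move
        return 5
    score = (starvation_pct / 25) + (starved_minutes / 15)  # arbitrary weights
    return max(1, min(4, round(score)))

print(recommendation_stars(10, 5))                       # mild, brief: 1
print(recommendation_stars(60, 45))                      # severe, sustained: 4
print(recommendation_stars(0, 0, rule_violation=True))   # rule violation: 5
```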

Adding ESXi Hosts to a DRS Cluster

DRS Initial Power On Placement Guidance

CPU compatibility has always been an issue with VMotion. Here's the problem:

When a VM powers on, it is serviced by a physical CPU core. At power-on time, the VM probes the CPU for special capabilities (e.g.: SSE, SSE2, SSE3 or SSE 4.1 instruction support, 64-bit support, Virtualization Assist technology, etc.). Once the VM learns the special capabilities of the CPU running it, it never re-probes the CPU. This presents a problem for VMotion: if a VM moves to a host that lacks some of the capabilities the VM expects, any attempt to use those capabilities will result in application or OS failures (e.g.: illegal instruction faults if a VM tries to execute an SSE3 instruction on a host that only supports SSE2).

VMware partially solves this problem with Enhanced VMotion Compatibility mode. With this DRS feature turned on, ESXi hosts will mask away features that are not common to all the physical CPUs in the cluster. This creates a situation where VMs see only compatible CPU features – even if the physical CPUs are not the same.

EVC works within processor families – and to a limited extent, across processor families. Before turning on EVC, check to make sure that all of the CPUs in your physical hosts are EVC compatible and determine the highest common EVC processor family. You can use the VMware CPU identification tool to assist (See VMotion chapter).

To use EVC, select your CPU maker, then CPU family.

Enhanced VMotion Compatibility and AMD Opteron CPUs

Intel and AMD improve their CPU products regularly. They add features, new instructions, new hardware capabilities, etc. - some of which are visible to the Guest OS. Any CPU mismatch could result in VMotion failures or VM failures due to CPU compatibility.

If you have Intel Xeon CPUs, EVC will mask away the differences within Xeon product families. That would allow you to mix different versions or steppings of these Xeon families:

1. Xeon Core i7 CPUs
2. Xeon 45nm Core 2 CPUs
3. Xeon Core 2 CPUs
4. 32nm Core i7 CPUs
5. Sandy Bridge CPUs
6. Ivy Bridge CPUs
7. Haswell CPUs

Click the Xeon processor family that represents your CPUs. Review the information provided and ensure that all of your CPUs match before turning on EVC.

Enhanced VMotion Compatibility and Intel Xeon CPUs

EVC abstracts CPUs within a CPU family down to a common set of features and functions. This makes all CPUs VMotion compatible (because VMs all see the same CPU feature set). This lets you:

  • Mix hosts with new and older CPUs from the same CPU family
  • Mix hosts from different server vendors as long as the CPUs meet EVC requirements
  • Mix hosts with different core/socket counts
  • Extend the life (capacity) of your cluster rather than buying new hosts
Enhanced VMotion Compatibility Benefits
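The masking idea can be modeled as a simple set intersection: the cluster baseline is the set of CPU features common to every host, and that baseline is all any VM ever sees. The feature names below are examples, not a real EVC baseline definition.

```python
# Toy model of EVC masking: the cluster baseline is the intersection of every
# host's CPU feature set, so all VMs see only features common to all hosts.
# Host names and feature sets are hypothetical.

hosts = {
    "esxi01": {"sse2", "sse3", "ssse3", "sse4.1", "aes"},
    "esxi02": {"sse2", "sse3", "ssse3"},             # oldest CPU in the cluster
    "esxi03": {"sse2", "sse3", "ssse3", "sse4.1"},
}

baseline = set.intersection(*hosts.values())
print(sorted(baseline))  # ['sse2', 'sse3', 'ssse3']
```

This also shows why EVC refuses hosts older than the configured baseline: admitting one would shrink the feature set already promised to running VMs.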

EVC places strict limits on hosts when they attempt to join a cluster. EVC performs a host CPU compatibility check and will refuse to allow any host to join the cluster whose CPUs are older than the CPU family selected when EVC was enabled.
EVC was introduced in ESXi 3.5 Update 2 and is supported for this release of ESXi and all newer releases of ESXi (e.g.: ESXi 3.5 Update 3, Update 4, ESXi 4.x, ESXi 5.x and ESXi 6.0).

DRS Cluster EVC Requirements

EVC Validation Successful

EVC Validation Failure

Mixing Server Generations within an EVC Enabled DRS Cluster

To set (Anti-)Affinity rules, right click the DRS cluster → Manage tab → Settings → VM/Host Rules

Some VMs may perform better when they reside on the same ESXi host. An example might be a SQL database VM and an Accounting VM that uses SQL for record storage/retrieval. If these VMs reside on different ESXi hosts, network SQL requests must flow through physical networking (e.g. at 1Gb/s). If these VMs reside on the same ESXi host, they can exchange packets at virtual network speed (which should be faster). In cases like this, you would create an Affinity rule to tell DRS to keep these VMs on the same ESXi host.

For many applications, it makes sense to create two or more VMs that perform exactly the same function. That way, you can take one VM down for maintenance and the service is still available. Applications that could benefit from this approach include E-mail servers, Web servers, DNS servers, DHCP servers, etc. that perform the same function for the same clients.

If DRS were to place two VMs that provide the same service on the same ESXi host, problems could arise because the VMs compete with each other for the same resources. If the ESXi host were to go down, both VMs would fail. In this case, create an Anti-Affinity rule and DRS will place these VMs on separate ESXi hosts.

DRS Affinity and Anti-Affinity Rules
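A rule check like the one DRS performs can be sketched as follows. The rule representation (sets of VM names) and the example placement are invented for illustration.

```python
# Toy check for (anti-)affinity rule violations given a VM-to-host placement.

def violations(placement, affinity_rules, anti_affinity_rules):
    """placement: {vm: host}. Returns descriptions of violated rules."""
    bad = []
    for vms in affinity_rules:       # these VMs must share one host
        if len({placement[v] for v in vms}) > 1:
            bad.append(f"affinity broken: {sorted(vms)}")
    for vms in anti_affinity_rules:  # these VMs must all be on different hosts
        hosts = [placement[v] for v in vms]
        if len(set(hosts)) < len(hosts):
            bad.append(f"anti-affinity broken: {sorted(vms)}")
    return bad

# sql/acct share a host (affinity satisfied); web1/web2 collide (anti-affinity broken).
placement = {"sql": "esxi01", "acct": "esxi01", "web1": "esxi02", "web2": "esxi02"}
print(violations(placement, [{"sql", "acct"}], [{"web1", "web2"}]))
```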

Navigation: Click your DRS cluster → Manage → Settings → VM Overrides
Click the green +, select the VMs you wish to override → OK
For each VM in the roster, click the VM and select a new automation level

A DRS cluster is configured with a default automation level (either Manual, Partially Automated or Fully Automated). With no further action, this default automation level would be applied to all VMs on the cluster. However, ESXi administrators can override the default DRS cluster automation level on a VM per VM basis.

For example: suppose your organization ran strict change management procedures only on critical production VMs. An ESXi administrator could set the default DRS cluster automation level to Fully Automated and then use per-VM overrides to downgrade critical production VMs to Partially Automated. That way, a person would have to approve (and log) any VMotion migrations of critical production VMs (only).

For another example, suppose a small IT department had 3 ESXi hosts; 2 for Production and one for Test. They wanted to give their production VMs the best possible performance but did not want their Test VMs to leave the test ESXi server. They could create a 3 host DRS cluster and set their production VMs to Fully Automated. They could then set their test VMs automation level to Disabled. As a result, Test VMs would never leave the Test ESXi host. But production VMs could be migrated to/from the Test ESXi host to take advantage of the resources available on all ESXi hosts.

Per VM DRS Cluster Setting Overrides

Distributed Resource Scheduling Clusters

CPU and RAM Host Utilization

The Resource Allocation tab gives you a point-in-time view of your VMs and the resources they are receiving (as well as the host they are on, the shares they hold, reservations, limits, etc.).

This view displays values that are editable. If you see a VM whose resources, etc. don't match your needs simply click the value you wish to change and edit it directly. No need to launch the Cluster Properties window to make small changes.

vSphere Client Resource Allocation tab

The DRS tab displays the status of your cluster along with additional functions...

The Run DRS link lets you tell DRS to refresh its recommendations – now!

The Apply Recommendations button is how you give DRS permission to make the changes it suggests

The Edit... link lets you change the cluster properties directly from this screen

The Faults button lets you review past DRS actions that failed for any reason

The History button lets you review past DRS actions taken to keep the cluster in balance

Web Client DRS Tab

DRS History

DRS clusters honor all resource settings on individual VMs and also on Resource Pools including CPU/Memory reservations, shares and limits. So, VMs will not have their resource entitlements changed simply because they are now managed by a DRS cluster.

Because DRS relies on VMotion, DRS can only load balance VMs that are VMotion compatible. For best results, you should carefully plan your ESXi host deployment and configuration so that all of your ESXi hosts are VMotion compatible. Then, you should take care to configure your VMs so that they use only common storage, networking and removable media resources so that you do not inadvertently lock a VM to an ESXi host.

DRS can run VMs that are not VMotion compatible – but it cannot move them. So, if an ESXi host is running a mix of VMotion and non-VMotion compatible VMs, DRS is limited to moving only the VMotion compatible VMs. This may impair its ability to fully load balance across the cluster.

DRS Clusters and Resource Management

If your organization is new to VMotion and DRS, you may encounter resistance to the idea of automatically hot migrating VMs. If this happens to you, you can ease your organization into VMotion and DRS as follows:

Create a DRS cluster and set the automation level to Manual

  • Let VM owners review and accept Initial Placement recommendations
  • When VM owners get tired of always accepting DRS Initial Placement recommendations, increase the automation level to Partially Automated

Over time, VMs will experience resource starvation on their ESXi host

  • Have VM owners address their own performance issues by asking them to review and accept DRS Migration recommendations
  • Once VM owners get tired of always accepting VMotion migration recommendations, increase the automation level to Fully Automated

Finally, deal with any per-VM concerns by using DRS Rules to override the DRS cluster default for that VM.

Strategy for Adopting DRS

DRS will only violate affinity rules when it has no choice. The most likely scenario is where a DRS cluster is also an HA (fail over) cluster. For example, if you have two ESXi hosts in a combined DRS+HA cluster and you have two VMs in an anti-affinity rule (keep apart) for high service availability, the following situation could lead to a 5-star recommendation:

The two VMs are running on separate ESXi hosts

  1. One host fails completely, causing HA to re-assign the VMs that died with it to the surviving ESXi host
  2. The failed VM in the anti-affinity relationship is started on the surviving ESXi host
  3. Both VMs in the anti-affinity relationship are now running, but in violation of the anti-affinity rule intended to keep them on separate ESXi servers
  4. The failed ESXi host is repaired and rebooted
  5. When it comes up, it rejoins the DRS+HA cluster
  6. DRS generates a 5-star recommendation on one of the two anti-affinity VMs so that it can be migrated over to the restored ESXi host
DRS Best Practices


VMware High Availability (HA) Clusters
41 Lectures 01:58:33

High Availability clusters solve the problem of rapid VM placement and recovery when a VM fails because its host failed (e.g. a VMkernel panic, hardware failure or non-redundant storage failure).

HA minimizes VM down time by:

  • Actively monitoring all ESXi hosts in an HA cluster
  • Immediately detecting the failure of an ESXi host
  • Re-assigning ownership of VMs that died when an ESXi host in an HA cluster dies
  • Instructing the new ESXi host to boot the VM

The overall objective for HA is to have VMs back in service in less than 2 minutes from the time an ESXi host fails. Users will still need to re-establish any authenticated sessions against the recovered VM... but (hopefully) the VM down time experienced will be no more than a nuisance.

Preview 01:53

For HA to place and power on VMs that fail when an ESXi host fails, those VMs must use only resources common to both the failed and the new ESXi host. This means the VM must use common

  • Networks including any production, test, NAS or IP storage networks
  • Datastores for both virtual disk storage and/or ISO/floppy image storage
  • NAS resources

VMs can meet HA compatibility without meeting VMotion compatibility. Specifically, because the VM is being cold-booted rather than hot migrated, there is no need to maintain CPU compatibility between HA cluster peers. When the VM boots on the target ESXi host, it re-probes the CPU for its properties, so the CPUs of the failed and recovery hosts do not need to match.

Preview 00:53

HA is enabled on a new or existing cluster simply by checking the Enable VMware HA check box. Once you complete setting the HA cluster properties, vCenter will connect to each ESXi host in the cluster and reconfigure it to act as a peer in an HA cluster. It only takes a few minutes for this process to complete.
Preview 01:33

Virtual Machine Monitoring

Review and Set HA Failure Policies

ESXi Host Failure vs. ESXi Host Isolation

Permanent Device Loss (PDL)

All Paths Down (APD)

VM Monitoring Sensitivity

Admission Control Policy

VMware HA clusters reserve ESXi host CPU and memory resources to ensure that VMs that fail when an ESXi host fails can be placed and restarted on a surviving ESXi host. Two factors determine how much ESXi host resource is held back:

  1. The number of ESXi host failures the cluster can tolerate
  2. The total number of ESXi hosts in the cluster

In the above examples, three HA clusters are illustrated. In each case the cluster is configured to tolerate a maximum of one ESXi host failure.

In a 2-node cluster, HA must hold back 50% of all ESXi host CPU and memory. That way, it can guarantee that it can place and restart all VMs from a single failed ESXi host. Consequently each host can never be more than 50% busy.

In a 3-node cluster, HA will hold back 33% of all ESXi host CPU and memory. If an ESXi host fails, half of the VMs from the failed host will be placed on each of the surviving ESXi hosts. A healthy 3-node cluster can be up to 66% busy.

In a 4-node cluster, HA only needs to hold back 25% of each host's resources. And in a 5-node cluster, only 20% of each host's resources are kept in reserve.

Admission Control Policy Explained
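The arithmetic behind these examples is simple: with N hosts and a tolerance of F host failures, roughly F/N of the cluster's capacity is held back on each host. A quick sketch (illustrative; it ignores the slot-size refinements covered later):

```python
# Fraction of cluster CPU/memory HA must reserve so that the VMs from F failed
# hosts can always be restarted on the survivors.

def reserved_fraction(total_hosts, failures_tolerated=1):
    return failures_tolerated / total_hosts

for n in (2, 3, 4, 5):
    pct = reserved_fraction(n) * 100
    print(f"{n}-node cluster: {pct:.0f}% held back, up to {100 - pct:.0f}% usable")
```

This is why small HA clusters are expensive: a 2-node cluster idles half its capacity, while each additional host shrinks the reserve.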

Admission Control Settings

Admission Control - Slot Size

ESXi Host Failure Options

HA Cluster Network and Datastore Heartbeating

vCenter is the central management console for HA clusters. vCenter is responsible for creating clusters and for monitoring and coping with host failures and recoveries.

The fundamental assumption for HA clusters is that any host can fail at any time. And, because vCenter could be running as a VM, it is possible that the vCenter VM could fail when a host fails. This is a challenge because, if the cluster depended on vCenter and vCenter has failed, then the cluster could not recover from an ESXi host failure. To circumvent this problem, vCenter publishes the cluster configuration to every ESXi host in the HA cluster. That way, each ESXi host knows:

  • All of its ESXi peers in the HA cluster
  • Which VMs are assigned to which ESXi host
  • Specific cluster properties (like VM restart priority)

In the event of an ESXi host failure that also causes the vCenter VM to fail, the surviving ESXi hosts would cooperate to distribute the failed VMs (including the vCenter VM). Once the VMs were distributed, each host would begin booting its newly assigned VMs according to individual VM restart priority. As a result, the vCenter VM would be placed and booted quickly thereby bringing vCenter back into service.

Best Practice
If your vCenter server is a VM, it is a best practice to give it high restart priority.

HA Network Heartbeat Explained

HA Datastore Heartbeats

Configure Datastore Heartbeat

HA VM Restart Priority

In the Virtual Machine Options page, you are presented with a roster of all of the defined VMs on the cluster. You can click the VM row under the Restart Priority column header to change the restart priority for a VM.

High Priority – AD, DNS, DHCP, DC, vCenter and other critical infrastructure VMs
Medium Priority – Critical application servers like SQL, E-mail, Business applications, file shares, etc.
Low Priority - Test, Development, QA, training, learning, experimental and other non critical workloads
Disabled - Any VMs not required during periods of reduced resource availability (select from Low Priority examples)

HA VM Restart Best Practice

To ensure continued operation of all of your virtual infrastructure, it is important that you assign VM restart priority with care. By default, all VMs are placed and restarted at the HA cluster's default restart priority, set on the HA cluster's main settings page (right click the cluster > Edit Settings > HA > VMware HA). The default HA cluster Restart Priority setting is Medium.

Note: HA will not power off VMs on healthy ESXi hosts to free up resources for High or Medium priority VMs from failed ESXi hosts. Whether doing so would even be reasonable is left to the local administrator's judgment.

Best Practice
Set your cluster default restart priority to low. Then individually set your critical infrastructure VMs (DNS, DHCP, AD, DCs, etc.) to high and your critical business VMs to medium.

Setting HA Cluster HA Restart Priority - Per VM Overrides

Adjust Individual VM Restart Priority

HA Cluster Overview

vSphere HA Cluster Summary

Impact of ESXi Host Network Isolation

An ESXi host can determine that it is isolated if it loses ESXi Console network connectivity. When that happens, the ESXi host checks the link state of the physical NIC that uplinks the ESXi Console virtual NIC with the physical switch. If the physical NIC link is down, the ESXi host knows that it is isolated.

Isolation Response behavior is triggered after 15 seconds (tunable). If a NIC cable was pulled accidentally, 15 seconds should be sufficient time to fix the problem (re-plug the cable) and avoid a cluster failure.

Through heartbeat failure, other ESXi hosts would quickly determine that a peer HA cluster node is unresponsive. They would then check their own ESXi Console physical NIC link to verify that they have network connectivity. In this way, they determine that they are not the isolated host and that they should cooperate with the other surviving HA cluster nodes to implement the cluster's Isolation Response policy.

ESXi Host Management NIC Failure

After 15 seconds, the isolated HA cluster node implements the VM's isolation response policy. If that policy is Power Off, the VM is power-crashed. This is the virtual equivalent of pulling the power plug on a physical machine.

Pulling the virtual power on a VM can be traumatic but avoids a potentially greater problem. If the VM were told to perform an orderly shutdown instead of losing its power, then:

  • The remaining cluster nodes would have no way of monitoring progress
  • The shutdown request could fail at the VM level
  • The VM could hang or lock up during shut down

When the VM has been successfully powered down, the isolated ESXi host removes the exclusive lock it holds on the VM's virtual disk. Healthy ESXi cluster nodes monitor the VM's lock and know that it is safe to take ownership of the VM once its lock has been removed.

VM Powered Off when ESXi Host Running the VM is Network Isolated

HA cluster nodes would then distribute the powered-off VMs from the isolated host among the surviving ESXi hosts. VMs would be placed and powered on according to their Restart Priority setting on the HA cluster.

In this case, the left-most ESXi host assumes ownership of the Web01 VM. The VM is added to the host's VM inventory and then immediately powered on. When the VM is powered on, its new owner establishes its own exclusive lock on Web01's virtual disk.

The isolated node watches for the presence of a lock file for each VM it lost. Once a VM has been successfully powered on by another ESXi host, the isolated host removes the VM from its own VM inventory.

Change VM ESXi Host Ownership - Power VM Back On

Before you can take an ESXi cluster node out of a cluster for maintenance, you must announce to the cluster that the host is being pulled from service. You do this by putting the ESXi host into Maintenance Mode as follows:

Right click ESXi host > Enter Maintenance Mode

When you place an ESXi host into Maintenance Mode, the following takes place:

  • On DRS clusters, the ESXi host will no longer receive new VM power-on requests, nor will it be the target of a VMotion request. The DRS cluster will attempt to VMotion off all VMotion-capable VMs on the host that has entered Maintenance Mode. If you have VMs that are not VMotion compatible, you will need to power them off yourself before you shut down the ESXi host.
  • On HA clusters, the host in Maintenance Mode will not receive VMs from failed ESXi hosts. You have to manually shutdown any VMs running on the ESXi host that is going into Maintenance Mode.

Once the ESXi host is fully evacuated of VMs, you can shut down or reboot the ESXi host (right click the host), patch it, upgrade it, etc. When you boot it back up, it will automatically re-join any clusters of which it was a member.

Maintenance Mode

It is critical that VMware ESXi administrators be informed of major network upgrades and outages... especially if network outages may occur on switches used by High Availability cluster ESXi management ports.

The scenario above is a real possibility and will have serious results. If a switch that provides ESXi HA management networking fails (for any reason), all nodes in the HA cluster will believe they are isolated. If that happens, and the cluster Isolation Response policy is Power Off, then all ESXi hosts will power off all VMs.

To defend against this possibility, you could:

  • Use multiple physical switches
  • Upgrade one switch at a time
  • Have multiple ESXi Management ports on multiple switches
  • Change the Isolation Response policy during switch maintenance windows to 'Leave Powered on'
Isolation Response - Epic VM Fail

vSwitch0 is the vSwitch used to connect the default Management port to the physical (management) LAN segment. If you lose connectivity to this NIC, your ESXi host is unmanageable and your HA cluster will trigger its Isolation Response policy. You have two choices when designing your networks to maximize your management capabilities.

NIC Team vSwitch0
If you NIC Team vSwitch0, then you will have 2 or more NICs connected to the same physical LAN segment, usually through the same physical switch. You are protected from a NIC failure, cable pull or switch port failure, but not from a physical switch failure.

Second Management Port
You should consider adding a second management port on a completely separate physical LAN segment by making a new service console port on vSwitch1, vSwitch2, etc. If these other vSwitches uplink to different physical switches than vSwitch0, then you will have achieved management port redundancy, NIC redundancy and switch redundancy. This provides you with the maximum protection and minimizes the likelihood of an HA failover event caused by a single hardware component failure.

If you add a second management port, please make sure that vCenter can connect to all ESXi hosts on all management ports. It is best if vCenter has 2 NICs with connections to each of the physical switch(es) used for management port connectivity.

Isolation Response - Best Practice

DRS and HA clusters work best together. DRS will dynamically place and load balance VMs while HA will restart VMs that fail when a host fails. Using these tools, an IT department can deliver consistently good VM performance with very little VM down time.

In a combined DRS+HA cluster if a host fails:

  • HA will detect the loss of the host
  • HA will place and power on VMs that failed when the host failed
  • HA is now done
  • DRS will then load balance the remaining hosts in the cluster
  • Once VMs power on, DRS will move them if they can get better resource allocations on different (surviving) hosts
  • When the failed ESXi host boots up, it will be added back into the cluster
  • DRS will then VMotion VMs back onto the recovered host to rebalance the cluster
Combined HA and DRS Clusters

vSphere Clusters - Best Practice

What's New in vSphere 6 for HA Clusters

HA Disabled VM Handling

VMware introduced vLockStep in vSphere 4... vLockStep is a replication technology that replicates the complete state of a VM running on one ESXi host into a VM running on a second ESXi host. In essence, the two VMs form an active/stand-by pair... They are the same in all respects; they have the same configuration (virtual hardware), share the same virtual MAC address, have the same memory contents, CPU state, complete the same I/Os, etc. The main difference is that the Secondary VM is invisible to the network. VMware upgraded its Fault Tolerance technology to use Fast Checkpointing in vSphere 6. Fast Checkpointing provides more scalability than vLockStep.

If the Primary VM were to fail for any reason (e.g.: VMkernel failure on the machine running the Primary VM), Fault Tolerance would continue running the VM – by promoting the Secondary VM to the Primary VM on the surviving host. The new Primary would continue interacting with peers on the network, would complete all pending I/Os, etc. In most cases, peers wouldn't even know that the original Primary has failed.

To protect against a second failure, Fault Tolerance would then create a new Secondary node on another ESXi host by replicating the new Primary onto that host. So, in relatively little time, the VM is again protected and could withstand another VMkernel failure.

Note that Fault Tolerance does not protect against SAN failures.

VMware recommends a minimum of 3 ESXi hosts in an HA/FT configuration so that, if one ESXi host is lost, there are 2 hosts remaining and the FT-protected VM can create a new Secondary copy on the 3rd cluster host.

VM Fault Tolerance

High Availability Clusters Lab

VMware Fault Tolerance
27 Lectures 01:02:00

VMware introduced vLockStep-based FT into vSphere at version 4.0... and, in vSphere 6, replaced it with Fast Checkpointing replication and synchronization technology that:

  • Builds a duplicate VM (Secondary) on a different HA cluster host
  • Quickly and efficiently synchronizes Primary VM to the Secondary VM
  • Makes all I/O operations visible to the Secondary VM
  • Ensures that the Secondary VM is in exactly the same state as the Primary VM at all times
  • Replicates updates to the Primary's .vmdk to the Secondary's .vmdk

VMware's best practice is to build HA/FT clusters using an odd number of servers. This allows FT to protect against a second failure. In the case of a host failure, FT would create a new Secondary node on another ESXi host by replicating the new Primary onto that host. So, in relatively little time, the VM is again protected and could withstand another ESXi host failure.


Fault Tolerance - Use Cases

Fault Tolerance Lab - Part 1

What's New in vSphere 6 for Fault Tolerance

Fault Tolerance HA Cluster and ESXi Host Requirements

Fault Tolerance HA Cluster Compliance Checks

Fault Tolerance Virtual Machine Requirements

Fault Tolerance Protected Virtual Machine Restrictions

Fault Tolerance Networking - Best Practice

Fault Tolerance VMkernel Port Configuration

Enabling Fault Tolerance Protection on a Virtual Machine

Fault Tolerance Enabled - VM Compliance Checks

VM Turn On Fault Tolerance Wizard - Step 1

Select Fault Tolerance ESXi Host - Step 2

VM Virtual Disk (VMDK) Replication Phase

Fault Tolerance Protected Virtual Machine

Fault Tolerance Operations on an FT Protected VM

VMware Fault Tolerance - ESXi Host and Storage Recommendations

Fault Tolerance Network Bandwidth Estimates

Fault Tolerance Virtual Machine Best Practices

Fault Tolerance Lab

Patching and Updating ESXi 6 Hosts with VMware Update Manager (VUM)
35 Lectures 01:19:08

VMware Update Manager for vSphere 6.0 - Space Size Estimator

VMware Update Manager Storage Requirements

Launching the VMware Update Manager Installer

Installing VMware Update Manager

Install and Enable the VMware Update Manager Plugin for vSphere Client

Overview of VMware Update Manager Configuration

VUM Configuration - Network Connectivity

VUM Configuration - Patch Download Settings

VUM Configuration - Patch Download Schedule

VUM Configuration - Virtual Machine Settings

VUM Configuration - ESXi Host Maintenance Mode Settings

VUM - VMware Supplied (Default) Patch Baselines

The Four Types of Patch Baselines

The New Baseline Wizard

Fixed and Dynamic Patch Baseline Options

Fixed Patch Baselines - Manually Selecting Desired Patches

Dynamic Patch Baselines - Selecting Desired Patches by Property

Attaching a Baseline to an ESXi host or Cluster

Scanning ESXi Hosts or Clusters for Patch Compliance

Patch Compliance Scan Complete - Review Results

ESXi Host Patch Details

Scheduling VMware Update Manager Tasks

Using VUM to Patch DRS Clusters

Using VUM to Patch High Availability Clusters

VMware Update Manager In Action...

Monitoring VUM Activity using Tasks and Events

VMware Update Manager Lab

ESXi Host and Virtual Machine Performance Analysis and Tuning
38 Lectures 01:56:12

Virtualization is most effective when a large number of PC server workloads can be consolidated onto a much smaller population of ESXi hosts. The key to doing this effectively is the VMkernel's ability to determine which VMs need service and then to ensure those VMs get the resources they need. Since different VMs can spike on resources at different times, the VMkernel must stay vigilant in its efforts to monitor and allocate resources.

The first thing to realize is that a VM does not need its full allocation of resources all the time. For example, a VM that is only 10% busy (CPU-wise) could, in theory, get by with just 10% of a CPU – as long as it received CPU service exactly when it needed it. In this way, a single CPU resource (e.g.: a CPU core) could service many VMs and give them all the cycles they need – just as long as they didn't all need cycles at exactly the same instant in time.

Secondly, if you know where to look, an operating system will tell you when it doesn't need CPU service. All operating systems include a very low priority Idle task. This task runs when there is absolutely nothing else for the operating system to do (no tasks needing service, no I/Os to complete, etc.). VMware Tools monitors the guest OS Idle task and reports back to the VMkernel whenever the Idle task is running. In this way, the VMkernel knows which VMs are idling and can either reduce an idling VM's scheduling priority or even pull the CPU from idling VMs so that it can swing the CPU over to VMs with real work to do.


Along with memory, CPU is one of the most likely resources to experience contention. So, managing CPU effectively will have a great impact on the overall load an ESXi host can handle.

When an ESXi host boots, the VMkernel scans the hardware for CPU resources including Sockets, Cores and Hyper-Threaded Logical Processors. Of the three, an additional socket provides the highest overall increase in CPU performance. After that, adding cores to a physical CPU package is the next most effective strategy to improve CPU capabilities. Finally, if you are an Intel customer, then using Hyperthreading will modestly boost performance even further.

ESXi abstracts sockets, cores and logical processors into separately schedulable, weighted CPU resources called Hardware Execution Contexts (HECs). Since a socket is the most effective CPU resource, the VMkernel will schedule VMs against all sockets first. If there are more VMs to run, the VMkernel will then schedule remaining VMs against CPU cores across CPU sockets. If there are still more VMs that want service, then the VMkernel will schedule those (lower priority) VMs against Hyperthreaded Logical Processors.

ESXi licenses by the socket, starting with a minimum of one socket per server. So, the best way to maximize CPU performance for your VMs is to purchase competent (large cache, high frequency) multi-core CPUs.


The VMkernel runs a VM task scheduler to assign physical CPU resources to VMs. The VMkernel task scheduler runs 50 times per second and each time it runs it must decide which VMs will run and which VMs will wait for CPU service.

The scheduler first decides which VMs get to run. To do this, it uses a number of factors, including:

  • Is the VM running real work or just its Idle task?
  • Has the VM received its declared reservation yet this second?
  • Has the VM hit its limit?
  • How many CPU shares does the VM hold relative to the total outstanding?
  • Are there any CPU affinity rules that would impact scheduling decisions?
  • How many physical CPU resources does the VMkernel have to give out?
  • Etc.

To run a VM, the scheduler must find physical CPU resources equal to the number of vCPUs in the VM. If it were to provide the VM with fewer than the declared number of vCPUs the Guest OS in the VM would treat this as a CPU failure and would Blue-Screen the OS.
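A toy proportional-share allocator illustrates how a couple of these factors combine. This is a deliberate simplification (reservations, limits and affinity rules are ignored), and the function and its inputs are illustrative, not a VMkernel API:

```python
def allocate_cpu(vms, capacity):
    """Toy proportional-share CPU allocation.

    `vms` maps a VM name to (shares, has_real_work).  Idle VMs get
    nothing; active VMs split `capacity` in proportion to their shares.
    (Illustrative only: reservations, limits and affinity are ignored.)
    """
    active = {name: shares for name, (shares, busy) in vms.items() if busy}
    if not active:
        return {}
    total_shares = sum(active.values())
    return {name: capacity * shares / total_shares
            for name, shares in active.items()}

grants = allocate_cpu(
    {"web": (2000, True), "db": (1000, True), "idle": (1000, False)},
    capacity=3000,  # e.g. 3000 MHz to hand out this scheduling cycle
)
# "web" holds 2/3 of the active shares, "db" 1/3; "idle" gets nothing.
```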


Physical to Virtual CPU Scheduling

The VMkernel abstracts physical CPU resources into independently schedulable processor resources called Hardware Execution Contexts (HEC). Depending on the capabilities of your CPU(s), an HEC can be any of:

  • A full socket (for single core, non-HyperThreaded CPUs)
  • A core (for dual, quad and six core CPUs)
  • A HyperThreaded Logical Processor (for Intel CPUs that support hyperthreading)

An HEC is exactly like the physical CPU (same maker, model, speed, cache, etc.) but is presented to the VM as a single socket/single core CPU resource.

The VMkernel assigns weights to HECs according to their relative processing power. Using a physical single-core CPU as a baseline, each additional CPU socket adds about 85-98% in additional performance. Each core (on a socket) adds 65-85% additional performance, while HyperThreaded Logical Processors might add only 5-30% of a CPU core in performance.

The VMkernel assigns the most powerful HECs to high priority VMs, thereby distributing high priority VMs across sockets. Next, the VMkernel schedules VMs across CPU cores. Finally, low priority VMs get scheduled on cores or HT Logical Processors.

If there are more Virtual CPUs than physical HECs, (as in the slide above) some VMs are forced to wait.
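The weighting idea can be sketched numerically. The constants below are assumptions picked from the middle of the ranges quoted above, not published VMkernel values:

```python
# Illustrative relative weights for Hardware Execution Contexts (HECs).
# These constants are assumptions drawn from the ranges quoted above,
# not published VMkernel numbers.
BASELINE     = 1.00  # one single-core, non-HT socket
EXTRA_SOCKET = 0.90  # each additional socket: ~85-98% of baseline
EXTRA_CORE   = 0.75  # each additional core per socket: ~65-85%
HT_LOGICAL   = 0.15  # each HyperThreaded logical processor: ~5-30%

def estimated_capacity(sockets, cores_per_socket, hyperthreading=False):
    """Rough host CPU capacity in 'single-core socket' units."""
    cap = BASELINE + (sockets - 1) * EXTRA_SOCKET
    cap += sockets * (cores_per_socket - 1) * EXTRA_CORE
    if hyperthreading:
        cap += sockets * cores_per_socket * HT_LOGICAL
    return cap

# A 2-socket quad-core host without HT lands at roughly 6.4 units,
# well short of the 8.0 a naive core count would suggest.
```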

Physical to Virtual CPU Scheduling

Concurrent vs. Sequential Tasks
Concurrent applications are applications that can process more than one request at the same time. Such applications are either multitasking or multi-threaded. Examples of concurrent applications include modern web servers (service multiple web requests concurrently), mail servers (service inbound and outbound mail requests concurrently), database servers (service queries concurrently), etc. Under heavy load, concurrent applications benefit from additional vCPUs because different tasks/threads can execute simultaneously on different vCPUs.

Sequential applications are applications that service one request at a time or do one thing at a time. These tasks run as a single process with only one thread of execution. (Legacy applications are often designed in this manner.) CPU-bound sequential applications receive no additional benefit from adding vCPUs because the application can only use one CPU at a time. Adding vCPUs to a sequential application wastes the CPU resource because the guest OS has no choice but to run its Idle task with the additional CPU resource(s). Sequential applications typically execute best on high-frequency CPUs that also contain larger caches – because they will execute faster than if they were serviced by lower-frequency CPUs with smaller caches.

If you have a CPU bound VM, add a second vCPU. If application performance doesn't improve, then you have a sequential application. In this case, remove the additional vCPU from the VM because the VM will waste these extra cycles.

Sequential vs. Concurrent Tasks

Since VMs don't need 100% CPU service all the time, the VMkernel can effectively run two or more VMs on the same physical CPU resource and give these VMs all of the CPU service they need. The trick is to detect when a VM is idling and immediately steal away the CPU from that VM and give it to another VM that has real tasks to run.

Since an idling VM would accomplish nothing, and the same VM waiting on a run queue for its turn to run would also accomplish nothing, there is no harm in forcing an idling VM to give up its CPU resource.

For light-duty workloads that may only need 3-5% of a CPU (e.g.: DNS, DHCP, Active Directory Domain Controllers, light Web, File/Print, etc.), it makes sense to consolidate these VMs onto ESXi at very high ratios (up to 8 vCPUs per physical CPU resource) because, at 5% utilization, 8 VMs would only need 40% of a CPU core.

Busier VMs may still perform fine with no more than 25-35% of a CPU when they spike, and less otherwise. So, on average, if your VMs are 20% busy, a 2-socket quad-core server could, in theory, run up to 40 VMs before experiencing CPU starvation.
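The arithmetic behind both examples can be checked in a few lines. This is a back-of-envelope estimate (the function name is ours), not a sizing tool:

```python
def max_vms(cores, avg_vm_cpu_fraction):
    """Rough count of VMs a host can run before its cores saturate.
    A back-of-envelope estimate only: it ignores spikes, overhead
    and scheduling latency."""
    return round(cores / avg_vm_cpu_fraction)

# Eight 5%-busy VMs need only 40% of a single core between them:
light_vm_load = 8 * 0.05

# A 2-socket quad-core host (8 cores) with VMs averaging 20% busy:
vm_count = max_vms(cores=8, avg_vm_cpu_fraction=0.20)
```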

Safe Physical CPU Over Commit

ESXi loads into memory at boot time. ESXi 6.0 needs a minimum of 4GB of memory to boot and run (this is checked for at boot time). Note – the presence of any device (e.g.: shared memory video) that reduces available memory below 4GB will prevent an ESXi host with only 4GB of RAM from booting.

The VMkernel takes approximately 5% of RAM and leaves the rest for VM use. You can see current system-wide RAM statistics as follows:

Web Client > Your ESXi Host > Manage tab > Settings tab > Memory

VMs compete with each other for use of the remaining physical memory. If there is more RAM in the ESXi host than VMs need, then VMs will get all of the physical RAM they attempt to use (up to their declared maximum). If VMs attempt to use more physical RAM than the ESXi host has available, then the VMkernel must step in and use its memory management skills to minimize memory contention. The VMkernel has five techniques for accomplishing this task.

Note: ESXi 6.0 is rated for a maximum of 6TB of physical host memory. However, VMware has been working with hardware OEMs to double physical memory support to 12TB. Please see VMware's hardware compatibility portal to find out which machines can handle more than 6TB of physical memory.

Physical ESXi Host Memory

VMware encourages its customers to run a mix of VMs whose memory footprint (declared memory needs) exceeds physical RAM by between 20-40%. This means that an ESXi host with 16GB of RAM could easily and efficiently run a mix of VMs that collectively declare 20+GB of memory (25% over commit) with no reduction of memory performance and no sign of memory stress.
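The 25% figure follows directly from the numbers quoted. A quick check (the function is ours, purely for illustration):

```python
def overcommit_pct(declared_gb, physical_gb):
    """Memory over-commit expressed as a percentage of physical RAM."""
    return (declared_gb - physical_gb) / physical_gb * 100

# The 16 GB host above, running VMs that collectively declare 20 GB:
pct = overcommit_pct(declared_gb=20, physical_gb=16)
```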

VM Memory Over Commit

The VMkernel uses paging to map physical RAM into the virtual address space of a VM. To meet a VM's memory requirements, the VM can use any of:

  • Physical memory
  • Other VMs' memory (Transparent Page Sharing)
  • Unallocated memory (reserved but unused Swap space)
  • Disk space (used Swap space)

Through demand paging, a VM is tricked into thinking that it has received a full physical allotment of RAM. It also thinks that its RAM is present and contiguous, starting at physical address zero.

Physical RAM to VM Memory Allocation

Transparent Page Sharing
A memory management trick built into ESXi that is not available in VMware Server, VMware Workstation or other hosted virtualization solutions. Transparent Page Sharing works as follows:

  • The VMkernel uses spare CPU cycles to find duplicate memory pages across VMs
  • The VMkernel will only scan VMs of the same declared OS
  • If a duplicate page is found, it is mapped to a single common copy
  • Common pages are marked Read-Only
  • If a VM tries to change a mapped page, it gets a local, private copy

Transparent Page Sharing is highly effective:

  • It can yield up to a 20% memory savings at little cost
  • It works best if VMs have the same OS, applications, DLLs and patch level
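The mechanism can be sketched with content hashing, which is essentially how the VMkernel finds candidate pages. This is a toy model: real TPS works on 4 KB pages, verifies hash matches bit-for-bit before sharing, and remaps shared pages copy-on-write:

```python
import hashlib

def share_pages(vm_pages):
    """Toy Transparent Page Sharing: keep one read-only copy per
    unique page content.  Returns (unique_pages, pages_saved).

    `vm_pages` maps a VM name to a list of page contents (bytes).
    A toy model of the scan-and-remap behavior described above.
    """
    store, total = {}, 0
    for pages in vm_pages.values():
        for page in pages:
            total += 1
            # identical content hashes to the same key -> one shared copy
            store.setdefault(hashlib.sha256(page).hexdigest(), page)
    return len(store), total - len(store)

unique, saved = share_pages({
    "vm1": [b"\x00" * 4096, b"kernel-code"],
    "vm2": [b"\x00" * 4096, b"kernel-code", b"private-data"],
})
# Both VMs' zero pages and duplicate kernel pages collapse into
# shared copies: 3 unique pages backing 5 virtual pages.
```

A write to a shared page breaks the sharing and the writing VM gets a private copy, which is the copy-on-write step in the last bullet above.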
Memory Management - Transparent Page Sharing (TPS)

Memory Management - Transparent Page Sharing Security Concerns

Enabling Inter-Virtual Machine Transparent Page Sharing

If a VM spikes on memory usage (to meet a VM application's request for memory), the VMkernel maps more physical RAM into the VM's virtual memory space. Suppose, later, the application were to finish and give back (to the Guest OS) the RAM it no longer needed. This could leave the Guest OS with an over allocation of RAM. If this situation were left alone, eventually all VMs would have a 100% RAM allocation (when they first spike on RAM) and the VMkernel would become memory starved. To prevent this situation from happening, the VMkernel needs some way to reach into a running VM and take back any memory over allocation. The name of the strategy that performs this function is – Ballooning.

Ballooning is a memory management technique used by the VMkernel whenever memory becomes tight. The VMkernel receives reports (via VMware Tools) of any guest OS memory over-allocation. When the VMkernel determines that memory is becoming scarce, it will use Ballooning to take back RAM from running VMs. It may take back RAM and immediately hand it over to memory-starved VMs, or it may simply 'bank' the RAM for future use.

Memory Management - VM Memory Ballooning

If some VMs are memory starved while other VMs are over allocated with RAM, VMkernel will balloon away excess RAM from some VMs and re-assign it to the VM experiencing memory contention. It does this by:

  • Tracking excess RAM inside VMs as reported by VMware tools
  • Issuing a command to VMware tools to acquire free RAM from the Guest OS
  • Tools asks the OS for a physical RAM allocation
  • OS responds by giving Tools some/all of the excess RAM the VM possesses
  • Tools hands the memory to the VMkernel
  • The VMkernel gives the RAM to VMware Tools in the memory starved VM
  • Tools in the target VM gives the RAM to the OS
  • The OS uses the additional RAM to page-in needed pages from disk
  • Since the target VM has more RAM, its programs can run more efficiently which improves performance in the VM

Ballooning is an ongoing process of taking memory from over-provisioned VMs and giving it to resource-starved VMs. As VMs' memory demands change, they can easily transition from a net beneficiary to a net supplier of RAM to other VMs.

This mechanism is ongoing and is completely transparent to the Guest OS.
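The reclaim decision can be sketched as follows. The 65% ceiling and the reservation floor come from the constraints described in this section; the function itself and its signature are illustrative:

```python
def balloon_reclaim(total_mb, free_mb, reservation_mb, request_mb):
    """Sketch of how much RAM ballooning may take from one VM.

    Honors two constraints from the text: the balloon never takes
    more than 65% of the VM's memory, and never pushes the VM
    below its memory reservation.  (Illustrative only.)
    """
    ceiling = int(total_mb * 0.65)               # balloon driver maximum
    above_reservation = total_mb - reservation_mb
    reclaimable = max(0, min(free_mb, ceiling, above_reservation))
    return min(request_mb, reclaimable)

# A 4096 MB VM with 3000 MB free and a 1024 MB reservation can donate
# at most 2662 MB (the 65% ceiling), so a 2000 MB request is granted
# in full while a 2800 MB request is capped.
```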

Virtual Machine Memory Ballooning Explained

ESXi 6.0 uses memory compression as an alternative to paging memory to disk. Under extreme memory stress, the ESXi host will page VMs to free up RAM. But paging to disk is very slow and negatively impacts performance. Additionally, ESXi hosts are generally over provisioned with CPU thanks to newer 4, 6, 8 and 12 core CPUs. So, VMware engineers decided to use RAM compression to reduce physical paging to disk.

Memory compression uses up to 10% of a VM's memory as a compressed page cache. This means that memory (normally used for VM memory) is re-purposed to function as a page cache. This cache is dynamic and exists only when the VM must page.

When paging is required, memory pages are stolen from the VM and used for the page cache. These stolen pages are then compressed (using 2:1 compression), so that two compressed pages take up the same space as one normal page. In this way, pages that would normally have to go to disk are stored in RAM.

Under extreme memory stress, pages in the compression cache may be forced to disk. If that happens, they are removed from the compression cache, uncompressed and written to disk. The VMkernel tries to select the best candidates to page to avoid thrashing.

In the example above, 4 pages are stolen from our VM. At 2:1 compression, all 4 pages fit in 2 pages of the compression cache, leaving room for 4 more compressed pages. The same result is achieved as paging to disk but without the need for disk I/O and in 1/10th the time.
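The space math from that example is simple ceiling division (the helper below is ours, for illustration):

```python
def cache_pages_used(evicted_pages, ratio=2):
    """Compressed-cache slots consumed at the 2:1 ratio described
    above (ceiling division handles odd page counts)."""
    return -(-evicted_pages // ratio)

# The slide's example: 4 stolen pages compress into 2 cache pages.
used = cache_pages_used(4)
```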

Memory Management - VM Memory Compression

The VMkernel will never steal more than 65% of a VM's memory through Ballooning. Also, Ballooning can never force a VM to live with less memory than any memory reservation the VM holds.

The final memory management tool used by the VMkernel is VMkernel Swap – the paging to disk of VM memory pages by the VMkernel. This is a memory management technique of last resort and is always an indicator of extreme memory stress.

It is a last-resort technique because the VMkernel:

  • Doesn't know what the VM is doing, so it...
  • Has no idea which pages are important and which pages are not at the Guest OS level
  • Cannot make good guesses at which pages will hurt the VM the least if they are paged out

VMkernel paging will never force a VM to give up a memory reservation.

Memory Management - Virtual Machine VMkernel Swapping

VM Ballooning vs. VMkernel Swap

Running large VMs across multiple NUMA nodes (to meet the VM's vCPU core and/or memory needs) can produce significant performance issues. For best results, try to size VMs so their vCPU core and vRAM needs can be met entirely within the resources available on a single NUMA node in your PC server.

For example, if your PC server has 2 NUMA nodes, each with one pCPU of 10 cores and 64GB of local memory, your VMs should (if possible) have no more than 10 vCPU cores and declare no more than 64GB of RAM. Declaring VMs larger than this forces them to span multiple NUMA nodes, where cross-node memory references may incur significant delays.
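A quick sizing check for that example host. The defaults mirror the 10-core / 64 GB node above, and the function is purely illustrative:

```python
def fits_one_numa_node(vcpus, ram_gb, node_cores=10, node_ram_gb=64):
    """True if the VM can be scheduled entirely within one NUMA node.
    Defaults mirror the example host above (illustrative only)."""
    return vcpus <= node_cores and ram_gb <= node_ram_gb

stays_local = fits_one_numa_node(8, 48)    # fits inside one node
spans_nodes = not fits_one_numa_node(12, 48)  # 12 vCPUs must span both nodes
```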

For more information, please see the VMware Performance Best Practices for vSphere 6.0 guide.

Virtual Machine Sizing and Non-Uniform Memory Management (NUMA)

If many VMs reside in the same datastore, then excessive disk traffic could result in bandwidth contention to that LUN. This can be resolved through per-VM LUN shares.

Normally a VM holds 1,000 shares against each LUN used by its virtual disk(s). If the storage path is idle, then VM disk I/O requests are handled on a first come, first served basis. But, if the disk path is over committed, then disk I/O requests will queue up as the storage sub-system struggles to keep up with demand.

If a LUN is backed up with storage requests (i.e.: requests wait more than 30ms for service), it is possible that a single VM (or a small number of VMs) could be responsible for most of the disk I/O traffic. If the storage controller handled requests on a first-come, first-served basis, then VMs performing a small amount of disk I/O could find their requests at the back of a very long disk I/O queue – and their performance would suffer.

To get around this problem, once disk I/O requests exceed 30ms of wait time, the storage controller handles disk I/O requests in proportion to the number of LUN shares held by a VM. In the above example, if each VM held 1,000 LUN shares then VM C would get 1/3 of all I/O bandwidth to the LUN even though it is generating relatively little disk traffic. This lets it jump to the head of the queue and receive consistently good disk I/O service.
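Once the 30ms threshold trips, entitlement is simply share-proportional, as a sketch shows (illustrative; the real Storage I/O Control scheduler is more sophisticated):

```python
def lun_entitlement(shares):
    """Fraction of LUN bandwidth each VM is entitled to, in
    proportion to its disk shares.  A sketch of the behavior
    described above, not the actual SIOC scheduler."""
    total = sum(shares.values())
    return {vm: s / total for vm, s in shares.items()}

split = lun_entitlement({"A": 1000, "B": 1000, "C": 1000})
# With equal shares each VM is entitled to 1/3 of the bandwidth, so
# lightly loaded VM C no longer waits behind A's and B's long queue.
```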

Storage Input / Output Control

By default Storage I/O Control is turned off. This means that disk I/O scheduling is always done First-Come, First-Served (FCFS). The problem with FCFS is that one VM doing a lot of disk I/Os can induce queuing at the physical disk controller. Other VM's disk I/Os go to the back of the queue and may have to wait a long time (maybe multiple seconds) before they are serviced.

To prevent this from happening, you can enable Storage I/O Control on the LUN. When you do this, the disk scheduler uses FCFS scheduling for all I/Os provided no I/O has sat in the I/O queue for more than 30ms. Once any I/O has waited 30+ms for service, the disk scheduler changes to priority scheduling based on VM disk shares (change in VM > Edit Settings > Resources).

VMware does not recommend you adjust Storage I/O Control wait times (normally 30ms) unless you have a good reason to (normally that means you were advised to make a change by VMware support).

Datastore Storage I/O Control

ESXi 5.x and 6.x can recognize and use Solid State Drives (SSDs) in a variety of ways to speed up your ESXi hosts. Options include:

Use SSDs for fast, local storage
You can generally connect SSDs to server Serial Attach SCSI (SAS) RAID controllers to provide fast local storage. SSDs are subject to failure so you should consider using SSDs in a redundant (i.e.: RAID-1, RAID-5, RAID-6, RAID-10, RAID-50, RAID-60) configuration.

Use SSDs as an ESXi Host Cache
Host Caches are local read cache volumes that are used to hold frequently read data in storage local to the ESXi host. This option is useful in an environment where VMs read and re-read the same data over and over. This happens in a VMware View or vCloud environment where Linked-Clone VMs read the same data from the same base/replica VM virtual disk. Booting a large number of VMs anchored to the same replica can cause I/O storms that can cripple a SAN. By using a Host Cache, re-read data is fetched from the local cache, significantly reducing the I/O load on the SAN.

VMkernel Swapfiles
Swapfiles are paging files used by the VMkernel whenever host memory is constrained. Paging to SSDs is much faster than paging to spinning disks.

ESXi Hosts and Solid State Drives (SSDs)

VMware offers Overview and Advanced performance charts. Overview charts are performance charts of the most popular CPU, Memory, Network and Disk metrics. Simply click the Performance tab to see the Overview performance charts for the selected inventory item.

Overview Performance Charts

The vSphere Client has a very competent charting system. You can rapidly select from monitoring major sub-systems (CPU, Memory, Network, Disk, System) or drill down to very detailed resource specifics (e.g.: Memory Ballooning, CPU Ready time, etc.)

To help the visually impaired, you can click on any row in the Performance Chart Legend and the data plot associated with that row will bold (a stroke of genius!).

And, you can use the icons in the upper right hand corner to Reset, Tear Off, Print or save the chart in MS Excel format.

Advanced Performance Charts

Advanced Performance Charts - Options

Performance problems are the result of one of two situations:

  • Over committing the box with more VMs than it can handle or
  • Over-tuning the VMs, which prevents them from getting the resources they need

Over committing the ESXi host forces it to run a mix of VMs whose resource demands exceed the available resources on the box. The ESXi host has no choice but to force some VMs to wait.

Over-tuning the box is when an inexperienced (but well-intentioned) administrator goes overboard with CPU and Memory Reservations, Shares and Limits to the point where the VMkernel CPU scheduler and memory manager can no longer move resources around freely because they must honor resource settings.

A perfect example of over-tuning is assigning a large reservation to a VM that doesn't need it. The VMkernel has no choice but to allocate the reservation, possibly starving other VMs of needed resources while the over-tuned VM wastes the allocation.
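The cost of an unneeded reservation is easy to see in admission-control terms: every powered-on VM's reservation is subtracted from the host's unreserved capacity whether the VM uses it or not. Here is a minimal Python sketch of that bookkeeping, using hypothetical figures:

```python
def unreserved_capacity_mhz(host_mhz: float, reservations_mhz: list) -> float:
    """CPU capacity left unreserved after honoring each powered-on VM's
    reservation (hypothetical figures; a simplified admission-control view)."""
    return host_mhz - sum(reservations_mhz)

# A 20,000 MHz host where one over-tuned VM reserves 12,000 MHz leaves only
# 8,000 MHz of unreserved capacity for admitting and servicing everyone else.
print(unreserved_capacity_mhz(20000, [12000]))  # 8000
```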

Performance Problems

CPU Ready time is the best metric to track when looking for signs of CPU stress. CPU Ready time is time that a VM's VCPU spends waiting for CPU service from the VMkernel. In other words, it is Ready to run, but not able to run because it isn't receiving the CPU resources it needs.

The ideal amount of CPU Ready time for a VM is 0 ms. Anything more indicates that the VM is being forced to wait in a run queue for some number of milliseconds – every second. This is time spent in line waiting for CPU service, rather than running. Large amounts of CPU Ready time (hundreds of milliseconds or more per second) are indicative of excessive CPU stress that is directly forcing this VM (and its users) to wait.
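To judge how much Ready time is "a lot", it helps to normalize the raw summation value against the sampling interval – vCenter's realtime charts sample every 20 seconds. A small Python helper, assuming that 20-second interval:

```python
def cpu_ready_percent(ready_ms: float, interval_s: float = 20.0) -> float:
    """Convert a CPU Ready summation value (in ms) into the percentage of
    the sampling interval the VCPU spent waiting in the run queue."""
    return ready_ms / (interval_s * 1000.0) * 100.0

# 2,000 ms of Ready time in a 20 s realtime sample means the VCPU spent
# 10% of its time waiting for CPU service instead of running.
print(cpu_ready_percent(2000.0))  # 10.0
```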

Tracking ESXi CPU Ready Time

Operating system (e.g.: Windows, Linux, etc.) performance monitoring tools were programmed under the assumption that the operating system is the exclusive owner of all hardware resources – an assumption that is no longer valid with virtualization.

For example, Windows tracks CPU Busy time in Task Manager by subtracting time spent in Windows' Idle task from total available time. The result is expressed as a percent and is assumed to be the amount of time Windows spent servicing tasks.

With virtualization, it is possible that the VM receives no CPU time at all (due to CPU over-commit). In that case, the Windows Idle task would clock no time and Windows would incorrectly report itself as 100% busy! This is especially troubling for the guest OS administrator because not only is Windows reporting 100% CPU busy, but CPU performance may actually be poor (because the VMkernel is forcing the VM to wait).
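The illusion follows directly from the subtraction Task Manager performs: whatever time the Idle task did not clock is assumed to be useful work. A tiny sketch with hypothetical numbers:

```python
def task_manager_busy_pct(idle_ms: float, interval_ms: float) -> float:
    """Busy % the way Task Manager derives it: total interval time minus
    Idle-task time, expressed as a percentage."""
    return (1.0 - idle_ms / interval_ms) * 100.0

# A half-idle interval reads 50% busy...
print(task_manager_busy_pct(500.0, 1000.0))  # 50.0
# ...but a VM descheduled by the VMkernel for the whole interval clocks
# zero Idle time and reads 100% busy, even though nothing actually ran.
print(task_manager_busy_pct(0.0, 1000.0))  # 100.0
```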

When diagnosing performance issues, it is usually best to use the vSphere Client performance analysis tools rather than relying on Guest OS performance analysis tools.

High CPU Ready Time - Experience Inside a Virtual Machine

Resolving CPU Over Commit

Memory stress shows up first as Memory Ballooning at the VM level. If memory becomes even more scarce, it will show up as VMkernel paging. Both metrics can be monitored in the vSphere Client.

Like CPU, it is best not to put too much stock in guest OS memory performance counters. While they should be reasonably accurate, they do not tell the whole story.
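The escalation from ballooning to VMkernel paging suggests a simple triage order when reading the vSphere Client charts. A sketch of that logic – the thresholds and wording are illustrative, not VMware guidance:

```python
def memory_stress_level(balloon_mb: float, swapped_mb: float) -> str:
    """Rough triage of the two counters discussed above. Any VMkernel
    paging is worse news than ballooning alone."""
    if swapped_mb > 0:
        return "critical: VMkernel is paging VM memory to disk"
    if balloon_mb > 0:
        return "warning: balloon driver is reclaiming guest memory"
    return "ok: no visible memory stress"

print(memory_stress_level(balloon_mb=256, swapped_mb=0))
```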

Monitoring Memory Stress

A sign that a VM may be undersized (its declared memory size is too small) or experiencing VMkernel paging is that Windows Task Manager reports an excessive number of Page Faults or dramatic, ongoing increases in Page Fault Delta values.

Page faults are standard procedure for all operating systems. When an OS needs a page of a file or executable, it first checks whether that page is already resident in RAM and, if so, maps it in without going to disk. This is known as a soft page fault – and it is fast.

Most OS page fault reporting refers to hard page faults where the OS must go to disk to get the requested file (data or executable).

When reviewing Page Fault values, don't look at the total number of faults. Rather, look for large, dramatic increases in the number of page faults across tasks, as displayed in the PF Delta column. If the numbers in the PF Delta column are changing dramatically (up or down), chances are good that the VM is memory starved.
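The "watch the delta, not the total" advice amounts to differencing successive cumulative samples, which is exactly what the PF Delta column does. A minimal sketch with made-up sample values:

```python
def page_fault_deltas(cumulative_faults: list) -> list:
    """Per-interval page-fault deltas (what Task Manager's PF Delta column
    shows) from a series of cumulative fault counts for one process."""
    return [b - a for a, b in zip(cumulative_faults, cumulative_faults[1:])]

# A modest steady rate followed by a sudden spike is the warning sign,
# regardless of how large the cumulative total already is.
print(page_fault_deltas([100_000, 100_050, 100_120, 180_000]))
# [50, 70, 79880]
```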

Monitoring Page Faults Using Windows Task Manager

You can easily review host or datacenter overall CPU and memory performance by clicking the Resource Pool/host/datacenter and then clicking the Virtual Machines tab. Pay attention to the Host CPU, Host Memory – MB and Guest Memory % columns. You can click on any of these column headers to sort by that value. You should also review the Resource Allocations tab to see how the VMkernel is handing out resources to VMs and Resource Pools on the host or cluster. The Resource Allocations tab is also useful because it shows what percentage of shares a VM holds relative to its peers (and consequently what claim it has on a scarce resource).
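The share percentage reported on the Resource Allocations tab is simply each VM's shares divided by the total shares of its peers at the same level. A hypothetical example in Python:

```python
def share_claim_pct(shares: dict) -> dict:
    """Each VM's percentage claim on a contended resource, given the CPU
    or memory Shares values of all peers at the same level."""
    total = sum(shares.values())
    return {name: s / total * 100.0 for name, s in shares.items()}

# With 2000/1000/1000 shares, vm1 can claim half of a scarce resource
# under contention while vm2 and vm3 split the remainder.
print(share_claim_pct({"vm1": 2000, "vm2": 1000, "vm3": 1000}))
# {'vm1': 50.0, 'vm2': 25.0, 'vm3': 25.0}
```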

Monitoring VM CPU and Memory Consumption

CPU Best Practices

Memory Best Practices

Performance Analysis Tools

Performance Lab

About the Instructor
Larry Karnis
4.3 Average rating
933 Reviews
3,467 Students
6 Courses
VMware vSphere Consultant/Mentor, VCP vSphere 2, 3, 4 and 5

Get VMware vSphere and View trained here... on Udemy!

What do you do if you need to learn VMware but can't afford the $4,000 - $6,000 charged for authorized training? Now you can enroll in my equivalent VMware training here on Udemy!

I have created six courses that together offer over 32 hours of VMware vSphere 6 lectures (about 8 days of instructor-led training at 4 hours of lecture per day). With Udemy, I can provide more insight and detail, without the time constraints that a normal instructor-led training class would impose. My goal is to give you a similar or better training experience – at about 10% of the cost of classroom training.

I am an IT consultant / trainer with over 25 years of experience. I worked for 10 years as a UNIX programmer and administrator before moving to Linux in 1995. I've been working with VMware products since 2001 and now focus exclusively on VMware. I earned my first VMware Certified Professional (VCP) designation on ESX 2.0 in 2004 (VCP #: 993). I have also earned VCP in ESX 3, and in vSphere 4 and 5.

I have been providing VMware consulting and training for more than 10 years. I have led hundreds of classes and taught thousands of people how to use VMware. I teach both introductory and advanced VMware classes.

I even worked for VMware as a VMware Certified Instructor (VCI) for almost five years. After leaving VMware, I decided to launch my own training business focused on VMware virtualization. Prior to working for VMware, I worked as a contract consultant and trainer for RedHat, Global Knowledge and Learning Tree.

I hold a Bachelor of Science in Computer Science and Math from the University of Toronto. I also hold numerous industry certifications including VMware Certified Professional on VMware Infrastructure 2 & 3 and vSphere 4 & 5 (ret.), VMware Certified Instructor (ret.), RedHat Certified Engineer (RHCE), RedHat Certified Instructor (RHCI) and RedHat Certified Examiner (RHCX) as well as certifications from LPI, HP, SCO and others.

I hope to see you in one of my Udemy VMware classes... If you have questions, please contact me directly.



Larry Karnis