VMware vSphere 6.0 Part 3 - Storage, Resources, VM Migration

Learn shared storage, how to create and expand VMFS volumes, how to use Resource Pools, and how to perform cold, VMotion and Storage VMotion migrations
4.7 (36 ratings)
771 students enrolled
  • Lectures 150
  • Length 6 hours
  • Skill Level Intermediate Level
  • Languages English
  • Includes Lifetime access
    30 day money back guarantee!
    Available on iOS and Android
    Certificate of Completion


About This Course

Published 1/2016 English

Course Description

VMware vSphere 6.0 is the platform businesses depend on to deploy, manage and run their virtualized Windows and Linux workloads.

In this course you will learn how to connect to and use shared SAN Storage including Fibre and iSCSI storage, to Format and Expand VMFS volumes, how to Resource Tune Virtual Machines, how to create and tune Resource Pools, and how to perform cold, VMotion and Storage VMotion migrations.

Learn Storage, VMFS, Resource Management and VM Migrations

This course covers four major topics that all vSphere 6 vCenter administrators must know:

  • First, we look at shared Fibre and iSCSI storage. We start with an overview of features and capabilities of suitable SANs. We move on to discuss storage network design for redundancy and performance, SAN security with Zoning and Authentication, iSCSI hardware and software initiators, how to scan and review SAN LUNs and various Storage views. We complete the section with a presentation of NFS v4.1 file shares (new to vSphere 6).
  • Next we look at VMware's cluster file system - VMFS. We learn how to create VMFS partitions and review their properties. We look at basic capacity management (and what can go wrong) and offer two strategies for managing VMFS capacity. We go on to talk about Path Selection Policies (PSPs) and explain the trade-offs of VMware's three provided PSPs. We show you how to select the optimal PSP based on your storage network design and your SAN's capabilities.
  • We move on to discuss Resource Management and Resource Pools. Effectively managing CPU and memory is critical to host and VM scalability. VMware provides three tunables – Reservations, Shares and Limits – to help you delegate resources as you require. We show you how to assess resource needs and how to create and use Resource Pools. Finally, we explain Expandable Reservations and show you how to use them safely.
  • Finally, we look at VM migrations. VMware supports cold (powered off) migrations to a new ESXi host or datastore. Next we look at VMotion hot VM migration - where we hot move a running VM to a second ESXi host. We look at CPU compatibility requirements and show you how to tell if your physical host CPUs are VMotion compatible. Finally, you will learn how to perform Storage VMotion (hot move a VM from one datastore to another) and the Use Cases for Storage VMotion.

The skills you will acquire in this course will help make you a more effective vSphere 6 administrator.

What are the requirements?

  • For this course to be of benefit, you must know how to install and configure ESXi 6, how to install vCenter Server and how to create Virtual Machines
  • One way to acquire these skills is to take our VMware vSphere 6.0 Part 1 AND Part 2 class on Udemy

What am I going to get from this course?

  • Describe the shared storage options supported by vSphere
  • Describe the feature sets found in entry level and mid-tier SAN appliances
  • Create Fibre and iSCSI storage network designs for high performance and high service availability
  • Configure an ESXi host to connect to iSCSI storage
  • Explain the features and benefits of VMFS file systems
  • Format, update and expand VMFS datastores
  • Connect to and use NFS 4.1 file shares
  • Understand and delegate resources using Resource Pools
  • Understand and use vCPU and vRAM Reservation, Shares and Limits
  • Use resource parameters on Virtual Machines and Resource Pools
  • Perform cold VM migrations to new ESXi hosts and/or VMFS datastores
  • Perform an ESXi host physical CPU compatibility test for safe VMotion
  • Perform hot VMotion Virtual Machine migrations between ESXi hosts
  • Perform hot Storage VMotion Virtual Machine migrations between datastores
  • Understand VM, ESXi host and storage requirements for VMotion

Who is the target audience?

  • This course is intended for vCenter administrators who want to improve their understanding of virtual hardware, consistently and rapidly deploy virtual machines, effectively manage VM and ESXi host resources, and connect to shared storage.



Section 1: Introduction
VMware vSphere 6
Course Goals and Objectives
Course Goals and Objectives (continued)
New Skills
Presented by Larry Karnis
Should You Take This Course?
Let's Get Started!
Section 2: Shared Storage
Fibre, iSCSI SAN and NFS v4.1 NAS Shared Storage

SANs or Storage Area Networks are specialized shared storage devices that usually include specialized shared storage networks. The idea behind a SAN is to centralize the provisioning, access, management and backup of storage resources. Furthermore, SANs simplify storage tasks that can be difficult or impossible with local, fixed storage resources (e.g.: local SCSI/SAS RAID cards and disks), such as:

Redundant Pathways. SANs can be designed for multiple storage pathways. By using two or more pathways through a storage network to a storage device, you gain the benefits of performance (I/Os can complete on either path) and redundancy (if one path fails, I/Os can be retried on a surviving path).

Improved Performance. SANs usually contain multiple interface processors (Storage Processors), and include powerful on-board CPUs and memory to reduce I/O latency and minimize RAID overhead.

Capacity Management. SANs usually allow administrators to grow SAN volumes dynamically (by allocating unused physical storage to an existing RAID set). In this way, storage administrators can expand full volumes without having to provision more storage or copy data from one SAN volume to the next.

Snapshotting. Most SANs support volume snapshotting. A volume snapshot is a moment-in-time copy of a SAN volume that can be backed up to near-line or off-line storage.

Shadowing. High-end SANs support volume shadowing. This feature lets SAN administrators replicate I/Os on a local volume over to a volume on a remote SAN. This helps with disaster recovery in that the remote SAN always has an up-to-the-minute copy of critical production SAN volumes.

Project Plan

SANs solve many problems associated with local PC server storage including:

Capacity. Physical servers have limits on the number and size of physical disks that can be connected to the server.

Over provisioning. Often, PC servers are over-provisioned with storage at deployment time because it can be so difficult to expand local RAID volumes later on. Usually, this storage goes unused, resulting in a huge waste of excess capacity. This problem is further exacerbated by the fact that local RAID cards often do not let you easily expand RAID arrays onto new storage volumes.

Cost. Purchasing high end RAID cards and enterprise class disks for each PC server can add up quickly to a sizable investment in storage. Often trade offs taken to reduce costs result in lower performance or redundancy than is required or desired.

Back Ups. Local disk image backups are a good defense against catastrophic data loss. But it can be challenging to perform image backups of PC server RAID sets. SANs usually provide LUN snapshot capabilities so that image backups of SAN LUNs can be performed at any time.

Shadowing. SAN shadowing is the replication of updates to a LUN on a production SAN to a corresponding LUN on a Disaster Recovery SAN. By replicating all I/Os on the DR SAN you are protected from data loss due to a facility or server failure. Local RAID cards normally do not provide volume shadowing capability.

Entry Level SAN Features
High End SAN Features

Fibre SAN deployments include a shared Storage Appliance (the SAN), a Storage Network and Fibre Host Bus Adapters (HBAs). Most components in a Fibre SAN can be duplicated (HBAs, SAN Switches, SAN Storage Processors). Duplication provides two benefits:

Redundancy. If one component fails, the ESXi host can find an alternative surviving path through which it can complete I/Os

Performance. If all paths are healthy, ESXi can use different paths to different LUNs to distribute the overall I/O load. This reduces contention and results in overall greater performance.


SANs include management tools that let SAN administrators create uniquely numbered SAN LUNs (addressable storage volumes) from RAID sets of physical drives. RAID sets can be created for both capacity and storage efficiency purposes.

Modern SANs support SATA, Serial Attached SCSI (SAS) and solid state (SSD) drives. SATA drives are used to create SAN LUNs that offer high storage density with reasonable performance at low cost. Enterprise SAS drives are less storage dense but perform 3-5x faster. These devices are used to create SAN LUNs for workloads that demand the highest overall performance.

Don't discount SATA storage. Seagate and Western Digital both make 'Enterprise' SATA drives with 5 year warranties. SATA drives are relatively slow (7,200 rpm, 8-12ms average seek) compared to 15,000 rpm enterprise SAS drives. But they are cheap and can be provisioned in large numbers at low cost – making it possible to create highly redundant RAID sets across many spindles. This divide-and-conquer approach often yields as good as or better performance per dollar than small RAID sets of 15k rpm disks.

SANs also include hot-spare capability. Usually SANs are provisioned with one or more drives that are not (immediately) put into service. If an active drive fails, the SAN will remove the failed drive from an active SAN LUN and replace it with an available hot-spare. This minimizes the total time the SAN LUN spends in a non-redundant state.


Every visible node in a Fibre SAN deployment is addressed by a unique hardware address called the World Wide Name (WWN). WWNs are 8-byte addresses made up of a 4-byte Vendor ID followed by 4 additional bytes that uniquely identify the device. All components in a Fibre SAN array have WWNs including Fibre HBAs, Fibre Switches, Storage Processors and SAN LUNs. When new SAN LUNs are created, the SAN management tool will assign a unique WWN to the LUN so it can be distinguished from other LUNs on the SAN.
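The vendor/device split described above can be sketched in a few lines of Python. This follows the simplified 4-byte/4-byte layout used in this course; the example WWN itself is made up for illustration.

```python
# Minimal sketch: split a colon-separated WWN into vendor and device
# portions, per the simplified 4-byte Vendor ID + 4-byte device layout
# described above. The sample WWN is invented for illustration.

def split_wwn(wwn: str) -> dict:
    octets = wwn.split(":")
    if len(octets) != 8:
        raise ValueError("expected 8 colon-separated bytes")
    return {
        "vendor_id": ":".join(octets[:4]),   # first 4 bytes: Vendor ID
        "device_id": ":".join(octets[4:]),   # last 4 bytes: unique device ID
    }

print(split_wwn("21:00:00:e0:8b:05:05:04"))
```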

Storage traffic on the SAN is delivered by WWN (source and destination WWN). SAN administrators can use Zoning to restrict which nodes can exchange traffic. Zoning rules specify pairs of WWNs that are allowed to exchange data. Normally a SAN administrator would set up zoning rules that specify which Fibre HBAs can talk to Storage Processors (to defend against ad-hoc deployment of new servers). Administrators would also create LUN visibility rules on the SAN that specify, on a LUN by LUN basis, which Fibre HBAs can see which LUNs (by associating the WWN of the Fibre HBA with the WWN of the LUN). When an ESXi server scans the SAN for storage volumes, the SAN consults its visibility rules and exposes only authorized LUNs to the ESXi host.


There are four different types of network isolation available in Fibre networks – Port Zoning, WWN Zoning, Soft Zoning and Hard Zoning. The purpose of all Zoning is to create/enforce device access control lists (to prevent unauthorized device access).

Different Zoning strategies are implemented by different vendors; so you should consult with your Fibre Switch and Fibre SAN configuration guides to find out which Zoning strategies your hardware supports.

Each Zoning strategy has pros and cons relating to ease of configuration, ease of modification and level of device isolation/protection offered. Generally, WWN and Hard Zoning are the most secure, followed by Port Zoning and finally Soft Zoning (which is viewed as very insecure).


ESXi uses hardware addresses to uniquely identify a SAN LUN. Hardware addresses are constructed as follows:

- vmhba# – a generic name for a storage controller followed by the storage controller's unique number

- C# – Channel number, usually 0. On some SANs, it is the Storage Processor number

- T# – the Storage Processor/Target number used to deliver I/Os to the SAN

- L# – the number of the target SAN LUN to receive I/O requests

An example of a complete hardware path would be vmhba1:C0:T1:L2, which references Fibre Controller 1, Channel 0, Storage Processor/Target 1, LUN 2.

ESXi maps vmhba# to specific device drivers for storage controllers. That way, administrators do not need to know anything about the make or model of storage controller in use.
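The address breakdown above can be captured in a short parser. This is our own illustration of the naming convention, not a VMware API; the regex and field names are assumptions.

```python
import re

# Sketch: pull apart an ESXi hardware path such as vmhba1:C0:T1:L2 into
# its adapter, channel, target and LUN numbers. Illustration only.

PATH_RE = re.compile(r"vmhba(\d+):C(\d+):T(\d+):L(\d+)")

def parse_hw_path(path: str) -> dict:
    m = PATH_RE.fullmatch(path)
    if not m:
        raise ValueError(f"not a valid hardware path: {path}")
    adapter, channel, target, lun = map(int, m.groups())
    return {"adapter": adapter, "channel": channel, "target": target, "lun": lun}

print(parse_hw_path("vmhba1:C0:T1:L2"))
# {'adapter': 1, 'channel': 0, 'target': 1, 'lun': 2}
```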


Different SANs identify LUNs in different ways. Many SANs assign a unique LUN number to each LUN, while others assign all LUNs 0 and differentiate LUNs by their Target number.

There is no single standard for this. You need to check with your SAN vendor to see what they do (the ESXLab remote access lab uses a SAN that identifies LUNs using different Target numbers).


When an ESXi server boots, it scans its PCI bus for storage controllers. Storage controllers are assigned vmhba# numbers in the order they are found during a bus scan.

If ESXi finds a fibre controller, it instructs the fibre controller to scan its storage bus for Storage Processors (SP). Then, for each SP, ESXi scans the HBA/SP pair for visible LUNs. ESXi enters each visible LUN along with the LUN's WWN into a storage volume roster for that HBA. This way, when ESXi finds the same WWN for a storage volume through multiple paths, it knows that the additional paths are alternative paths to the same SAN LUN.

ESXi can use a maximum of 256 SAN LUNs per SAN. LUNs do not need to be sequentially numbered, as ESXi scans all possible LUN numbers (range 0-255) and records all LUNs it finds. You can click the Rescan link in the Storage Adapters view at any time to scan for new LUNs. Rescans are safe to perform while ESXi is actively running VMs. They complete quickly and with no risk to ESXi or running VMs. This way, the SAN administrator can provision new storage (new SAN LUNs) at any time. ESXi administrators can scan for new LUNs, partition them, format them with VMFS and put them into service while the ESXi host is active.
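The WWN-based multipath bookkeeping described above can be modeled in a few lines: paths that report the same WWN are recorded as alternative routes to one LUN, not as new LUNs. The paths and WWN strings below are invented for illustration.

```python
from collections import defaultdict

# Toy model of the ESXi storage roster: group scanned paths by WWN so
# that multiple paths to one SAN LUN collapse into a single entry.
# All path names and WWNs here are hypothetical.

def build_roster(scan_results):
    """scan_results: iterable of (hardware_path, wwn) pairs."""
    roster = defaultdict(list)
    for path, wwn in scan_results:
        roster[wwn].append(path)   # same WWN -> an extra path, not a new LUN
    return dict(roster)

scan = [
    ("vmhba1:C0:T0:L1", "wwn-aaaa"),
    ("vmhba2:C0:T1:L1", "wwn-aaaa"),   # second path to the same LUN
    ("vmhba1:C0:T0:L2", "wwn-bbbb"),
]
roster = build_roster(scan)
print(len(roster))               # 2 distinct LUNs
print(len(roster["wwn-aaaa"]))   # 2 alternative paths to the first LUN
```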


iSCSI's major advantage is that it uses commodity Ethernet networking rather than specialized (i.e.: expensive) fibre networking. iSCSI networks can be simple flat LAN segments, vLANed isolated segments, redundant segments (e.g.: HBA 1 and SP 1 on one switch, HBA 2 and SP 2 on a second switch with the switches uplinked) or on separate routed LAN segments. Leveraging TCP/IP's inherent reliability and routing makes it easy to provision a reliable iSCSI network at reasonable cost.

Gb Ethernet has a theoretical maximum of 128MB/second (more with bonding). However, with latency and protocol overhead it is unlikely that you will ever achieve this speed on a single copper link. Speeds of 70MB/s to 90MB/s are attainable assuming that the storage array can keep up.

10Gb Ethernet is fully supported by ESXi 6.0 as is 40Gb Ethernet using high end Mellanox 40Gb Ethernet controllers.

With bonding and multiple Storage Processors it may be possible to deliver 2x-4x the speed of a single path (about 128MB/sec of peak throughput on a 1Gb Ethernet path and about 700-800MB/s on a 10Gb Ethernet path). This is not in the same league as Fibre SANs (which signal at 4, 8 or 16Gb/sec) but should be sufficient for light to medium storage I/O workloads.
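The back-of-envelope arithmetic behind these figures is simple. Whether 1Gb Ethernet works out to 125MB/s or 128MB/s depends on counting 1,000 or 1,024 megabits per gigabit; real-world protocol overhead pulls usable throughput below either number.

```python
# Link speed arithmetic: convert a gigabit line rate to MB/sec.
# binary=True counts 1,024 megabits per gigabit (giving the 128MB/s
# figure quoted above); otherwise 1,000 megabits per gigabit.

def link_mb_per_sec(gigabits: float, binary: bool = False) -> float:
    megabits = gigabits * (1024 if binary else 1000)
    return megabits * 10**6 / 8 / 10**6   # bits/sec -> MB/sec

print(link_mb_per_sec(1))                 # 125.0
print(link_mb_per_sec(1, binary=True))    # 128.0
print(link_mb_per_sec(10))                # 1250.0 for 10Gb Ethernet
```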


ESXi can boot from LUNs on an iSCSI SAN only if it is configured with an iSCSI hardware initiator and that controller has been configured as a boot controller with an assigned boot LUN.


iSCSI SANs are completely suitable for production use. You can purchase iSCSI SANs that support:

- Active/active multipathing

- Multiple 10Gb Ethernet targets

- LUN thin provisioning

- LUN replication

- LUN compression

- LUN access controls

- High capacity and performance scalability

- Hot swapping of most hardware components with zero downtime

The example above is more oriented toward non-enterprise production uses of iSCSI. The point is that while iSCSI can offer nearly all of the features of enterprise Fibre SAN solutions, you can buy down-market solutions that trade off features, scalability and redundancy for a much lower cost of entry than Fibre can match.


In the diagram above, the Ethernet TCP/IP network is drawn as a cloud because the network could be any of:

- A simple flat segment (e.g.: a single switch)

- A more complex flat segment (e.g.: two switches stacked or uplinked). This configuration gives you some protection from a switch failure.

- A routed network of two or more segments. This configuration also provides protection from a switch failure.

For best performance it is recommended that the ESXi boxes and the iSCSI SAN reside on the same local segment (so as to avoid congestion and latency at a router). Also, iSCSI traffic should be on an isolated segment so that disk I/Os over iSCSI do not have to compete with other network traffic.

iSCSI traffic is not encrypted, so anyone connecting to the iSCSI storage LAN segment could sniff traffic and potentially capture data. For this reason, it is best to isolate iSCSI traffic on a private physical or virtual LAN segment.


Network redundancy improves network reliability and performance. You have three different options:

  1. Stacking switches gives you the simplicity of a single LAN segment while protecting you from connectivity issues caused by a switch failure. Most mid-tier to enterprise switches support stacking. Alternatively, you could simply uplink the two switches to create a larger single segment.
  2. Adding a router between the two switches keeps each segment isolated. This implies that HBA1 would normally talk to SP1 and HBA2 would talk to SP2. With static routes set up at the ESXi level, packets could be routed around failed components. Here you run the risk of congestion at the router if one route becomes unavailable and all traffic flows through the router to the surviving path.
  3. NIC Teaming is a simple way to improve reliability and performance at the ESXi level. In this case, traffic would continue to flow through a Software iSCSI initiator VMkernel port even if one NIC in the team lost its link. This strategy is not appropriate for Hardware iSCSI initiators.

Your best redundancy option may be to combine one or more of the above strategies. For instance, Stacking two Ethernet switches combined with NIC Teaming would give you a very high degree of redundancy without the added complexity of separate, routed LAN segments.


iSCSI uses a qualified naming scheme that differs from standard fully qualified domain names. iSCSI Qualified Names (IQNs) use this format:

- iqn – must be present; stands for iSCSI Qualified Name

- yyyy-mm – the year and month the vendor's domain name was registered

- com.vendor – the vendor's domain name in reverse

- alias – the local alias assigned to the node

Example: iqn.1998-01.com.vmware:esx1

This example indicates that the host is a VMware host and that the domain vmware.com was registered in January, 1998.
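The naming format above can be demonstrated with a small parser. The splitting logic is our own illustration of the IQN layout, not part of any VMware tool.

```python
# Sketch: pull apart an iSCSI Qualified Name per the iqn.yyyy-mm.domain:alias
# format described above. Illustration only.

def parse_iqn(iqn: str) -> dict:
    prefix, date, rest = iqn.split(".", 2)   # 'iqn', 'yyyy-mm', 'com.vendor:alias'
    if prefix != "iqn":
        raise ValueError("IQNs must start with 'iqn'")
    year, month = date.split("-")
    reversed_domain, alias = rest.split(":", 1)
    # 'com.vmware' reversed back into the familiar 'vmware.com'
    domain = ".".join(reversed(reversed_domain.split(".")))
    return {"year": int(year), "month": int(month), "domain": domain, "alias": alias}

print(parse_iqn("iqn.1998-01.com.vmware:esx1"))
# {'year': 1998, 'month': 1, 'domain': 'vmware.com', 'alias': 'esx1'}
```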


Your ESXi host must know which LUNs are available to it on your iSCSI SAN. There are two ways to update the ESXi storage roster for iSCSI based volumes.

Static configuration allows ESXi administrators to type in the LUN properties directly into the ESXi host, essentially transposing information from the SAN configuration display to the ESXi host. This approach should be avoided because there is no way to dynamically update the ESXi host whenever the iSCSI SAN's configuration changes.

Send Targets is a special request built into the iSCSI protocol. The ESXi host can issue a Send Targets request to the iSCSI SAN at any time. The SAN responds by reviewing the LUNs visible to the requesting ESXi host and then returning a list of visible LUNs and their properties. The ESXi host would then use this information to populate its roster of available LUNs for the iSCSI controller.
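Conceptually, the Send Targets exchange is the SAN filtering its LUN visibility rules for the requesting initiator. The toy model below sketches that idea; all IQNs, portal addresses and LUN numbers are hypothetical.

```python
# Toy model of Send Targets discovery: the SAN consults its visibility
# rules and returns only the targets/LUNs the requesting initiator is
# allowed to see. Every name and address here is invented.

VISIBILITY = [
    # (allowed initiator IQN, target portal, LUN number)
    ("iqn.1998-01.com.vmware:esx1", "10.0.0.10:3260", 0),
    ("iqn.1998-01.com.vmware:esx1", "10.0.0.10:3260", 1),
    ("iqn.1998-01.com.vmware:esx2", "10.0.0.10:3260", 2),
]

def send_targets(initiator_iqn: str):
    """Return the (portal, lun) pairs visible to this initiator."""
    return [(portal, lun) for iqn, portal, lun in VISIBILITY if iqn == initiator_iqn]

print(send_targets("iqn.1998-01.com.vmware:esx1"))   # esx1 sees two LUNs
print(send_targets("iqn.1998-01.com.vmware:esx2"))   # esx2 sees one LUN
```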


ESXi can use a limited number of iSCSI hardware initiators (iSCSI controller cards).

An iSCSI hardware initiator appears to the ESXi host as a storage controller. It includes an on-board CPU, memory and firmware that implements the TCP/IP protocol (for network traffic) and the iSCSI Storage Initiator stack (to act as a storage controller). At the back end, the iSCSI hardware initiator provides one or two RJ45 jacks for connectivity to an Ethernet LAN segment. iSCSI hardware initiators have the following advantages:

- All network and iSCSI Initiator overhead is off loaded onto the card

- The card may implement Jumbo Frames (up to 9,000 byte packet payloads).

Jumbo Frames are an enhancement to Ethernet that allows a single packet to carry much larger payloads, thereby eliminating a lot of TCP/IP protocol overhead. Traditionally, Ethernet frames have a maximum transfer unit (maximum payload size) of 1,500 bytes. This is insufficient for block-oriented traffic, where 3 frames (2 x 1,500 bytes and 1 x 1,096 bytes) would be needed to carry a 4k disk block. With Jumbo Frames, the entire disk block can be transferred in one packet, resulting in reduced protocol overhead (as only one packet needs to be sent/acknowledged vs 3).

Note: Modern operating systems (Windows Vista, Windows Server 2008, RedHat Enterprise Linux 5.4+ and 6.0+, etc.) do I/Os in 4k (4096 byte) blocks. Jumbo frame enabled Ethernet storage networks could easily carry the disk I/O request in a single frame.
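The frame-count arithmetic above is just a ceiling division, counting payload sizes only and ignoring per-frame header overhead for simplicity:

```python
import math

# Frames needed to carry one disk block at a given MTU (payload sizes
# only; Ethernet/TCP/IP header overhead is ignored for simplicity).

def frames_needed(block_size: int, mtu: int) -> int:
    return math.ceil(block_size / mtu)

print(frames_needed(4096, 1500))   # 3  (2 x 1,500 bytes + 1 x 1,096 bytes)
print(frames_needed(4096, 9000))   # 1  with jumbo frames
```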


ESXi also supports iSCSI Software Initiators. The VMkernel supports iSCSI Software Initiator through a dynamically loadable VMkernel driver module that implements the iSCSI Software Initiator stack and the TCP/IP stack within the VMkernel. All that's needed to complete the picture is a VMkernel Port on a Virtual Switch (connected to the same LAN segment as the iSCSI SAN). With iSCSI Software Initiators, the ESXi host can act as an iSCSI client without the need to invest in expensive iSCSI controller card(s).

To the VMkernel, an iSCSI Software Initiator stack looks like an iSCSI controller. The ESXi host CPU runs the iSCSI Software Initiator stack, which places modest overhead on the ESXi host. iSCSI I/Os flow through a VMkernel port to a physical NIC.


You are now ready to edit the iSCSI Software Initiator's properties. The Initiator is disabled by default. To turn it on, click the Add... link and then click OK. This instructs the VMkernel to load the Software iSCSI Initiator stack. It will take a few seconds before this step completes.

Change iSCSI SW Adapter IQN

Next, we must configure Dynamic (LUN) Discovery. Click the tab and then click Add. When the Add Send Targets Server window pops up, enter in the IP address and port number of the first Storage Processor on the iSCSI SAN, and click OK. If your iSCSI SAN has multiple Storage Processors, repeat this step for each additional Storage Processor.

Note the CHAP button in the pop up window. Click this button if you need to enter CHAP authentication information for this Storage Processor. ESXi 5 has the ability to keep separate CHAP information for each Storage Processor.

Be very careful that you get the IP address and port number correct. If you make a mistake, you can remove the incorrect entry, but the removal won't take effect until you reboot the ESXi server.

Configured iSCSI Targets

CHAP authentication can be employed to ensure that only authorized ESXi hosts can access storage on your iSCSI SAN. CHAP support has been greatly improved since ESXi 3.5. ESXi 4 and later can do:

1-way CHAP – where ESXi authenticates to the iSCSI SAN

2-way CHAP – where ESXi authenticates to the SAN and then the SAN authenticates back to ESXi


iSCSI uses Challenge Handshake Authentication Protocol (CHAP) whenever authentication is required. CHAP is an authentication protocol that was popular during the MS Windows RAS (Remote Access Services) days. CHAP is a simple shared secret (password) protocol where the ESXi client and the iSCSI SAN both have the same user name and password account information. CHAP is simple, low overhead and does not expose any sensitive information. This is critical because CHAP does not use encryption.

Step 1 – ESXi sends a log in request with the pre-assigned login ID

Step 2 – The SAN looks up the login ID and password. It generates a large, one-time Hash (H) code and sends the code to the ESXi host

Step 3 – The ESXi host uses the stored Password (PW) and the one-time hash (H) in a mangling algorithm to produce a one-time result (R)

Step 4 – The SAN uses the password on file (PW), the same one-time hash (H) and the same algorithm to produce a one-time result (R)

Step 5 – The ESXi host transmits its one-time result (R) to the SAN

Step 6 – The SAN compares its local R with the ESXi R. If they match, the ESXi host is authenticated and the SAN will handle its I/Os

Anyone sniffing the LAN gets the Hash code (H) and the Result (R). However, there is no known way to derive the password (PW) from these two values other than a dictionary attack – and that would take years (if you have a well-chosen password).
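The challenge/response exchange above can be sketched in a few lines. This follows the classic CHAP computation from RFC 1994 (MD5 over identifier + shared secret + challenge); the variable names and sample secret are our own.

```python
import hashlib
import os

# Sketch of the CHAP steps above. Both sides hold the shared secret (PW);
# only the challenge (H) and result (R) ever cross the wire.

SECRET = b"well-chosen-password"   # hypothetical shared secret (PW)

def chap_result(identifier: int, secret: bytes, challenge: bytes) -> bytes:
    """One-time result R = MD5(id + PW + H), per RFC 1994."""
    return hashlib.md5(bytes([identifier]) + secret + challenge).digest()

challenge = os.urandom(16)                    # Step 2: SAN's one-time hash (H)
host_r = chap_result(1, SECRET, challenge)    # Step 3: ESXi computes R
san_r = chap_result(1, SECRET, challenge)     # Step 4: SAN computes R
print(host_r == san_r)                        # Step 6: match -> authenticated

# A host with the wrong password produces a different R and is rejected.
print(chap_result(1, b"wrong-password", challenge) == san_r)
```

Note how a sniffer who captures both H and R still cannot recover PW without brute-forcing the hash, which matches the dictionary-attack caveat above.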

iSCSI Target Correctly Added

Once you have completed the CHAP Authentication tab, you have completed your iSCSI Software Initiator configuration chores. The next step is to scan your iSCSI SAN for available LUNs (assuming of course that your SAN administrator has already created some LUNs for your use).

To do this, go to the Storage Adapters view, (Configuration > Storage Adapters), click the iSCSI Software Adapter (usually vmhba33) and then click the Rescan... link. The above dialog will pop up and ask you if you want to scan for new empty LUNs, LUNs already partitioned and formatted VMFS or both. Normally you would just click OK so you can discover both types of LUNs.

It can take 5-60+ seconds for the scan to complete. It can take a further 30 seconds for the LUN roster to populate with the list of newly discovered LUNs.


In the screen grab (above) the iSCSI scan has completed and new storage volumes have been discovered (in our case, the SAN identifies LUNs by unique Target numbers 0, 1, 2, 3...).


The Storage view displays a roster of all usable storage volumes available to the ESXi host. The storage view will display accessible storage volumes that are either partitioned and formatted VMFS or NAS storage volumes (NFS shares).

VMFS volumes have hardware paths under the Device column header and the type vmfs3 under the Type column header.

NFS volumes have IP:/path under the Device column header and NFS under the Type column header.

The Details window displays properties of a selected storage volume including the total volume size, storage used, the Path Selection Policy (Fixed or MRU), the total number of paths to the volume, the number of broken and disabled paths, the block size for the VMFS file system and the number of LUNs that provide raw storage space for the VMFS.

Datastore Properties

The Storage Views tab lets you review storage consumption by VM and also storage maps. Storage reports (storage consumption by VM) display very useful information such as:

- Whether the VM benefits from path redundancy to storage. Path redundancy is critical to high availability

- The amount of storage used by a VM

- The amount of space used by snapshots active on the VM

- The number of virtual disks the VM has


Examples of Free and Open Source iSCSI SAN solutions

Openfiler – Free iSCSI Target (SAN) software. Not VMware certified. Not suitable for high transaction volumes or high I/O loads

FreeNAS – NFS/iSCSI open source OS turns a PC or server into a storage appliance

Nexenta Stor Community Edition – NAS/iSCSI storage OS load for PCs/PC servers

StarWind – free iSCSI Target (SAN) software for Windows

StorMagic - Virtual Storage Appliance for VMware

QUADstor – Open source iSCSI SAN for Linux

DataCore – Virtual SAN software

TrueNAS – Commercial version of the Open Source FreeNAS project


Additional commercial SAN solutions: Tintri, SimpliVity, VMware vSAN, Exablox...

Windows Server 2008, Server 2012 iSCSI Target... Microsoft added block mode storage via iSCSI Target software as a free download in Windows Server 2008 and built this feature in to Server 2012. This allows Windows server to share block storage with remote iSCSI initiators.

Hyperlinks for these products can be found in the supplemental material attached to this lecture.


The vSphere Client will display very little in the way of useful diagnostics if things go wrong. The first thing to do when troubleshooting iSCSI connection problems is to double-check everything, including IP addresses, configuration settings, firewall setup, etc.

If all else fails, review log files via the ESXi console.

ESXi 6.0 Adds NFS v4.1 Support
NFS 4.1 Multipathing (vCenter)

ESXi 5.0, 5.1 and 5.5 support NFS v3 client connections only. These hosts can use all of the above named features (ESXi + NFS v3 column)

ESXi 6.0 can connect to an NFS server using either NFS v3 or NFS v4.1 connections... If you use vSphere Client to create an NFS connection, you automatically connect using NFS v3 only. If you use vCenter plus vSphere Web Client to create an NFS connection, you can choose to connect using either NFS v3 or NFS v4.1. If you connect to NFS via NFS v4.1, you can only use features identified in the ESXi + NFS v4.1 column.

Many of the features that do not work with NFS v4.1 are high end features available only on VMware's most expensive licenses. Features that do not work on datastores provided by NFS v4.1 connections include Storage DRS (load balancing) clusters and Storage I/O Control (for storage bandwidth management).

Additional features that are not supported when using NFS v4.1 connections are Site Recovery Manager (disaster recovery tool for virtual environments) and Virtual Volumes (virtual disk containers introduced in vSphere 6.0).

If you need any of the unsupported features or if you need to administer an ESXi host with vSphere Client, do not use NFS 4.1 client connections.

Parallel NFS v4.1 with ESXi 6.0

Many NFS NAS devices do not yet support NFS v4.1. A partial list of NAS devices that DO NOT support NFS 4.1 include:

● FreeNAS / TrueNAS

● OpenFiler

● NexentaStor Community Edition

● NetApp’s OnTAP Simulator

● Synology DSM 5.1

Some NFS servers do correctly support NFS v4.1. Check VMware's hardware compatibility portal to verify your NAS server supports NFS 4.1 (and is running the correct firmware version).

When in doubt, stick with NFS v3

(or risk data corruption on your NAS)

Upgrading NFS v3 to NFS v4.1
Shared Storage Lab
Review & Questions
Section 3: VMware File System
VMFS - VMware's Cluster File System
VMware's Cluster File System
Project Plan

Traditional SAN storage provisioning involves allocating a private SAN LUN to physical servers. In this case, the PC Server benefits from using the SAN:

  • Redundant configurations provide for additional performance/redundancy
  • Multipathing redirects SAN LUN I/Os around failed storage components
  • Active concurrent paths provide extra performance as I/Os can use different paths to different LUNs
  • SAN acceleration provides faster I/Os than local captive storage can deliver
  • LUN Snapshotting and shadowing assist with backup and disaster recovery

The problem is that legacy operating systems (Windows, Linux, UNIX, etc.) require exclusive access to their SAN LUNs. That is, none of these operating systems permits two or more physical machines to use the same LUN at exactly the same time.

So, even though a PC server may use a SAN LUN, that SAN LUN is effectively held captive by that PC server. If the PC server were to fail, then no other physical machine would be able to use that LUN (for recovery purposes) unless the SAN administrator reconfigured the SAN to make the LUN visible to a new PC server.

VMware File System (VMFS) volumes were designed for safe, concurrent access by ESXi hosts. This means that, unlike traditional operating systems, many ESXi hosts can connect to, mount and concurrently use files on the same VMFS volume at the same time.


VMFS file systems are general purpose, hierarchical file systems that can be used to hold files needed for your virtualization initiatives. VMFS was designed to be an efficient (very low overhead) file system suitable for files of all sizes. It is especially important that VMFS remain efficient on extremely large files because virtual disk files (.vmdks) can be as large as 62TB in vSphere 6.0.

VMFS volumes are often used to hold other files useful to virtualization such as operating system, utility and application install images. If you rip (using Roxio, Nero or your favorite CD/DVD ripping tool) install media to files on a VMFS, then any VM can mount those files on its virtual CD/DVD device and use the image as if it were physical media. This eliminates many problems normally encountered with physical media including:

  • Lost, misplaced, scratched or dirty media
  • Use of non-approved media to install software into virtual machines
  • The need (and risk) of using physical media in the ESXi server. Leaving operating system install media in an ESXi host's physical CD/DVD tray is especially dangerous: if the machine were accidentally restarted, it could boot from that media and possibly reformat local disks
  • Slow reads. Virtual CD/DVD images deliver data at up to 10 times the speed of physical CD/DVD readers (which typically read at no more than 2MB/second)

VMFS volumes are designed to safely handle concurrent I/O activity by multiple ESXi hosts. This is accomplished by clever use of LUN and file locks. For example, when an ESXi host is told to power on a VM, it must assert a file lock on the VM (so that no other ESXi host can manipulate the VM's virtual disk files). The virtual disk file lock is established as follows:

- The ESXi host asserts a non-persistent SCSI reservation (LUN lock) on the entire VMFS volume. This gives the ESXi host temporary exclusive access to the LUN. I/Os from other ESXi hosts will queue at the issuing host while the non-persistent SCSI reservation is present

- The ESXi host then places a file lock on the .vmdk file of the VM to be powered on

- The ESXi host updates the file system structure to indicate that the VM has been powered on

- The ESXi host then releases the non-persistent SCSI reservation against the LUN. This allows I/Os from other ESXi hosts to other files on the LUN to proceed

- The ESXi host then proceeds to power on and run the VM
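The locking sequence above can be sketched as a small simulation. The class and method names here are invented for illustration; the real mechanism lives inside the VMkernel storage stack.

```python
# Illustrative simulation of the VMFS power-on locking sequence.
class VmfsVolume:
    def __init__(self):
        self.reserved_by = None   # holder of the non-persistent SCSI reservation
        self.file_locks = {}      # .vmdk path -> owning host

    def reserve(self, host):
        # Step 1: LUN-wide lock; other hosts' I/Os queue while this is held
        assert self.reserved_by is None, "LUN already reserved"
        self.reserved_by = host

    def release(self, host):
        # Step 4: reservation dropped; queued I/Os may proceed
        assert self.reserved_by == host
        self.reserved_by = None

def power_on(volume, host, vmdk):
    volume.reserve(host)               # 1. assert SCSI reservation on the LUN
    volume.file_locks[vmdk] = host     # 2. file lock on the VM's .vmdk
    # 3. file system metadata updated here (omitted in this sketch)
    volume.release(host)               # 4. release the reservation
    return "powered on"                # 5. VM runs; the file lock remains

vol = VmfsVolume()
print(power_on(vol, "esxi-01", "/vmfs/volumes/Production/vm1/vm1.vmdk"))
```

Note that the LUN-wide reservation is held only briefly; the per-file lock is what persists while the VM runs.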


ESXi hosts scan for storage volumes on boot or whenever the Rescan link (Configuration Tab > Storage Adapters > Rescan...) is clicked. The ESXi host will update its available storage roster with the properties of all visible LUNs found during a rescan.

If a LUN is partitioned and formatted VMFS, the ESXi host will add the VMFS volume to the available storage view (Configuration Tab > Storage). Any volume in this view is immediately available for use by the ESXi host either to access existing files on that VMFS or to create new files on the VMFS.

VMFS volumes can be referenced by either their Runtime path (e.g.: vmhba32:C0:T1:L1) or their label (e.g.: Production, Test, etc.). The vSphere Client displays VMFS volumes by their label.

If you log in to the Local/Remote Tech Support command line, you can navigate over to the top directory of a specific VMFS volume with the command:

# cd /vmfs/volumes/VMFS-Label    (where VMFS-Label is the name of the VMFS)


It is easy to construct a new VMFS volume onto an available storage volume. You do this by invoking the Add Storage Wizard (Configuration tab > Storage > Add Storage...)

The first step of this wizard asks you if you want to add either a NAS/NFS resource or a Disk/LUN resource. Click Disk/LUN.

Next, the wizard will display a list of all volumes visible to the ESXi host that contain non-VMFS partitions or volumes that have no partition table at all (e.g.: new SAN LUNs or local physical/RAID volumes).

Whenever physical LUN properties are displayed, the following should be kept in mind:

Capacity – is the actual reported size of the LUN in GB or MB

Available – is the amount of unpartitioned space available on the LUN

A disk is unpartitioned when the Available space almost matches the reported capacity (as some space is held back for the MBR and partition table). If available space reports as zero, then the disk is fully partitioned with non-VMFS partitions. If you delete these partitions, then non-VMFS data will be irretrievably lost.
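The Capacity/Available test described above can be expressed as a rough classifier. The holdback constant is a made-up value for the sketch; the real reserved amount depends on the partition table format.

```python
# Rough classifier for the Capacity vs. Available readings described above.
HOLDBACK_MB = 1  # space held back for the MBR + partition table (illustrative)

def classify_lun(capacity_mb, available_mb):
    if available_mb == 0:
        # Fully partitioned with non-VMFS partitions; deleting them loses data
        return "fully partitioned (non-VMFS data; deletion is irreversible)"
    if capacity_mb - available_mb <= HOLDBACK_MB:
        # Available almost matches Capacity -> no partitions yet
        return "unpartitioned"
    return "partially partitioned"

print(classify_lun(10240, 10239))  # -> unpartitioned
print(classify_lun(10240, 0))      # -> fully partitioned (...)
```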


Above is a roster of available storage volumes on an ESXi host. To complete the Add Storage wizard, select one of the volumes. All column headers are sortable – and LUNs are added in the order found – so click a column header to re-order the LUNs into something that makes sense for you (e.g. Click LUN header to see LUNs sorted by their LUN ID value).

Partition Configuration

Once the Add Storage Wizard completes, your new VMFS volume is ready for use (and is added to the Storage roster).

ESXi uses VMFS labels as a way to defend against SAN LUN renumbering and/or changes in the LUN Name (e.g.: vmhba1:C0:T1:L1), which can happen across boots. Fibre SANs may sometimes renumber (reassign LUN volume numbers) as SAN administrators add/remove volumes from the SAN. By using the VMFS volume name, rather than its number, ESXi administrators can continue to use a mnemonic name rather than having to concern themselves with the currently active LUN number for a LUN.

VMFS volumes are very efficient but do impose both capacity and performance costs on a storage volume.

Capacity - VMFS volumes lose about 3-6% of their overall capacity to VMFS file system overhead. Smaller VMFS LUNs lose more capacity to overhead than do larger LUNs
Performance – A VM's virtual disk is represented as a file in a VMFS. When a VM does I/Os against its virtual disk, those I/Os are completed by the VMFS file system driver. As a result, the VM not only has its normal disk I/O overhead (e.g. NTFS overhead) but also a modest amount of VMFS overhead.

VMFS Datastore Settings
Running out of Datastore Space

VMFS volumes now support capacity growth through dynamic LUN expansion. This means that your SAN administrator can grow a storage volume and you can grow a VMFS partition and file system onto the newly allocated space.

There is another strategy for growing VMFS volumes. VMFS supports capacity expansion through LUN spanning – the joining together of a VMFS volume with additional empty SAN volumes.


Suppose our Production VMFS volume were full (or nearly full). If a VMFS volume fills, there can be undesirable results including:

- You cannot power on a VM because there is no room to create the VMkernel swap file that must be present to handle VM paging to disk

- You cannot snapshot a VM because there is no space left to hold the file that accumulates the changes to a virtual disk that occur after the snapshot is taken

- You cannot make new VMs on the LUN because there is no room left to allocate space to the VM's virtual disk and other constituent files

- You cannot increase the size of existing virtual disk files

- VMs with snapshots will freeze when there is no more space left to record virtual disk changes


LUN spanning is a capacity management technique that lets ESXi administrators increase the size of a VMFS by gluing together (spanning) an existing VMFS with an empty volume. Once the Span is complete, the VMFS will be able to use free space on the original volume allocated to the VMFS and the new volume (that was added to the VMFS).

The other advantage to LUN Spanning is that spans can be created while the VMFS is in use so that capacity issues can be dealt with immediately rather than having to wait for the next maintenance window.


VMFS LUN spanning can cross multiple volumes. In the example above, the Production VMFS has been spanned across two additional volumes. In this case the Production VMFS will report, as its capacity, the sum of the sizes of all three volumes assigned.

LUN Spans are not a form of RAID. That is, LUN Spans do not mirror or stripe across the allocated volumes. As storage is requested, the Span will allocate space from the first volume until it fills. Once the first volume has filled, additional storage needs will be met by allocating free space from the second LUN. And, when that LUN fills, storage will be allocated from the third LUN (and so on). Files on the Span get free space from whichever volume has it to give. So, there is no way to know (on a file by file basis) which volumes contribute storage to a file.
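The fill-then-overflow behaviour described above can be sketched as a toy allocator. This is an illustrative model of span allocation order, not VMFS's actual block allocator.

```python
# Sketch: a spanned VMFS satisfies allocations from the first extent
# until it fills, then overflows to the next. No striping or mirroring.
def allocate(extents, request_gb):
    """extents: list of [used_gb, size_gb] pairs, in span order."""
    grants, remaining = [], request_gb
    for ext in extents:
        used, size = ext
        take = min(size - used, remaining)
        ext[0] += take
        grants.append(take)
        remaining -= take
        if remaining == 0:
            break
    if remaining:
        raise RuntimeError("VMFS out of space")
    return grants

extents = [[95, 100], [0, 100], [0, 100]]  # first extent nearly full
print(allocate(extents, 20))               # -> [5, 15]: spills into extent 2
```

As the text notes, a file's blocks may come from whichever extent had free space at the time, so there is no per-file mapping to a particular volume.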

Once a volume is assigned to a VMFS (either as the first or subsequent volume), it is considered in use and that volume is removed from the available storage roster.


You can add a volume to a VMFS at any time by completing these steps:

Configuration tab > Storage > click VMFS > Properties > Increase button

The LUN Properties window lets you review the currently assigned storage volume(s) for a VMFS and also lets you add additional volume(s) to the VMFS through the Increase button.


When you click the Increase button, you invoke the Increase Datastore Capacity Wizard. This wizard starts by showing you a roster of all volumes visible to the ESXi host that contain either unpartitioned space (Capacity nearly equals Available space) or volumes with non-VMFS partitions (Available space is either substantially less than Capacity or reports as None).

Note that the Datastore Capacity Wizard will automatically assume you want to create a Span if there is no free space on the LUN whose properties you are editing. If there is free space on the LUN, the wizard will assume you wish to grow the LUN.

In the example above, we select a volume and assign it as LUN span to the VMFS.

You need to rerun the Increase Datastore Capacity wizard for each additional LUN you wish to add to a VMFS.

Completed LUN Span

Be aware that ESXi does not judge or second-guess the suitability of the LUN(s) you select for use as extent volumes for a VMFS. Poor choices for extent candidates are any LUNs that do not match the:

- Performance

- Visibility

- Redundancy

of the original LUN in the VMFS. If the additional LUNs are not as fast as the first LUN (because of a different RAID strategy or different SAN acceleration settings), then some of your I/Os to the LUN will take longer to complete than others – probably leaving you scratching your head wondering why some VMs run quickly and others don't.

If you use additional LUNs that are visible to you but not other ESXi hosts then any VMs on the span would not be able to use VMotion, DRS or HA.

And, if you span redundant LUNs with non-redundant LUNs, then you risk data loss across all files on the LUN if the non-redundant volume were to fail.

Exercise care when selecting LUN span candidate volumes for a VMFS


As of ESXi 4.0, you can now increase VMFS space by:

1. Having your SAN administrator grow the SAN LUN on which a VMFS partition lives

2. Growing the VMFS partition and file system on the newly expanded LUN

This process can be performed hot – while ESXi is up and running and while VMs are actively using the VMFS that is being grown.


When growing a VMFS it is really important to record the LUN Name (vmhba#:C#:T#:L#) before you attempt to grow the VMFS. You will need this information so you can select the correct volume in the Extent Device screen (above). You get this information from the Storage view.

With the hardware path (vmhba#...) in hand, review the Extent Device roster looking for a volume Name that has a Yes in the Expandable column. That will be your newly (physically) extended volume. Select this volume and continue with the Wizard.


The Increase Datastore Capacity wizard validates your selected volume - to ensure it is the same volume on which the VMFS lives and that it has free space available. If both conditions are met, then the wizard will allow you to grow the VMFS.

By default, the wizard will grow the VMFS onto all free space. You can grow the VMFS to less than all free space – but there is really no benefit to having unallocated free space on a volume.

Updated VMFS Capacity

At boot or on rescan, ESXi learns all healthy paths to each LUN. So, if a path were to fail, ESXi can easily reroute I/Os around the failed component to the desired LUN. For example, if Storage Processor 1 were to fail, ESXi would:

- Immediately detect the loss of SP1

- Select a healthy path that does not include SP1. There may be a short lag in I/Os while ESXi tests the health of the path and selects an alternative

- Re-issue I/Os that did not complete on the failed path over the new active path so that no VM I/Os are lost or handled out of order

In the above case, ESXi would direct I/Os around the failed SP1 and through SP2. When SP1 reports that it is healthy (either because SP1 was replaced or because the fault was cleared, e.g. a Fibre cable was plugged back in), ESXi must decide how to respond. It can either continue to use the known healthy (but more congested) SP2 or it can swing back to using SP1. Fail-back policies determine how ESXi responds:

Most Recently Used - ESXi continues to use SP2. We have high confidence in the health of the path, but I/Os might take longer to complete due to path congestion at SP2.

Fixed Path - ESXi swings I/Os back to SP1. I/Os will complete more quickly, but there may be a risk that SP1 could fail again if the fault was not completely cleared.
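The two fail-back behaviours can be modelled in a few lines. Path names and the function are invented for this sketch; only the policy names mirror the vSphere UI.

```python
# Toy model of MRU vs. Fixed fail-back after a path health change.
def active_path(policy, preferred, current, healthy):
    """Return the path I/Os should use given the set of healthy paths."""
    if current not in healthy:            # active path has failed
        current = sorted(healthy)[0]      # fail over to a surviving path
    if policy == "Fixed" and preferred in healthy:
        return preferred                  # Fixed: swing back to preferred path
    return current                        # MRU: keep using the current path

# SP1 fails, then recovers:
p = active_path("MRU", "SP1", "SP1", healthy={"SP2"})            # -> "SP2"
p = active_path("MRU", "SP1", p, healthy={"SP1", "SP2"})         # stays "SP2"
f = active_path("Fixed", "SP1", "SP2", healthy={"SP1", "SP2"})   # -> "SP1"
print(p, f)
```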


Multipathing on an iSCSI storage network is a function of how the TCP/IP network was provisioned. If you have two iSCSI hardware initiators or two ports on a single iSCSI HBA and/or two iSCSI Storage Processors, then ESXi will automatically discover all paths through the HBAs and SPs to visible SAN LUNs. As with Fibre SANs, having multiple HBAs and SPs contributes to iSCSI reliability and performance.

If you are using iSCSI software initiators, you can further enhance reliability and performance by NIC Teaming the vSwitch that is carrying iSCSI traffic. Through NIC Teaming, the vSwitch can assign more NICs to handle iSCSI traffic. And, if the assigned NIC fails, the Team will re-balance so that iSCSI I/Os will complete through a healthy NIC.


ESXi has multipathing capability built right into the VMkernel. As a result, there is no need for 3rd party multipathing tools. Since ESXi 4.0, it has also been possible to add a limited number of 3rd party multipath solutions to ESXi.

On boot or rescan, ESXi scans Fibre and iSCSI SANs and discovers all available paths to each LUN. Here is how ESXi uses hardware paths to reference LUNs:

The Canonical Path is the generic name used by ESXi when referencing a LUN. By default, this is the first path found to a LUN. The Canonical path remains the same regardless of any changes in underlying path usage due to path failures or active path re-assignments.
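The vmhba#:C#:T#:L# runtime names used above decompose into adapter, channel, target and LUN. A small parser (illustrative only, not a VMware API) makes the structure explicit:

```python
import re

# Parse a runtime path name of the form vmhba#:C#:T#:L#
def parse_runtime_name(name):
    m = re.fullmatch(r"(vmhba\d+):C(\d+):T(\d+):L(\d+)", name)
    if not m:
        raise ValueError(f"not a runtime path name: {name!r}")
    adapter, channel, target, lun = m.groups()
    return {"adapter": adapter, "channel": int(channel),
            "target": int(target), "lun": int(lun)}

print(parse_runtime_name("vmhba32:C0:T1:L1"))
# -> {'adapter': 'vmhba32', 'channel': 0, 'target': 1, 'lun': 1}
```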


Fixed path and MRU multipathing are now considered Legacy I/O strategies and would not normally be selected unless you had just one physical path between your ESXi host and your SAN storage device or your SAN does not support active/active multipathing.

Review Path Selection Policy

By default, ESXi uses Fixed (VMware) multipathing. You should switch to Round Robin (VMware), an active/active multipathing policy, if you have more than one I/O path to a LUN and your SAN supports active/active multipathing. Round Robin multipathing distributes storage I/Os across all healthy paths, which substantially improves VM disk I/O performance.

In the past, VMware only supported one I/O path per LUN. This made virtualization unsuitable for workloads that required high storage bandwidth (e.g.: workloads that need more than one path of I/O bandwidth). With Round Robin multipathing, these workloads can now be virtualized because their disk I/O demands can (finally) be met.


Active / Active Multipathing

All storage paths are used simultaneously to transmit I/O requests between the ESXi host and the SAN

Active / Stand-by Multipathing

This is where all traffic between an ESXi host and the SAN flows through one Active path. All remaining paths are in Stand-by mode. Should the Active path fail, a surviving (healthy) stand-by path is selected to be the new Active path. In this mode, only one path (the Active path) can carry I/Os.

Concurrent Multipathing

A variation of Active / Stand-by multipathing. In this mode, an administrator declares different active paths on a LUN by LUN basis. While I/Os may only flow through one path to a LUN at a time, different Active paths are declared for different LUNs, allowing I/Os to flow through multiple paths Concurrently. There is no attempt to dynamically load balance across paths. Performance is improved as I/Os are statically distributed across available healthy paths.
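The difference between Round Robin and active/stand-by can be seen in how I/Os spread across paths. The path names and counts below are illustrative:

```python
from itertools import cycle
from collections import Counter

# Four hypothetical paths: two HBAs x two storage processors
paths = ["hba1->SP1", "hba1->SP2", "hba2->SP1", "hba2->SP2"]

def distribute(policy, n_ios):
    """Count how many I/Os land on each path under the given policy."""
    if policy == "round-robin":
        rotation = cycle(paths)             # rotate across all healthy paths
        return Counter(next(rotation) for _ in range(n_ios))
    return Counter({paths[0]: n_ios})       # active/stand-by: one active path

print(distribute("round-robin", 100))    # 25 I/Os per path
print(distribute("active-standby", 100)) # all 100 on the active path
```

Round Robin uses all four paths' bandwidth; active/stand-by leaves three paths idle until a failure occurs.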

For a good discussion on multipath options, please see the attached document...

SAN Storage Considerations
SAN Storage Considerations
Storage Best Practice
Best Practices - Multipathing
Working With VMFS Lab
Review & Questions
Section 4: Resource Pools
Resource Management and Resource Pools

It is usually more important to allocate scarce resources such as CPU and RAM in a predictable manner than in a fair manner.

Fairness usually implies that all VMs get fair (or equal) access to host resources. While this sounds nice, the reality is that some VMs (e.g.: Production VMs) are likely much more important to us than other VMs (e.g.: test, development, quality assurance and training VMs).

Predictable resource allocation implies that you know and have control over how resources are allocated. There are two aspects to predictable resource allocation:

If resources are not fully committed (i.e.: there are more physical resources available than all VMs demand) then the VMkernel will ensure that VMs get all the resources (either CPU or memory) that they request – and perhaps resources for idling (allocated but unneeded CPU, RAM).

If resources are over committed (i.e.: VMs currently demand more memory or CPU cycles than the ESXi host can deliver) then the VMkernel allocates scarce resources to the most important VMs.

The VMkernel has a number of strategies it uses to determine who is most important when experiencing resource contention. Some of these strategies are built into the VMkernel and others are under your control. We will explore these in this chapter.


Whenever the VMkernel has more CPU resources than VMs demand, the VMkernel gives all running VMs all of the CPU cycles they require.

On its own, the VMkernel has no way of knowing what the VM guest OS is doing with the CPU cycles it gets. If the guest OS wastes these cycles running its idle task (because it has nothing to do) then those cycles accomplish nothing in the VM and are not available for use by VMs that have real work to do.

If you install VMware Tools into your VMs (a best practice), then VMware Tools will report back to the VMkernel whenever the guest OS in a VM runs its idle task. By doing this, the VMkernel always knows which VMs truly need CPU service and which VMs would waste CPU by idling.

The VMkernel CPU scheduler automatically treats VMs that need to run as high priority VMs and VMs that want to idle as lower priority VMs. In this way, the VMkernel allocates CPU resources to where they are needed.

If the VMkernel has more physical CPU resources than are needed to run all non-idling VMs, then the VMkernel CPU scheduler will allow idling VMs to accumulate idle time.


Physical CPUs always cycle at full (rated) frequency. However, because (potentially many) VMs compete for a physical CPU resource (socket or core), a VM may not receive a full core of cycles in any given second of time.

If the host is over-provisioned with CPU resources, ESXi will allow a VM to use all of the CPU it wants. In this situation, the maximum number of cycles a uni-processor VM can use is the number of cycles a single CPU core can deliver (2.6GHz in the above example).

If the host is severely CPU over-committed, then the VMkernel must select which VMs run and which VMs wait. Under severe CPU stress, a low value VM could lose its turn at the CPU. It could receive as few as zero MHz in a given second in time.

It is more likely that a VM will receive at least some cycles each second. How much depends on many factors including:

  • Has it received its declared reservation? If not, it will get additional cycles
  • Has it reached its user-defined limit? If it has, the VM will get no more cycles
  • How many CPU shares does the VM hold? VMs with more shares win access to CPU resources more frequently than VMs with fewer shares
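The interplay of reservation, limit and shares can be sketched as a toy single-pass allocator. This is a simplified model for intuition only (the real VMkernel scheduler redistributes cycles a capped VM cannot use, among other refinements); all numbers are illustrative.

```python
# Simplified model: each VM gets at least its reservation, at most its
# limit, and contends for the remaining MHz in proportion to its shares.
def allocate_mhz(vms, capacity):
    """vms: dict name -> {'reservation', 'limit', 'shares', 'demand'} (MHz)."""
    grants = {n: min(v["reservation"], v["demand"]) for n, v in vms.items()}
    spare = capacity - sum(grants.values())
    total_shares = sum(v["shares"] for v in vms.values())
    for n, v in vms.items():
        extra = spare * v["shares"] / total_shares   # share-proportional slice
        cap = min(v["limit"], v["demand"])           # limit is a hard ceiling
        grants[n] = min(cap, grants[n] + extra)
    return grants

vms = {
    "prod": {"reservation": 1000, "limit": 2600, "shares": 2000, "demand": 2600},
    "test": {"reservation": 0,    "limit": 1000, "shares": 1000, "demand": 2600},
}
print(allocate_mhz(vms, 2600))  # prod gets ~2067 MHz, test capped-ish at ~533
```

Note how "prod" wins twice the spare cycles of "test" because it holds twice the shares, on top of its guaranteed 1000MHz reservation.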

When the VMkernel determines that it is time to run a VM, the VMkernel allocates physical CPU resources (usually CPU cores) to the VM equal to the number of virtual CPUs in the VM. That is, if the VM has 1 vCPU it will run with one core of resources. A dual-vCPU VM runs with two CPU cores and a 4-vCPU VM runs with 4 CPU cores.

Each vCPU can be no faster than the frequency of the physical CPU core that runs the vCPU. So if you have a physical CPU that runs at 2.6GHz, then a vCPU cannot run any faster than 2.6GHz. This is the absolute upper limit of CPU cycles that can be allocated to the VM (on a per-vCPU basis).

If you like, you can lower this limit to a lesser value by setting a CPU limit to some number of MHz less than the frequency of the physical CPU core. For example, you can set a limit of 1GHz for very low value VMs. If you did this, the VMkernel CPU scheduler would never allocate more than 1GHz of cycles to the VM, even if there were spare CPU resources available.

One good example for the use of CPU limits is legacy NT4 applications. Some old NT4-based applications waste CPU by polling the keyboard rather than giving up the CPU when they are idle. If you migrated this workload to a VM, it would try to burn a full physical CPU core (of a modern, high speed CPU, not the 300-1,000MHz that an old Pentium 3 CPU could deliver). By setting a limit, you could control how much CPU this badly behaving application could consume – perhaps limiting it to no more cycles than it had when it was physically deployed.


You can also assign CPU reservations. A reservation is a guaranteed allocation of CPU cycles (in MHz) to a VM. This allocation is delivered to the VM every second by the VMkernel CPU scheduler and is provided regardless whether the VM needs the cycles or would waste the cycles running its idle task.

An example of a VM that could benefit from a CPU reservation is a busy VM that runs an interactive network application – such as Microsoft Terminal Services or Citrix servers.

Normally, under CPU load, the interactive VM may lose its CPU to other VMs. If that were to happen, then users working with the VM might experience lag or jerkiness in their interactive sessions. If you assign a CPU reservation, then the VM will hold onto the CPU even if it starts to idle. This would allow the interactive VM to appear more responsive (smoother) under load – as the VM can respond instantly to any keyboard or mouse events from the client.

Reservations are guaranteed commitments of resources. Once you declare a reservation, the VMkernel will honor it, even if it means penalizing other VMs. Excessive use of reservations could lead to artificial contention as the VMkernel is no longer free to pull CPU away from idling VMs and redirect it to busy VMs.

VM vCPU Shares

When a VM boots, the VM's BIOS reports the declared amount of RAM to the VM's guest OS. The guest OS will then treat this declaration as the total physical RAM available to the VM. If the VM needs more RAM than it was provisioned with, the VM will use its native memory management capabilities (paging). Paging transfers less important memory pages to disk to free up memory for more important pages.

The VMkernel allocates RAM to a VM as the VM attempts to use memory, not on boot. So, if a VM boots with an 8GB memory declaration but only loads 4GB of pages into RAM, the VMkernel will only provide the VM with 4GB. In this way, the VMkernel prevents memory waste by not allocating RAM that VMs don't currently need.

If a VM clearly demonstrates an ongoing need for more RAM than it was given (through persistent guest OS paging), you should increase the declared memory for the VM the next time you can power cycle it (power down, dial up RAM, power on).

It is possible that the VM could spike on memory (thereby gaining more RAM from the VMkernel) and then later have the application that needed the memory release it. When this happens, the VM ends up with an over-allocation of physical RAM. The VMkernel will learn about this over-allocation through VMware Tools (which reports unused memory back to the VMkernel) and can steal back any over-allocation through the Ballooning memory management technique (more later).
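The demand-based granting and balloon reclaim described above can be sketched as a small model. Class and method names are invented for illustration; the real mechanism is implemented by the VMkernel and the balloon driver in VMware Tools.

```python
# Sketch of demand-based memory granting and balloon reclaim.
class VmMemory:
    def __init__(self, configured_gb):
        self.configured = configured_gb  # what the BIOS reports to the guest
        self.granted = 0                 # physical RAM actually backing pages

    def touch(self, gb):
        """Guest touches pages; the VMkernel grants RAM on first use."""
        self.granted = min(self.configured, self.granted + gb)

    def balloon(self, idle_gb):
        """VMware Tools reports idle pages; the VMkernel reclaims them."""
        reclaimed = min(idle_gb, self.granted)
        self.granted -= reclaimed
        return reclaimed

vm = VmMemory(configured_gb=8)
vm.touch(4)            # boots and loads 4GB of pages -> granted == 4
vm.touch(3)            # workload spikes              -> granted == 7
vm.balloon(idle_gb=3)  # app frees memory; balloon reclaims 3GB -> granted == 4
print(vm.granted)
```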

Instructor Biography

Larry Karnis, VMware vSphere Consultant/Mentor, VCP vSphere 2, 3, 4 and 5

Get VMware vSphere and View trained here... on Udemy!

What do you do if you need to learn VMware but can't afford the $4,000 - $6,000 charged for authorized training? Now you can enroll in my equivalent VMware training here on Udemy!

I have created six courses that together offer over 32 hours of VMware vSphere 6 lectures (about 8 days of instructor-led training at 4hrs lecture per day). With Udemy, I can provide more insight and detail, without the time constraints that a normal instructor-led training class would impose. My goal is to give you a similar or better training experience - at about 10% of the cost of classroom training.

I am an IT consultant / trainer with over 25 years of experience. I worked for 10 years as a UNIX programmer and administrator before moving to Linux in 1995. I've been working with VMware products since 2001 and now focus exclusively on VMware. I earned my first VMware Certified Professional (VCP) designation on ESX 2.0 in 2004 (VCP #: 993). I have also earned VCP in ESX 3, and in vSphere 4 and 5.

I have been providing VMware consulting and training for more than 10 years. I have led literally hundreds of classes and taught thousands of people how to use VMware. I teach both introductory and advanced VMware classes.

I even worked for VMware as a VMware Certified Instructor (VCI) for almost five years. After leaving VMware, I decided to launch my own training business focused on VMware virtualization. Prior to working for VMware, I worked as a contract consultant and trainer for RedHat, Global Knowledge and Learning Tree.

I hold a Bachelor of Science in Computer Science and Math from the University of Toronto. I also hold numerous industry certifications including VMware Certified Professional on VMware Infrastructure 2 & 3 and vSphere 4 & 5 (ret.), VMware Certified Instructor (ret.), RedHat Certified Engineer (RHCE), RedHat Certified Instructor (RHCI) and RedHat Certified Examiner (RHCX) as well as certifications from LPI, HP, SCO and others.

I hope to see you in one of my Udemy VMware classes... If you have questions, please contact me directly.



Larry Karnis
