VMware vSphere 6 Part 1 - Virtualization, ESXi and VMs
Learn VMware's ESXi 6 Hypervisor, Virtual Networking, NFS Shares and Virtual Machines. Learn how with video demos.
Bestselling
4.5 (588 ratings)
2,979 students enrolled
Created by Larry Karnis
Last updated 5/2017
English
30-Day Money-Back Guarantee
Includes:
  • 9 hours on-demand video
  • Full lifetime access
  • Access on mobile and TV
  • Certificate of Completion
What Will I Learn?
  • Install and configure VMware's ESXi hypervisor according to best practices
  • Understand and configure virtual and physical networking
  • Connect ESXi to NFS shares
  • Create, edit, power on and run Virtual Machines
Requirements
  • Have a basic knowledge of Ethernet and TCP/IP
  • You should understand and know how to use file shares (e.g.: SMB / CIFS shares)
  • Prior experience installing Windows desktop or server operating systems
Description

Overview

VMware vSphere 6.0 is the platform businesses depend on to deploy and manage their virtualized Windows and Linux workloads. In this course you will learn the concepts, skills and mechanics you need to get you started as a VMware vSphere 6.0 administrator.

Why Take This Course?

Here are some great reasons why you should consider enrolling in this course:

  • Our courses are amongst the highest rated of all vSphere courses on Udemy. We consistently earn scores of 4.4 to 4.8 stars out of 5!
  • New! All of our course content is downloadable so you can learn any time on any compatible device
  • Most feedback of any vSphere course on Udemy. Over 580 people have rated this course making it the most reviewed vSphere course on Udemy. Please check out our feedback
  • Fastest growing vSphere course on Udemy! Over 2,900 people have enrolled in this class. We add more new students per day than any other vSphere course. People sign up because of our content, quality and price
  • One of the lowest cost vSphere courses on Udemy! We give you 8.5 hrs of detailed lectures for $20 or less. Other vSphere courses charge $70 or more for much less content
  • We have some of the newest vSphere courses on Udemy. Our courses were developed in late 2015 and published throughout 2016. Many other Udemy courses on VMware date back to 2013 or earlier
  • We have the most complete set of vSphere courses on Udemy. Combined, our Part 1 through 6 courses give you 31+ hours of vSphere 6 training. Nobody else offers anything close to the breadth and depth of vSphere training that you will get here
  • We offer lots of free preview content. Check the course out for yourself before you buy
  • Video demonstrations that show you how to complete all tasks discussed in your course


Check us out. Try any of the free preview lessons to see if this course is for you.


About This Course - Learn VMware vSphere 6 from the ground up

This is an in-depth introductory course on virtualization with VMware vSphere 6. This course covers four major topics that all vSphere 6 administrators must know:

  • First, we start by learning how to install and configure the ESXi 6.0 hypervisor. From there, we configure ESXi host management networking, join our host to a Windows AD domain, set up time services and much more
  • Next, we explore virtual and physical networking. Virtual networking is used to provide Ethernet and TCP/IP networking to VMs. Physical networking is used by both ESXi and VMs to communicate to physical network peers. You will learn to build NIC teams for fast, reliable networking, and how to enable Cisco Discovery Protocol on your virtual Switches
  • ESXi can use file shares for Virtual Machines, for backups and as a repository for install media images or to hold VM clones or templates. We will see the file share options ESXi supports, how to connect to and disconnect from file shares, and best practices for achieving both high performance and redundancy
  • We will learn how to build virtual machines. You will learn about the features and capabilities of the virtual hardware layer, how to install a guest OS into your new VM, the purpose and benefits of VMware Tools and how to optimize your VMs for best performance and lowest resource overhead.
  • Included at the end of each chapter is a series of one or more video demonstrations that show you exactly how to use your new vSphere skills in a live vSphere 6 environment!

Get in-depth lectures, insights, concepts, how-tos and more. Start learning vSphere 6 now - here on Udemy.

Who is the target audience?
  • System administrators who need to work with vSphere virtualization
  • Programmers who need to learn how to create vSphere Virtual Machines
  • Existing vSphere administrators who want to improve their knowledge of vSphere 6.0
  • Anyone who wants to get more work from their servers by virtualizing workloads
Curriculum For This Course
170 Lectures
08:51:56
Install and Configure ESXi 6.0 Hypervisor
66 Lectures 03:21:42


Our first step in this class is to install ESXi onto standalone PC servers and then connect to those newly installed ESXi hosts using the vSphere Client and SSH. In future chapters we will add to our original implementation. Our ultimate objective is a scalable, highly redundant, load balanced Virtual Infrastructure implementation that supports a large community of Windows Server 2003/2008/2012, desktop, Linux and other VMs.

Preview 00:45

VMware ESXi is a bare-metal virtualization hypervisor solution. As such, it must install on an industry standard PC server. Please check VMware's Hardware Compatibility Guide (portal on www.VMware.com web site) for the most up to date list of supported PC servers.
Because it owns the hardware, ESXi is in full control of resource assignments to running VMs. The VMkernel allocates hardware resources on an as-needed basis. In this way, the VMkernel can prevent idling VMs from wasting CPU cycles that could otherwise be used by busy VMs. Likewise, the VMkernel keeps track of needed RAM, not just requested or allocated RAM. It can dynamically re-assign RAM to memory starved VMs, thereby ensuring that VMs get the memory they need to run.

Preview 03:12

As your ESXi deployment matures, you will want to introduce:
● Different LAN (virtual or physical) segments to isolate network traffic, improving both security and performance. You could use different LAN segments for things like IP Storage, Management and production systems

● Shared storage solutions including iSCSI, Fibre SAN and NFS shares

● Hardware redundancy in the form of multipath storage solutions and teamed NIC configurations

● You may even wish to consider a Boot From SAN or boot from USB/SD card solution so you don't need to provision and configure local storage.
Boot from SAN is available on supported Fibre SAN controllers and also with iSCSI SAN controllers (using iSCSI hardware initiators).

Preview 03:45

ESXi is capable of using the largest PC server hardware platforms. Apart from what is stated above, ESXi is limited to:
● No more than 480 logical processors (including Hyperthreaded logical processors) for CPU scheduling purposes

● All available RAM up to 12TB
Furthermore, the following implementation limitations need to be considered:
● ESXi supports a modest selection of 10Gb and 40Gb Ethernet controllers

● Jumbo Frames are supported, which may improve software iSCSI I/O performance.


Notes about Local Storage

● ESXi requires enterprise class storage controllers. This means that it usually doesn't work with embedded SATA controllers found on desktop motherboards

● ESXi has support for controllers from LSI Logic, Adaptec and many others. Most vendor branded controllers (Dell PERC, HP Smart Array, IBM ServeRAID, etc.) are made by (i.e.: rebranded from) either LSI Logic or Adaptec. HP Smart Array controllers in particular have significant limitations you should know about:


1. They may refuse to boot off a local storage volume that is >2TB in size

2. They may refuse to use disks that do not carry HP's brand even if HP OEMs the drive. This means that generic Seagate, Western Digital, Hitachi, etc. enterprise drives (that work fine) may be rejected by HP controllers and/or storage shelves

Preview 06:50

JBOD – Just a Bunch of Disks. Physical disks in a non-RAID configuration.
ESXi comes in two forms – Embedded and Installable. Embedded is baked into firmware on the motherboard of select PC servers. This lets you boot your server without any local storage.
ESXi Installable is a version of ESXi that can be installed onto local storage, USB memory keys or SAN storage. It is installed from CD media that you can download from www.vmware.com.
ESXi does away with the Service Console found in ESX 4.1 and older. This provides a smaller, leaner hypervisor than full ESX. It is also more secure because there is less software (to exploit) and fewer services running on ESXi than there are on ESX.

Preview 03:27

DCUI – Direct Console User Interface. This is the yellow and grey screen on the console of your ESXi host once it is fully booted.

Preview 02:13

ESXi is installed in text mode – so your PC server doesn't need to have graphics capability.
VMware makes it possible to set up an install server for ESXi so you can perform network based installs.

Preview 01:33


In the above screen shot, the ESXi 6.0 installer detected a local SATA based Intel SSD and a 4.09 TB local RAID array on an LSI Logic hardware RAID controller. Since our intent is to use the SSD as a hardware based read cache (see Performance chapter), we'll select the RAID set as the install target for ESXi.

Preview 02:02


VMware has no supported password reset tool for ESXi. Officially, the only way to reset the root password is to re-install the entire operating system.
However, there are community developed procedures that appear to work. If you need to recover the root password for ESXi and have some Linux administrator and command line skills, please visit
http://www.vm-help.com/esx/esx3i/Reset_root_password.php
The procedures in this article have been tested on ESXi 3, 4 and 5.x and *should* work in vSphere 6.0. Note that these procedures were developed for ESXi (not classic ESX).

Preview 02:06

Virtualization abstracts the physical hardware presented to the VM. A guest operating system normally expects to own all hardware and also expects to be able to execute privileged CPU instructions that are not available to applications. If ESXi allowed guest operating systems full access to these privileged instructions, the guest OS could manipulate hardware directly, possibly interfere with virtual memory page translation tables and perform other operations that could compromise the ESXi host. To avoid this problem, VMware blocks guest OS' from privileged/dangerous instructions and CPU features – and provides this capability through software that emulates (and controls) what the guest OS can do. This works, but adds overhead to some operations.


Intel and AMD have virtualization hardware assist technology in their CPUs, offering sophisticated memory management capabilities, better hardware emulation features and other improvements that dramatically reduce the overhead of virtualization while maintaining compatibility with guest OS'.


ESXi probes physical CPUs for Intel VT or AMD-V technology and will not install or run if the feature is not present or enabled, so please be sure to turn on this feature in your machine's BIOS.
For more information see: http://en.wikipedia.org/wiki/X86_virtualization

Preview 01:37


The installer will now install ESXi onto your selected storage volume. To do this, the installer:
- Wipes all partitions on the selected target storage volume

- Creates partitions as needed (normally 8 partitions are created)
Useful information about the installation disk:

- ESXi consumes about 4GB of disk space in overhead. The rest is for VM use

- partition 4 is the boot partition and is located at the front of the disk (behind the Master Boot Record and partition table)

- partitions 2 and 4, 5, 6 & 8 are for ESXi use and occupy the front of the disk

- partition 7 is a vmkcore partition (partition code 0xfc) and is an ESXi partition used to hold crash dumps

- partition 3 consumes all remaining disk space and is partitioned and formatted as a VMware File System (VMFS)


Note: ESXi 6.0 can install on > 2TB volumes. ESX(i) 4.1 and earlier cannot. Be aware that some vendor supplied RAID controllers (e.g.: older HP gear) cannot use a greater than 2TB volume as a boot volume.
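Tip – If you have enabled shell access, you can inspect the partition layout the installer created with partedUtil. A sketch (substitute your own device name for the placeholder):

# ls /vmfs/devices/disks

# partedUtil getptbl /vmfs/devices/disks/<your-disk-device-id>

The first command lists the storage devices ESXi sees; the second prints the partition table of the chosen device.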

Preview 00:43

It only takes about 5-10 minutes to install ESXi 6.0 onto your PC server. The install proceeds non-interactively; a horizontal status bar reports the percentage completed.

Preview 01:22

ESXi has a simple, BIOS-like interface called the Direct Console User Interface (DCUI). The DCUI makes ESXi very easy to configure. To configure your ESXi host, simply hit F2 at the greeter screen and update your host configuration.

Preview 01:33

The ESXi administrator account is root (the traditional Linux administrator account). When you install ESXi, the system defaults to:
- The root password is set during installation

- IP properties set via DHCP

- No command line access (either locally or remotely)


In the next few slides, we will discuss how to change these values.

Log In for the First Time
01:24

The ESXi configuration menu is a simple text interface where you complete your server's customizations.
Use the up/down arrows to move to a function. When a function is highlighted, its properties and the command keys used to modify that function are displayed on the right.

ESXi Configuration Menu
00:57

You must set the IP properties of your ESXi host before you can manage it. Select Configure Management Network to set the:
- Fully Qualified Domain Name (FQDN)

- IP address

- Netmask

- Default Gateway


and other properties.


You can set these values statically or dynamically using DHCP. If you use DHCP, you must configure your DHCP servers to send static properties to a host. To do this, configure your DHCP server with the MAC address of your ESXi host management NIC and then set the static properties to be served whenever that NIC broadcasts for a DHCP lease.

Default Management IP Settings
01:24

It is a best practice to use static network settings for your ESXi host. To complete this task, you must:
1. Select the correct NIC for management networking

2. Set static IP address, Netmask and Default Gateway values

3. Identify your local DNS server(s) and the default DNS search domains
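Tip – Once ESXi Shell or SSH access is enabled, the same static settings can also be applied with esxcli. A sketch (the addresses below are examples; substitute your own):

# esxcli network ip interface ipv4 set -i vmk0 -t static -I 192.168.1.50 -N 255.255.255.0

# esxcli network ip route ipv4 add --gateway 192.168.1.1 --network default

The first command sets a static IPv4 address and netmask on the vmk0 management interface; the second sets the default gateway.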

Configure Management Network
01:16

You manage your ESXi host through your network. To communicate with your ESXi host (using either the vSphere Client directly or vCenter indirectly), you must have network connectivity to it.
Since modern PC servers may have many NICs and these NICs may be connected into different physical and/or virtual LAN segments, you may have to select the correct physical NIC (rather than the default NIC) before you can manage your machine.
NIC Teams – The Network Adapters screen lets you review and select the NIC or NICs you wish to use to carry network traffic. If you select more than one physical NIC, you automatically create a NIC team. NIC teams afford better speed and redundancy.
Tip – It can be difficult (or impossible) to tell which RJ45 jack is associated with which MAC address. A simple way of selecting the correct physical NIC(s) is to unplug all NICs from their switch except for the NICs you wish to use for management. Then use the Status column (Connected means the NIC has a link to the switch) to determine which NICs you should use for management.

Select Management NIC(s)
02:55

ESXi 6.0 makes it easier to identify onboard NICs from add-on NICs. In previous versions of ESXi, all NICs were reported in the order they were discovered during a boot up PCI bus scan. Normally, onboard NICs were discovered first – but this was not guaranteed. This could lead to problems trying to identify how vmnic# (alias for physical nic #) mapped to physical NICs.
With ESXi 6.0, VMware now identifies NICs as follows:
- If the Hardware Label value starts with N/A, then the NIC is on the motherboard

- If the Hardware Label value starts with Chassis slot..., then the NIC is an add-on NIC
For NICs on the motherboard, the NIC labeled NIC 1 will show up first, then NIC 2 and so on.
For add-on NICs, port 1 will show up first and then ports 2-4 (if the card is a dual/quad NIC)

Network Adapter Details
01:32

Complete this form to set your ESXi host management NIC IP properties.
vCenter cannot manage an ESXi host whose IP address changes. For this reason it is best to give all of your ESXi hosts fixed IP properties.
You must select Set static IP addresses... and complete all three fields to complete your static IP address properties assignment.

IPv4 Configuration
01:48

ESXi 6.0 supports IPv6. You can assign IPv6 addresses:
- Via DHCP

- Self-generated via stateless address autoconfiguration (which uses ICMPv6 router advertisements)
You can assign up to 3 static IPv6 addresses to your ESXi host.

IPV6 Configuration
01:19

ESXi and vCenter require DNS services to function properly. So it is critical that you have DNS name servers set up and accessible from your local LAN segment.
It is a best practice to have both primary and secondary DNS servers available... but ESXi will function with just primary DNS.
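Tip – With shell access enabled, DNS servers can also be added with esxcli. A sketch (example addresses; substitute your own):

# esxcli network ip dns server add --server=192.168.1.2

# esxcli network ip dns server add --server=192.168.1.3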

DNS Configuration
02:03

DNS Suffixes are used to enable DNS to look up the IP address of a host specified only by its host name (and not qualified with a domain name). An example might be a lookup request for a host called esxi5.
DNS needs a full domain name. Custom Suffixes will append domain names from the list set on this screen to simple host names and then perform a DNS query. This continues until either:
- a matching FQDN is found and its IP address is returned

- no matching FQDN is found and all suffix domain names have been tried
It is a good practice to add at least one domain name (the primary domain name for your organization) to this list!
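Tip – The suffix list can likewise be managed from the command line. A sketch (example.com is a placeholder domain):

# esxcli network ip dns search add --domain=example.com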

Custom DNS Suffixes
01:48

All network changes are applied at one time when you leave the Configure Management Network sub-menu. First the new settings are applied to the appropriate configuration files and then the ESXi host's management network is brought down and back up again. For this reason it is best to be at the physical server's console when updating management networking properties.
You should be brought back to the System Customization menu. Your network changes should be visible.

Apply Network Changes
01:12

Test Management Network
02:37

Tech Support Mode enables functions used by support providers who are comfortable working on the ESXi command line. By default, all local and remote command line access to your ESXi host is disabled – so you can only access your ESXi host through:
- the vSphere client pointed directly at your ESXi host

- vCenter if vCenter has management control over your ESXi host

- The vSphere Management Assistant (vMA) appliance, if installed


Enabling Local Tech Support allows physical console command line access. Support personnel who have access to the physical console directly or via remote console services such as Dell DRAC (Dell Remote Access Controller), HP ILO (Integrated Lights Out) or IBM Integrated Management Module (MM) would be able to log in to your server.
Enabling Remote Tech Support enables the Secure Shell Daemon (sshd) and supports network based administrator access to your box without the need for remote console services.
Warning – Enabling Remote Tech Support enables direct root access to your ESXi host through a TCP/IP connection. This is a potential security threat. Turn on this feature only if needed. If this feature is turned on, set a strong root password.


Never expose your machine to an untrusted network like the Internet - especially if Remote Tech Support is turned on!
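Tip – Local and Remote Tech Support can also be toggled from the local console once shell access is available. A sketch using vim-cmd:

# vim-cmd hostsvc/enable_ssh

# vim-cmd hostsvc/start_ssh

# vim-cmd hostsvc/disable_ssh

The first two commands enable and start the SSH service; the third disables it again when no longer needed.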

Local/Remote Tech Support
04:19

It may happen that the management agents (services) on your ESXi host become unstable or crash. If this occurs, your ESXi host will not respond to vCenter or the vSphere client. In vCenter your host will grey out and report as disconnected.
You could reboot the ESXi host but that would bring down all running VMs. A more acceptable option is to simply restart the management agents on your ESXi host.
This function can be done at any time. Any connected vSphere Client sessions will be closed. Once this function completes, your host should become active in vCenter and should accept direct vSphere Client login requests.
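Tip – With shell access, the management agents can also be restarted from the command line:

# /etc/init.d/hostd restart

# /etc/init.d/vpxa restart

hostd is the host management agent; vpxa is the vCenter agent. Restarting them does not affect running VMs.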

Restart Management Agents
02:01

Once ESXi has rebooted, it is managed via VMware's vSphere Client. You can download the vSphere Client from www.vmware.com/download.
There are additional hot keys active on the ESXi console:
Alt-F1 – first command line log in screen

Alt-F2 – the ESXi greeter screen (screen shot above)

Alt-F3 to Alt-F10 – no function

Alt-F11 – Grey status screen/greeter screen with no F-key prompts

Alt-F12 – VMkernel log dump

ESXi Ready for Service
01:40

ESXi supports both local and remote command line access (both must be enabled using the DCUI Troubleshooting Options menu). These services are off by default.
Allowing direct console or network Secure Shell (SSH) command line logins enables direct ESXi host administration without the need for vSphere Client or Web Client. The environment is similar to a Linux style machine.
One thing to note is that ESXi will allow direct root logins both on the console and via SSH. This is a security concern because it means that anyone in possession of (or who can guess) the root password can take control of your machine.
It is best to leave these services disabled – so they cannot be abused. You can turn these services on (as needed) through the DCUI.
Please note that ESXi will do exactly what you tell it (via the command line) without the normal 'are you sure?' prompts. This tool is suitable for those who are comfortable administering Linux servers from the command line and who also have knowledge and experience with ESXi specific tools and commands.

Alt-F1 ESXi Command Line Login
02:08

The VMkernel records detailed log entries into a file called /var/log/messages. You can view this file by logging into the Local/Remote tech support prompts (as root) and issuing the command: # less /var/log/messages


You can see the most recent entries by hitting the Alt-F12 keys on your machine's console. This display shows one screen full of the most current additions to the VMkernel log file. You should check this file if you are troubleshooting problems and need more information than is available in the vSphere client.


Hit Alt-F2 to go back to the ESXi greeter screen when done.


Note

All command line commands entered using Local or Remote tech support are logged to /var/log/messages. In this way, it is possible to reproduce the activities of prior command line sessions.

Alt-F12 VMkernel Log Entries
01:41

VMware makes log files and configuration files available for review in a number of different ways. The approach (above) is to use a web browser to log in to and view ESXi host configuration/web files.
VMware has a good knowledge base article on the files available using this approach here - http://kb.vmware.com/kb/2004201

Browse Host Log/Config Files
01:41

You manage your ESXi host directly with the vSphere Client. This is a separate download and install available from VMware (http://www.vmware.com/download). Alternatively, you can just point your web browser over to your ESXi host and follow the vSphere Client download link found there.
All VMware client to server connections are encrypted using strong encryption. The encrypted link is set up before any data is exchanged between the client and the back end server.

Login with vSphere Client
03:01

ESXi uses self-signed digital certificates to support end-to-end encryption. All communications between VMware client and VMware server software is encrypted using strong encryption.


Since self-signed digital certificates cannot be independently verified by a 3rd party Certificate Authority (CA), a warning is issued. It is (usually) safe to permanently disregard this warning.


It is possible to acquire an SSL certificate from a Certificate Authority (CA) and then install that certificate onto your ESXi host. This would eliminate the warning messages because a trusted certificate can be used to verify that the host is who it says it is.


Normally trusted certificates are used on Internet facing hosts to ensure the integrity of web requests (e.g.: for secure banking/payment systems, etc.). Since your ESXi hosts won't be directly on the Internet, there is no need (and no benefit) to purchasing a trusted certificate for your machine.


CA generated certificates are also a good idea (and may be mandatory) in organizations where security is critical. Such organizations will run their own Certificate Authority and will have policies that all servers on their internal network must use digital certificates created by and verifiable from the central CA.

Security Warning
02:43

vSphere Client > ESXi Host
00:39

By default, the vSphere Client warns you whenever any command line service is enabled. Since granting command line access is normally not a good idea, presenting these warnings makes sense. To avoid the distraction, we have manually turned off these warnings.
There are some situations where you want to enable command line access and don't want to be bothered about the fact that these service(s) are turned on. To disable command line warnings in the vSphere Client, please check out the following Knowledge Base article:
kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2003637
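Tip – The setting that KB article describes can also be applied with esxcli, assuming shell access (a sketch):

# esxcli system settings advanced set -o /UserVars/SuppressShellWarning -i 1

Setting the value back to 0 re-enables the warnings.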

vSphere Client > Inventory
01:41

You can create local ESXi user accounts with passwords to allow for local authentication (for both the vSphere Client and Local/Remote Troubleshooting – if enabled). To do this, click on the Local Users & Groups tab and then right-click the background and select Add.... You can make new groups by clicking the Groups button and then right-clicking the background.


Best Practice

You would create local accounts only if you do not have an Active Directory service available. Otherwise, it is a best practice to join an AD domain and use domain accounts.


Tip

To log into ESXi on the command line over the network from Windows (ESXi Remote Troubleshooting Mode must be enabled), download the PuTTY Secure Shell terminal emulator at http://www.chiark.greenend.org.uk/~sgtatham/putty/download.html
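Tip – ESXi 6.0 also lets you manage local accounts with esxcli. A sketch, assuming shell access (the user name and password below are placeholders):

# esxcli system account add -i techuser -p 'Str0ngP@ss!' -c 'Str0ngP@ss!'

# esxcli system permission set -i techuser -r Admin

The first command creates the local account; the second grants it the Admin role on the host.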

Creating Local ESXi Users
02:07

ESXi Host Roles
02:39

ESXi 6.0 can join an Active Directory domain. AD authentication allows you to set up access rules for ESXi login without having to create local user accounts on ESXi. To join an ESXi host to an AD domain, you must have a domain account with Add Host to Domain privileges set.


FYI

Joining an AD domain is the first step to allowing AD defined users to access ESXi directly. The second step is to select inventory items (your ESXi host, folders, VMs, Resource Pools) and assign these users rights on these items. Without specific permission assignments, AD based users will not be able to interact with ESXi – as the default permission for all AD users is No Access.

Joining ESXi to an AD Domain
03:17

ESXi reports on the properties of the CPUs found in your server, including:
- The make/model of the machine

- Make/model and speed of the CPUs

- Number of populated sockets

- Number of cores in the CPU

- Number of Logical Processors (sockets * cores * HT logical processors)

- Presence/Absence of Hyperthreading (Intel CPUs only)

- Presence/Absence of power management capabilities (newer CPUs only)
If you have Intel CPUs and Hyperthreading is reporting N/A, you should check to see if Hyperthreading is active. To do this, click:


Properties > Hyperthreading > Enabled


This will turn on Hyperthreading support even if the machine's BIOS is set to disable it. You will need to reboot ESXi for this change to take effect.
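Tip – You can also confirm Hyperthreading status from the shell (a sketch):

# esxcli hardware cpu global get

The output reports CPU packages, cores and threads, along with whether Hyperthreading is supported, enabled and active.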

Physical CPU Properties
01:40

Note: Hyperthreading is not supported on virtual ESXi hosts.
Hyperthreading is a feature baked into Intel CPUs that allows a single CPU core to work on two tasks concurrently. The idea is to keep the CPU core busy by giving it a 2nd task to run when the core would otherwise be idle waiting on a physical memory fetch (after a local cache miss).
Hyperthreading provides a modest increase in performance under typical workloads (usually 5% to 20% increase over the same workloads on the same CPUs with Hyperthreading turned off).
Hyperthreading is especially useful when the VMkernel uses it to provide some CPU service to low priority VMs or VMs that would otherwise just run their Idle task (because they have nothing better to do).
If you use PC Servers powered with Intel CPUs, you should:
- Verify that Hyperthreading is available on your CPU

- Verify that Hyperthreading is turned on in your physical machine's BIOS

- Verify that ESXi recognizes that Hyperthreading is available and that ESXi will use Hyperthreading

Enable / Verify Hyperthreading
02:59

ESXi uses memory in 2 ways:
1. For the VMkernel hypervisor (approximately 40MB), and

2. For virtual machines (all remaining RAM).


ESXi 6.0 needs a minimum of 4GB of RAM or it will refuse to install. Adding more RAM means more room for VMs to run, which should result in good performance as your VM population and RAM requirements grow.
ESXi is very frugal and hands out memory to VMs only when needed and only for as long as needed. We will explore ESXi memory scavenging techniques later in this class.
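Tip – You can review installed physical memory from the shell (a sketch):

# esxcli hardware memory get

This reports total physical memory and the NUMA node count for the host.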

Physical Memory Properties
02:02

ESXi uses Network Time Protocol to ensure that its clock remains accurate. This is important because the ESXi host provides clock services to all VMs it runs. So, any clock drift in the ESXi host will result in clock drift in VMs. If VM clocks drift by more than 5 minutes they may not be able to join or remain members of Active Directory domains.


Click the Properties... link to review and configure NTP.


Best Practice

Always set your server's BIOS clock to UTC. That way, VMs will get a UTC clock and can then set their local time zone to any region they like.
If you set the hardware clock to your local time, then VMs must all operate in your local time zone only (because they cannot calculate time zone offsets from any time zone other than UTC).
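Tip – NTP is normally configured through the vSphere Client, but assuming shell access, the NTP configuration lives in /etc/ntp.conf and the service can be restarted from the command line. A sketch:

# cat /etc/ntp.conf

# /etc/init.d/ntpd restart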

Review/Set Time Configuration
02:06

ESXi installs with an unrestricted use 60-day evaluation license. This eliminates the need to contact VMware for temporary evaluation licenses.
ESXi can be activated using a standalone host license. A host license is issued on a host by host basis and unlocks access to feature entitlements purchased for that host. Alternatively, ESXi can draw a license entitlement for needed features from vCenter.

Licensed Features in ESXi 6.0
02:28

The vSphere Client can report on most aspects of your system's hardware health including:
- CPU sockets, cores and cache size

- Power supply, motherboard, CPU and add-on card temperatures

- Fan location, health and speed

- Hardware firmware and driver health including chipset, NIC, storage controller, BIOS functionality

- Power supply count and health (connected, disconnected, missing, etc.) and

- System boards.


Use this view to get a quick assessment of your server's physical health.

System Health Status
01:33

Observed IP Ranges – This value displays the IP address range observed by ESXi as frames flow through each physical NIC. Here's what it's used for.
In most corporate networks, different physical LAN segments are used to isolate different types of traffic such as Production traffic, storage traffic, management traffic, back up traffic, etc. It is a common practice to use different sub-net address blocks for each physical segment.
For example, your company may subnet its network traffic as follows:
10.1.0.0/16 – Production traffic including servers

10.2.0.0/16 – Desktop PCs and printers

172.16.0.0/16 – Management LAN segment for direct PC server management

192.168.50.0/24 – Back Up LAN

192.168.100.0/24 – IP Storage LAN (for iSCSI servers)


In the above scheme, if a physical NIC reported Observed IPs in the 10.1/16 range, you would know it was physically connected to the Production LAN. If another physical NIC reported Observed IPs in the 192.168.50/24 range, then it should be used to carry back up traffic.

Physical Network Adapters
02:21

It is important that your management network settings are correct. After installation, it is a good idea to review these settings and fix any errors you find.
Click Properties... to edit network settings for the management network. You may need to reboot your ESXi host before these changes take effect.

DNS and Routing Settings
00:51

ESXi System Logs
02:47

Sizing ESXi CPU, Memory
06:10

Sizing ESXi Storage, NICs
05:47

RDP, Web Remote Lab Access
01:10

Lab – Install ESXi 6.0
00:45

In this video I show you how to easily download the correct version of vSphere client for any ESXi host or vCenter Server. We'll use vSphere Client 5.x to reach out to the enterprise class PC server (running ESXi 5.5) that we will use to access our virtual lab environment.

Preview 05:35

In this video, I take you through the install steps to power on your ESXi host, boot from the ESXi 6 installer and complete a fresh install of ESXi 6. Once the install is complete we reboot our ESXi host and allow ESXi 6.0 to boot, initialize and get ready for service.

ESXi HowTo - How to Complete a Base Install of ESXi 6.0
12:34

In this video, we will use the Direct Console User Interface to set base ESXi host properties.

ESXi HowTo - Using the DCUI to set ESXi Host Networking Properties
09:51

The next step is to log back into the DCUI to confirm that our network settings are correct and that they work as expected. We will do that by using the Test Management Networking feature to ping the local gateway, the local DNS server(s) and by doing a Name-to-Address DNS lookup.

Next, we will review Troubleshooting options such as ESXi Shell (local command line console) and SSH (encrypted network command line sessions). These services can be used to administer and troubleshoot your ESXi host.
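Tip – The same tests can be run by hand from the shell, if enabled. A sketch (the gateway address and host name are placeholders):

# vmkping 192.168.1.1

# nslookup esxi01.example.com

vmkping sends pings through the VMkernel network stack; nslookup verifies DNS name resolution.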

ESXi HowTo - Using The DCUI to Test Management Networking
05:30

In this video we take a brief tour of the ESXi host via vSphere Client. We check out the Configuration tab and use the Hardware and Software box to review the ESXi host's processors, memory, network adapters, storage controllers, storage volumes and available licensed features.

ESXi HowTo - Tour of ESXi Hardware and Software Settings
13:38

In this video we see how to create and test ESXi local user accounts, how to add ESXi to an Active Directory domain so that we can set domain account privileges, how to synchronize ESXi hardware clocks to a Network Time Protocol (NTP) time source to achieve accurate VM times and how to review and update the ESXi host CPU Power Policy.

ESXi HowTo - ESXi Local Users and AD Domain Users, Configure NTP, Balance Power
12:07

A short video on how to navigate to the system logs function, how to review the three ESXi system logs (hostd.log, vmkernel.log and vpxa.log) and how to create and download a log bundle

ESXi HowTo - Working with ESXi System Logs
04:16

ESXi Shell (host console command line access) and Secure Shell (network command line access) can be used to provide support and perform administration on an ESXi host. VMware recommends these services be disabled by default. If you decide to enable them, you will receive yellow warning banners in vSphere Client. In this video, I'll show you how to suppress these warnings while leaving ESXi Shell and SSH enabled.

ESXi HowTo - How to Suppress ESXi Shell and Secure Shell (SSH) Warnings
03:46

In this video I show you how to upload ESXi Host Client to an ESXi host, how to install it using esxcli, how to access an ESXi host using a web browser and how to navigate the host with Host Client.

Host Client is important because VMware is retiring vSphere Client - there will be no vSphere Client software for vSphere 6.5 (expected late 2016). So it is a good idea to install Host Client now so that you have the time to become familiar with it.
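Tip – The install step in this video uses esxcli's VIB installer. A sketch (the VIB path is a placeholder for wherever you uploaded the file):

# esxcli software vib install -v /tmp/esxui-signed.vib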

ESXi HowTo - Install, Configure and Use ESXi's new Host Client
10:47

Review & Questions
01:55
Virtual and Physical Networking
27 Lectures 01:29:29
Introduction to Virtual Networking
00:09

The next step in our virtual infrastructure build out is to look at virtual and physical networking. In this chapter we will examine the role virtual switches play and see how they are created, configured and up-linked to physical LAN segments.
Project Plan
01:02

Chapter Outline
01:06

Standard vSwitches

Standard network virtual switches are configured within individual ESXi hosts. They provide VM-to-VM networking, VM-to-physical networking, management traffic and VMkernel connections to NAS, iSCSI and VMotion networks. Standard network vSwitches are internal to each ESXi host and have no visibility to other hosts.


Distributed vSwitches

Distributed vSwitches are a new type of vSwitch that spans multiple ESXi hosts. Distributed vSwitches create the illusion of a single large, flat LAN segment that can be used for direct VM to VM networking regardless of the host on which each VM resides. In this manner, Distributed vSwitches greatly simplify network design and deployment.


Distributed vSwitches support VLANs. So, a single Distributed vSwitch can have multiple VLANs, and VMs can connect to the same VLAN across multiple hosts. This capability gives the virtual network administrator the ability to create isolated VLAN segments on top of a large, Distributed vSwitch.

vNetwork Switches
03:32

The above example shows a single ESXi host with multiple, independent standard vSwitches. Each vSwitch can be configured to carry any of Management, VMotion, iSCSI, NAS or VM network traffic. For a VM connected to one vSwitch to talk to a VM connected to a different vSwitch, network traffic would have to flow through a physical switch.

Standard vSwitches
02:17

Distributed vSwitches are software objects that emulate a standard layer 2 network switch – that is, they forward Ethernet frames by destination MAC address and switch port number (which is maintained by a MAC/Port table on the switch). Distributed vSwitches span multiple ESXi hosts and provide consistent network functionality across all VMs, etc. that are plugged into the distributed vSwitch.
A distributed vSwitch has a single common MAC table. It has a unified set of performance counters. Its configuration spans all ESXi hosts. This last property is especially helpful for VMotion because the VM will find exactly the same Port Group (configured exactly the same way) on any ESXi host that shares the distributed vSwitch.
Distributed vSwitches are created and managed with the vSphere Client. You must have vCenter to create and use a distributed vSwitch.

Distributed vSwitches
02:23

The VMkernel owns all hardware resources including NICs. The VMkernel has native NIC drivers for a limited number of physical NICs including the Intel EtherPro series and also the Broadcom NetXtreme family of NICs and others. Note that consumer NICs (Realtek, Via, SIS, etc.) are not supported by ESXi.
ESXi 6.0 supports Jumbo Frames (up to 9,000 byte MTUs rather than the standard 1,500 byte MTU). If your physical network switches and physical peer devices (iSCSI SAN, File Sharing appliances, routers, etc.) also support Jumbo Frames, you should see a substantial performance gain from this feature. The author has experienced iSCSI SAN performance increases of 5-40% with Jumbo Frames over standard (1,500 byte MTU) frames. (Yes, sometimes the performance improvement is negligible – so you have to test your environment to find out if it will benefit you.)
In vSphere 5.5, VMware introduced support for Mellanox ConnectX 40 Gigabit Ethernet NICs. VMware 6.0 supports up to 4 Mellanox Technologies InfiniBand HCA devices with the nmlx4_en driver provided directly from Mellanox. See http://www.mellanox.com

Physical Networking
03:54

Virtual machine hardware can include up to 10 virtual NICs. Virtual NICs are implemented in software that faithfully emulates hardware. VMware supports the Intel EtherPro MT Gb NIC – a native Gb NIC supported by most modern OS' – as well as the vmxnet3 virtual NIC.

Virtual NICs (vNICs)
03:54

Virtual NICs plug into Virtual Switches. Virtual Switches are software objects that emulate physical switches. They work by mapping NIC MAC addresses to switch ports. Like a physical switch, a virtual switch will, upon receipt of a frame, look up the port associated with that MAC address and forward the frame to that port.
For now, it is best to think of Virtual Switches as 'dumb', unmanaged switches. In reality they are anything but dumb and, as we will see later on, virtual switches contain security and redundancy capabilities that even the best physical switches lack.
vSphere 5.0 and higher also supports the Cisco Nexus 1000V software distributed vSwitch. This is a for-cost, add-on distributed virtual switch that behaves like a standard Cisco managed switch. Organizations that have standardized on Cisco managed switches will appreciate the ability to manage Cisco Nexus virtual switches with the same tools used to manage Cisco physical switches.
Configuration maximums from VMware's vsphere-60-configuration-maximums.pdf document. Search VMware.com for this document.

Virtual Switches
01:29

Virtual switches can be configured in three modes:


Internal Only

Internal only virtual switches interconnect virtual machines and create isolated VM to VM LAN segments. Use internal virtual switches whenever you need to create a DMZ, want to create a truly private test segment or whenever you want two or more VMs to network together at the fastest possible speed.


Uplinked

Like physical switches, virtual switches can be uplinked – but only to a physical switch. When you uplink a virtual switch to a physical switch, you create a larger common LAN segment across the switches. When you assign a physical NIC to a virtual switch, that NIC acts to uplink the virtual switch with the physical switch to which the physical NIC is connected.
The result is a larger LAN segment that contains both physical nodes (the physical switch and devices plugged into the physical switch) and virtual nodes (VMs plugged into the Virtual Switch). The result is a heterogeneous network of both virtual and physical devices all operating at Network Layer 2.


Teamed Virtual Switches

When a NIC is used to uplink a Virtual Switch to a physical switch, all virtual-physical network traffic must flow through that one NIC. This may limit performance and it creates a single point of failure. By adding more NICs to the virtual switch a NIC team is created that provides both improved performance and redundancy.

ESXi Networking
01:39

Internal/Isolated Virtual Switches

Internal or Isolated virtual switches create internal, private network segments for the exclusive use of Virtual Machines. These software-only devices forward packets between VMs that are plugged into the virtual switch.
Because the virtual switch is 100% software, none of the undesirable attributes of physical networking are present. There are no transmission errors, no collisions and, no network signaling speed limits.
The result is that all traffic on internal virtual switches is perfect (free of collisions, errors). The rate at which packets flow through virtual switches is determined by the speed of the host CPU and RAM (as virtual switches are software entities).
The result is that internal only virtual switches should be able to forward packets at a much higher rate of speed than a corresponding physical NIC.

Isolated Virtual Networking
03:03

Outbound Virtual Switches

Outbound virtual switches are virtual switches that own a physical NIC. Physical NICs assigned to a virtual switch act like an up link port because they act to connect the virtual switch to the physical switch.
On the virtual side, VMs must connect their vNIC(s) to a virtual switch in order to exchange network traffic. If the VM's peer is a physical device, the virtual switch will forward the packet to the uplinking physical NIC.
When a packet flows through the virtual switch to the physical NIC, the physical switch learns the MAC address of the VM and adds that MAC address to its MAC table. This is how the physical switch learns that there is a VM (or many VMs) behind the port used by the physical NIC.
So, when the physical switch receives a reply packet destined for the VM, it looks up the VM's MAC address in its MAC table and then forwards the packet to the physical NIC. The virtual switch runs the physical NIC in promiscuous mode. This allows the virtual switch to receive packets destined for many VMs through one physical NIC.
You can customize the MAC address of a virtual NIC. To do this, power down your VM > Edit Settings > NIC and replace the MAC address with one that suits your needs.
All Organizationally Unique Identifiers (first 3 bytes of a MAC address) are listed here: http://standards.ieee.org/develop/regauth/oui/oui.txt

Outbound Virtual Networking
05:24

Teamed Outbound Virtual Switches

One problem with outbound virtual switches is that all network traffic that flows between virtual and physical network nodes must flow through one NIC. This creates the potential for performance problems and a single point of failure. An easy way to resolve both problems is to promote the outbound virtual switch to a teamed virtual switch. This can be done easily by adding additional physical NICs to the virtual switch.
When a virtual switch is promoted to a team, it distributes network traffic (using various policies) throughout all NICs in the team. This provides the virtual switch with more bandwidth thereby reducing the chance that network traffic will bottleneck at the virtual switch.
You can assign up to 8 physical NICs to a team. NICs can be hot added (while the switch is in use) without the risk of packet loss. When a NIC is added to a virtual switch, the switch will rebalance the NIC team to distribute network traffic across all physical NICs.
NIC Teams also provide redundancy. If a NIC in a team fails (cable pull, switch port failure, NIC failure) the virtual switch will remove the failed NIC from the team and rebalance network traffic across the surviving NICs. This is completely transparent to VMs. VMware's implementation of NIC teaming is fully compliant with the 802.3ad link aggregation and Link Aggregation Control Protocol (LACP) standards.
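Tip – Promoting a vSwitch to a team can also be done from the shell by adding an uplink. A sketch (vmnic2 and vSwitch0 are example names):

# esxcli network vswitch standard uplink add --uplink-name=vmnic2 --vswitch-name=vSwitch0

# esxcli network vswitch standard list

The second command confirms the uplink roster of each standard vSwitch.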

Teamed Networking
07:58

Virtual Switch Properties

Virtual switches have two distinct sets of properties: what happens on the virtual side of the virtual switch (where VMs connect to the virtual switch) and what happens on the physical side of the virtual switch (on the physical NIC(s) assigned to the virtual switch).


Virtual Side

The virtual side of a virtual switch is perfect... no errors, no collisions and no wirespeed limitations. Packets are forwarded at host CPU and RAM speed and performance should far exceed the capabilities of physical networking.
Because virtual networking is implemented using host CPU and RAM, packets will move more quickly through your virtual switches if host CPU and RAM are not over committed. Also, faster host CPUs including larger caches may contribute to improved virtual networking performance.


Physical Side

Network traffic flowing between the virtual switch and uplinked physical switches, through assigned physical NICs, is limited by the realities of Ethernet networking. This includes the possibility of errors and collisions (on busy segments) as well as speed set by the negotiated (or assigned) signaling speed (10/100, GB or 10GB).

vSwitch Frame Forwarding
03:08

Multi-homed VMs

You can plug up to 10 virtual NICs into a VM. Because the NICs are not subject to failure or hardware speed limitations, there is no need to add virtual NICs to a VM for performance or redundancy purposes (i.e.: a virtual NIC team). The only reason to add another NIC to a VM is because you want to plug your VM into another LAN segment.
In the example above, the multi-homed VM could function like a Network Address Translation (NAT) firewall, forwarding some packets from the Production physical network to the protected (isolated) VM but filtering others. In this way, it is possible to protect VMs that run sensitive workloads from direct network access.
Because we are using a VM to protect our private LAN segment, we gain advantages that a physical firewall cannot offer. In our firewall VM, we could also run:
- Web and other proxy services to reduce the load on the protected VMs

- Intrusion detection software (IDS) to look for attempted malicious network packets

- Enhanced logging

- etc.
Furthermore, firewall VMs can be replicated easily, deployed at little to no cost and customized to meet your needs – making them useful for protecting your network from corporate backbone traffic or even the Internet. A great example of a simple virtual Firewall appliance is IPCop, available at www.ipcop.org

Multi-homed Networking
06:40

Virtual Switch Connection Types

Virtual switches support two distinct connection types; VMkernel Ports and VM Port Groups.


VMkernel Ports

The VMkernel uses its own network stack to implement VMotion, IP Storage and NFS access. Before using any of these services you must have (or create) a VMkernel port on the virtual switch that will connect you to the network peer (VMotion peer, iSCSI SAN or NFS server).
VMkernel ports are also used as management ports that connect the ESXi host to physical LAN segments. You created a VMkernel management port implicitly when you installed ESXi. You can add additional VMkernel ports as needed for VMware Fault Tolerance (real time replication of a VM for hot recovery purposes), additional management ports (on different LAN segments), etc.
Port Groups

A Port Group is a named collection of virtual switch ports that share common properties. VMs plug into port groups (and inherit the port group properties) rather than plugging into virtual switch ports directly. It is sufficient to associate a VM's NIC with a port group; the port group takes care of selecting the virtual switch port and setting its properties appropriately.

vSwitch Connection Types
02:07

Ports and Port Groups

As previously mentioned, VMs plug into predefined port groups. Port Groups can be created for any purpose and generally serve to assign VMs a common set of properties (Security, Traffic Shaping, NIC teaming and VLAN properties). That way, you don't have to assign these properties on a port by port basis (like physical switches).
Properties that can be set at the Port Group level include:
● VLAN tag ID

● Security settings

● Primary and stand by physical NICs

● Traffic shaping (rate limiting outbound network bandwidth)
VMkernel Ports

The VMkernel implements its own networking services through VMkernel ports on virtual switches. Every time you define a new service for the VMkernel you must have available (or create) a VMkernel port, on the appropriate virtual switch, through which the VMkernel can connect to its network peer.
VMkernel ports are shared. That is, a VMkernel port configured for VMotion can also be used for iSCSI SAN connectivity if the VMkernel can reach both physical peers on the same network, through the same physical NIC(s).

Port Groups
01:42

Add Network Wizard

The Add Network Wizard is the tool for changing your current virtual network configuration. You can launch the Add Network wizard as follows:
Inventory > Click your ESXi host > Configuration Tab > Networking > Add Networking...
The first question the Wizard asks is, what type of connection do you wish to define? Once you select the type of connection you wish to add, the Wizard adjusts so that you can supply the information needed to create your new network connection.

Add Network Wizard
01:25

Current Network Configuration

To view your current network configuration: Configuration > Networking
When naming your Ports and Port Groups, use only letters, digits, and the characters +, -, _ and blank.
Networking provides a pictorial view of your ESXi hosts current network configuration. This view is organized by virtual switches (named vSwitch#) and then by ports and/or port groups defined on each vSwitch.
You can add virtual switches with the Add Networking... link in the upper right hand corner of this view (clipped). You can review or edit the properties of any virtual switch by clicking the Properties... link beside each virtual switch.
Note the call out icons to the right of a vSwitch on this view. These icons, when clicked, pop up a window that provides Cisco Discovery Protocol (CDP) properties for the virtual switch. CDP is an industry standard protocol for querying switch properties. VMware has implemented a subset of CDP to make virtual switches more compatible with popular enterprise network management software. For this to work, it must be enabled on the ESXi command line and you must have your virtual switches uplinked to Cisco managed switches.
Clicking call outs to the left of a vSwitch Port or Port Group displays the configuration of the associated Port or Port Group.

Networking View
02:13

AKA – also known as
In vSphere 5.5, VMware gives you the ability to change the MTU for Ethernet frames. The default is 1500 bytes. This is compatible with all Ethernet devices but is not optimized for storage (iSCSI) over Ethernet. If your physical networking gear (NICs and switches) support Jumbo frames, you can now increase the MTU at the vSwitch level to allow for full 4k (4096 byte) or 8k (8192 byte) block transfers in frame.

Change vSwitch Properties
03:24

Create/Update a NIC Team
02:22

Cisco Discovery Protocol (CDP) Properties
CDP is a way for Cisco aware devices to either Advertise their properties to other devices, listen for other device property broadcasts or to perform both functions (Advertise and listen). You need to enable CDP support on the ESXi command line interface as follows:
# esxcfg-vswitch --set-cdp advertise vSwitch0

# esxcfg-vswitch --set-cdp listen vSwitch0

# esxcfg-vswitch --set-cdp both vSwitch0
Example 1 (above) enables vSwitch CDP advertising (of properties) to other CDP aware devices but won't listen. Example 2 will listen but not advertise. Example 3 will both listen and advertise CDP properties.
To get command line access, either log in to your physical machine console or ssh (perhaps using the Windows Putty application) to your ESXi host and log in as root with root's password.
Note that it may take some time before CDP information is fully updated once any of the above commands are run.
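Tip – On ESXi 5.x and 6.x, the newer esxcli interface should accomplish the same thing (a sketch, using vSwitch0 as above):

# esxcli network vswitch standard set --vswitch-name=vSwitch0 --cdp-status=both

# esxcli network vswitch standard list

Valid --cdp-status values are down, listen, advertise and both.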

vSwitch Properties
04:35

Physical NICs

You can use the vSphere Client to view the physical NICs installed in your ESXi server. To do this click the Configuration Tab > Network Adapters. What you see is a roster of NICs identified by:


- Make/Model of NIC. In the example above there are three Broadcom NICs

- Speed and Duplex setting for the NICs

- Configured – NIC auto-configured or forced to a specific setting

- vSwitch – the virtual switch that is using this NIC (or none)

- Observed IP ranges – Packet headers flowing through the NIC are examined and ESXi attempts to infer the (sub)network range of IP addresses being handled by the NIC. This is useful in determining which NIC is plugged into which network segments. Note that ESXi only looks at packet headers and not payload. No attempt is made to capture data or derive any information that would not be visible to any physical switch.
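Tip – The same roster is available from the shell (a sketch):

# esxcli network nic list

This lists each vmnic with its driver, link state, speed, duplex, MAC address and description.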

ESXi Physical NICs
02:25

Virtual Switch Rules
A physical NIC can be owned by only one Virtual Switch at a time. If you need more ports than the virtual switch provides by default (24), you need to change the size of the virtual switch (edit its properties) and then reboot the ESXi box.
Virtual switches cannot uplink to each other. The only reason to do this would be to add more ports to an existing virtual switch. The correct way to do this is to edit the virtual switch properties, change the number of ports to something larger and then reboot ESXi.
You must have at least one physical NIC for each separate physical LAN segment that you need to connect to. Alternatively, you can connect a virtual switch to a number of VLANs through a single physical NIC by port trunking at the physical switch.
Because NIC teams distribute traffic across physical NICs, all physical NICs in a team must be plugged into the same (virtual) LAN segment.
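As a command-line illustration of these rules (vmnic1 and vSwitch1 are hypothetical names), a physical uplink is linked to, and unlinked from, exactly one vSwitch at a time:
# esxcfg-vswitch -L vmnic1 vSwitch1

# esxcfg-vswitch -U vmnic1 vSwitch1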

vSwitch Rules
03:37

Networking Lab
01:20

In this Video HowTo, I will show you how to use vSphere Client to create, update and configure Standard Virtual Switches including:

  • How to change the name of a Port Group or VMkernel Port
  • How to review and change the properties of a vSwitch, Port Group or VMkernel Port
  • How to create additional Standard vSwitches
  • How to promote a vSwitch to a Physical NIC (pNIC) team
  • And, how to manually enable Cisco Discovery Protocol on a vSwitch
Networking HowTo - Create, Configure and Update Standard Virtual Switches
14:23

Review & Questions
02:18
NAS Storage and NFS Shares
25 Lectures 01:07:09
NAS/NFS
00:10

The next step in our virtual infrastructure build out is to look at NAS storage and NFS shares. In this chapter we will examine the role NFS datastores play and see how shares are created, configured and mounted by ESXi.

Project Plan
01:43

Network Attached Storage
00:42

Problems & Opportunities
02:22

Network Attached Storage (NAS)

NAS is a generic term for file shares. Of the many file sharing technologies available, the two dominant ones are SMB and NFS, and both are widely available. You can provide file sharing services on storage appliances (many Storage Area Networks offer NAS capabilities as an option). Please check your storage appliance product to determine:
- Whether NAS/NFS services are available

- The cost (if any) to enable the service

- Whether the vendor supports NFS version 3 or higher over TCP

- Whether NFS services are listed as supported on VMware's compatibility guides
Popular operating systems such as Windows or Linux can provide NAS shares. There are both technical & non-technical aspects to consider when selecting a NAS server.
ESXi supports more than 8 concurrent NFS shares, but only if you configure it to do so. To increase the number of concurrent shares:
- Log into ESXi with the vSphere Client

- Click your host > Configuration Tab > Advanced Settings (Software box)

- In the pop up, click NFS (left column)

- Scroll the right box, look for NFS.MaxVolumes and increase the value beyond 8

- Close all windows and reboot ESXi
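The same setting can also be changed from the ESXi shell (a sketch; 32 is just an example value, and a reboot is still required):
# esxcli system settings advanced set -o /NFS/MaxVolumes -i 32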

Network Attached Storage
02:13

NAS/NFS Uses

NFS shares can be used by ESXi as general purpose datastores. With NFS you can implement low cost, high density storage shares that can be used to hold useful content including:


ISO Images – Share your OS, application and utility images with multiple ESXi hosts


Templates – Templates are VM images that have been marked so they cannot be powered on. Templates are used to rapidly deploy virtual machines. To make a new VM from a template, the template is copied to a new location, its name is updated and guest OS customizations are applied.


VMotion – VMotion is hot migration of a VM from a source to a target ESXi host. A basic requirement for VMotion is that the VM must reside on a shared storage resource (such as an NFS datastore).

NAS/NFS Uses
03:38

NAS Options

While it is true that there are two dominant NAS protocols, VMware only offers one – NFS. The reason for this is that Microsoft (the de facto owner of the SMB protocol) does not provide sufficient information and services to allow third parties to certify compatibility with Microsoft's SMB implementation. For example, Microsoft does not
- Publish the details of their SMB standard

- Provide a test suite to validate a 3rd party implementation of SMB

- Certify 3rd party implementations of SMB

- Offer any guarantees that future changes will be compatible with 3rd party implementations of SMB


Consequently, all third party SMB implementations are best-effort implementations that may be incorrect and/or incomplete. Furthermore, Microsoft asserts intellectual property rights to SMB and has threatened patent defense of those rights.
Since VMware software is used in production environments where compatibility and compliance to standards is critical, the uncertainties surrounding SMB make it unsuitable for use in a Virtual Infrastructure implementation.
For more detail on the SMB protocol: en.wikipedia.org/wiki/Server_Message_Block

NAS Protocols
03:21

Network File System (NFS)

NFS was designed to provide low-overhead file sharing between UNIX NFS clients and UNIX NFS servers. Originally developed by Sun Microsystems, NFS has been given over to a vendor neutral standards body – the Internet Engineering Task Force (www.ietf.org). This organization updates and publishes the full NFS specification, provides test suites to validate NFS implementations, etc.
NFS server software can be downloaded at no charge from a number of sources:
- Included in the Windows Services for UNIX download from Microsoft

- Usually included in UNIX releases from major UNIX vendors

- Included with Linux, BSD, Mac OS/X and other UNIX derivatives or clones
For a list of VMware certified NFS servers/services, check the VMware Hardware Compatibility Guide (HCG).

Network File System
02:34

NFS Components

NFS includes both a server side (the machine offering the NFS shares) and a client side (the ESXi box wanting to connect to an NFS share).
NFS servers work by publishing a share. A share is a sub-directory on the server along with share options that dictate who can use a share and under what conditions. The details of setting up an NFS share are beyond the scope of this book. For more information consult your Linux/NFS server documentation.
ESXi acts as an NFS client. As such, it connects to the NFS server through the network and mounts a published NFS share. In order to connect to an NFS server the ESXi machine must have a virtual switch that includes a physical NIC that provides either direct or routed connectivity to the NFS server.
A VMkernel port must exist on that virtual switch so that the VMkernel can direct I/Os through the VMkernel port on the virtual switch to the NFS server.

NAS Components
02:45

Server Side NFS

This slide assumes that server side NFS is being set up on a RedHat style Linux server (RedHat Enterprise Linux, Fedora Linux, CentOS Linux). The mechanics of configuring NFS on FreeBSD, Solaris, UNIX and other systems will differ.
Linux NFS shares are defined by text records that identify:
/dir – the directory that is being offered as a share by the NFS server

IP – the IP address(es), (sub-)netblocks, domain(s), etc. that may use this share. Systems not on this list will have no access to the share

rw - This share is offered Read/Write. The other option is ro (read only). NFS shares used for virtual machines must be offered rw. NFS shares used for ISO images, templates, etc. should be shared read only.

sync – Important for read/write shares. This option tells NFS that any updates made to files in the NFS share are posted immediately (rather than being delay written to disk). This is slower but much safer.

no_root_squash – This option tells NFS that it is OK for the Linux root user to access the share. Normally the NFS server would not allow direct root access for safety and security reasons. Since all VMs are owned by root, access to NFS by all ESXi boxes is performed as root. This option is critical for correct operation.


Note: It is very important that there be no space between the round bracketed share options and the netblock specifier. A space changes how share options are applied.
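Putting those options together, a complete /etc/exports entry might look like this (the path and netblock are hypothetical); note that there is no space before the opening bracket:
/exports/vmstore 192.168.10.0/24(rw,sync,no_root_squash)

After editing the file, re-export the shares with:
# exportfs -ra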

Defining NFS Shares on Linux
05:19

The VMkernel uses its own TCP/IP networking stack to direct storage I/Os to the NFS datastore. To communicate with the NFS server, the VMkernel needs a VMkernel Port on a virtual switch that uplinks to a physical LAN segment that can reach the NFS server either directly (the NFS server is on the same LAN segment) or indirectly (a static route to the NFS server is available).
Before configuring NFS datastores, you must have a suitable VMkernel Port. NFS traffic can flow through an existing VMkernel port, or you may need to add a new vSwitch/VMkernel Port/NIC (connected to your isolated NAS LAN segment) to reach your NFS server.
To add a new VMkernel port for NAS/NFS connectivity:
- Verify that a suitable vSwitch exists (or make one if necessary)

- Ensure that the selected virtual switch has a NIC that up links to the correct LAN segment

- Click Properties and add a VMkernel port. The new VMkernel port will need a unique IP address on the NFS LAN segment. Complete the wizard.
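If you prefer the command line, the same steps can be sketched with esxcfg-vswitch and esxcfg-vmknic (the port group name and IP address below are hypothetical):
# esxcfg-vswitch -A NFS-Net vSwitch1

# esxcfg-vmknic -a -i 10.10.10.11 -n 255.255.255.0 NFS-Net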


Note: In the screen shot above, the NFS-iSCSI VMkernel port is not necessary because:

- there is already a Management Network VMkernel port

- the Management Network VMkernel port uplinks through the same NIC

- NAS/NFS traffic can flow through any VMkernel port

Isolated Virtual Networking
03:54

Once a suitable VMkernel port is available it is time to define the NFS datastore. To do this:
- Click your ESXi server

- Click the Configuration Tab

- Click Storage in the Hardware box

- Click the Add Storage... link in the upper right hand corner

- Click NAS/NFS as the desired storage type


This launches the Add Storage wizard
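The same datastore can also be added from the ESXi shell with esxcfg-nas (a sketch; the server IP, share path and datastore label are hypothetical), and the -l option lists current NAS datastores:
# esxcfg-nas -a -o 192.168.10.50 -s /exports/vmstore NFS01

# esxcfg-nas -l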

Define an NFS Share
00:41

To define an NFS datastore, we must supply:
- The FQDN or IP address of the NFS server

- The directory path being shared by the NFS server (case sensitive)

- Whether the share is Read Only or Read Write

- The local Datastore name for the NFS share


Note that ESXi does not validate the information provided. It is absolutely imperative that the information provided is accurate. Also be aware that unlike Windows shares, NFS does not broadcast its available shares and that there is no network Master Browser that collects network services information.


You can verify your NFS share connectivity information from a UNIX or Linux console:
- Log in as root to a machine on the same LAN segment as ESXi

- Ping the NFS server

- Enter the command: # showmount -e <IP of NFS Server>


If you get an RPC timeout error or no response then your share information is incorrect or a local firewall is blocking NFS. Otherwise you should receive a list of available shares on the remote host.
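A successful query returns the export list. For example (addresses and paths here are hypothetical):
# showmount -e 192.168.10.50
Export list for 192.168.10.50:
/exports/vmstore 192.168.10.0/24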

Define an NFS Share
02:52

The newly defined NFS share is now available. Note the Type column and also the share definition (IP:/path rather than the hardware path).
NFS Share in Storage Roster
01:36

When you unmount an NFS share, you are disconnecting the ESXi host from the share. You are not damaging or deleting the share itself or the files on it. Note that this is very different from deleting a VMFS volume – which deletes the partition and the file contents of the VMFS volume.
You can only unmount an NFS share if it is not being used (no files on the share are being referenced). If the share is in use, you will not be permitted to remove it.


Before you disconnect from an NFS share, you will have to:
- Power off or migrate any VMs that live on the NFS share

- Disconnect any ISO images in use by powered on VMs (that live on the share) and

- Disconnect any floppy images in use by powered on VMs (that live on the share)


Note: ESXi will NOT tell you which VMs are connected to an NFS share if you try to unmount a busy NFS share. It will simply throw up an error message that the share is in use. The detective work is up to you!

Unmounting an NFS Share
02:26

NFS and Multipathing
Currently, NFS v3 as used by VMware vSphere does not support multipathing. Multipathing means multiple (network) paths to the same resource. Multipathing could be achieved with NFS if you had multiple interfaces, each with its own IP address, on the NFS server; clients could then mount the same share through multiple IPs and take advantage of the added network bandwidth of each path.


Unfortunately, vSphere does not support this configuration. vSphere requires each NFS mount to use the same IP/FQDN and share path (e.g.: IP:/path/to/share). If you use different target IP addresses, vSphere considers these to be different NFS volumes – so VMotion would not work (VMotion requires the VM to be on a common shared volume addressed by the same IP:/path).

Troubleshooting NFS
02:20

NFS is a great entry level storage option. However, NFS won't scale to enterprise requirements, and it lacks many of the storage, management, backup, replication, multipathing and recovery options of shared SAN storage. Consequently, NFS is only appropriate for smaller production deployments, testing, QA, development, home labs, etc.
NAS/NFS Pros
05:03

NAS/NFS Cons
01:37

NFS 3 Multipathing, Speed
01:57

A NIC Bond is a network configuration that:
- uses two or more physical NICs

- sets one NIC as the active NIC and the second as a standby NIC

- creates a virtual interface that maps to the bond

- assigns an IP address to the virtual interface


NIC Bonds protect against a physical NIC failure or cable pull. If a failure happens, the Bond:
- shuts down the failed interface

- maps the virtual interface to the second NIC in the Bond


The advantage of a Bond is that you get continued network service in the event of a single component failure.
Note that a Bond does not do load balancing.
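On a Linux NFS server, an active-standby Bond like this can be sketched with iproute2 commands (the interface names and IP address are hypothetical; slave interfaces must be down before they are enslaved):
# ip link add bond0 type bond mode active-backup

# ip link set eth0 down && ip link set eth0 master bond0

# ip link set eth1 down && ip link set eth1 master bond0

# ip addr add 192.168.10.50/24 dev bond0

# ip link set bond0 up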

NAS NFS 3 Network Design
05:01

NFS 3 Performance & Reliability
01:27

NFS Best Practice
02:36

NAS/NFS Lab
00:53

In this Video HowTo, I will show you how to use vSphere Client to configure your ESXi host to use to NFS datastores. I'll also show you how to use the Datastore Browser to access the contents of a share.

NFS HowTo - Connecting ESXi to NFS Datastores. Browse with Datastore Browser
08:12

Review and Questions
01:47
Creating and Cloning Virtual Machines
35 Lectures 02:31:43
Virtual Hardware and Virtual Machines
00:10

Virtual Machines
00:38

Project Plan
00:36

A VMware ESXi virtual machine is a complete machine that consists of virtual hardware, an operating system and any applications.


Virtual Hardware

VMware creates virtual hardware (software that faithfully emulates real hardware). Virtualizing hardware provides many advantages including:
- Virtual hardware looks and functions like real hardware, so the guest operating system cannot tell it is not running on real hardware. This transparency ensures that there are few to no compatibility issues running operating systems and applications.

- Virtual hardware emulates popular physical hardware. Because VMware virtualizes popular physical hardware, a guest OS can identify the virtual hardware natively and use native drivers on it. This ensures a high level of compatibility with a wide range of operating systems.

- Virtual hardware is simple. VMware chose simple motherboard, network, video, SCSI and other hardware. This means that the guest OS can drive this hardware without the need for complex drivers or configurations.

- Virtual hardware maps to physical hardware. When a guest OS attempts an I/O against virtual hardware, that I/O is handed to the VMkernel to be completed by physical hardware. In this way the VM remains ignorant of the complexities of the real hardware.

Virtual Machines
02:47

All virtual machines include a common hardware base including:
- A virtual motherboard based on the Intel BX/ZX chip set

- A PS/2 keyboard controller

- A PS/2 mouse controller

- A single Floppy controller that can have one or two drives (one is the default)

- A single IDE controller (IDE Primary) that can connect 2 CD/DVD devices (Master and slave)

- An optional IDE controller (IDE Secondary) that can support two more devices

- An optional SATA controller with up to 4 virtual SATA ports

- A PCI video controller that takes up a PCI slot. This controller acts as a 2D or 3D video card

- Room for 4MB to 1TB of RAM

- The ability to accept 1-128 virtual CPUs (depending on vSphere license)

- 0-4 virtual SCSI HBAs with up to 15 virtual SCSI disks per controller

- 0-10 virtual Ethernet NICs

- 0-20 USB devices. ESXi supports USB 1.1, 2.0 and 3.0 compatible devices


SCSI HBAs (either LSILogic or BusLogic) can accept up to 15 virtual SCSI disks.


NICs are AMD PCNet/32 devices, Intel PRO/1000 (E1000) NICs, VMware vmxnet NICs or flexible (ESXi chooses the best NIC). They are used to connect to virtual LAN segments. Different NICs have different compatibility and performance characteristics. Generally you should use vmxnet3 vNICs unless your OS/application requires compatible hardware.

Virtual Hardware Version 11
04:50

The default configuration for your new VM assigns it a single Virtual CPU. Virtual CPUs represent discrete CPU resources and normally map to a physical CPU core or an Intel Hyperthreaded logical processor.

When a VM boots, the VMkernel presents one physical CPU resource to the VM for each virtual CPU the VM declares. So, when a uniprocessor VM boots, it runs with a single physical CPU resource. When a dual-processor VM boots, it runs with two physical CPU resources. And, when a quad-processor VM boots, it runs with four physical CPU resources (4 cores), etc.


The VMkernel partially virtualizes physical CPU resources as follows:

- Each single core physical CPU appears as one physical CPU resource

- Each dual core physical CPU appears as 2 independent CPU resources

- Each quad core physical CPU appears as 4 independent CPU resources


The VMkernel then assigns one (or more) of these CPU resources to a VM at run time to match the VM's declared virtual CPU resources.


A quick note on Intel's Hyperthreading. Hyperthreading is a CPU trick that leads an operating system to believe that a Hyperthreaded physical processor has two processing cores when, in fact, only one exists. Hyperthreading provided some modest performance benefits only on some Xeon processors.

Base Virtual Machine HW
03:02


Virtual CPUs Sockets, Cores
03:57

pCPUs and vCPUs
03:06

VMware vSphere supports multi-core vCPUs. This was actually permitted in vSphere 4.x, but the feature was not exposed in the vSphere Client; instead, you had to add the cpuid.coresPerSocket configuration parameter by hand.
Multi-core vCPUs let you get past the socket licensing limits of Windows desktop operating systems and of Windows Server Standard and Enterprise editions. For Windows VMs that need more than two physical cores' worth of cycles, you can now declare:
- 1, 2, 4 or 8 sockets

- 1-8 cores per socket (actual configuration determined by installed Guest OS)


This allows you to address performance issues by adding cores to a one socket VM without adding vCPUs as new sockets (which may incur licensing costs with some 3rd party software).
The maximum number of virtual sockets and cores you can have in a VM is dictated by:
- What the guest OS permits (e.g.: W2k8 Server Standard allows 4 sockets max)

- What your vSphere license permits (ESXi (free) and Standard: 8 vCPU cores, Enterprise: 32 vCPU cores, Enterprise+: 64 vCPU cores max)
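In the VM's configuration (VMX) file, the socket/core split is expressed with two parameters. A minimal sketch (the values are just an example: eight vCPUs presented as two quad-core sockets):
numvcpus = "8"
cpuid.coresPerSocket = "4"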

vCPU Sockets, Cores
03:04

The VMkernel owns all memory and hands RAM out to VMs on demand. When a VM is created, you declare the maximum amount of RAM the VM can use. This setting is passed to the VM's Guest OS through the VM's BIOS in the same way a physical machine's BIOS would report physical memory to an OS.
The VMkernel provides the VM with virtual memory that looks, to the VM, like physical memory. That is, from the VM's perspective, RAM appears to start at physical address zero and increase up to the declared RAM size.
While the VMkernel provides the illusion that the VM has a full allocation of RAM, the reality is that RAM is mapped into the VM's memory space dynamically – on first use. That way, a VM cannot hog memory simply by declaring it.
Since most VMs will not use all of their declared RAM, the VMkernel ends up holding back declared but unreferenced memory. This memory can be used to run other VMs, making it reasonable to boot and run VMs whose total declared memory is 1.2 to 1.4 times the physical memory size of your server.
To ensure effective memory utilization, do not over provision a VM with RAM. That is - declare what the VM really needs and no more.
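As a worked example of that rule of thumb: a host with 64 GB of physical RAM could reasonably boot and run VMs declaring roughly 64 x 1.2 = 77 GB to 64 x 1.4 = 90 GB of RAM in total, provided the VMs do not all touch their full allocations at once.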

VM Wizard - Memory
02:46

Supported virtual host bus adapter types:
BusLogic SCSI HBA – for legacy VMs like Windows NT/2000 and older Linux

LSILogic SCSI HBA – for newer operating systems like Windows 2003 and newer Linux. VMware supports both Parallel and SAS HBAs

SATA – for operating systems that do not support SCSI storage (or do not support it well)


Virtual SCSI HBAs look, act and function exactly like physical hardware, so a guest operating system can easily detect them and correctly select and initialize the right driver. However, when the guest OS performs I/Os against the SCSI HBA, those I/Os are handed to the VMkernel to be completed on real hardware. Consequently, there is no performance difference between the two virtual SCSI HBAs above.
SCSI HBAs are single bus, non-accelerated, non-RAID storage controllers. The reason for the lack of 'brains' is that all real storage management is performed by the underlying physical hardware so there is no need to recreate this functionality at the virtual hardware level.
Virtual disks are represented as files that live in a storage volume. A virtual disk can be preallocated to its declared size (so exercise care when sizing it) or Thin Provisioned – using only what it needs now and growing when more space is needed. Later we will see that it is easy to increase the size of a virtual disk to deal with any unanticipated storage growth.
Disk modes control Snapshot behavior. Disks in Independent mode are excluded from Snapshots. Independent disks can be Persistent (changes are committed to disk immediately and permanently) or Non-persistent (a redo log is created at power on and all changes are discarded at power off).


VM Wizard - Disk
04:54

VMware supports the Paravirtual SCSI controller in modern operating systems including:
- Windows Server 2003, 2008, 2012, Windows XP, 7 and 8.x

- Linux – RedHat Linux 5 & 6, SuSE Enterprise Linux 11, Ubuntu 10.04


The Paravirtual controller can be either a boot controller (not RedHat 5) or a controller for secondary storage volumes.
The Paravirtual controller is a 100% virtual device with no physical counterpart. VMware designed it for high throughput and low overhead. Benchmarks (see: http://longwhiteclouds.com/2014/01/13/vmware-vsphere-5-5-virtual-storage-adapter-performance/) show the Paravirtual controller at 20+% faster than the LSI Logic controller (with the same backing hardware) and 300% faster than the VMware virtual SATA controller.
The Paravirtual controller driver is not included with your OS. To add the driver during OS installation, when the installer prompts for a storage controller driver:
- Click VM > Edit Settings > Floppy Image in Datastore > vmimages > floppies
- Pick pvscsi-Windows2008.flp for Windows 7 & 8 installs

VMware Paravirtual Controller
04:07

When you complete the New Virtual Machine wizard, you specify additional properties for your VM. This includes:
- Number of virtual NICs and the network Port Groups to which each vNIC is attached

- The properties of each CD/DVD device connected to your VM

- The properties of each Floppy device connected to your VM


CD/DVD devices and floppies can connect to the ESXi host's physical CD/DVD or floppy device. They can connect to media images (ISO images or floppy images) of ripped media or they can connect to desktop devices (your PC's local CD/DVD device or floppy device).
The best thing to do for removable media is to run disconnected. When a virtual CD/DVD device or floppy is disconnected, it is not associated with any physical device or media image. If the guest OS queries the virtual CD/DVD or floppy device when it is disconnected, the virtual device will report that there is no media in the device. This setting is safest (no accidental boots of install media) and also the most efficient (lowest virtualization overhead).

Complete the Virtual Machine
03:18

The VMware Remote Console application is a Windows application that provides full console access to your VM. Similar to an IP based KVM (remote Keyboard, Video, Mouse device), the Remote Console lets you:
- Edit the virtual machine's properties (The VM menu item)

- Power manage your VM (power on/off, suspend/resume)

- Send a Ctl-Alt-Del (VM > Guest > Send Ctl-Alt-Del or hit Ctl-Alt-Ins)

- Interact with the VM's BIOS during boot


All network connections between VMware client software and VMware server software are handled through secure (encrypted) connections, so there is no security risk in interacting with your VM over the network.
For the best Remote Console experience,
- Keep the resolution of the guest OS reasonable (e.g.: 1024x768)

- Keep the color depth of the VM low (16 bits should be sufficient)

- Disable all screen savers, especially 3D screen savers (as these just burn CPU)

- Turn off screen effects like menu animations, etc.

Remote Console
02:33

VMware uses a licensed Phoenix BIOS for all VMs. The Phoenix BIOS has been trimmed to provide only functions needed by a virtual machine. For example, there is no place in the BIOS to monitor fans, CPU temperatures, processor voltage, etc.
The virtual Phoenix BIOS boots very quickly, making it hard to hit the F2 key (Setup) or the ESC key (boot menu) in time to activate the feature. If you have this problem, edit the VM's settings and adjust the Boot Options of the VM to force it to sit in the BIOS Power On Self Test (POST) screen for a desired number of seconds before continuing.
You can use the BIOS Setup screen (F2 key) to modify the VM's boot behavior, the BIOS date/time and other hardware properties. You can use the BIOS Boot menu (ESC key) to change the boot device for the current boot.

Virtual Machine BIOS
02:23

Once you have completed the New Virtual Machine wizard, the next step is to install an OS onto your new virtual hardware. It is a good idea to install your operating system through ISO images rather than physical media because:
- ISO images do not require physical access to the ESXi server's CD/DVD device

- ISO images deliver data 5-10x faster than a CD/DVD device

- ISO images cannot get lost, scratched, dirty, etc.


To boot off of your ISO image, edit the VM's settings, click the CD/DVD Drive 1 device, select Datastore ISO image and then Browse over to the ISO file you wish to use to install your operating system. Be sure to check the Connect at power on option to present this ISO to your virtual CD/DVD device at boot time.

Install Guest OS
02:16

VMware virtual hardware emulates popular physical hardware. Because the motherboard chipset, keyboard controller, mouse controller, SCSI HBA, NIC and other resources are based on very popular physical hardware, your guest OS should be able to identify virtual hardware without the need for additional (e.g.: 3rd party) drivers.
Proof of this is easy to establish. In the screen grab above, Windows Device Manager is displayed on a freshly installed Windows VM. A quick review of Device Manager's inventory shows that Windows has correctly identified all virtual hardware, selected the correct device drivers for that hardware and correctly initialized those drivers. The result is that you could run Windows without the need for updated drivers.
While stock Windows drivers are adequate, they are not optimal. VMware provides an enhanced driver set under the name VMware Tools. VMware Tools provides improved drivers for virtual Video, Mouse, NIC and SCSI HBAs. VMware Tools includes additional functionality that will greatly improve the resource efficiency of your ESXi server.
Because of the many benefits of VMware Tools, it is recommended that you install VMware Tools into all guest operating systems. In fact, many VMware shops go so far as to establish a policy that states:
If VMware Tools is not available for the Guest OS then we will not allow the virtualization of that operating system.

VM Running with Stock Drivers
01:47

VMware Tools provides a set of virtual hardware specific drivers built specifically for your Guest OS of choice. VMware Tools is available for:
- Windows NT 4, 2000, 2003, XP, Vista, Windows 2008/2012, Windows 7/8, etc.

- Many Linux releases including RedHat Enterprise Linux, SuSE Linux and Ubuntu

- Solaris 8,9,10,11

- FreeBSD

- Netware 5.1 and 6.x

- SCO OpenServer 5 and SCO UnixWare


VMware Tools includes additional drivers that improve your virtual machine experience. These include:
- A Heart Beat driver that continuously reports your VM's health back to the VMkernel

- A File System Synchronization driver. This driver resides in the Guest OS but is under the control of the VMkernel. At the VMkernel's request, the Synchronization driver will force the Guest OS to post all pending writes to disk. This is usually performed just before Snapshotting a VM and is used to ensure the integrity of the virtual disk

- A Guest OS Busy/Idle indicator. The VMkernel VM scheduler uses this status to determine if the VM is actively running tasks (busy) or running the VM's idle task (idle). If the VMkernel is told that the VM is idling, then the VMkernel will reduce the VM's scheduling priority (as the VM would just waste whatever cycles it would receive).

- A Memory Management driver that is officially called the vmmemctl driver (but unofficially called the Ballooning driver). This driver lets the VMkernel take back any over allocation of RAM the VM happens to have without negatively impacting the VM

VMware Tools
02:47

ESXi now supports USB pass through to physical USB devices. Before a VM can use a USB device, you must add a USB controller to the VM. You can hot-add USB controllers (depending on Guest OS support).
Once you've added a USB controller, you can connect to physical USB devices like security dongles and USB storage devices. A USB device can be used by only one VM at a time; but if it is disconnected from one VM, it can be connected to a different VM.
VMware provides USB Passthrough capabilities for USB devices. This means that you can VMotion a VM to a new ESXi host and it will still be able to use the USB key assigned to it on the original ESXi host. This also works for DRS. However, USB pass through does not work for VMware HA (because the original host may have failed) or Fault Tolerance (again because the original host may have failed). Also, USB pass through does not work for Distributed Power Management because DPM may power off the host with the USB key attached.
You can add a USB Device to your VM only after you've added a USB controller to the VM. Furthermore, you can only add a USB device to a VM if:
- there is a physical USB device in your server, or

- there is a physical USB device on your PC running the vSphere Client
Otherwise, the USB Device add function is disabled!

USB Virtual Device Support
01:43

USB 2.0, 3.0 Device Support
01:29

Simple Windows changes can improve VM responsiveness and prevent the waste of CPU cycles. It is a good practice to turn off animations, fades and other windows transitions (for menu functions, opening/moving windows, etc.) that look nice but chew up bandwidth and CPU. Since CPU is now a shared resource and fancy screen updates chew up network bandwidth, turning them off should improve VM responsiveness.
Here are some performance saving suggestions:


Turn off Screen Animations, etc. (Windows 2003)

The steps below will turn off most screen effects (shadows, animations, etc.): My Computer (right click) > Properties > Advanced > Performance / Settings > Adjust for best performance


Medium Color Depth

Reducing the color depth (bits/pixel) can improve screen refresh times and cut network bandwidth in half: Background (right click) > Properties > Settings > Color Quality > 15 or 16-bit > OK


Screen Saver

Screen savers chew up cycles keeping virtual screens up to date. It can cost you 50+ MHz per VM to keep a screen saver going. Never use 3D screen savers (pipes, etc.) as they can be 10x as expensive as simple screen savers. To disable: Background (right click) > Properties > Screen Saver > Select Blank > OK

Windows Performance Tips
03:45

Snapshots are a great tool that greatly facilitates testing, development, patching, configuration change testing etc. because you can always back out of a Snapshot if you don't like what the change does to your VM.
Snapshots can either capture the virtual disk state only or both the virtual disk state and the current VM's RAM state. By capturing the VM's RAM state, you can revert the VM back to the saved state discarding all changes to both disk and RAM.
Snapshots do have overhead, so don't run production VMs with active Snapshots. To minimize Snapshot overhead, try to keep the Snapshot volume under 1GB. If the Snapshot volume (the disk that holds the changes to your virtual disk) grows beyond 1GB, or if more than one Snapshot is active on your VM, the performance of your VM may degrade.


Snapshot Rule

If you can't afford to lose data in a snapshot – then commit the snapshot!

Virtual Machine Snapshots
07:54

How Snapshots Work
03:21

VMware's Snapshot Manager is the tool for managing Snapshots. With the Snapshot Manager, you can:
- Commit a Snapshot (Delete)

- Revert back to a past snapshot throwing away changes (Go to)

- Fork a Snapshot (for two or more sub-Snapshot branches)


The Snapshot Manager is a very flexible tool, supporting up to 32 active Snapshots on a single VM.

Snapshot Manager
02:41

By default, VMs live in a sub-directory that matches the VM's name. This may change if the VM was copied, cloned, etc.
The primary file for describing a VM is its VMX file (i.e.: VMname.vmx). This is a text file that records the properties of the VM, such as...
- What hardware is present

- How is each device configured

- Any special properties assigned to a device (e.g.: a virtual NIC's MAC address)

- Any special tuneables set for the VM

- Etc.


You can review and even edit this file with a Linux based text editor. Be careful if you do, because if the format of the file is not honored (e.g.: it contains missing or invalid characters), you may break your VM.
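For illustration, a few typical VMX entries look like this (the names and values here are hypothetical, not from any particular VM):
displayName = "Web01"
guestOS = "windows8srv-64"
memsize = "4096"
numvcpus = "2"
scsi0.virtualDev = "pvscsi"
scsi0:0.fileName = "Web01.vmdk"
ethernet0.virtualDev = "vmxnet3"
ethernet0.generatedAddress = "00:50:56:9a:bc:de"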
Some files are only present if the features they support are being used. For example:
- a VMname.vswp file is created at VM boot time to provide storage for VMkernel paging. If the VM is powered off, this file will not be present

- a VMname.vmsd file is present whenever snapshot(s) exist on a VM. If there are no snapshots, this file may be empty or missing

- Other files will be created/used as needed


The purpose of all VMware VM files can be found here: http://www.vmware.com/support/ws5/doc/ws_learning_files_in_a_vm.html

Powered Off VMs
04:04

The Datastore Browser is a special file browser created especially for VMFS and NFS datastores. The Datastore Browser has limited functionality (from the perspective of a general file manager such as Windows Explorer or Linux's Nautilus) but it does include functions suitable to working with VMs.
Perhaps the most useful function provided by the Datastore Browser is the ability to Import (take ownership of) a powered off VM.
For example, if you had a VM whose files live in a shared datastore and that VM failed because the host it was running on failed (e.g.: hardware failure), you could easily recover the VM as follows:
- Point the vSphere Client at another ESXi box or VirtualCenter

- Click on an ESXi host

- Click on the Configuration tab

- Click the Storage link

- Right click the Datastore name that holds the failed VM

- Launch the Datastore browser

- Find and go into the VM's unique sub-directory

- Right click on the VM's VMX file and select Add to Inventory...

- Complete the Import VM wizard to assume ownership of the VM


Note you cannot assume ownership of a powered on VM because of the presence of VM locks. These locks are removed when a VM powers off or crashes.

Take Ownership of a VM
03:14

A powered off virtual machine is stored as a number of files in a data store. By default all of the constituent files for a VM live in a sub-directory (the VM's name). Consequently, tasks that may be very difficult on a physical machine become very simple on a virtual machine. For example:
If you copy all of a VM's files into a new directory, you can effectively clone that virtual machine creating a complete image of the VM's virtual hardware, virtual disks, etc.
If you copy the VM's constituent files to near line storage (e.g.: a high-density SATA SAN LUN or storage device), you create a full image backup of the VM (which includes the virtual hardware, disk, configuration, etc.). This image could then be restored on another ESXi box, moved to your Disaster Recovery site, or used to create training, testing, development and other environments that exactly match the original VM.
If you move a VM's directory and files to a new LUN you are effectively cold migrating that VM to a new location. This would be useful in a non-vCenter environment where you wanted to get a VM off of local server storage and on to shared SAN/NAS storage.
Note: You can only file copy powered off VMs. Powered on VMs have read/write file locks that prevent you from copying or editing a VM's .vmdk files and .vswp files.
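From the ESXi shell, such a copy is an ordinary recursive file copy. A sketch (the datastore and VM names are hypothetical, and the VM must be powered off first):
# cp -r /vmfs/volumes/datastore1/Web01 /vmfs/volumes/NFS01/Web01-copy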

VM File Copy
03:02

Because a VMware virtual machine starts with virtual hardware, VMware supports a wide range of popular Guest OS's including Windows, Linux, Solaris, FreeBSD, SCO and OS X.
The VMkernel is a 64-bit hypervisor that is capable of running 32-bit and 64-bit guest operating systems on hardware that supports 64-bit instructions (AMD Opterons and Intel EMT64 or newer Xeons).
VMware Tools is available for all supported guest operating systems. If an operating system is not supported then VMware Tools usually isn't available (exception is Linux where you can custom compile VMware Tools for unsupported releases).
For the best overall virtualization experience including scalability, performance, etc., it is recommended that you only run guest operating systems with VMware Tools installed.

Supported Guest OS'
02:25

Create, Configure a VM Lab
01:32

In this video HowTo, I'll show you how to create a new Virtual Machine. I'll talk about the various factors you need to consider when specifying the CPU and memory size of your VM, how to select virtual NICs for compatibility or performance, how to set your virtual disk type and size, why the VMware Paravirtual controller is your best choice for a SCSI Host Bus Adapter and how to specify snapshot settings for your new VM.

VM HowTo - How to Create a New Virtual Machine
10:30

In this video HowTo, I'll show you how to install Windows Server 2008 into our newly built VM. Then, once Windows has booted up, I'll show you how to improve your VM's responsiveness by replacing Microsoft's stock drivers with the VMware Tools driver set.

VM HowTo - How to Install Windows Server 2008 and VMware Tools into a new VM
18:16

In this Video HowTo we will Repair VMware Tools so that it works perfectly, Install QPI (CPU load generator) for future lab use, Install BGInfo to paint key VM properties on our desktop wallpaper, and we will see how to give up ownership and retake ownership of a powered off VM.

VM HowTo - Repair VMware Tools, Install QPI & BGInfo and Take Ownership of a VM
17:13

In this video HowTo I'll show you how to use the Snapshot Manager to take VM snapshots, to review them and then how to commit or delete them.

VM HowTo - Working with VM Snapshots
07:14

OVF (Open Virtualization Format) and OVA (Open Virtualization Archive) format VMs are easily exchanged between VMware's infrastructure (vSphere) and desktop (VMware Player, Fusion, Workstation) environments. You can easily export VMs in either format, copy them to a remote site or place them on a Windows file share for other people to use. You can import these VMs using a simple wizard that creates a new VM in your target environment from the contents of the OVF / OVA file(s).

In this HowTo, I'll show you how to export a VM in OVA format, what an exported OVA format VM looks like to Windows and then how to import the OVA back into your vSphere 6 environment. I'll end up by powering on the new VM and demonstrating that our new VM works perfectly.

VM HowTo - How to Export and Import OVF / OVA Format VMs
12:23

Review and Questions
01:56
About the Instructor
Larry Karnis
VMware vSphere Consultant/Mentor, VCP vSphere 2, 3, 4 and 5

Get VMware vSphere and View trained here... on Udemy!

What do you do if you need to learn VMware but can't afford the $4,000 - $6,000 charged for authorized training? Now you can enroll in my equivalent VMware training here on Udemy!

I have created six courses that together offer over 32 hours of VMware vSphere 6 lectures (about 8 days of instructor-led training at 4 hrs of lecture per day). With Udemy, I can provide more insight and detail, without the time constraints that a normal instructor-led training class would impose. My goal is to give you a similar or better training experience - at about 10% of the cost of classroom training.

I am an IT consultant / trainer with over 25 years of experience. I worked for 10 years as a UNIX programmer and administrator before moving to Linux in 1995. I've been working with VMware products since 2001 and now focus exclusively on VMware. I earned my first VMware Certified Professional (VCP) designation on ESX 2.0 in 2004 (VCP #: 993). I have also earned VCP in ESX 3, and in vSphere 4 and 5.

I have been providing VMware consulting and training for more than 10 years. I have led literally hundreds of classes and taught thousands of people how to use VMware. I teach both introductory and advanced VMware classes.

I even worked for VMware as a VMware Certified Instructor (VCI) for almost five years. After leaving VMware, I decided to launch my own training business focused on VMware virtualization. Prior to working for VMware, I worked as a contract consultant and trainer for RedHat, Global Knowledge and Learning Tree.

I hold a Bachelor of Science in Computer Science and Math from the University of Toronto. I also hold numerous industry certifications including VMware Certified Professional on VMware Infrastructure 2 & 3 and vSphere 4 & 5 (ret.), VMware Certified Instructor (ret.), RedHat Certified Engineer (RHCE), RedHat Certified Instructor (RHCI) and RedHat Certified Examiner (RHCX) as well as certifications from LPI, HP, SCO and others.

I hope to see you in one of my Udemy VMware classes... If you have questions, please contact me directly.

Thanks,

Larry

Larry Karnis