vSAN Basic Architecture

A free video tutorial from Rick Crisci
Making Cloud, Dev, and Virtualization easy to understand

Lecture description

Understand the basic concepts and networking requirements of an ESXi host cluster. Learn about the vSAN VMkernel port and what it is used for. Understand how vSAN objects are mirrored across multiple hosts for redundancy. Learn about the Hybrid and All-Flash vSAN architectures.

Learn more from the full course

Clear and Simple VMware vSAN 6.7 (Virtual SAN)

Configure, manage, troubleshoot, and optimize vSAN in your VMware vSphere environment

03:07:25 of on-demand video • Updated December 2020

Configure, Monitor, Optimize, and Design VMware vSAN deployments
Create shared storage for vSphere Clusters using the local capacity of ESXi hosts
In this video, I'll walk you through some of the basic architecture of vSAN, and we'll start with the most basic piece: the host cluster. A cluster is simply a logical grouping of ESXi hosts. Let's say you have a group of ESXi hosts and you want to allow virtual machines to automatically fail over to another host if their host fails; that's High Availability, and we have to create a cluster in order to enable High Availability. We also have to create a cluster in order to enable DRS, which lets virtual machines automatically get vMotioned from host to host for load-balancing purposes. Those are a couple of features that require a host cluster in order for us to enable them, and another feature that requires one is vSAN. So step one of setting up vSAN is to create an ESXi host cluster; that's going to be the very first step in our process.

Now, that being said, there are some prerequisites. We have to be at the right version of vSphere, we have to have the right version of vCenter, and we also have to have supported hardware. And we also need to set up some VMkernel ports. On each of these ESXi hosts, you can see we've got a couple of things going for us. Let's focus on ESXi-01 for a moment. ESXi-01 has two vmnics, and a vmnic is a physical Ethernet port on the ESXi host. So this host has two physical Ethernet adapters; let's say that they're 10-gigabit-per-second Ethernet adapters, and each one of these physical adapters is connected to a different physical switch. You can say the same thing about ESXi-02 and the same thing about ESXi-03, so all three hosts have this in common: they have two physical 10-gig vmnics, and each of those vmnics is connected to a different physical switch. And then what we've also done on each of these ESXi hosts is we have created a VMkernel port, and we have tagged that VMkernel port for vSAN traffic.
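To make those prerequisites concrete, here's a minimal sketch in Python that models the checks described above: every host in the cluster needs redundant physical uplinks and a vSAN-tagged VMkernel port before vSAN is enabled. The host names, attribute names, and the check itself are illustrative assumptions, not VMware's actual API.

```python
# Hedged sketch (not VMware's API): modeling the vSAN networking
# prerequisites from the lecture. Names and logic are illustrative.
from dataclasses import dataclass

@dataclass
class Host:
    name: str
    vmnics: list            # physical uplinks, e.g. ["vmnic0", "vmnic1"]
    vmk_vsan_tagged: bool   # VMkernel port tagged for vSAN traffic

def cluster_ready_for_vsan(hosts):
    """Each host needs >= 2 uplinks (switch redundancy) and a
    vSAN-tagged VMkernel port before vSAN can be enabled."""
    problems = []
    for h in hosts:
        if len(h.vmnics) < 2:
            problems.append(f"{h.name}: no redundant uplink")
        if not h.vmk_vsan_tagged:
            problems.append(f"{h.name}: no vSAN VMkernel port")
    return problems

hosts = [
    Host("esxi-01", ["vmnic0", "vmnic1"], True),
    Host("esxi-02", ["vmnic0", "vmnic1"], True),
    Host("esxi-03", ["vmnic0"], False),   # misconfigured on purpose
]
for problem in cluster_ready_for_vsan(hosts):
    print(problem)
```

In a real environment you would tag the port with the vSphere client or `esxcli`; this sketch only captures the idea that both redundancy and the vSAN tag must be in place on every host before the cluster qualifies.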
So if you're not really familiar with VMkernel ports, what this basically means is we've created this little port, we've given it an IP address, and we've said, hey, if there is traffic related to vSAN, if a virtual machine needs to transmit vSAN traffic from host to host, use this VMkernel port. We have to have that network under the surface in order for vSAN to work properly, and we'll see it in action here in a couple of slides.

Now, one final thing that I want to note in regards to this network that I've shown you here: there are a couple of design best practices that I have incorporated. Number one, I've got physical redundancy. If either of these switches fails, there is still another switch up and running that can be used to pass all of the necessary traffic. I've also got nothing else connected to these switches; this is a dedicated physical network, specifically just for vSAN traffic.

OK, so how are my virtual machine objects actually stored, and how do these VMkernel ports come into the picture? Here we see VM1, and VM1 is one of my virtual machines that is stored on vSAN. As VM1 has reads or writes that need to be executed, they are going to be pushed over the physical network, using this vSAN VMkernel port, to the appropriate destination host. So here we can see the active VMDK for this particular virtual machine, and there's also going to be another copy of the VMDK over here. This is a mirror copy, just in case the primary copy is on a host that fails. So the vSAN VMkernel port is there to basically handle all of the traffic that's going to have to flow over this vSAN network. The virtual machine is running on one host; its virtual disk is on another host. So when it wants to read and write to and from that virtual disk, we're going to leverage a VMkernel port to push that traffic over the network.
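The routing decision described above can be sketched in a couple of lines: an I/O only crosses the vSAN VMkernel network when the VMDK replica lives on a different host than the one running the VM. The function name and host names are illustrative assumptions.

```python
# Hedged sketch: a VM's reads/writes traverse the vSAN VMkernel
# network whenever the object replica lives on a different host
# than the one running the VM. Names are illustrative.
def route_io(vm_host: str, replica_host: str) -> str:
    """Decide the path an I/O takes to reach a VMDK replica."""
    if vm_host == replica_host:
        return "local"                    # replica on the same host
    return "vsan-vmkernel-network"        # cross-host, over the vmk port

print(route_io("esxi-01", "esxi-02"))   # VM on 01, active VMDK on 02
```

This is why the dedicated, redundant physical network matters: in the common case the data path for every read and write runs through it.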
And hopefully what will end up happening is that the majority of the read operations will be satisfied by our flash capacity. What we see here is something called a hybrid configuration. We're going to have a lesson that breaks down the difference between hybrid and all-flash, but for the moment we're focused strictly on what we call the hybrid configuration. So what does that mean? Well, on each of these ESXi hosts we have some traditional magnetic storage devices; these are what we call our capacity devices. We've got traditional hard disks, and then we've also got a cache here, which is SSD, and the SSD is a lot faster than the traditional hard disks. So on each of these hosts, I've got these big capacity devices, these hard disks that are going to store a whole lot of data, and then sitting in front of them I've got this cache of SSD, which is much faster and more expensive.

So now let's look at what happens when virtual machine 1 wants to read some sort of data from its virtual disk. The VMkernel port is used to push that read over the physical network, and it eventually hits the destination host where its active VMDK resides. And look what's happening: it's hitting this SSD on host ESXi-02, and you notice it's happening very quickly. This read is happening very fast; it's hitting the SSD, and the SSD is acting as a read cache. The purpose of the read cache is to store the most frequently read data on SSD. 70 percent of this SSD is going to be dedicated to read cache, and a copy of all of the most frequently read data is going to be located in that SSD. There's also a copy of that same data, along with a whole lot of other data, here on this capacity device. But the hope is that when data is read from the VMDK, most of the time the data will get read from that SSD, because it's so fast. If the data is not present on the SSD, that's what we call a cache miss, and you can see this read operation is happening much more slowly.
The virtual machine needed some sort of data that actually was not present in the read cache, and therefore the data had to get served up by the capacity device. In the hybrid configuration, our capacity device is a hard disk, and so this read is going to be much slower than a read from SSD.

How about writes? We've been talking about reads so far; what if my virtual machine needs to write some sort of data to disk? Well, here's the first thing we have to consider. Number one, there are multiple copies of this VMDK. This virtual machine has one copy of the VMDK on ESXi-02, but we have to prepare for the possibility that ESXi-02 could fail. So in this case, another copy of that VMDK is being mirrored to ESXi-03, and that way, if ESXi-02 fails, my virtual machine's data is not lost. So when the write occurs, here's what's going to happen: when the virtual machine needs to execute a write, the write is going to be sent to both of those ESXi hosts. It's going to be mirrored. If you're familiar with RAID, this is very similar to the way that writes are mirrored across a RAID array. One copy of the data is sent to each of these ESXi hosts; that way, they both always have a current version of that virtual machine's VMDK, just in case one of the hosts fails.

And the other thing that you may notice here is: watch this write. It's going to hit the SSD first. That's what we call the write buffer. Any time these virtual machines that are on vSAN need to write some sort of data, the writes are carried out against the write buffer, and 30 percent of my SSD is dedicated to being a write buffer. I sort of equate this to checking a book back into the library. If I want to check a book into the library, I can just walk in, drop it on the front desk, and I'm done. The librarian is going to take that book and reshelve it; they're going to do the hard work, the time-consuming work.
My experience is that I just simply drop it on the desk and walk away; it's very quick for me. And it's the same thing with this write operation: when the virtual machine needs to write some sort of data to its VMDK, it's going to be written to the write buffer, and that's going to happen very quickly. So from the perspective of the virtual machine, once this write hits the write buffer, it's done. And then on the back end, the data is actually written from the write buffer to the capacity device. So to our virtual machines, it always feels like they're writing to SSD; the write speeds are always really quick, and then after the fact, vSAN handles getting that object written from the SSD to the capacity device.

OK, so in review: vSAN can only be enabled on a cluster of ESXi hosts, and each one of those hosts has to have a VMkernel port that is marked for vSAN traffic. All of our vSAN reads and writes are going to flow over that VMkernel network. Virtual machine objects are striped and mirrored across hosts just in case we have a host failure, and read caches and write buffers are used to improve performance. Then on the back end, we have the actual capacity devices.
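The write path described above can be sketched the same way: the write is mirrored to each host holding a replica, it is acknowledged as soon as it lands in the SSD write buffer, and destaging to the capacity disks happens later, off the VM's critical path. The class structure, host names, and destage timing are illustrative assumptions.

```python
# Hedged sketch of the vSAN write path: mirrored writes land in each
# host's SSD write buffer (fast ack), then destage to capacity later.
class VsanHost:
    def __init__(self, name):
        self.name = name
        self.write_buffer = []   # ~30% of the SSD in a hybrid setup
        self.capacity = []       # magnetic capacity disks

    def write(self, block):
        self.write_buffer.append(block)   # VM gets its ack here: fast

    def destage(self):
        # Background work (the "librarian"): move buffered writes
        # from SSD down to the capacity tier.
        self.capacity.extend(self.write_buffer)
        self.write_buffer.clear()

replicas = [VsanHost("esxi-02"), VsanHost("esxi-03")]
for host in replicas:        # mirrored write: every replica gets a copy
    host.write("block-A")

for host in replicas:
    host.destage()

# Both hosts hold a current copy, so either can fail without data loss.
assert all(host.capacity == ["block-A"] for host in replicas)
```

From the VM's point of view, only the `write` call is on the critical path; `destage` is the librarian reshelving the book after you've already walked away.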