vSphere Distributed Switches

Rick Crisci
A free video tutorial from Rick Crisci
VMware Certified Instructor, Virtualization Consultant
4.5 instructor rating • 30 courses • 121,531 students

Lecture description

Understand the architecture of the vSphere 6.5 Distributed Switch. Differentiate the vSphere Distributed Switch from the vSphere Standard Switch.

Learn more from the full course

Clear and Simple VMware vSphere 6.5 VCP-DCV (VCP 2019)

Prepare for the VMware vSphere VCP 6.5 DCV 2V0-622 exam. Learn all about how to administer and design vSphere 6.5.

10:29:02 of on-demand video • Updated September 2020

  • Pass the VMware VCP 6.5 DCV 2V0-622 exam.
  • Administer and Design a vSphere 6.5 deployment.
In this video we'll take a look at the vSphere Distributed Switch and talk about how it's different from the vSphere Standard Switch. We'll look at some scalability characteristics of the vSphere Distributed Switch, and we'll also take some time to examine private VLANs, LACP, Route Based on Physical NIC Load, and some other features that are specific to the vSphere Distributed Switch.

The primary benefit of the vSphere Distributed Switch is scalability. The vSphere Distributed Switch is only available with the Enterprise Plus licensing edition, and it provides a lot of features that a standard virtual switch does not. For scalability, here in the slide we see four ESXi hosts, and let's assume that we're using a vSphere Standard Switch. On the first host I create a virtual switch and a couple of port groups, and the process repeats itself every time I add a new ESXi host. So if I want a matching configuration on some new host, I'm going to need to manually recreate that standard virtual switch configuration on each and every host. I'll also have to create VMkernel ports and associate physical adapters with the virtual switches as well. This process can be time consuming, and it's also prone to error: if you're manually creating many standard virtual switches, the odds of making a mistake somewhere along the line increase as you create more and more.

The goal of the vSphere Distributed Switch is to automate and centralize a lot of this process so that we don't have the same likelihood of human error. So let's say that instead of a vSphere Standard Switch we decide to go with a distributed switch. What do I mean by this term "distributed"? It really just means that the switch is created in one place, and then copies of that virtual switch are distributed to all of my ESXi hosts. That centralized copy of the vSphere Distributed Switch is contained in vCenter. vCenter is the management plane for the distributed virtual switch, and it essentially gives us the feel of only having a single virtual switch to configure. Just to make it very clear: vCenter is the management plane, and no traffic flows through vCenter. If vCenter fails, our virtual machines don't lose connectivity. Our traffic flows in the data plane, which is actually made up of hidden virtual switches that vCenter distributes to all of our ESXi hosts.

So now we can create a distributed port group in vCenter, and it will be pushed out to all of those little hidden virtual switch instances that run on all of my hosts. When I create a distributed port group I can configure settings like VLANs, security settings, and traffic shaping settings, and I only have to configure them once; those settings are distributed to all of my ESXi hosts. Not only does this speed up the process of creating these configurations, but it also greatly reduces the likelihood of human error. Another benefit is that if we have a specific security policy, it's now much easier to ensure that all of our ESXi hosts are compliant with whatever network security policy we've identified.
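To make that concrete, here is a minimal pyVmomi sketch of creating a distributed switch and a distributed port group through vCenter. This is not part of the lecture slides; the vCenter hostname, credentials, switch and port group names are hypothetical placeholders, and in practice the same steps are usually done in a few clicks in the vSphere Client.

```python
# Minimal sketch: create a vSphere Distributed Switch and one distributed port
# group via the vSphere API (pyVmomi). Hostname, credentials, and object names
# are placeholders -- adjust for your environment.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

ssl_ctx = ssl._create_unverified_context()   # lab only; use real certs in production
si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                  pwd="password", sslContext=ssl_ctx)

content = si.RetrieveContent()
datacenter = content.rootFolder.childEntity[0]   # assumes the first child is a datacenter
network_folder = datacenter.networkFolder

# Describe the switch itself -- the management-plane object stored in vCenter.
dvs_config = vim.dvs.VmwareDistributedVirtualSwitch.ConfigSpec()
dvs_config.name = "DSwitch-Prod"
dvs_config.uplinkPortPolicy = vim.DistributedVirtualSwitch.NameArrayUplinkPortPolicy(
    uplinkPortName=["Uplink1", "Uplink2"])

create_spec = vim.DistributedVirtualSwitch.CreateSpec(configSpec=dvs_config)
task = network_folder.CreateDVS_Task(create_spec)
WaitForTask(task)
dvs = task.info.result   # the new vim.DistributedVirtualSwitch object

# One distributed port group, defined once and pushed to every member host.
pg_spec = vim.dvs.DistributedVirtualPortgroup.ConfigSpec()
pg_spec.name = "PG-Web-VLAN10"
pg_spec.type = "earlyBinding"
pg_spec.numPorts = 32
pg_policy = vim.dvs.VmwareDistributedVirtualSwitch.VmwarePortConfigPolicy()
pg_policy.vlan = vim.dvs.VmwareDistributedVirtualSwitch.VlanIdSpec(
    vlanId=10, inherited=False)
pg_spec.defaultPortConfig = pg_policy
WaitForTask(dvs.AddDVPortgroup_Task([pg_spec]))

Disconnect(si)
```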
One side note: VMkernel ports don't really change when you start using the vSphere Distributed Switch. Those are still managed on a per-ESXi-host basis, so we still need to create a unique management VMkernel port for every host, with its own unique IP address. That part of the process doesn't really change much when we go from a vSphere Standard Switch to a vSphere Distributed Switch.

Now, with a vSphere Distributed Switch there is a huge list of features that are supported that are not supported on the vSphere Standard Switch. Let's take a look at a few of these, and in the next video we'll take a look at some more.

The first feature I want to talk about is private VLANs. Private VLANs are a feature that, again, is supported only on the vSphere Distributed Switch, and we can use a private VLAN to isolate traffic within a VLAN. Let's break down our diagram. Here we see a vSphere Distributed Switch, and we've configured primary VLAN 10. Within that VLAN we've configured a number of secondary VLANs: on the far left we see secondary VLAN 110, which is an isolated VLAN; in the middle we see secondary VLANs 111 and 112, which are community secondary VLANs; and on the far right we see secondary VLAN 10, which is a promiscuous secondary VLAN. A little lower we see a set of IP addresses that represent my virtual machines, and notice something about those IP addresses: they're all in the same address range. All of these VMs are still on the same primary VLAN, so they can all still be part of the same IP address range. What we're looking to accomplish with private VLANs is to create some controls within VLAN 10 to govern which virtual machines within that VLAN are actually allowed to communicate. Maybe these virtual machines are owned by different departments or different tenants, and we need to create some level of isolation while still maintaining a contiguous addressing scheme.

Let's start by looking at the VMs on the left. We have two VMs in an isolated secondary VLAN, and any virtual machine in an isolated secondary VLAN can only communicate with promiscuous ports. If you missed that last animation, let's watch it one more time: the virtual machine on the left attempts to communicate with its neighbor, and that's not allowed. Even though they're on the same secondary VLAN, it's an isolated VLAN, so those virtual machines can't communicate with each other; they can only communicate with devices on the promiscuous VLAN. That's what an isolated VLAN does.

In the middle we have a couple of community VLANs: two VMs in secondary VLAN 111, and in blue, two VMs in secondary VLAN 112. The effect of a community VLAN is that members of the same community VLAN are able to communicate with each other. So the VMs in secondary VLAN 111 can communicate with each other, but if they try to communicate with virtual machines in some other community, that's not allowed. And of course, the community VLANs can also communicate with any VM connected to a promiscuous secondary VLAN. Think of the promiscuous secondary VLAN almost like your default gateway: it's the one secondary VLAN that everything is allowed to communicate with.
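As a rough illustration, here is a hedged pyVmomi sketch of how the private VLAN mappings from the diagram might be defined programmatically on an existing distributed switch, and how a port group could then be bound to one of the secondary VLANs. The `dvs` object and the port group name are assumptions carried over from the previous sketch; in the vSphere Client the same mappings live on the distributed switch's Private VLAN settings page.

```python
# Sketch: define the private VLAN map from the lecture diagram on an existing
# distributed switch (primary 10; isolated 110; community 111 and 112;
# promiscuous 10). Assumes `dvs` is a vim.DistributedVirtualSwitch already
# retrieved as in the previous example.
from pyVim.task import WaitForTask
from pyVmomi import vim

def pvlan_entry(primary, secondary, pvlan_type, operation="add"):
    """Build one primary/secondary private VLAN mapping for the DVS config."""
    entry = vim.dvs.VmwareDistributedVirtualSwitch.PvlanMapEntry(
        primaryVlanId=primary, secondaryVlanId=secondary, pvlanType=pvlan_type)
    return vim.dvs.VmwareDistributedVirtualSwitch.PvlanConfigSpec(
        pvlanEntry=entry, operation=operation)

config = vim.dvs.VmwareDistributedVirtualSwitch.ConfigSpec()
config.configVersion = dvs.config.configVersion   # required for reconfiguration
config.pvlanConfigSpec = [
    pvlan_entry(10, 10, "promiscuous"),   # promiscuous secondary = primary VLAN
    pvlan_entry(10, 110, "isolated"),     # isolated: can reach promiscuous only
    pvlan_entry(10, 111, "community"),    # community: members talk to each other
    pvlan_entry(10, 112, "community"),
]
WaitForTask(dvs.ReconfigureDvs_Task(config))

# A distributed port group can then be bound to one of the secondary VLANs.
pg_spec = vim.dvs.DistributedVirtualPortgroup.ConfigSpec(
    name="PG-Tenant-Community-111", type="earlyBinding", numPorts=16)
port_policy = vim.dvs.VmwareDistributedVirtualSwitch.VmwarePortConfigPolicy()
port_policy.vlan = vim.dvs.VmwareDistributedVirtualSwitch.PvlanSpec(
    pvlanId=111, inherited=False)
pg_spec.defaultPortConfig = port_policy
WaitForTask(dvs.AddDVPortgroup_Task([pg_spec]))
```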
Another feature of the vSphere Distributed Switch is a NIC teaming method called Route Based on Physical NIC Load, sometimes called load-based teaming. When we looked at the vSphere Standard Switch lesson, we saw NIC teaming modes like route based on originating virtual port ID, source MAC hash, and IP hash, and all of those NIC teaming methods had something in common: they're not very intelligent. None of those methods can detect that a physical NIC is being overwhelmed and adjust accordingly. Load-based teaming, or Route Based on Physical NIC Load, is a little more sophisticated than those prior methods.

Here we see three virtual machines, VM1, VM2, and VM3, and each of those VMs is bound to a specific vmnic, or physical adapter. At the top, VM1 is flowing through the top vmnic, while VM2 and VM3 are both utilizing the same vmnic towards the bottom of the diagram. Let's assume that VM2 and VM3 generate a lot of traffic; that second vmnic is likely to be very busy, significantly busier than the first. If that's the case and the physical NIC exceeds 75 percent usage, virtual machines will be migrated to a less busy physical adapter, with the goal of reducing the workload on that really busy physical NIC. In this case each virtual machine is still only using a single physical adapter, and this means that when we configure our physical switch, we want to make sure not to enable EtherChannel, port channel, or LACP; we don't want any of those NIC teaming configurations on the actual physical switch for this mode.
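For reference, here is a hedged pyVmomi sketch of what selecting that teaming policy on a distributed port group might look like. The port group name is a placeholder carried over from the earlier sketch; in the vSphere Client the same setting lives under the port group's Teaming and failover policies.

```python
# Sketch: set "Route Based on Physical NIC Load" (load-based teaming) as the
# teaming policy on an existing distributed port group. Assumes `dvs` was
# retrieved as in the earlier examples; the port group name is a placeholder.
from pyVim.task import WaitForTask
from pyVmomi import vim

pg = next(p for p in dvs.portgroup if p.name == "PG-Web-VLAN10")

teaming = vim.dvs.VmwareDistributedVirtualSwitch.UplinkPortTeamingPolicy()
teaming.policy = vim.StringPolicy(value="loadbalance_loadbased")  # load-based teaming

port_config = vim.dvs.VmwareDistributedVirtualSwitch.VmwarePortConfigPolicy()
port_config.uplinkTeamingPolicy = teaming

pg_spec = vim.dvs.DistributedVirtualPortgroup.ConfigSpec()
pg_spec.configVersion = pg.config.configVersion   # required for reconfiguration
pg_spec.defaultPortConfig = port_config
WaitForTask(pg.ReconfigureDVPortgroup_Task(pg_spec))
```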
LACP is another feature that is only available with the vSphere Distributed Switch. It's kind of like EtherChannel, if you're familiar with Cisco switches; it's a way to bond together multiple physical adapters and make them essentially act like one big pipe. Here we see two vmnics connected to a physical switch, and in order to enable LACP we'll start out by configuring link aggregation groups. A link aggregation group is essentially a way to identify the ports that are going to participate in LACP, and it's important that both sides match. Once this is configured, the two physical adapters can act as one large pipe, and we can take advantage of a huge number of NIC teaming methods that are built into LACP. Here on the right-hand side we see a list of all of those different NIC teaming algorithms; look at all the different ways we can load balance traffic. This gives us a ton of options and allows us to choose a method that's really ideally suited to the type of traffic our virtual machines generate. LACP is an open standard, so it's supported by a wide variety of physical switch vendors. (A rough configuration sketch follows the review below.)

So, in review, we learned about the vSphere Distributed Switch. We learned how it's centrally managed by vCenter, and how hidden virtual switches are distributed to all of the ESXi hosts; that's the data plane, so if vCenter fails there is no impact on traffic. We can create distributed port groups that span many ESXi hosts and that have identical settings, such as security settings or traffic shaping settings. VMkernel ports and uplinks still need to be configured on a host-by-host basis; those objects are unique to an individual host. We also learned about private VLANs and how they can be used to create logical segmentation within a VLAN. We learned about Route Based on Physical NIC Load, or load-based teaming, which can intelligently migrate traffic from one physical adapter to another based on workload. And then we learned about a NIC teaming method called LACP that can be used to bond multiple Ethernet connections together and provides a wide variety of load balancing algorithms.
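To round out the LACP piece, here is a hedged pyVmomi sketch of creating a link aggregation group on the distributed switch. The LAG name, uplink count, and hashing algorithm are illustrative assumptions, the exact property names may vary by API version, and the matching LACP port channel still has to be configured on the physical switch.

```python
# Sketch: create a link aggregation group (LAG) for LACP on an existing
# distributed switch. Assumes `dvs` was retrieved as in the earlier examples
# and that the switch supports enhanced LACP (multiple LAGs).
from pyVim.task import WaitForTask
from pyVmomi import vim

lag = vim.dvs.VmwareDistributedVirtualSwitch.LacpGroupConfig()
lag.name = "lag1"
lag.mode = "active"                                   # actively negotiate LACP
lag.uplinkNum = 2                                     # two physical adapters in the bundle
lag.loadbalanceAlgorithm = "srcDestIpTcpUdpPortVlan"  # one of LACP's many hash options

lag_spec = vim.dvs.VmwareDistributedVirtualSwitch.LacpGroupSpec(
    lacpGroupConfig=lag, operation="add")
WaitForTask(dvs.UpdateDVSLacpGroupConfig_Task(lacpGroupSpec=[lag_spec]))
# The physical switch side must be configured with a matching LACP port channel.
```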