Kubernetes Architecture

Houssem Dellai
A free video tutorial from Houssem Dellai
Currently working at Microsoft, ex Microsoft MVP

Learn more from the full course

Kubernetes for developers

Learn how to deploy, manage, and scale Dockerized apps in Kubernetes

03:51:46 of on-demand video • Updated September 2020

  • You will be able to deploy, manage and scale your apps in Kubernetes.
  • You will be able to deploy your apps on Azure AKS.
  • You will be able to create and deploy Deployment, Service, ConfigMap, and Secret objects.
  • You will be able to deploy and connect a web app (ASP.NET Core) with a database (SQL Server).
In this video I give you a short introduction to how Kubernetes works. The main objective of Kubernetes is to run and orchestrate containers. Those containers could be Docker containers, CRI-O, containerd, or any other type of container, and Kubernetes will run those containers inside virtual machines or physical servers. So at the bottom of the Kubernetes architecture we actually have a cluster, a cluster composed of multiple VMs or physical servers.

So let's start drawing the architecture of Kubernetes. We said we have multiple virtual machines or physical servers; in Kubernetes these are called nodes, so this one will be called node number 1, for example. We said this could be a VM or a physical server, and since it is a cluster we'll have multiple nodes running inside it: here we'll have node number 2, for example, and then node number 3. In each of those nodes we want to run containers. To run Docker containers, for example, we need to have the container runtime installed and started on each of those nodes. So one of the components that should be available in each node is the container runtime; let's draw that here. This should be started on all the nodes.

Now with this architecture, I can actually access any of those nodes and run a container from there: I can run the command docker run, and this will deploy a container on that node. But maybe if I access node number 2, it doesn't have enough memory and CPU to run that container, so that's one pain point. Another one is that I don't want to access my nodes directly: if I have a cluster composed of one hundred nodes, I don't want to log into all those nodes in order to run and deploy my 200 containers, for example. So this is where Kubernetes adds another layer that will facilitate the deployment and running of those different containers; we'll talk to this component in order to deploy our containers. This one is called the master, or the control plane.

So on top of those nodes, or those virtual machines, we'll have another component: the control plane. Let's draw that here. This is actually the brain of Kubernetes; all the main Kubernetes components will be installed here. The role of the control plane is to control which container will run on which of my nodes: it knows where my nodes are, and it will issue commands in order to deploy the different containers inside those nodes.

But how does the control plane work? Here I have my developer, who will be using kubectl in order to issue commands to the control plane, to tell it, for example, that I want to deploy five containers. The control plane will then take control from there to deploy the five containers on those three nodes; it will choose on which node to run which container. So let's say I'll have my developer here, and my developer will be using kubectl to interact with my Kubernetes cluster. Let's draw it here: kubectl is the CLI, the command line interface, that will be used by my developer in order to issue requests to the control plane.
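To make the contrast concrete, here is a minimal sketch, assuming a running cluster and a kubectl already configured against it (the name web and the nginx image are placeholders, not something from the video):

    # Running a container by hand on one node (the approach we want to avoid at scale):
    docker run -d --name web nginx

    # Asking the control plane instead; the scheduler picks a node with enough CPU and memory:
    kubectl run web --image=nginx
    kubectl get nodes          # list the nodes in the cluster
    kubectl get pods -o wide   # see which node the container landed on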
So, using kubectl, the developer will send configuration files, the YAML manifest files. As I said, these are YAML files that contain the configuration for the application: this contains, for example, the name of the container that I want to run, the number of replicas of those containers, the different ports that should be opened in the containers, the configuration of the Services (to use a load balancer, for example, or to use node ports), and the connections between the different components. This YAML file will be sent to the control plane through kubectl, and here we'll be using the command kubectl apply -f followed by the name of our YAML configuration file.

So this YAML file will arrive at the control plane, and inside the control plane we'll have different components that will handle the developer's request. First of all, we have the API server. The API server is the component that will get that request and then save the configuration for the cluster inside another component called etcd. etcd is the store where we keep the configuration of the cluster: from here we can know, for example, which containers run inside my cluster and what the configuration of those containers is. The API server will read this YAML file and see that, for example, my developer wants to run five containers. Then it will talk to two other components, the scheduler and the controller manager. Those two will help to deploy the containers onto my different nodes: they will look at the configuration of my nodes, see where I have enough CPU and memory in order to deploy the five containers, and then decide, for example, that on node number 1 they will deploy maybe one container.

But the way to deploy the container is actually by talking to another component of Kubernetes, which is the kubelet. So here, inside this node, we'll need to use the kubelet: the kubelet will get the request issued by the scheduler and the controller manager, read that command, and then start the container inside its node. So from here, for example, it will run container 1 and maybe also container 2, two containers on this node, and maybe another container here and two on node number 3.
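Here is a minimal sketch of what such a YAML manifest looks like, a Deployment asking for five replicas of one container (web-app and the nginx image are hypothetical names used only for illustration):

    # deployment.yaml - a minimal Deployment manifest (hypothetical names)
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: web-app
    spec:
      replicas: 5                # desired state: five copies of the container
      selector:
        matchLabels:
          app: web-app
      template:
        metadata:
          labels:
            app: web-app
        spec:
          containers:
          - name: web
            image: nginx:1.21    # the container image to run
            ports:
            - containerPort: 80  # the port opened in the container

The developer would send it with kubectl apply -f deployment.yaml; the API server records it in etcd, and the scheduler then picks a node for each of the five containers.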
So in each node we'll also have a kubelet; the same goes here, and the same for node 3. The kubelet is like the client side of the Kubernetes control plane, installed inside our nodes. So here we run another container, let's say this is going to be container number 3; it will run here. And container 4 and container 5 will run inside node 3. Kubernetes will see the available CPU and memory and deploy the containers depending on those different factors.

Then suppose, for example, that node 2 crashes: I will no longer have this entire node, which means container number 3 will also crash. In this case, because Kubernetes uses what we call DSC, desired state configuration, Kubernetes will always read the configuration, or the desired state, that the developer wants. Here, for example, he wants to run five containers, so at each instant the cluster should be running five containers, and if one of those containers goes down, if it crashes, then Kubernetes will try to reschedule that crashed container onto the other available nodes. It will start by checking node number 1: if it has enough resources to run that container, it will run it there; if it doesn't, then it will look for another available node. This means it will look at node number 3, for example, and if that node has enough resources, then it will run the container there. This way, Kubernetes makes sure that all my containers are always up and running.

Now what about my users who want to access the application installed in this cluster? The developer goes through the control plane, but my users actually want to access the application that is installed inside those nodes, so they don't access the app through the control plane; they access it directly through those nodes. Let's draw those users here: let's say I'm going to have multiple users, and those users want to access my app. But instead of accessing the containers directly, they will go through something like a load balancer, because I might have multiple instances of my application, and whether they go to container number 4 or container 1 or 2 or 5, it's the load balancer that will decide. So let's draw this load balancer. The role of the load balancer is to distribute the traffic, the requests, to the different containers that I have: maybe the first request will go here, the second one maybe will go there, and a third request will go right here.

Something like the load balancer might not be part of my Kubernetes cluster itself, because we don't have a component called load balancer inside our cluster; this resource could be provisioned by the cloud that hosts my cluster. We know that Kubernetes could run on premises, or it could run as a managed service inside a cloud provider: it could run, for example, on Azure using the AKS service, Azure Kubernetes Service, or on Google using Google Kubernetes Engine, or on Amazon using Elastic Kubernetes Service.
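In Kubernetes terms, that load balancer is usually requested through a Service of type LoadBalancer, as in this minimal sketch (the labels match the hypothetical Deployment above):

    # service.yaml - expose the app behind a cloud load balancer (hypothetical names)
    apiVersion: v1
    kind: Service
    metadata:
      name: web-app
    spec:
      type: LoadBalancer   # ask the hosting cloud to provision a load balancer
      selector:
        app: web-app       # send traffic to the containers carrying this label
      ports:
      - port: 80           # port exposed by the load balancer
        targetPort: 80     # port opened inside the containers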
All of those cloud providers, Azure, Google Cloud, and Amazon AWS, also offer other managed services that can be used by the Kubernetes cluster, like the load balancer, for example, and also like managed disks. If I'm using a database in my app, or I want to store files, then maybe I want to use a managed disk, which offers high availability, a high SLA, backups, and so on. Those managed disks can be provisioned by the Kubernetes cluster using another component of Kubernetes, which is the cloud controller manager. So let's draw that here: the cloud controller manager component will talk to the different cloud providers and will ask them to provision a managed disk, for example, or a load balancer, or any other component that we want to provision and attach to the cluster (a small sketch of this follows below). I hope this gives you a clear explanation of how Kubernetes works.
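As a last sketch, this is roughly how a managed disk gets requested from inside the cluster: a PersistentVolumeClaim that the cloud's provisioner satisfies by creating a disk and attaching it to a node (the name and size here are made up for illustration):

    # pvc.yaml - request persistent storage (hypothetical name and size)
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: db-data
    spec:
      accessModes:
      - ReadWriteOnce        # mountable by a single node at a time
      resources:
        requests:
          storage: 10Gi      # the cloud provisions a managed disk of this size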