Kubernetes on AWS
- 2 hours on-demand video
- 1 downloadable resource
- Full lifetime access
- Access on mobile and TV
- Certificate of Completion
- Provision a production-ready Kubernetes cluster on AWS
- Deploy your own applications to Kubernetes with Helm
- Discover strategies for troubleshooting your cluster
- Explore the best ways to monitor your cluster and the applications running on it
- Supercharge your cluster by integrating it with the tools provided by the AWS platform
- Architect your cluster for high availability
- No previous knowledge of Kubernetes is expected, though some experience with Linux and Docker containers would be a bonus.
You’ll start the course by learning about the powerful abstractions of Kubernetes - Pods and Services - that make managing container deployments easy. You’ll also learn how to set up a production-ready Kubernetes cluster on AWS while learning several techniques you need to successfully deploy and manage your own applications.
By the end of the course, you’ll have plenty of hands-on experience with Kubernetes on AWS. You’ll also have picked up some tips on deploying and managing applications, keeping your cluster and applications secure, and ensuring that your whole system is reliable and resilient to failure.
About the Author
Ed Robinson works as a senior site reliability engineer at Cookpad's global headquarters in Bristol, UK. He has been working with Kubernetes for the last three years, deploying clusters on AWS to deliver resilient and reliable services for global audiences. He is a contributor to several open source projects and is a maintainer of Træfɪk, the modern HTTP reverse proxy designed for containers and microservices.
Robin Verstraelen works as a systems and network engineer at a hosting company in Belgium, where he has over two years of experience. He has been working with Kubernetes for about a year and specializes in DevOps and the cloud.
- If you’re a cloud engineer or a cloud solution provider looking for an extensive guide to running Kubernetes in the AWS environment, this course is for you.
- Sysadmins, site reliability engineers, or developers with an interest in DevOps will also find this course useful.
Kubernetes on AWS guides you in deploying a production-ready Kubernetes cluster on the Amazon Web Services (AWS) platform. You will discover how to use the power of Kubernetes, which is one of the fastest growing platforms for production-based container orchestration, to manage and update your applications. By the end of the course, you will have gained plenty of hands-on experience with Kubernetes on AWS. You will also have picked up some tips on deploying and managing applications, keeping your cluster and applications secure, and ensuring that your whole system is reliable and resilient to failure.
This lesson helps you understand how Kubernetes can give you some of the same superpowers that the site reliability engineers at Google use to ensure that Google's services are resilient, reliable, and efficient.
At its core, Kubernetes is a container scheduler, but it is also a much richer, fully featured toolkit. It is possible to extend and augment the functionality that Kubernetes provides, as products such as Red Hat's OpenShift have done. Kubernetes also allows you to extend its core functionality yourself by deploying add-on tools and services to your cluster. Let’s look at it in more detail.
Let's begin by looking at some of the fundamental concepts that most of Kubernetes is built upon. Getting a clear understanding of how these core building blocks fit together will serve you well as we explore the multitude of features and tools that comprise Kubernetes. Using Kubernetes without a clear understanding of these core building blocks can be confusing, so if you don't have any experience with Kubernetes, take your time to understand how these pieces fit together before moving on.
Now that we have learned a little about the functionality that Kubernetes provides to us, the users, let's go a little deeper and look at the components that Kubernetes uses to implement these features. Kubernetes makes this task a little easier for us by having a microservice architecture, so we can look at the function of each component in a certain degree of isolation. We will get our hands dirty over the next few chapters by actually deploying and configuring these components ourselves.
This lesson helps you take your first steps with Kubernetes. You will learn how to start a cluster suitable for learning and development use on your own workstation, and will begin to learn how to use Kubernetes itself.
Minikube is a tool that makes it easy to run a simple Kubernetes cluster on your workstation. It is very useful, as it allows you to test your applications and configurations locally and quickly iterate on your applications without needing access to a larger cluster. For our purposes, it is the ideal tool to get some practical hands-on experience with Kubernetes. It is very simple to install and configure, as you will discover. Let’s look at it in more detail.
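Once Minikube is running, you interact with it exactly as you would with any other cluster. As a sketch, a minimal Pod manifest like the one below (the names and image are illustrative, not from the course) could be applied with `kubectl apply -f hello-pod.yaml`:

```yaml
# hello-pod.yaml -- a minimal Pod you might apply to a Minikube
# cluster for practice; name and image are purely illustrative.
apiVersion: v1
kind: Pod
metadata:
  name: hello-web
spec:
  containers:
  - name: web
    image: nginx:1.25
    ports:
    - containerPort: 80
```

You can then inspect the running pod with `kubectl get pods` and remove it with `kubectl delete pod hello-web`.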
This lesson teaches you how to build a Kubernetes cluster running on AWS from first principles.
The cluster we are going to set up in this chapter will be formed of two EC2 instances—one that will run all the components for the Kubernetes control plane and another worker node that you can use to run your applications. Because we start from scratch, this section will also lay out one method for isolating your Kubernetes cluster in a private network while allowing you easy access to the machines from your own workstation. Let’s look at it in more detail.
The network model of a Kubernetes cluster is somewhat different from that of a standard Docker installation. There are many implementations of networking infrastructure that can provide cluster networking for Kubernetes, but they all have some key attributes in common. Let’s look at it in more detail.
This lesson gets into depth with the tools that Kubernetes provides to manage the Pods that you run on your cluster.
Updating batch processes, such as jobs and CronJobs, is relatively easy. Since they have a limited lifetime, the simplest strategy of updating code or configurations is just to update the resources in question before they are used again. Long-running processes are a little harder to deal with, and even harder to manage if you are exposing a service to the network. Kubernetes provides us with the deployment resource to make deploying and, more importantly, updating long-running processes simpler.
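As a sketch of the idea, a deployment with a rolling update strategy might look like this (all names and the image tag are hypothetical):

```yaml
# A hedged example of a Deployment that performs rolling updates;
# "example-app" and the image are illustrative placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: example-app
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1   # at most one pod taken down at a time
      maxSurge: 1         # at most one extra pod created during a rollout
  template:
    metadata:
      labels:
        app: example-app
    spec:
      containers:
      - name: app
        image: example/app:v2   # changing this field triggers a rolling update
```

Editing the pod template (for example, the image tag) causes Kubernetes to replace the old pods gradually, keeping the service available throughout the rollout.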
This lesson teaches you about how you can deploy a service to your cluster using a community-maintained chart.
Let's start by installing an application by using one of the charts provided by the community. Helm charts can be stored in a repository, so it is simple to install them by name. By default, Helm is already configured to use one remote repository called Stable. This makes it simple for us to try out some commonly used applications as soon as Helm is installed.
In this section, we are going to look at how, as the user of a chart, you might go about supplying configuration to Helm. Later in the chapter, we are going to look at how you can create your own charts and use the configuration passed in to allow your chart to be customized.
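As an illustration of the mechanism, configuration is typically supplied as a values file whose keys are defined by the chart itself. The keys below are hypothetical; the ones you can set depend entirely on the chart you are installing:

```yaml
# values.yaml -- hypothetical overrides; consult the chart's own
# documentation for the keys it actually supports.
replicaCount: 2
image:
  tag: "1.2.3"
service:
  type: LoadBalancer
```

Such a file is passed at install time with `helm install --values values.yaml <release> <chart>`, and individual values can also be overridden on the command line with `--set`, for example `--set replicaCount=2`.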
This lesson gives you an idea of the myriad options and decisions you can make when deciding to run Kubernetes in a production environment.
Availability, capacity, and performance are key properties that we should consider when preparing for production. When gathering the functional requirements for your cluster, it can help to categorize which requirements imply some consideration of these properties. Let’s look at it in more detail.
Your definition of availability can depend on the sorts of workload that your cluster is running and your business requirements. A key part in planning a Kubernetes cluster is to understand the requirements that the users have for the services you are running. Let’s look at it in more detail.
Running a system such as Kubernetes means that you can respond to additional demand for your services literally within the time it takes for your applications to start up. This process can even become automated with tools such as the Horizontal Pod Autoscaler. Let’s look at it in more detail.
This lesson helps you build a fully functional cluster that will serve as a base configuration to build upon for many different use cases.
Now that we have prepared an image for the worker nodes in our cluster, we can set up an autoscaling group to manage the launching of the EC2 instances that will form our cluster. EKS doesn't tie us to managing our nodes in any particular way, so autoscaling groups are not the only option for managing the nodes in our cluster but using them is one of the simplest ways of managing multiple worker instances in our cluster.
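To sketch the shape of this, an autoscaling group for the worker nodes might be declared in CloudFormation along these lines (the resource names, subnet IDs, and cluster name are all placeholders, and the launch template is assumed to be defined elsewhere in the same stack):

```yaml
# CloudFormation sketch of a worker-node autoscaling group.
# All identifiers here are illustrative assumptions.
Resources:
  WorkerNodeGroup:
    Type: AWS::AutoScaling::AutoScalingGroup
    Properties:
      MinSize: "1"
      MaxSize: "5"
      DesiredCapacity: "2"
      LaunchTemplate:
        LaunchTemplateId: !Ref WorkerLaunchTemplate   # defined elsewhere in the stack
        Version: !GetAtt WorkerLaunchTemplate.LatestVersionNumber
      VPCZoneIdentifier:
        - subnet-aaaa1111   # placeholder subnet IDs
        - subnet-bbbb2222
      Tags:
        - Key: kubernetes.io/cluster/my-cluster   # tag so the nodes can be associated with the cluster
          Value: owned
          PropagateAtLaunch: true
```

The group then replaces failed instances automatically and can be resized by adjusting its desired capacity.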
This lesson delves into configuring pods with different qualities of service, so that important workloads are guaranteed the resources they need, while less important workloads can make use of idle resources when they are available without needing dedicated resources.
When Kubernetes creates a pod, it is assigned one of three QoS classes. These classes are used to decide how Kubernetes schedules and evicts pods from nodes. Broadly, pods with the Guaranteed QoS class will be subject to the least amount of disruption from evictions, and pods with the BestEffort QoS class are the most likely to be disrupted.
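The class a pod receives follows from its resource stanzas. As a hedged sketch (pod names and images are illustrative): setting requests equal to limits for every container yields the Guaranteed class, while omitting requests and limits entirely yields BestEffort:

```yaml
# requests == limits for every container -> Guaranteed QoS class
apiVersion: v1
kind: Pod
metadata:
  name: guaranteed-pod
spec:
  containers:
  - name: app
    image: nginx:1.25
    resources:
      requests:
        cpu: "500m"
        memory: "256Mi"
      limits:
        cpu: "500m"       # equal to the request
        memory: "256Mi"
---
# no requests or limits at all -> BestEffort QoS class
apiVersion: v1
kind: Pod
metadata:
  name: besteffort-pod
spec:
  containers:
  - name: app
    image: nginx:1.25
```

A pod that sets some requests or limits without meeting the Guaranteed criteria falls into the third class, Burstable.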
Resource quotas allow you to place limits on how many resources a particular namespace can use. Depending on how you have chosen to use namespaces in your organization, they can give you a powerful way to limit the resources that are used by a particular team, application, or group of applications, while still giving developers the freedom to tweak the resource limits of each individual container.
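As an illustration, a quota for a hypothetical team namespace might cap aggregate requests, limits, and pod count like this (the namespace and figures are assumptions for the example):

```yaml
# A ResourceQuota sketch for a hypothetical "team-a" namespace.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-quota
  namespace: team-a
spec:
  hard:
    requests.cpu: "10"       # total CPU requested across the namespace
    requests.memory: 20Gi
    limits.cpu: "20"
    limits.memory: 40Gi
    pods: "50"               # cap on the number of pods
```

Within these aggregate ceilings, developers remain free to set whatever per-container requests and limits suit each application.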
Horizontal Pod Autoscaling allows us to define rules that will scale the numbers of replicas up or down in our deployments based on CPU utilization and optionally other custom metrics. Before we are able to use Horizontal Pod Autoscaling in our cluster, we need to deploy the Kubernetes metrics server; this server provides endpoints that are used to discover CPU utilization and other metrics generated by our applications.
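With the metrics server in place, an autoscaling rule can be expressed as a manifest like the following (the target deployment name and thresholds are illustrative; the `autoscaling/v2` API shown here is the current stable version and may differ from the one used in the course):

```yaml
# A hedged HorizontalPodAutoscaler sketch targeting a hypothetical
# Deployment named "example-app".
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: example-app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: example-app
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70   # scale out when average CPU passes 70%
```

Kubernetes then adjusts the deployment's replica count between the stated minimum and maximum to hold average CPU utilization near the target.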
This lesson is all about using the deep integration that Kubernetes has with the AWS native storage solution Elastic Block Store (EBS).
So far, we have seen how we can use Kubernetes to automatically provision EBS volumes for PersistentVolumeClaims. This can be very useful for a number of applications where we need a single volume to provide persistence to a single pod. If you are running an application where you want each replica to have its own unique volume, we can use a stateful set. Stateful sets have two key advantages over deployments when we want to deploy applications where each replica needs to have its own persistent storage.
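The per-replica volumes come from `volumeClaimTemplates`: the stateful set stamps out one PersistentVolumeClaim per replica, each of which Kubernetes can satisfy with its own EBS volume. A sketch, with hypothetical names and an assumed EBS-backed storage class called `gp2`:

```yaml
# StatefulSet sketch; "example-db", the image, and the "gp2"
# StorageClass are illustrative assumptions.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: example-db
spec:
  serviceName: example-db
  replicas: 3
  selector:
    matchLabels:
      app: example-db
  template:
    metadata:
      labels:
        app: example-db
    spec:
      containers:
      - name: db
        image: postgres:15
        volumeMounts:
        - name: data
          mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:        # one claim, hence one EBS volume, per replica
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      storageClassName: gp2    # assumes an EBS-backed StorageClass exists
      resources:
        requests:
          storage: 10Gi
```

Because each replica also keeps a stable identity (example-db-0, example-db-1, ...), a restarted pod reattaches to the same volume it used before.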
This lesson helps you understand how to leverage the AWS Elastic Container Registry (ECR) service to store your container images in a manner that tackles all these needs.
ECR is AWS's approach to a hosted Docker registry: there is one registry per account, and it uses AWS IAM to authenticate and authorize users to push and pull images. By default, the limits for both repositories and images are set to 1,000. As we'll see, the setup flow feels very similar to other AWS services, while also being familiar to Docker Registry users.