Kubernetes on AWS

Deploy and manage production-ready Kubernetes clusters on AWS
5.0 (1 rating)
23 students enrolled
Created by Packt Publishing
Last updated 5/2019
English
English [Auto]
This course includes
  • 2 hours on-demand video
  • 1 downloadable resource
  • Full lifetime access
  • Access on mobile and TV
  • Certificate of Completion
What you'll learn
  • Provision a production-ready Kubernetes cluster on AWS
  • Deploy your own applications to Kubernetes with Helm
  • Discover strategies for troubleshooting your cluster
  • Explore the best ways to monitor your cluster and the applications running on it
  • Supercharge your cluster by integrating it with the tools provided by the AWS platform
  • Architect your cluster for high availability
Requirements
  • Though previous knowledge of Kubernetes is not expected, some experience with Linux and Docker containers would be a bonus.
Description

You’ll start the course by learning about the powerful abstractions of Kubernetes - Pods and Services - that make managing container deployments easy. You’ll also learn how to set up a production-ready Kubernetes cluster on AWS while learning several techniques you need to successfully deploy and manage your own applications.

By the end of the course, you’ll have plenty of hands-on experience with Kubernetes on AWS. You’ll also have picked up some tips on deploying and managing applications, keeping your cluster and applications secure, and ensuring that your whole system is reliable and resilient to failure.

About the Author

Ed Robinson works as a senior site reliability engineer at Cookpad's global headquarters in Bristol, UK. He has been working with Kubernetes for the last three years, deploying clusters on AWS to deliver resilient and reliable services for global audiences. He is a contributor to several open source projects and is a maintainer of Træfɪk, the modern HTTP reverse proxy designed for containers and microservices.

Robin Verstraelen works as a systems and network engineer at a hosting company in Belgium with over two years of experience. He has been working with Kubernetes for about a year. He specializes in DevOps and Cloud.

Who this course is for:
  • If you’re a cloud engineer or a cloud solution provider looking for an extensive guide to running Kubernetes in the AWS environment, this course is for you.
  • Sysadmins, site reliability engineers, and developers with an interest in DevOps will also find this course useful.
Course content
62 lectures 02:07:42
+ Google's Infrastructure for the Rest of Us
6 lectures 17:35

Kubernetes on AWS guides you in deploying a production-ready Kubernetes cluster on the Amazon Web Services (AWS) platform. You will discover how to use the power of Kubernetes, which is one of the fastest growing platforms for production-based container orchestration, to manage and update your applications. By the end of the course, you will have gained plenty of hands-on experience with Kubernetes on AWS. You will also have picked up some tips on deploying and managing applications, keeping your cluster and applications secure, and ensuring that your whole system is reliable and resilient to failure.

Preview 02:59

This lesson helps you understand how Kubernetes can give you some of the same superpowers that the site reliability engineers at Google use to ensure that Google's services are resilient, reliable, and efficient.

Preview 02:18

At its core, Kubernetes is a container scheduler, but it is a much richer and more fully featured toolkit. It is possible to extend and augment the functionality that Kubernetes provides, as products such as Red Hat's OpenShift have done. Kubernetes also allows you to extend its core functionality yourself by deploying add-on tools and services to your cluster. Let’s look at it in more detail.

Why Do I Need a Kubernetes Cluster?
05:57

Let's begin our look at Kubernetes with some of the fundamental concepts that most of Kubernetes is built upon. Getting a clear understanding of how these core building blocks fit together will serve you well as we explore the multitude of features and tools that comprise Kubernetes. It can be a little confusing to use Kubernetes without a clear understanding of these core building blocks, so if you don't have any experience with Kubernetes, you should take your time to understand how these pieces fit together before moving on.

The Basics of Kubernetes
02:48
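
To make these building blocks concrete, here is a minimal sketch of the two core abstractions discussed above: a Pod running a single container, and a Service that routes traffic to it by label. The names and image are hypothetical.

    apiVersion: v1
    kind: Pod
    metadata:
      name: hello
      labels:
        app: hello               # Services select pods by label
    spec:
      containers:
      - name: web
        image: nginx:1.17        # hypothetical image
        ports:
        - containerPort: 80
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: hello
    spec:
      selector:
        app: hello               # routes traffic to pods carrying this label
      ports:
      - port: 80
        targetPort: 80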

Now that we have learned a little about the functionality Kubernetes provides to us as users, let's go a little deeper and look at the components Kubernetes uses to implement these features. Kubernetes makes this task a little easier for us by having a microservice architecture, so we can look at the function of each component in a certain degree of isolation. We will get our hands dirty over the next few chapters by actually deploying and configuring these components ourselves.

Under the Hood
03:08

Let's summarize what we learnt in this lesson.

Summary
00:25
Test your knowledge
2 questions
+ Start Your Engines
4 lectures 13:33

This lesson helps you take your first steps with Kubernetes. You will learn how to start a cluster suitable for learning and development use on your own workstation, and will begin to learn how to use Kubernetes itself.

Preview 00:32

Minikube is a tool that makes it easy to run a simple Kubernetes cluster on your workstation. It is very useful, as it allows you to test your applications and configurations locally and quickly iterate on your applications without needing access to a larger cluster. For our purposes, it is the ideal tool to get some practical hands-on experience with Kubernetes. It is very simple to install and configure, as you will discover. Let’s look at it in more detail.

Your Own Kubernetes
05:01

Let's take our first steps to building a simple application on our local minikube cluster and getting it to run.

Building and Launching A Simple Application on Minikube
07:36
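
As a sketch of what such a first application might look like, here is a hypothetical Deployment and NodePort Service suitable for a minikube cluster; the names and image are illustrative, not the exact ones used in the video.

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: hello
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: hello
      template:
        metadata:
          labels:
            app: hello
        spec:
          containers:
          - name: web
            image: nginx:1.17    # hypothetical image
            ports:
            - containerPort: 80
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: hello
    spec:
      type: NodePort             # exposes the service on a port of the minikube VM
      selector:
        app: hello
      ports:
      - port: 80

Saved as hello.yaml, it can be applied with kubectl apply -f hello.yaml and opened in a browser with minikube service hello.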

Let's summarize what we learnt in this lesson.

Summary
00:24
Test your knowledge
4 questions
+ Reach for the Cloud
6 lectures 26:10

This lesson teaches you how to build a Kubernetes cluster running on AWS from first principles.

Preview 00:40

The cluster we are going to set up in this chapter will be formed of two EC2 instances—one that will run all the components for the Kubernetes control plane and another worker node that you can use to run your applications. Because we start from scratch, this section will also lay out one method for isolating your Kubernetes cluster in a private network while allowing you easy access to the machines from your own workstation. Let’s look at it in more detail.

Cluster Architecture
10:49

In this section, we are going to launch an instance in which we will install all the software that the different nodes that make up our cluster will need. We will then create an Amazon Machine Image (AMI) that we can use to launch the nodes on our cluster.

Kubernetes Software
05:24

This section will describe the principles behind how our cluster works and the software that we used.

What Just Happened?
03:47

The network model of a Kubernetes cluster is somewhat different from that of a standard Docker installation. There are many implementations of networking infrastructure that can provide cluster networking for Kubernetes, but they all have some key attributes in common. Let’s look at it in more detail.

Setting Up Pod Networking
05:09

Let's summarize what we learnt in this lesson.

Summary
00:21
Test your knowledge
4 questions
+ Managing Change in Your Applications
7 lectures 13:36

This lesson looks in depth at the tools that Kubernetes provides to manage the Pods that you run on your cluster.

Preview 00:42

In this video, we will look at how we can launch pods in different ways with Kubernetes, depending on the workloads we are running.

Running Pods Directly
02:46
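
A directly created ("naked") pod looks like the sketch below. Note that while the kubelet will restart its containers according to restartPolicy, nothing will reschedule the pod if its node is lost, which is why controllers are usually preferred. The image and command are hypothetical.

    apiVersion: v1
    kind: Pod
    metadata:
      name: one-off
    spec:
      restartPolicy: OnFailure   # restarts the container, but the pod is never
                                 # rescheduled to another node if this one fails
      containers:
      - name: task
        image: busybox:1.31      # hypothetical image
        command: ["sh", "-c", "echo doing some work && sleep 30"]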

The simplest use case for a job is to launch a single pod and ensure that it successfully runs to completion.

Jobs
01:25
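
A minimal sketch of such a job, with a hypothetical image and command, might look like this:

    apiVersion: batch/v1
    kind: Job
    metadata:
      name: one-shot
    spec:
      backoffLimit: 4            # retry a failing pod up to four times
      template:
        spec:
          restartPolicy: Never   # jobs require Never or OnFailure
          containers:
          - name: task
            image: busybox:1.31  # hypothetical image
            command: ["sh", "-c", "echo processing batch && exit 0"]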

Now that you have learned how to run one-off or batch tasks with jobs, it is simple to extend the concept to run scheduled jobs. In Kubernetes, a CronJob is a controller that creates new jobs from a template on a given schedule. Let’s look at it in more detail.

CronJob
02:52
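
A sketch of a CronJob wrapping a job template in a schedule follows; batch/v1beta1 was the API version current when this course was published (CronJob graduated to batch/v1 in Kubernetes 1.21), and the name, image, and schedule are hypothetical.

    apiVersion: batch/v1beta1    # batch/v1 on Kubernetes 1.21 and later
    kind: CronJob
    metadata:
      name: nightly-report
    spec:
      schedule: "0 2 * * *"      # standard cron syntax: 02:00 every day
      jobTemplate:
        spec:
          template:
            spec:
              restartPolicy: OnFailure
              containers:
              - name: report
                image: busybox:1.31    # hypothetical image
                command: ["sh", "-c", "echo generating report"]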

A Kubernetes CronJob, in contrast to a traditional cron job, allows us to decide what happens when a job overruns and we reach the scheduled time while the previous job is still running. Let’s look at it in more detail.

Concurrency Policy
02:04
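
The policy is a single field on the CronJob spec sketched above; the comments summarize the three values it accepts.

    spec:
      schedule: "*/5 * * * *"
      concurrencyPolicy: Forbid
      # Allow   - run jobs concurrently (the default)
      # Forbid  - skip the new run while the previous one is still running
      # Replace - cancel the running job and start the new one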

Updating batch processes, such as jobs and CronJobs, is relatively easy. Since they have a limited lifetime, the simplest strategy of updating code or configurations is just to update the resources in question before they are used again. Long-running processes are a little harder to deal with, and even harder to manage if you are exposing a service to the network. Kubernetes provides us with the deployment resource to make deploying and, more importantly, updating long-running processes simpler.

Managing Long Running Processes With Deployments
03:38
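
A sketch of a deployment configured for rolling updates follows; the image and tuning values are hypothetical.

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: api
    spec:
      replicas: 3
      strategy:
        type: RollingUpdate
        rollingUpdate:
          maxUnavailable: 1      # at most one replica down during a rollout
          maxSurge: 1            # at most one extra replica during a rollout
      selector:
        matchLabels:
          app: api
      template:
        metadata:
          labels:
            app: api
        spec:
          containers:
          - name: api
            image: example/api:v2    # changing this field triggers a rolling update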

Let's summarize what we learnt in this lesson.

Summary
00:09
Test your knowledge
6 questions
+ Managing Complex Applications with Helm
6 lectures 11:29

This lesson teaches you how to deploy a service to your cluster using a community-maintained chart.

Preview 00:31

If you have already set up your own Kubernetes cluster and have correctly configured kubectl on your machine, then it is simple to install Helm.

Installing Helm
01:28

Let's start by installing an application using one of the charts provided by the community. Helm charts can be stored in a repository, so it is simple to install them by name. By default, Helm is already configured to use one remote repository, called stable. This makes it simple for us to try out some commonly used applications as soon as Helm is installed.

Installing A Chart
04:49

In this section, we are going to look at how, as the user of a chart, you might go about supplying configuration to Helm. Later in the chapter, we are going to look at how you can create your own charts and use the configuration passed in to allow your chart to be customized.

Creating Your Own Charts
01:10
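
As the user of a chart, configuration is typically supplied as a values file passed at install time, for example with helm install -f values.yaml. The keys below are hypothetical; each chart documents its own.

    # values.yaml -- hypothetical overrides for a chart's defaults
    replicaCount: 3
    image:
      repository: example/app
      tag: "1.2.0"
    service:
      type: ClusterIP
      port: 80

The same overrides can also be set individually on the command line with --set, for example --set replicaCount=3.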

The templates directory contains the templates that will be rendered to produce the definitions of the resources that this chart provides. When we run the helm create command, several skeleton template files are created for us. Let’s look at it in more detail.

Templates
03:25
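
A trimmed-down sketch in the style of those skeleton files is shown below; the {{ ... }} placeholders are rendered by Helm from the release metadata and values.yaml, and the specific names are hypothetical.

    # templates/service.yaml
    apiVersion: v1
    kind: Service
    metadata:
      name: {{ .Release.Name }}-app        # rendered from the release name
    spec:
      type: {{ .Values.service.type }}     # filled in from values.yaml or --set
      ports:
        - port: {{ .Values.service.port }}
      selector:
        app: {{ .Release.Name }}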

Let's summarize what we learnt in this lesson.

Summary
00:06
Test your knowledge
5 questions
+ Planning for Production
10 lectures 11:38

This lesson gives you an idea of the myriad options and decisions you face when deciding to run Kubernetes in a production environment.

Preview 00:32

When you think about preparing to use Kubernetes to manage your production infrastructure, you shouldn't think about Kubernetes as your end goal. It is a foundation for building a platform on which to run systems.

The Design Process
02:49

Availability, capacity, and performance are key properties that we should consider when preparing for production. When gathering the functional requirements for your cluster, it can help to categorize which requirements imply some consideration of these properties. Let’s look at it in more detail.

Discovering Requirements
00:34

Your definition of availability can depend on the sorts of workload that your cluster is running and your business requirements. A key part in planning a Kubernetes cluster is to understand the requirements that the users have for the services you are running. Let’s look at it in more detail.

Availability
00:23

Running a system such as Kubernetes means that you can respond to additional demand for your services literally within the time it takes for your applications to start up. This process can even become automated with tools such as the Horizontal Pod Autoscaler. Let’s look at it in more detail.

Capacity
01:50

The key components of your cluster that impact performance are CPU, storage, and networking. Let’s look at them in more detail.

Performance
00:19

When running distributed systems, network performance can be a key factor in the overall observed performance of an application.

Networking
00:46

Securing the configuration and software that forms the infrastructure of your cluster is of vital importance, especially if you plan to expose the services you run on it to the internet.

Security
01:59

Being able to monitor and debug a cluster is one of the most important points to bear in mind when designing a cluster for production. Luckily, there are a number of solutions for managing logs and metrics that have very good support for Kubernetes.

Observability
02:16

Let's summarize what we learnt in this lesson.

Summary
00:10
Test your knowledge
2 questions
+ A Production-Ready Cluster
7 lectures 10:50

This lesson helps you build a fully functional cluster that will serve as a base configuration to build upon for many different use cases.

Preview 00:36

The information contained within this section is just one possible way that you could approach building and managing a cluster.

Building A Cluster
00:52

Terraform is a command-line tool that you can run on your workstation to make changes to your infrastructure. Terraform is a single binary that just needs to be installed onto your path. Let’s look at it in more detail.

Getting Started With Terraform
02:54

In order to provide a resilient and reliable Kubernetes Control Plane for our cluster, we are going to make our first big departure from the simple cluster that we built in Chapter 3, Reach for the Cloud.

Control Plane
01:58

As we did in Chapter 3, Reach for the Cloud, we will now be preparing an AMI for the worker nodes in our cluster. However, we will improve this process by automating it with Packer. Packer is a tool that makes it simple to build machine images on AWS (and other platforms).

Preparing Node Images
01:27

Now that we have prepared an image for the worker nodes in our cluster, we can set up an autoscaling group to manage launching the EC2 instances that will form our cluster. EKS doesn't tie us to managing our nodes in any particular way, so autoscaling groups are not the only option, but using them is one of the simplest ways of managing multiple worker instances in our cluster.

Node Group
02:02

Let's summarize what we learnt in this lesson.

Summary
01:01
Test your knowledge
3 questions
+ Sorry My App Ate The Cluster
7 lectures 10:45

This lesson delves into configuring pods with different quality of service classes, so that important workloads are guaranteed the resources they need, while less important workloads can make use of idle resources when they are available without needing dedicated resources.

Preview 00:39

Kubernetes allows us to achieve high utilization of our cluster by scheduling multiple different workloads to a single pool of machines. Let’s look at it in more detail.

Resource Requests and Limits
03:13
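
Requests and limits are declared per container; a minimal sketch with hypothetical values:

    apiVersion: v1
    kind: Pod
    metadata:
      name: constrained
    spec:
      containers:
      - name: app
        image: example/app:1.0   # hypothetical image
        resources:
          requests:              # what the scheduler reserves on a node
            cpu: 250m            # a quarter of a CPU core
            memory: 128Mi
          limits:                # hard ceiling enforced at runtime
            cpu: 500m
            memory: 256Mi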

When Kubernetes creates a pod, it is assigned one of three QoS classes. These classes are used to decide how Kubernetes schedules and evicts pods from nodes. Broadly, pods with the Guaranteed QoS class will be subject to the least amount of disruption from evictions, while pods with the BestEffort QoS class are the most likely to be disrupted.

Quality of Service (QoS)
01:59
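
The class is derived from the requests and limits on a pod's containers rather than set directly; this fragment sketches how the three classes come about.

    # Guaranteed: every container sets limits equal to its requests
    resources:
      requests:
        cpu: 500m
        memory: 256Mi
      limits:
        cpu: 500m                # equal to the request
        memory: 256Mi
    # Burstable:  requests are set, with higher or absent limits
    # BestEffort: no requests or limits at all -- evicted first under pressure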

Resource quotas allow you to place limits on how many resources a particular namespace can use. Depending on how you have chosen to use namespaces in your organization, they can give you a powerful way to limit the resources that are used by a particular team, application, or group of applications, while still giving developers the freedom to tweak the resource limits of each individual container.

Resource Quotas
01:40
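
A sketch of a quota for a hypothetical team-a namespace:

    apiVersion: v1
    kind: ResourceQuota
    metadata:
      name: team-a-quota
      namespace: team-a          # hypothetical namespace
    spec:
      hard:
        requests.cpu: "10"       # total CPU requested across the namespace
        requests.memory: 20Gi
        limits.cpu: "20"
        limits.memory: 40Gi
        pods: "50"               # cap on the number of pods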

Kubernetes provides the facility for default requests and limits to be provided at the namespace level. You could use this to provide some sensible defaults to namespaces used by a particular application or team.

Default Limits
02:24
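
These namespace-level defaults are set with a LimitRange resource; the values below are hypothetical.

    apiVersion: v1
    kind: LimitRange
    metadata:
      name: defaults
      namespace: team-a          # hypothetical namespace
    spec:
      limits:
      - type: Container
        defaultRequest:          # applied when a container sets no request
          cpu: 100m
          memory: 128Mi
        default:                 # applied when a container sets no limit
          cpu: 500m
          memory: 256Mi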

Horizontal Pod Autoscaling allows us to define rules that will scale the numbers of replicas up or down in our deployments based on CPU utilization and optionally other custom metrics. Before we are able to use Horizontal Pod Autoscaling in our cluster, we need to deploy the Kubernetes metrics server; this server provides endpoints that are used to discover CPU utilization and other metrics generated by our applications.

Horizontal Pod Autoscaling
00:36
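
With the metrics server running, an autoscaler is a small resource of its own; this sketch targets a hypothetical deployment named api.

    apiVersion: autoscaling/v1
    kind: HorizontalPodAutoscaler
    metadata:
      name: api
    spec:
      scaleTargetRef:
        apiVersion: apps/v1
        kind: Deployment
        name: api                # the deployment to scale
      minReplicas: 2
      maxReplicas: 10
      targetCPUUtilizationPercentage: 70   # add replicas above 70% average CPU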

Let's summarize what we learnt in this lesson.

Summary
00:14
Test your knowledge
2 questions
+ Storing State
5 lectures 08:33

This lesson is all about using the deep integration that Kubernetes has with the AWS native storage solution Elastic Block Store (EBS).

Preview 00:19

Let's start by looking at how we can attach volumes to our pods.

Volumes
04:54
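
The usual pattern is a PersistentVolumeClaim that Kubernetes satisfies by provisioning an EBS volume, mounted into the pod by name; the names and image below are hypothetical.

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]   # an EBS volume attaches to a single node
      resources:
        requests:
          storage: 10Gi
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: db
    spec:
      containers:
      - name: db
        image: postgres:11       # hypothetical image
        volumeMounts:
        - name: data
          mountPath: /var/lib/postgresql/data
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: data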

On AWS, there are several different types of volume available that offer different price and performance characteristics. Let’s look at it in more detail.

Storage Classes
01:44
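
Those characteristics are exposed through StorageClass resources; this sketch uses the in-tree EBS provisioner current when the course was published (newer clusters use the EBS CSI driver instead).

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: fast
    provisioner: kubernetes.io/aws-ebs   # in-tree EBS provisioner
    parameters:
      type: gp2                  # general-purpose SSD; io1 buys provisioned IOPS
    reclaimPolicy: Delete        # delete the EBS volume when the claim is removed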

So far, we have seen how we can use Kubernetes to automatically provision EBS volumes for PersistentVolumeClaims. This can be very useful for a number of applications where we need a single volume to provide persistence to a single pod. If you are running an application where you want each replica to have its own unique volume, we can use a StatefulSet. StatefulSets have two key advantages over deployments when we want to deploy applications where each replica needs its own persistent storage.

StatefulSet
01:03
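
The key piece is volumeClaimTemplates, which stamps out one claim (and so one EBS volume) per replica; the names and image here are hypothetical.

    apiVersion: apps/v1
    kind: StatefulSet
    metadata:
      name: db
    spec:
      serviceName: db            # headless service giving each replica a stable name
      replicas: 3
      selector:
        matchLabels:
          app: db
      template:
        metadata:
          labels:
            app: db
        spec:
          containers:
          - name: db
            image: postgres:11   # hypothetical image
            volumeMounts:
            - name: data
              mountPath: /var/lib/postgresql/data
      volumeClaimTemplates:      # one PersistentVolumeClaim per replica
      - metadata:
          name: data
        spec:
          accessModes: ["ReadWriteOnce"]
          resources:
            requests:
              storage: 10Gi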

Let's summarize what we learnt in this lesson.

Summary
00:33
Test your knowledge
2 questions
+ Managing Container Images
4 lectures 03:33

This lesson helps you understand how to leverage the AWS Elastic Container Registry (ECR) service to store your container images.

Preview 00:20

ECR is AWS's approach to a hosted Docker registry: there is one registry per account, and it uses AWS IAM to authenticate and authorize users to push and pull images. By default, the limits for both repositories and images are set to 1,000. As we'll see, the setup flow feels very similar to other AWS services, whilst also being familiar to Docker Registry users.

Pushing Docker Images to ECR
01:45
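
Once an image has been pushed, pods reference it by its full ECR URI; the account ID, region, and repository in this sketch are hypothetical.

    apiVersion: v1
    kind: Pod
    metadata:
      name: from-ecr
    spec:
      containers:
      - name: app
        # ECR URIs follow <account-id>.dkr.ecr.<region>.amazonaws.com/<repo>:<tag>
        image: 123456789012.dkr.ecr.eu-west-1.amazonaws.com/my-app:1.0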

IAM permissions should allow your users to perform strictly only the operations they actually need, in order to limit the impact of any possible mistakes. Let’s look at it in more detail.

Setting Up Privileges For Pushing Images
01:19

Let's summarize what we learnt in this lesson.

Summary
00:09
Test your knowledge
1 question