Docker over AWS with ECS. Implementing IaaS, CI and CD

Develop a high-availability architecture for apps serving millions of users, using Docker, CloudFormation, CodePipeline, and an ECS cluster
4.2 (25 ratings)
315 students enrolled
Created by Alberto Eduardo
Last updated 4/2017
English
30-Day Money-Back Guarantee
Includes:
  • 2.5 hours on-demand video
  • 33 Supplemental Resources
  • Full lifetime access
  • Access on mobile and TV
  • Certificate of Completion
What Will I Learn?
  • Students will be able to set up an Infrastructure as Code (IaC) architecture using AWS CloudFormation
  • Students will be able to set up, use, and monitor an automated software pipeline with CodePipeline
  • Students will be able to set up, use, and monitor an ECS cluster running Docker throughout
  • Students will be able to set up an Elastic Load Balancer v2 (Application Load Balancer) with strong security and redundancy
  • Students will be able to implement and monitor a scalable, highly available architecture based on Docker
  • Students will be able to set up and monitor an Aurora database cluster with a primary server and replicas
  • Students will be able to set up and use CodeCommit and a container repository (ECR) in AWS
  • Students will be able to implement build and deploy-to-production stages in an automated pipeline, using CodeBuild and CodeDeploy within a CodePipeline pipeline defined in a CloudFormation template
View Curriculum
Requirements
  • Network & Basic Cloud security
  • At least basic AWS knowledge or some hands-on experience with the Amazon Web Services Console
  • Basic git version control system knowledge
  • Some exposure to a Relational Database engine
  • An AWS account for the practice exercises
Description

The idea of this course is to master a really cool way to implement a scalable and highly available base architecture supported by an automated development pipeline.

In this course, you will learn how to set up a continuous integration and continuous delivery (CI/CD) pipeline on AWS. A pipeline helps you automate steps in your software delivery process, such as initiating automatic builds and then deploying to Amazon EC2 instances. You will use AWS CodePipeline, a service that builds, tests, and deploys your code every time there is a code change, based on the release process models you define. Use CodePipeline to orchestrate each step in your release process. As part of your setup, you will plug other AWS services into CodePipeline to complete your software delivery pipeline. This course will show you how to create a very simple pipeline that pulls code from a source repository and automatically deploys it to an Amazon EC2 instance.

During the course we are going to use the following AWS services: CloudFormation, CodeCommit, CodePipeline, EC2 Container Service (ECS) with Docker, CodeDeploy, and CodeBuild, among other Amazon Web Services.

This course is totally practical: you will write shell scripts, build CloudFormation templates from scratch, create and edit Dockerfiles, monitor container execution on ECS, and much more.

Who is the target audience?
  • Anyone who wants to learn how to implement continuous integration and continuous delivery on AWS using Docker and CloudFormation
  • Anyone in the IT field interested in learning IaC (Infrastructure as Code) for a highly available and auto-scalable architecture
  • Anyone interested in learning to set up and use an automated software pipeline, from code push to production
  • Anyone interested in using Docker in development and production environments on EC2 Container Service
  • Anyone interested in how to set up and monitor a container cluster and a database cluster across multiple Availability Zones with redundancy
Curriculum For This Course
22 Lectures
02:33:34
+
Introduction
7 Lectures 11:01

Introduction to the Course

Master AWS CI & CD. Scalable Architecture & Automated Pipeline. Using CloudFormation, CodeCommit, CodePipeline, EC2 Container Service (ECS) with Docker, CodeDeploy, CodeBuild, ECR, and more.

Transcript

Hello everybody and welcome to the Course:

Master AWS CI & CD. Scalable Architecture & Automated Pipeline.

My name is Alberto Eduardo and I’m currently the CTO of Elab Innovation, a Boston-based startup dedicated to the first non-profit social sharing platform.

The idea of this course is to master a really cool way to implement a scalable and highly available base architecture supported by an automated development pipeline.

During the course we are going to use the following AWS services:

- Cloudformation,

- CodeCommit,

- CodePipeline,

- EC2 Container Service (ECS) over Docker,

- CodeDeploy and

- CodeBuild, among other Amazon Web Services.

This course is totally practical: you will write shell scripts, build CloudFormation templates from scratch, create and edit Dockerfiles, monitor container execution on ECS, and much more.

All the files used are available in my GitHub account, github.com/elpasticho, and I hope you enjoy and learn a lot from this course.

Thanks

Preview 01:27

What You Should Know

In our first lecture we talked about the AWS services we are going to use in this course: CloudFormation, EC2 Container Service, EC2 Container Registry, CodeBuild, and VPC, among others. In this lecture we are going to see what prior knowledge you should have in order to get the best from this course.

Transcript.

In our first lecture we talked about the AWS services we are going to use in this course: CloudFormation, EC2 Container Service, EC2 Container Registry, CodeBuild, and VPC, among others. In this lecture we are going to see what prior knowledge you should have in order to get the best from this course.

Let's check the list:

- Network & basic cloud security: at this point it will be nice to know what an Elastic Load Balancer is, what a Security Group is, and what an EC2 cluster is.

- At least basic AWS knowledge or some hands-on experience with the Amazon Web Services Console will be excellent for this course, because we are going to be working in the AWS console.

- Basic git version control system knowledge

- Some exposure to a Relational Database engine, for example: MySQL, PostgreSQL, etc

- For this course you are going to need at least a free AWS account.

Preview 01:18

Basic Concepts and some technologies involved.

This lecture explores and explains the basic concepts and technologies involved in the course, their benefits, and more.

Transcript.

What is AWS?

- AWS (Amazon Web Services) has been around since 2006 and offers reliable, scalable, and inexpensive cloud computing services. It is free to join for the first year and billed on demand afterwards. There is a huge client list, or as AWS calls them, "case studies": Netflix, Kellogg's, GE, Adobe, Coinbase, and many more.

What are CI & CD, and why use them?
- Continuous integration (CI) is the practice of merging all developer working copies to a shared mainline several times a day.
- Continuous delivery (CD) is a software engineering approach in which teams produce software in short cycles, ensuring that the software can be reliably released at any time.

Benefits of using CI & CD.
- Errors introduced can be detected and fixed earlier
- Developers tend to be more cautious whenever they push their code to the repository
- When developers are more cautious, they tend to commit more often and in smaller units
- When you commit in smaller units, you're able to revert modifications more easily, so errors are detected and fixed earlier
- Developer confidence levels tend to grow during development: "If it didn't fail, it's ok"

Now, what is Docker?
Docker containers wrap a piece of software in a complete filesystem that contains everything needed to run it: with containers, everything required to make a piece of software run is packaged into isolated units.
Unlike virtual machines, containers do not bundle a full operating system; only the libraries and settings required to make the software work are included.
This makes for efficient, lightweight, self-contained systems and guarantees that software will always run the same regardless of where it is deployed.

Basic Concepts and some Technologies involved
02:27

Course Structure.

Get to know how this course is organized.

Transcript.

This course is divided into 2 main parts. The first part explores a scalable and highly available architecture based on an EC2 Container Service cluster with Docker, an Application Load Balancer, a VPC, and an Aurora DB cluster, all generated from CloudFormation templates in YAML format.

The second part dives deep into a CodePipeline template, implementing CI/CD to automate deployment to the architecture explained in the first part of the course.

This pipeline model is composed of 3 stages: a Source stage provided by CodeCommit, a Build stage where we use CodeBuild and a buildspec, and a final stage where we deploy everything to our production architecture.

This pipeline will also be a CloudFormation template in YAML format.

Preview 01:19

Architecture Overview.

Before going into the code, let's talk a little about our architecture. This base architecture is applicable to a wide variety of scenarios where you expect to have a highly available system on a scalable platform, of course, with Docker all over the production environment.

Transcript.

Before going into the code, let's talk a little about our architecture. This base architecture is applicable to a wide variety of scenarios where you expect to have a highly available system on a scalable platform, of course, with Docker all over the production environment.

- At the top of this architecture we will have a couple of AWS services: Route 53 for the domain/hosted zone and CloudFront for static content caching.

- Then we have an S3 bucket to keep all our static files (images, videos, JS, HTML).

- Next we have our compute layer, composed of an ECS cluster with Docker images running inside a VPC across multiple Availability Zones. We also include auto-scaling configuration for the EC2 instances inside the cluster.

- For our database we will be using an Aurora cluster, with a master and a replica server, also configured for cross-AZ replication.

All of this base architecture will be generated with a nested CloudFormation template; ideally we will be able to use this template to generate our microservices or replicate our monolithic architecture in different regions or scenarios.

In the next video we are going to check the CodePipeline stages used to deploy all our code and changes to this architecture.


Preview 01:53

CodePipeline Model overview

This lecture explores the 3 stages we are going to use in our Development Pipeline.


Transcript.

This pipeline specifies 3 different stages, from the repository all the way to the deployment to the ECS cluster:

- The Source stage will be provided by CodeCommit.

- The Build stage will be a CodeBuild project that builds the Docker image from the Dockerfile and pushes it to ECR.

- The Deploy stage will deploy the Docker image to the ECS cluster.

Of course we want all of this automated, which is why we will have a CloudFormation template that generates the whole pipeline, ready to run.
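As a rough illustration only (not the exact course template), a pipeline with these three stages declared in CloudFormation YAML could be sketched as below; PipelineRole, ArtifactBucket, and BuildProject are hypothetical resources you would define elsewhere in the template:

  Pipeline:
    Type: AWS::CodePipeline::Pipeline
    Properties:
      RoleArn: !GetAtt PipelineRole.Arn        # hypothetical IAM role for CodePipeline
      ArtifactStore:
        Type: S3
        Location: !Ref ArtifactBucket          # hypothetical S3 bucket for pipeline artifacts
      Stages:
        - Name: Source
          Actions:
            - Name: App
              ActionTypeId: { Category: Source, Owner: AWS, Provider: CodeCommit, Version: "1" }
              Configuration:
                RepositoryName: !Ref CodeCommitRepo
                BranchName: !Ref RepositoryBranch
              OutputArtifacts:
                - Name: SourceOutput
        - Name: Build
          Actions:
            - Name: BuildAndPushImage
              ActionTypeId: { Category: Build, Owner: AWS, Provider: CodeBuild, Version: "1" }
              Configuration:
                ProjectName: !Ref BuildProject  # hypothetical CodeBuild project; its buildspec builds and pushes the Docker image
              InputArtifacts:
                - Name: SourceOutput
              OutputArtifacts:
                - Name: BuildOutput
        # A Deploy stage would follow the same pattern, handing the build output to the ECS service.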

Development Pipeline Overview: CodePipeline, CodeBuild, CodeDeploy
00:51

CloudFormation and Infrastructure as Code (IaC)

- AWS CloudFormation gives developers and systems administrators an easy way to create and manage a collection of related AWS resources, provisioning and updating them in an orderly and predictable fashion.

Transcript.

- AWS CloudFormation gives developers and systems administrators an easy way to create and manage a collection of related AWS resources, provisioning and updating them in an orderly and predictable fashion.

You can use AWS CloudFormation to describe the AWS resources, and any associated dependencies or runtime parameters, required to run your application. You don’t need to figure out the order for provisioning AWS services or the subtleties of making those dependencies work.

After the AWS resources are deployed, you can modify and update them in a controlled and predictable way, in effect applying version control to your AWS infrastructure the same way you do with your software. You can also visualize your templates as diagrams and edit them using a drag-and-drop interface with the AWS CloudFormation Designer.

We are going to use a nested CloudFormation structure: for understandability and readability we are going to keep all our service templates modular and in YAML format. Remember that JSON format is also supported for CFN templates.

CloudFormation and Infrastructure as Code (IaC)
01:46
+
Content & Coding - Architecture.
6 Lectures 01:09:37

Main Architecture Overview & CFN Template 

- In this lecture we are going to start building our base architecture. We are going to use a nested architecture, that is, a template file that references other template files; we do this to have a modular CloudFormation definition.

Transcript.

- In this lecture we are going to start building our base architecture. We are going to use a nested architecture, that is, a template file that references other template files; we do this to have a modular CloudFormation definition.

Let's find the main parts and build our main architecture template:

Going from inside to outside we have:

 * Network & Security: where we define security groups, VPC

 * Load Balancer: where we define all the load balancer settings

 * Compute Area - ECS Cluster: here we will have the ECS cluster configuration

 * Service & Task Definition: for the ECS

 * Database: we will define the DB cluster here

Knowing these key points, let's start building our YAML file to reflect them all. Here comes a tip: every time I start to build a template file I follow this structure:

Description: we use the description part to explain our CloudFormation template. Because this is the main template, it is good practice to include all the architecture details here: services, architecture type, etc.

Parameters: we use the Parameters section to pass values into our template resources. With parameters, we are able to create templates that are customized each time we create a stack. Each parameter must contain a value. We can also specify a default value to make the parameter optional, so that you don't need to pass in a value when creating a stack.

Optional Metadata: we are going to use the metadata fields to guide users during stack creation.

Resources: The Resources section declares the AWS resources that you want to include in the stack, such as an Amazon EC2 instance or an Amazon S3 bucket. We must declare each resource separately; however, if we have multiple resources of the same type, we can declare them together by separating them with commas.

Outputs: declares output values that you can import into other stacks (to create cross-stack references), return in response (to describe stack calls), or view on the AWS CloudFormation console. For example, you can output the S3 bucket name for a stack to make the bucket easier to find.
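To make this structure concrete, a bare YAML skeleton of such a template might look like the sketch below; the section bodies are placeholders that get filled in over the following paragraphs:

AWSTemplateFormatVersion: "2010-09-09"
Description: Main architecture template (ECS cluster, ALB, VPC, Aurora, nested stacks)

Parameters: {}      # input values, defined next

Metadata: {}        # optional console grouping and labels, defined later

Resources: {}       # the nested stacks and other AWS resources

Outputs: {}         # values exported to other stacks or shown in the console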

Following these sections, let's analyze the parameters needed to start:

First we will need the repository where everything resides (code, Dockerfiles, images, etc.).

Second will be the default repository branch, and

third our database password.

The CodeCommitRepo parameter should be a String, and we are going to specify a description for it. This will be our CodeCommit repository name, so we type: CodeCommit Repository Name.

The next parameter is RepositoryBranch; this one is also a String and the description will be: CodeCommit Repository Branch.

The last parameter is DbPassword; this one is also a String, and let's use the description field: Backend DB Password.
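In YAML, that Parameters section could be sketched like this (the Default and NoEcho settings are optional additions of mine, not dictated by the course):

Parameters:
  CodeCommitRepo:
    Type: String
    Description: CodeCommit Repository Name
  RepositoryBranch:
    Type: String
    Description: CodeCommit Repository Branch
    Default: master          # optional default (assumption)
  DbPassword:
    Type: String
    Description: Backend DB Password
    NoEcho: true             # optional: hide the value in the console (assumption)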

Let's continue with our template:

- Metadata: the metadata key we use is AWS::CloudFormation::Interface. This key defines how parameters are grouped and sorted in the AWS CloudFormation console. Normally, when you create or update stacks in the console, the console lists input parameters in alphabetical order by their logical IDs. By using this key, you can define your own parameter grouping and ordering so that users can efficiently specify parameter values.

In addition to grouping and ordering parameters, you can define labels for parameters. A label is a friendly name or description that the console displays instead of a parameter's logical ID. Labels are useful for helping users understand the values to specify for each parameter.

We define the labels with the ParameterLabels key, and we are going to use the first 2 parameters we defined before:

First, CodeCommitRepo: we use the "default" key to specify the label. As in the description above, we will type CodeCommit Repository Name.

Next, RepositoryBranch: using the default key we specify: CodeCommit Repository Branch Name (master).

Remember, we are going to launch this template to build a stack through the AWS console, so it is helpful to group the setup parameters; for that we use the "ParameterGroups" key. ParameterGroups gives us the option to label the group; we use the same default key for that and specify: CodeCommit Repository Configuration. Lastly, we specify which parameters the group will contain: because we are grouping everything related to CodeCommit, we will have CodeCommitRepo and RepositoryBranch, and with this we finish the parameters in this template.
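A sketch of that Metadata section, following the grouping and labels just described:

Metadata:
  AWS::CloudFormation::Interface:
    ParameterGroups:
      - Label:
          default: CodeCommit Repository Configuration
        Parameters:
          - CodeCommitRepo
          - RepositoryBranch
    ParameterLabels:
      CodeCommitRepo:
        default: CodeCommit Repository Name
      RepositoryBranch:
        default: CodeCommit Repository Branch Name (master)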

Now we are going to define our resources. As discussed before, we will have 5 main resources:

Cluster, LoadBalancer, Service, VPC, Database.

Let's start with the VPC, because it is the one without dependencies on other resources. First, in VPC we are going to define the type; we use the AWS::CloudFormation::Stack resource type, which allows us to nest a stack as a resource in this top-level template.

Next we define the properties needed for this resource. First, TemplateURL: this specifies the URL of the template that you want to create as a resource. The template must be stored in an Amazon S3 bucket, so the URL must have the form: https://s3.amazonaws.com/...

Lastly we pass the parameters this resource needs to work. Because we are building a VPC here, we will need the IPv4 CIDR block for the whole network, a couple of subnets with their own IPv4 CIDR blocks, and of course a name for this VPC resource.

To do this we declare a Name for the VPC; we will use a pseudo parameter for this purpose. Pseudo parameters are parameters that are predefined by AWS CloudFormation. You do not declare them in your template; you use them as arguments for the Ref function.

The pseudo parameter we are going to use here is AWS::StackName, and as I said we need the Ref function to use it, so we type here: !Ref AWS::StackName. With this we get the name of the stack and we are going to use it in the VPC.

Let's now define our LoadBalancer resource. This one will also be a nested stack, so we will have the same structure, but slightly different, because the load balancer will need parameters from the VPC resource once it has been created. The first part is the same: this will be an AWS::CloudFormation::Stack type; in properties we will have the TemplateURL, and in parameters the interesting part starts: our load balancer will need the VpcId (the VPC logical ID) and the Subnets list.

We will do this by getting the results from the VPC resource. We need to read a specific attribute from the VPC outputs; for that we use !GetAtt VPC.Outputs.Subnets, and we do the same for the VpcId with !GetAtt VPC.Outputs.VpcId.
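A sketch of these two nested-stack resources; the S3 bucket name and template file names are placeholders, and the CIDR values are the ones used later in the VPC lecture:

Resources:
  VPC:
    Type: AWS::CloudFormation::Stack
    Properties:
      TemplateURL: https://s3.amazonaws.com/my-templates-bucket/vpc.yaml   # placeholder bucket/key
      Parameters:
        Name: !Ref AWS::StackName
        VpcCIDR: 10.215.0.0/16
        Subnet1CIDR: 10.215.10.0/24
        Subnet2CIDR: 10.215.20.0/24

  LoadBalancer:
    Type: AWS::CloudFormation::Stack
    Properties:
      TemplateURL: https://s3.amazonaws.com/my-templates-bucket/load-balancer.yaml   # placeholder
      Parameters:
        VpcId: !GetAtt VPC.Outputs.VpcId
        Subnets: !GetAtt VPC.Outputs.Subnets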

We will follow the same process for the Service and the Cluster: the Service will need the cluster name parameter from the Cluster resource and the target group parameter from the LoadBalancer. The Cluster will need the SecurityGroup from the LoadBalancer resource, plus the subnet list, the VpcId, Subnets 1 & 2, and the VPC security group from the VPC resource outputs.

Finally we have the Database; this one will need the TargetGroup from the LoadBalancer, the DbPassword from the setup parameters, the VPC logical ID, Subnets 1 & 2, and Availability Zones 1 & 2.

Our last key point is the Outputs section, where we specify the general stack output. Here we will need our load balancer URL: with it we can go to Route 53 and create a hosted zone record pointing to this URL, and we are ready to serve from a domain or subdomain.
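For example (the output name exposed by the nested load balancer stack is an assumption here):

Outputs:
  ServiceUrl:
    Description: Load balancer URL, ready to be aliased from Route 53
    Value: !GetAtt LoadBalancer.Outputs.ServiceUrl   # assumes the nested stack exports this output name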

In the next videos we are going to explore each of these nested templates in depth.


Preview 10:36

VPC Cloudformation Definition

- VPC

A virtual private cloud (VPC) is a virtual network that closely resembles a traditional network that you'd operate in your own data center, with the benefits of using the scalable infrastructure of AWS

Transcript.

- VPC

A virtual private cloud (VPC) is a virtual network that closely resembles a traditional network that you'd operate in your own data center, with the benefits of using the scalable infrastructure of AWS


The following diagram shows the architecture that we'll create as we complete this template. The security group that we will set up and associate with the instances allows traffic only between instances in the same security group and through specific ports, locking down communication with the instances according to the rules that we specify.

Our VPC will be composed of 2 subnets across 2 Availability Zones, so Subnet 1 will be in Availability Zone 1 and Subnet 2 will be in Availability Zone 2. With this we are able to stay online in cases where the data center in AZ1 or AZ2 has an issue, guaranteeing high availability.

In front of our VPC we will have an Elastic Load Balancer.

Let's now build our VPC CloudFormation template. As we discussed in the last lecture, we can start with our 5 main parts:

Description, Parameters, Metadata, Resources and Outputs.

In this case we are not going to use the optional metadata key.

Let's type in a simple description:

VPC Cloudformation Definition, 2 Subnets, 2 availability zones. Security Group.

Next, the Parameters section. Remember, this template is referenced in the main architecture template we defined in the last lecture, and from there we know what the required parameters will be.

Let's take a look at that template to know which parameters we are going to declare here.

Here we have:

Name: !Ref AWS::StackName

VpcCIDR: 10.215.0.0/16

Subnet1CIDR: 10.215.10.0/24

Subnet2CIDR: 10.215.20.0/24

As we discussed before, the VPC name is set thanks to the pseudo parameter AWS::StackName,

but let's take a look at the VpcCIDR block. We are using 10.215.0.0/16 for the whole VPC range, and 10.215.10.0/24 and 10.215.20.0/24 for the subnet ranges inside this VPC.

So, going back to our VPC template, we should type in our 4 parameters:

Name: we want our name to be a String, so we type Type: String, and the same for the other 3 parameters: VpcCIDR, Type String; Subnet1CIDR, Type String; and Subnet2CIDR, Type String.

With this we finish our Parameters section and start the Resources section.

The first resource we need to declare is our VPC itself, so we type in VPC.

Next we will need all the resources we normally declare when building a new VPC through the AWS console. We want this VPC to access the internet, so we need an InternetGateway; then we need to attach this Internet Gateway to the VPC, so we need an InternetGatewayAttachment; then we will have our 2 subnets, Subnet 1 and Subnet 2.

Every VPC also needs a route table, and of course a default route through the internet gateway, and finally we will need to associate our subnets with the route table.

OK, now we have all our resources; let's define each one, starting with the VPC resource.

As usual we will need its type; we want this one to be a VPC, so we type AWS::EC2::VPC. With this we create a Virtual Private Cloud (VPC); we now just need to specify the CIDR block and the name, and for that we define the resource properties.

First the CidrBlock: we already have this value from our parameters, so we just reference it using the Ref function and the parameter name.

The name is a little different: we use the Tags key and set the Name tag with the same approach we just used for the CidrBlock, using the Ref function.
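A sketch of the VPC resource as just described:

  VPC:
    Type: AWS::EC2::VPC
    Properties:
      CidrBlock: !Ref VpcCIDR
      Tags:
        - Key: Name
          Value: !Ref Name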

Next it's the Internet Gateway's turn. The IG type is AWS::EC2::InternetGateway; with this declaration we create a new internet gateway in our AWS account.

After creating the Internet Gateway, we will need to attach it to our VPC.

Let's now attach our recently created InternetGateway to our VPC. To do it, we declare the type AWS::EC2::VPCGatewayAttachment, and in properties we just need to reference the 2 resources. We start with the InternetGatewayId, which we reference with the Ref function and the resource name, so we just type !Ref InternetGateway, and we repeat the same process for the VPC logical ID.
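A sketch of both resources:

  InternetGateway:
    Type: AWS::EC2::InternetGateway

  InternetGatewayAttachment:
    Type: AWS::EC2::VPCGatewayAttachment
    Properties:
      InternetGatewayId: !Ref InternetGateway
      VpcId: !Ref VPC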

Now that we have our VPC and InternetGateway attached, let's define our subnets.

Subnet 1: as usual we first give it a type, in this case AWS::EC2::Subnet. Let's jump into the Subnet 1 properties. Here we do the same as in the InternetGatewayAttachment to reference our VpcId. The next step is to place our subnet in an AZ; because Availability Zones belong to a specific region, we want to get the Availability Zone list and select one.

In order to do that we use GetAZs; this function returns an array that lists the Availability Zones for a specified region.

Because all users have access to different AZs, the intrinsic function Fn::GetAZs enables us to write templates that adapt to the calling user's access. That way we don't have to hard-code a full list of Availability Zones for a specified region.

To select the first Availability Zone, we just use the first position in the array: !Select [ 0, !GetAZs '' ]. We want our subnet to have a public IPv4 address, so we set MapPublicIpOnLaunch to true; next we establish the CIDR block for the subnet from our Parameters section; finally we give our subnet a name.

For Subnet 2 we follow almost the same process as for Subnet 1, but we want Subnet 2 to be in Availability Zone 2. To achieve this we just change !Select [ 0, !GetAZs '' ] to !Select [ 1, !GetAZs '' ], and of course we want this subnet to use its own CIDR block.
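A sketch of the two subnets (the Name tag values follow a naming convention I am assuming here):

  Subnet1:
    Type: AWS::EC2::Subnet
    Properties:
      VpcId: !Ref VPC
      AvailabilityZone: !Select [ 0, !GetAZs '' ]
      MapPublicIpOnLaunch: true
      CidrBlock: !Ref Subnet1CIDR
      Tags:
        - Key: Name
          Value: !Sub ${Name}-subnet-1   # naming convention is an assumption

  Subnet2:
    Type: AWS::EC2::Subnet
    Properties:
      VpcId: !Ref VPC
      AvailabilityZone: !Select [ 1, !GetAZs '' ]
      MapPublicIpOnLaunch: true
      CidrBlock: !Ref Subnet2CIDR
      Tags:
        - Key: Name
          Value: !Sub ${Name}-subnet-2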

Next we are going to create our route table; we use the type AWS::EC2::RouteTable.

With this type we create a new route table within a VPC. After we create the route table, we can add routes to it. In properties we need to define the VPC and the route table name; we use the same approach we have used several times before.

Next we are going to add our default route to the route table. To do that we use the type AWS::EC2::Route; this creates a new route in a route table within a VPC. The route's target can be either a gateway attached to the VPC or a NAT instance in the VPC; in our case the target will be the internet gateway. In properties we need to specify the logical ID of the route table we just created; as usual we use the Ref function to achieve this.

To finish with our default route we just need to add the outbound route: for this we set the DestinationCidrBlock to 0.0.0.0/0, and of course we are going to use our internet gateway as the target.

The last resources we have pending are the subnet-to-route-table associations. To do the association we use the AWS::EC2::SubnetRouteTableAssociation type, and in properties we declare our route table logical ID and our subnet logical ID, the same way we did with the InternetGatewayAttachment.
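A sketch of the routing resources:

  RouteTable:
    Type: AWS::EC2::RouteTable
    Properties:
      VpcId: !Ref VPC
      Tags:
        - Key: Name
          Value: !Ref Name

  DefaultRoute:
    Type: AWS::EC2::Route
    Properties:
      RouteTableId: !Ref RouteTable
      DestinationCidrBlock: 0.0.0.0/0
      GatewayId: !Ref InternetGateway

  Subnet1RouteTableAssociation:
    Type: AWS::EC2::SubnetRouteTableAssociation
    Properties:
      RouteTableId: !Ref RouteTable
      SubnetId: !Ref Subnet1

  Subnet2RouteTableAssociation:
    Type: AWS::EC2::SubnetRouteTableAssociation
    Properties:
      RouteTableId: !Ref RouteTable
      SubnetId: !Ref Subnet2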

Finally we are going to complete our Outputs section. We have just finished the VPC resources, and we will need outputs for other resources/templates. From the VPC we will need:

our subnets as a list and as separate values, our two Availability Zones, our VPC logical ID, and of course our VPC security group.

Let's start with the grouped subnets: we can call the output Subnets. To group or append both subnets we are going to use the Join function; this function appends a set of values into a single value, separated by the specified delimiter. If the delimiter is the empty string, the values are concatenated with no delimiter. In our case we are going to use the comma as the delimiter.

For Subnet1 and Subnet2, we just reference the resources created in the template with the same names, using the Ref function as usual.

Next we need the Availability Zones. To get AZ1 we use an attribute of the Subnet1 resource created above, so again we use the GetAtt function and type: !GetAtt Subnet1.AvailabilityZone, and we do the same for AZ2.

Next we output the VPC logical ID, the same way we did in the InternetGatewayAttachment or the RouteTable resource.

Finally we will need an attribute generated by the VPC resource, the VPC DefaultSecurityGroup; we use the GetAtt function to retrieve it and just type: !GetAtt VPC.DefaultSecurityGroup.
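A sketch of that Outputs section:

Outputs:
  Subnets:
    Value: !Join [ ',', [ !Ref Subnet1, !Ref Subnet2 ] ]
  Subnet1:
    Value: !Ref Subnet1
  Subnet2:
    Value: !Ref Subnet2
  AvailabilityZone1:
    Value: !GetAtt Subnet1.AvailabilityZone
  AvailabilityZone2:
    Value: !GetAtt Subnet2.AvailabilityZone
  VpcId:
    Value: !Ref VPC
  VpcSecurityGroup:
    Value: !GetAtt VPC.DefaultSecurityGroup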

With this we complete our VPC CloudFormation template and it is ready to be used. In the next lecture we are going to build our load balancer CloudFormation template.

Virtual Private Cloud (VPC) Cloudformation Definition
11:07

Elastic Load Balancer Cloudformation Definition

In the last lecture we built the VPC module of our architecture; let's now build the Load Balancer module.

Transcript.

Elastic Load Balancing automatically distributes incoming application traffic across multiple Amazon EC2 instances. It enables you to achieve fault tolerance in your applications, and provide the required amount of load balancing capacity needed to route application traffic.

Elastic Load Balancing offers two types of load balancers that both feature high availability, automatic scaling, and robust security. These include the Classic Load Balancer that routes traffic based on either application or network level information, and the Application Load Balancer that routes traffic based on advanced application level information that includes the content of the request. The Classic Load Balancer is ideal for simple load balancing of traffic across multiple EC2 instances, while the Application Load Balancer is ideal for applications needing advanced routing capabilities, microservices, and container-based architectures. Application Load Balancer offers ability to route traffic to multiple services or load balance across multiple ports on the same EC2 instance.

As we can see in our diagram, we are going to use the Application Load Balancer, the ideal choice to define routing rules based on content across multiple services or containers running on one or more Amazon EC2 instances and, of course, on ECS (EC2 Container Service).

In the last lecture we built the VPC module; let's now build the Load Balancer module of our architecture.

As usual we start with our 4 parts: Description, Parameters, Resources, and Outputs (as you can see, we are not going to define the Metadata section in this template).

Description: let's add a simple description for our template: Application Load Balancer configuration template.

Now the Parameters section. Looking at our main architecture YAML file, we are passing 2 parameters to this template:

- the Subnets, in a list format

- and the VPC logical ID

Both parameters are outputs from the VPC resource we specified in the last lecture, and you will see this pattern a lot in nested or modular CloudFormation templates.

Let's go back to our load balancer template. In parameters we should add VpcId with a String type, and Subnets with a list-of-subnet-IDs type. With this declaration we are saying that our Subnets parameter is an array of subnet IDs.

Now, the Resources section. Let's start by defining the ALB (Application Load Balancer) security group. The goal of this security group is to define a rule that only allows traffic to and from port 80, i.e., web traffic.

Here we could also add 443, for HTTPS traffic.

So we start by declaring the type "AWS::EC2::SecurityGroup"; this will create an Amazon EC2 security group. We want to make it a VPC security group, so we will need to add the VPC logical ID in our properties; let's first finish the security group setup and add the VPC ID at the end.

Going into more detail: a security group acts as a virtual firewall for your instance to control inbound and outbound traffic. When you launch an instance in a VPC, you can assign the instance to up to five security groups. Security groups act at the instance level, not the subnet level. We want to give the security group a description, and for that we use the GroupDescription property. Then we want to allow traffic on port 80 from any IP; this will handle our web traffic. Finally, we just add the VPC logical ID.
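A minimal sketch of this security group (the GroupDescription text is an assumption):

  SecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: !Sub ${AWS::StackName}-alb   # description text is an assumption
      SecurityGroupIngress:
        - CidrIp: 0.0.0.0/0
          IpProtocol: tcp
          FromPort: 80
          ToPort: 80
      VpcId: !Ref VpcId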

Next we are going to define the load balancer itself.

We want our balancer to be an Application Load Balancer, so the type will be AWS::ElasticLoadBalancingV2::LoadBalancer. With this we create an Elastic Load Balancing Application Load Balancer that distributes incoming application traffic across multiple targets (such as EC2 instances) in multiple Availability Zones. Next we need to define the properties for this resource. First we specify the Subnets property: its value comes from our setup parameters and specifies a list of at least two IDs of the subnets to associate with the load balancer; the subnets must be in different Availability Zones. The other property will be SecurityGroups, for which we reference the resource created before; this property allows us to specify a list of the IDs of the security groups to assign to this load balancer.

Next we need to declare a listener for our Application Load Balancer; we use the AWS::ElasticLoadBalancingV2::Listener resource type. With this we create a listener that checks for connection requests and forwards them to one or more target groups. The rules that you define for a listener determine how the load balancer routes requests to the targets in one or more target groups.

We are going to add our listener properties. First we reference the load balancer we just created in the LoadBalancerArn property; next we need the port and protocol for our rule, and the last thing is to specify the default actions, i.e., the Elastic Load Balancing listener action taken when handling incoming requests. We are going to forward the traffic to a valid target group; we will cheat a little bit here and reference the TargetGroup we are going to define next.
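A sketch of the load balancer and its listener:

  LoadBalancer:
    Type: AWS::ElasticLoadBalancingV2::LoadBalancer
    Properties:
      Subnets: !Ref Subnets
      SecurityGroups:
        - !Ref SecurityGroup

  LoadBalancerListener:
    Type: AWS::ElasticLoadBalancingV2::Listener
    Properties:
      LoadBalancerArn: !Ref LoadBalancer
      Port: 80
      Protocol: HTTP
      DefaultActions:
        - Type: forward
          TargetGroupArn: !Ref TargetGroup   # defined below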

Now it's the target group's turn. We can build a target group thanks to the AWS::ElasticLoadBalancingV2::TargetGroup type; it creates an Elastic Load Balancing target group that routes requests to one or more registered targets, such as EC2 instances.

Before continuing with the properties, we are going to define a specific creation order; for that we use the DependsOn attribute. With the DependsOn attribute you can specify that the creation of a specific resource follows another: when you add a DependsOn attribute to a resource, that resource is created only after the creation of the resource specified in the DependsOn attribute, in our case the LoadBalancer resource.

Now, the properties. First, the target group needs the VPC logical ID in which the targets are located; we use the same approach as in the security group resource. Next we define the port on which the targets receive traffic (80), then the protocol to use for routing traffic to the targets (HTTP). Next we specify the HTTP codes that a healthy target uses when responding to a health check, and then we set up the health check parameters:

HealthCheckIntervalSeconds: The approximate number of seconds between health checks for an individual target, 10 seconds.

HealthCheckPath: The ping path destination where Elastic Load Balancing sends health check requests; let's use the root, or slash /

HealthCheckProtocol: The protocol that the load balancer uses when performing health checks on the targets, such as HTTP.

HealthCheckTimeoutSeconds: The number of seconds to wait for a response before considering that a health check has failed. 5 seconds

HealthyThresholdCount: The number of consecutive successful health checks that are required before an unhealthy target is considered healthy. 2

With this our load balancer health check configuration is ready; now it's time to define our target group attributes:

1. The deregistration_delay.timeout_seconds key: the amount of time for Elastic Load Balancing to wait before changing the state of a deregistering target from draining to unused. The range is 0-3600 seconds and the default value is 300 seconds. We are going to use 30 seconds.

2. Next we get into a crucial part: if our app needs user sessions, we have to specify it here. Sticky sessions are a mechanism to route requests to the same target in a target group. This is useful for servers that maintain state information in order to provide a continuous experience to clients. To use sticky sessions, the clients must support cookies. When a load balancer first receives a request from a client, it routes the request to a target and generates a cookie to include in the response to the client. The next request from that client contains the cookie. If sticky sessions are enabled for the target group and the request goes to the same target group, the load balancer detects the cookie and routes the request to the same target. To achieve this we set:

       - Key: stickiness.enabled
         Value: true

       - Key: stickiness.lb_cookie.duration_seconds   (cookie lifetime)
         Value: 86400

       - Key: stickiness.type
         Value: lb_cookie
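Putting it together, the target group might be sketched like this; the Matcher value is an assumption, while the rest follows the settings above:

  TargetGroup:
    Type: AWS::ElasticLoadBalancingV2::TargetGroup
    DependsOn: LoadBalancer
    Properties:
      VpcId: !Ref VpcId
      Port: 80
      Protocol: HTTP
      Matcher:
        HttpCode: 200-299                  # healthy HTTP codes (assumption)
      HealthCheckIntervalSeconds: 10
      HealthCheckPath: /
      HealthCheckProtocol: HTTP
      HealthCheckTimeoutSeconds: 5
      HealthyThresholdCount: 2
      TargetGroupAttributes:
        - Key: deregistration_delay.timeout_seconds
          Value: 30
        - Key: stickiness.enabled
          Value: true
        - Key: stickiness.lb_cookie.duration_seconds
          Value: 86400
        - Key: stickiness.type
          Value: lb_cookie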

Now the target group is ready; let's build our last resource, the load balancer listener rule. With the AWS::ElasticLoadBalancingV2::ListenerRule type we define which requests an Elastic Load Balancing listener takes action on and the action that it takes. The first property will be the listener ARN, so we reference the listener we created above. Next, the priority: Elastic Load Balancing evaluates rules in priority order, from the lowest value to the highest value; if a request satisfies a rule, Elastic Load Balancing ignores all subsequent rules. Next we have the conditions for the rule's action; here we want the path-pattern condition (which forwards requests based on the URL of the request), using the root / as the value.

Lastly, the action: we already defined the target group, so we reference it:

- TargetGroupArn: !Ref TargetGroup, and we want to forward the traffic, so we type in the value forward.
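A sketch of the listener rule (the priority value is an assumption):

  ListenerRule:
    Type: AWS::ElasticLoadBalancingV2::ListenerRule
    Properties:
      ListenerArn: !Ref LoadBalancerListener
      Priority: 1                           # priority value is an assumption
      Conditions:
        - Field: path-pattern
          Values:
            - /
      Actions:
        - TargetGroupArn: !Ref TargetGroup
          Type: forward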

Finally the outputs we are going to need from this module:

We will need the target group, the security group, and the load balancer URL. To build the URL we are going to use the Sub function, which substitutes variables in an input string with values that we specify; we use it to construct the URL from the load balancer's DNSName attribute.
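A sketch of the Outputs section (the output names are assumptions):

Outputs:
  TargetGroup:
    Value: !Ref TargetGroup
  SecurityGroup:
    Value: !Ref SecurityGroup
  ServiceUrl:
    Description: URL of the load balancer
    Value: !Sub http://${LoadBalancer.DNSName}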

Now our load balancer and target group template is ready to use. In the next lecture we are going to build our ECS cluster module.

Elastic Load Balancer (ELB) Cloudformation Definition
13:46

ECS Cluster Cloudformation Definiton

In the past lecture we defined our VPC architecture with CloudFormation syntax; let's now build the ECS cluster in charge of our compute layer.

Transcript.

Amazon EC2 Container Service (Amazon ECS) is a highly scalable, fast, container management service that makes it easy to run, stop, and manage Docker containers on a cluster of Amazon Elastic Compute Cloud (Amazon EC2) instances. Amazon ECS lets you launch and stop container-based applications with simple API calls, allows you to get the state of your cluster from a centralized service, and gives you access to many familiar Amazon EC2 features.

You can use Amazon ECS to schedule the placement of containers across your cluster based on your resource needs, isolation policies, and availability requirements. Amazon ECS eliminates the need for you to operate your own cluster management and configuration management systems or worry about scaling your management infrastructure.

In the past lecture we defined our VPC architecture with CloudFormation syntax; let's now build the ECS cluster in charge of our compute layer.

Following our initial template structure we have: Description, Parameters, Resources, and Outputs (as you can see, we are not going to define the Metadata section in this template).

Description: here we are going to type in: ECS cluster architecture template.

Our next section is the Parameters section. Here we will have a mix of parameters referenced from the main architecture template and some we add directly here.

First we have InstanceType; this one specifies the EC2 instance type we are going to use in our cluster, with a default value of t2.large.

Next we have ClusterSize; this parameter is also a String type and we are going to use it as the default number of instances running in the cluster.

We are going to use the same Subnets list we used in the load balancer template, and also the security group generated by the load balancer template.

Lastly we will need the VPC logical ID and the VPC security group, both as String types as usual.

This template will have a section called Mappings: the optional Mappings section matches a key to a corresponding set of named values. For example, if you want to set values based on a region, you can create a mapping that uses the region name as a key and contains the values you want to specify for each specific region. You use the Fn::FindInMap intrinsic function to retrieve values in a map. We are going to use the mappings to select Amazon ECS-optimized images for our cluster and give users the option to deploy to different regions.

Now, the Resources section. The EC2 instances in the cluster are going to need an IAM role that grants them access to certain AWS resources. So we will have:

ECSRole: as already explained, this will be an AWS::IAM::Role resource type; let's define the properties for the ECSRole resource.

Let's focus on the main properties. We are going to give the role a name; for that we use the RoleName key and construct the name with the Sub function, joining the stack name with the string ecs-. This will help us identify the role in the AWS console.

Its important to know that The Amazon ECS service makes calls to the Amazon EC2 and Elastic Load Balancing APIs on your behalf to register and deregister container instances with your load balancers. Before you can attach a load balancer to an Amazon ECS service, you must create an IAM role for your services to use before you start them. This requirement applies to any Amazon ECS service that you plan to use with a load balancer.

We are going to create the AssumeRolePolicyDocument property: first we want to allow the Amazon EC2 service to assume the role, and then we use ManagedPolicyArns to specify which managed policy to attach, in our case the pre-baked AmazonEC2ContainerServiceforEC2Role. So we type in ManagedPolicyArns:

and the service-role ARN for AmazonEC2ContainerServiceforEC2Role.

We are finished with our role; now let's add the role to an instance profile so it can be used. To achieve that we declare the InstanceProfile resource, with the AWS::IAM::InstanceProfile type, and in Properties we just add the role, referencing the resource we created before.
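A sketch of the role and instance profile (the exact RoleName format is an assumption based on the stack-name-plus-ecs convention mentioned above):

  ECSRole:
    Type: AWS::IAM::Role
    Properties:
      RoleName: !Sub ${AWS::StackName}-ecs       # naming format is an assumption
      AssumeRolePolicyDocument:
        Statement:
          - Effect: Allow
            Principal:
              Service: ec2.amazonaws.com
            Action: sts:AssumeRole
      ManagedPolicyArns:
        - arn:aws:iam::aws:policy/service-role/AmazonEC2ContainerServiceforEC2Role

  InstanceProfile:
    Type: AWS::IAM::InstanceProfile
    Properties:
      Roles:
        - !Ref ECSRole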

Now we want to set up the security group we are going to use on our EC2 instances; its type will be AWS::EC2::SecurityGroup. Then we need the properties: first let's add a description to this security group with the GroupDescription key, using the Sub function to build a name composed of the stack name and the word hosts.

We want to allow traffic coming from the security group we declared in the load balancer, so we need the SecurityGroupIngress key, referencing the SourceSecurityGroup parameter in the SourceSecurityGroupId key, and IpProtocol with a value of -1 to allow all types of traffic from the resources in that security group.

Lastly we specify the VPC logical ID as usual to use it as a VPC security group.

Then we have the cluster itself; we use the AWS::ECS::Cluster type. In properties we only define the cluster name, referencing the stack name.
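A sketch of the host security group and the cluster, inside this cluster template:

  SecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: !Sub ${AWS::StackName}-hosts
      SecurityGroupIngress:
        - SourceSecurityGroupId: !Ref SourceSecurityGroup
          IpProtocol: -1
      VpcId: !Ref VpcId

  Cluster:
    Type: AWS::ECS::Cluster
    Properties:
      ClusterName: !Ref AWS::StackName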

The next resource is one of the largest definitions we need to build. We want our cluster to auto-scale its EC2 instances up and down, so we follow the same process we normally use to declare auto scaling in AWS: we will need an AutoScalingGroup and a LaunchConfiguration.

Beginning with the auto scaling resource, we have the type AWS::AutoScaling::AutoScalingGroup, and the properties will be:

VPCZoneIdentifier: A list of subnet identifiers of Amazon Virtual Private Cloud (Amazon VPCs) for the group.

LaunchConfigurationName: Here we are going to reference our LaunchConfiguration resource (the one we are going to define after finishing this one).

MinSize: The minimum number of instances in the cluster.

MaxSize: The maximum number of instances during auto scaling.

DesiredCapacity: Specifies the desired capacity for the Auto Scaling group.

Then we define the Auto Scaling group name tag with the Sub function and set the key PropagateAtLaunch to true; thanks to this, the tag is applied to the instances the group launches. Next we have CreationPolicy; with this we specify the maximum time to wait for the resource creation signals.

Lastly, the UpdatePolicy attribute: we want our update policy to behave like the creation one, so we use MinInstancesInService: !Ref ClusterSize, MaxBatchSize: 20, PauseTime: PT15M, and WaitOnResourceSignals: true.
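A sketch of the Auto Scaling group with those policies (the MaxSize value and the Name tag format are assumptions):

  AutoScalingGroup:
    Type: AWS::AutoScaling::AutoScalingGroup
    Properties:
      VPCZoneIdentifier: !Ref Subnets
      LaunchConfigurationName: !Ref LaunchConfiguration
      MinSize: !Ref ClusterSize
      MaxSize: 20                          # upper bound is an assumption
      DesiredCapacity: !Ref ClusterSize
      Tags:
        - Key: Name
          Value: !Sub ${AWS::StackName}-ecs-host   # tag value is an assumption
          PropagateAtLaunch: true
    CreationPolicy:
      ResourceSignal:
        Timeout: PT15M
    UpdatePolicy:
      AutoScalingRollingUpdate:
        MinInstancesInService: !Ref ClusterSize
        MaxBatchSize: 20
        PauseTime: PT15M
        WaitOnResourceSignals: true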

Our last resource will be the LaunchConfiguration, with the type AWS::AutoScaling::LaunchConfiguration. Here comes an interesting part: we need to execute a couple of commands on the cluster instances during launch to set up everything needed to make each instance part of our cluster. To accomplish this we use the cfn-init helper script, which can perform all of these actions on our instance:

Fetch and parse metadata from CloudFormation

Install packages

Write files to disk

Enable/disable and start/stop services

Remember that our container instances are launched with the Amazon ECS-optimized AMI, and we can set environment variables in the file /etc/ecs/ecs.config; the first thing we want to set is the ECS_CLUSTER environment variable with our cluster name.

Then we will need a couple of files with more configuration data, and finally we want to execute all these commands during instance launch.

CloudFormation provides the AWS::CloudFormation::Init type to include metadata on an Amazon EC2 instance for the cfn-init helper script. If our template calls the cfn-init script, the script looks for resource metadata rooted in the AWS::CloudFormation::Init metadata key.

So we type in:

Metadata:

AWS::CloudFormation::Init:

The metadata needs to be organized in config keys, so inside our config key we have:

commands and files. In commands we only want to append the ECS_CLUSTER variable to /etc/ecs/ecs.config; we use the echo command for that.

Then files: we are going to add 2 files. One will be the cfn-hup.conf configuration file, with 0400 permissions, owner and group root, containing the stack and region variables.

The second file is the cfn-auto-reloader.conf configuration file; this one is just a hook file that launches the cfn-init command with a specific configuration during launch or updates.

The last key we need in the CloudFormation init is services; the services key defines which services should be enabled or disabled when the instance is launched. On Linux systems, this key is supported using sysvinit. Of course we want cfn-hup running here, using the configuration files we just explained.

To finnish with this resource we are going to specify the Property for our LaunchConfiguration resource, here we will need the imageID, the instance type, The IAM instance profile, this 3 values are referenced from our set up parameters, next we can specify here a ssh key if we want to access this instance via SSH. The security group, and last but not less important our User data,

a shell script that first installs the cfn bootstrap package, to have access to the cfn commands, and then executes cfn-init and cfn-signal: the first sets up all the configuration needed, and the second indicates whether the Amazon EC2 instances have been successfully created or updated. If you install and configure software applications on instances, you can signal AWS CloudFormation when those software applications are ready.
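Putting the metadata and the properties together, a hedged sketch of the launch configuration might look like the following; the resource and parameter names (ECSCluster, ECSAMI, InstanceType, InstanceProfile, KeyName, SecurityGroup, ECSAutoScalingGroup) are placeholders for whatever is defined in your own template:

ECSLaunchConfiguration:
  Type: AWS::AutoScaling::LaunchConfiguration
  Metadata:
    AWS::CloudFormation::Init:
      config:
        commands:
          01_add_instance_to_cluster:
            # append the ECS_CLUSTER variable so the ECS agent joins our cluster
            command: !Sub echo ECS_CLUSTER=${ECSCluster} >> /etc/ecs/ecs.config
        files:
          "/etc/cfn/cfn-hup.conf":
            mode: "000400"
            owner: root
            group: root
            content: !Sub |
              [main]
              stack=${AWS::StackId}
              region=${AWS::Region}
          "/etc/cfn/hooks.d/cfn-auto-reloader.conf":
            content: !Sub |
              [cfn-auto-reloader-hook]
              triggers=post.update
              path=Resources.ECSLaunchConfiguration.Metadata.AWS::CloudFormation::Init
              action=/opt/aws/bin/cfn-init -v --region ${AWS::Region} --stack ${AWS::StackName} --resource ECSLaunchConfiguration
        services:
          sysvinit:
            cfn-hup:
              enabled: true
              ensureRunning: true
              files:
                - /etc/cfn/cfn-hup.conf
                - /etc/cfn/hooks.d/cfn-auto-reloader.conf
  Properties:
    ImageId: !Ref ECSAMI                 # ECS-optimized AMI
    InstanceType: !Ref InstanceType
    IamInstanceProfile: !Ref InstanceProfile
    KeyName: !Ref KeyName                # optional SSH key pair
    SecurityGroups:
      - !Ref SecurityGroup
    UserData:
      Fn::Base64: !Sub |
        #!/bin/bash
        yum install -y aws-cfn-bootstrap
        /opt/aws/bin/cfn-init -v --region ${AWS::Region} --stack ${AWS::StackName} --resource ECSLaunchConfiguration
        /opt/aws/bin/cfn-signal -e $? --region ${AWS::Region} --stack ${AWS::StackName} --resource ECSAutoScalingGroup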

Our last section is the Outputs, and we are just going to output our cluster name. Now we are ready to add our service/task definition to our ECS cluster.

EC2 Container Service (ECS) Cluster Cloudformation Definition
13:17

Service & Task Definition Cloudformation Definition

Lets start now with our template yaml file for the Service & Task definition:

Transcript.

Amazon ECS allows you to run and maintain a specified number  of instances of a task definition simultaneously in an ECS cluster. This is called a service. If any of your tasks should fail or stop for any reason, the Amazon ECS service scheduler launches another instance of your task definition to replace it and maintain the desired count of tasks in the service.

Our diagram explains the whole process: a service is composed of a desired count of task definition instances. The Amazon ECS scheduler is in charge of placing the tasks on the instances inside the cluster, with the help of the ECS agent, and keeping the desired count of tasks running for the service.

Remember that our Application Load Balancer distributes traffic across the tasks that are associated with the service.

Lets start now with our template yaml file for the Service & Task definition:

We can see our four main starting parts:

First Description: lets type in Service & task definition configuration Template.

Second Parameters: going back to our main architecture template we are passing 2 parameters to the Service module:

Cluster and TargetGroup: the cluster will be an output from the Cluster template and the TargetGroup will be an output from the Load Balancer template. So going back to our Service template we should have these two parameters, and we are going to add a couple more. One will be an environment variable for our container; let's call it Tag, it will be a String type and will have a default value of latest. The other parameter will be the desired count of tasks; we are going to use a Number type and the default will be 0. In the next lectures we are going to figure out why the default is 0.

The two parameters referenced from the main architecture template, TargetGroup and Cluster, are going to be String type.
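A minimal sketch of how this Parameters section could look, under those assumptions:

Parameters:
  Cluster:
    Type: String
  TargetGroup:
    Type: String
  Tag:
    Type: String
    Default: latest        # Docker image tag used by the container definition
  DesiredCount:
    Type: Number
    Default: 0             # no tasks until the pipeline deploys the first image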

Next, the Resources section. It's important to notice that we want to allow our Amazon ECS container agent to make calls to the Application Load Balancer. So the first resource we are going to create is the ECSServiceRole; this one will be an AWS::IAM::Role type and the properties will be really similar to the role we created in the ECS Cluster template. As usual our Path is the slash, the RoleName property will use the Sub function to join the ecs-service string with the stack name, next we have the AssumeRolePolicyDocument allowing the assume-role functionality, and the last property will be ManagedPolicyArns: as before, we are going to use the pre-baked AmazonEC2ContainerServiceRole, as we did in the ECS Cluster template.

Next resource will be Repository, we need to specify a ECR repository for our template. Amazon EC2 Container Registry (ECR) is a fully-managed Docker container registry that makes it easy for developers to store, manage, and deploy Docker container images. Amazon ECR is integrated with Amazon EC2 Container Service (ECS), simplifying your development to production workflow. Amazon ECR eliminates the need to operate your own container repositories or worry about scaling the underlying infrastructure. Amazon ECR hosts your images in a highly available and scalable architecture, allowing you to reliably deploy containers for your applications.

For this Repository resource we are going to use the type AWS::ECR::Repository, so we will be able to push and pull Docker images to and from our ECR. Finally, the DeletionPolicy.

With the DeletionPolicy attribute you can preserve or (in some cases) backup a resource when its stack is deleted. You specify a DeletionPolicy attribute for each resource that you want to control. If a resource has no DeletionPolicy attribute, AWS CloudFormation deletes the resource by default.

To keep a resource when its stack is deleted, specify Retain for that resource. You can use Retain for any resource, so we type in Retain.
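So the resource, as a sketch, reduces to just a type and a deletion policy:

Repository:
  Type: AWS::ECR::Repository
  DeletionPolicy: Retain    # keep the image repository even if the stack is deleted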

The next resource is the service itself; this one will be an AWS::ECS::Service. This creates an Amazon EC2 Container Service (Amazon ECS) service that runs and maintains the requested number of tasks and associated load balancers.

Now the properties for this resource. First, the Cluster where this service is going to run; this will be a reference to the Cluster parameter we declared in the Parameters section. The second property will be the Role, with which we allow the Amazon ECS container agent to make API calls to our load balancer. The third property will be the DesiredCount, with which we establish the number of simultaneous tasks that we want to run on the cluster; we again reference the DesiredCount parameter from our Parameters section. The next property is the TaskDefinition, where we reference the next resource we are going to create, the task definition. The last property is LoadBalancers; here we specify a container name, the port number on the container to direct load balancer traffic to, and the Application Load Balancer target group Amazon Resource Name (ARN) to associate with the Amazon ECS service.
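A hedged sketch of the service resource, assuming a container name of simple-app (the actual name is whatever you use in the task definition):

Service:
  Type: AWS::ECS::Service
  Properties:
    Cluster: !Ref Cluster
    Role: !Ref ECSServiceRole
    DesiredCount: !Ref DesiredCount
    TaskDefinition: !Ref TaskDefinition
    LoadBalancers:
      - ContainerName: simple-app       # must match the name in the container definition
        ContainerPort: 80
        TargetGroupArn: !Ref TargetGroup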

The last resource we are going to define is the TaskDefinition resource, with the type AWS::ECS::TaskDefinition. In Properties we first have the Family, the name of the family that this task definition is registered to. A family groups multiple versions of a task definition: Amazon ECS gives the first task definition that you register to a family a revision number of 1, and sequential revision numbers to each task definition that you add. We are going to use the Sub function to join the stack name with the string app.

Then comes the core part, the container definition. First we have a Name; I will use the same one I defined in the previous resource. Then Image: here we add the route to our repository URL containing the image, and use the Tag parameter we defined in the Parameters section to specify the Docker image version, in our case latest. Then the CPU units: here we specify the minimum number of CPU units to reserve for the container. Remember that containers share unallocated CPU units with other containers on the instance, using the same ratio as their allocated CPU units. The same happens with Memory: here we give a number of MiB of memory to reserve for the container, and if the container attempts to exceed the allocated memory, it is terminated. We can also use MemoryReservation, the number of MiB of memory to reserve for the container: when system memory is under contention, Docker attempts to keep the container memory within this limit, and if the container requires more memory, it can consume up to the value specified in the Memory property or all of the available memory on the container instance, whichever comes first. We are going to use PortMappings for our container to expose port 80, and will set up an environment variable with the Tag parameter.
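A sketch of the task definition following that description; the container name, the CPU units value and the ECR image URL format are illustrative assumptions:

TaskDefinition:
  Type: AWS::ECS::TaskDefinition
  Properties:
    Family: !Sub ${AWS::StackName}-app
    ContainerDefinitions:
      - Name: simple-app
        Image: !Sub ${AWS::AccountId}.dkr.ecr.${AWS::Region}.amazonaws.com/${Repository}:${Tag}
        Cpu: 128                         # minimum CPU units reserved for the container
        Memory: 490                      # hard memory limit in MiB (tweaked later in the course)
        PortMappings:
          - ContainerPort: 80
        Environment:
          - Name: Tag
            Value: !Ref Tag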

And it's done, we have our service and task definition ready. Let's now fill in our last section, the Outputs: as a result of this module we only need the ECR repository we just created, and that's all.

We are now ready to build our stack. There is one more part we need to define, our database, and that is the one we are going to explain in the next lecture.

Service & Task Definition Cloudformation Definition
10:58

Aurora Database Cluster Cloudformation Definition.

Its time to build our Aurora DB cluster in our Yaml file.

Transcript.

An Amazon Aurora DB cluster is made up of DB engine instances and a cluster volume that represents data copied across the Availability Zones as a single, virtual volume. There are two types of instances in a DB cluster: a primary instance and Aurora Replicas.

The primary instance performs all of the data modifications to the DB cluster and also supports read workloads. Each DB cluster has one primary instance. An Aurora Replica supports only read workloads, and each DB cluster can have up to 15 Aurora Replicas. You can connect to any instance in the DB cluster using an endpoint address.

To implement our Database cluster we are going to use the Aurora DB Engine, Amazon Aurora (Aurora) is a fully managed, MySQL-compatible, relational database engine that combines the speed and reliability of high-end commercial databases with the simplicity and cost-effectiveness of open-source databases. It delivers up to five times the performance of MySQL without requiring changes to most of your existing applications.

The following diagram illustrates the relationship between the Amazon Aurora cluster volume and the primary and Aurora Replicas in the Aurora DB cluster.

It's time to build our Aurora DB cluster in our YAML file. We start with our four main parts: Description, Parameters, Resources and Outputs.

The description will be a sentence as usual, Aurora Database Cluster with one replication instance and Multi Availability Zones.

Lets check our main architecture template to verify the parameters we passed to our DB Cluster template:

Almost every parameter we are going to need in our DB cluster comes as an output from the VPC module, and only one parameter comes from the stack setup. So, from the VPC outputs we have:

The Subnet one , the Subnet two, the Availability Zone 1 and the Availability Zone 2.

From the stack setup we will need the DB Password access.

Going back to our DB Cluster template file, we will declare these five parameters with type String.

The next section is Resources. The Aurora documentation specifies that we need a VPC with at least two subnets in at least two different Availability Zones, so the first resource we are going to need is the RDS subnet group.

We are going to call this resource DBSubnetGroup and it will be an AWS::RDS::DBSubnetGroup type; this one creates an RDS database subnet group. Subnet groups must contain at least two subnets in two different Availability Zones in the same region, and we already guarantee this in the VPC module implementation.

The properties for this resource will be:

- DBSubnetGroupDescription: as with the Description section, we just type in a description here, Database Subnet Groups.

- Next the SubnetIds: here we list the EC2 subnet IDs for our subnet group, referencing Subnet1 and Subnet2 from our Parameters section.
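The whole resource is small; as a sketch:

DBSubnetGroup:
  Type: AWS::RDS::DBSubnetGroup
  Properties:
    DBSubnetGroupDescription: Database Subnet Groups
    SubnetIds:
      - !Ref Subnet1
      - !Ref Subnet2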

The next resource we need to declare is the DB cluster itself. We are going to name it RDSCluster and it will be an AWS::RDS::DBCluster, which creates a cluster, such as an Amazon Aurora DB cluster.

Next we declare the properties for our Database Cluster. The first we are going to need is the Database master username, we are going to type in here, admin.

Second we have the database master password, this will be a reference from our parameter section.

Third the database name, lets call it MyDB

Next we are going to set up our database engine, with the Engine property. Here we are going to type in Aurora.

Then we have the DBSubnetGroupName; at this point we just reference the resource we created right before this one. And the last property will be

the DBClusterParameterGroupName, which allows us to define parameters that apply to the whole cluster; there is also the possibility to apply parameters to specific instances inside the cluster. Cluster-level parameters are managed through this DB cluster parameter group property, and we are going to reference here the next resource we will create in the template. On the other hand, instance-level parameters are managed in DB parameter groups.

Although each instance in an Aurora DB cluster is compatible with the MySQL database engine, some of the MySQL database engine parameters must be applied at the cluster level, and are managed using DB cluster parameter groups.
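Putting those properties together, a minimal sketch of the cluster resource, assuming the password parameter is called DBPassword:

RDSCluster:
  Type: AWS::RDS::DBCluster
  Properties:
    MasterUsername: admin
    MasterUserPassword: !Ref DBPassword
    DatabaseName: MyDB
    Engine: aurora
    DBSubnetGroupName: !Ref DBSubnetGroup
    DBClusterParameterGroupName: !Ref RDSDBClusterParameterGroup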

Our next resources are the DB instances. We are going to create two similar DB instances, but one will live in Availability Zone 1 and the other in Availability Zone 2; the one in AZ1 will be our primary instance, and the one in AZ2 will be our replica.

Lets start with the RDSDBInstance1: the type for this instance will be AWS::RDS::DBInstance, and the properties will be:

- DBSubnetGroupName, or the Database Subnet Group Name, as you are probably guessing, this will be a reference to the resource we created above.

- DBParameterGroupName, this one will be a reference to the resource we need to create next.

- We specify the Engine, and will type in Aurora.

- DBClusterIdentifier, here we have a reference for the DB cluster created before.

- We DON'T want this DB instance to be publicly accessible, so we set this property to false, because we are only going to access it over the subnets.

- Next the Availability Zone: here we reference AZ 1, and of course in the second RDS instance we will reference AZ 2.

- And the last property will be the DB instance class (DBInstanceClass); here we type in: db.r3.xlarge

As we mentioned before, the next resource will be another DB instance, with the only difference being that the Availability Zone will be a reference to AZ 2.
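A sketch of the first instance, assuming the Availability Zone parameters are named AZ1 and AZ2 (the second instance is identical except that AvailabilityZone references AZ2):

RDSDBInstance1:
  Type: AWS::RDS::DBInstance
  Properties:
    DBSubnetGroupName: !Ref DBSubnetGroup
    DBParameterGroupName: !Ref RDSDBParameterGroup
    Engine: aurora
    DBClusterIdentifier: !Ref RDSCluster
    PubliclyAccessible: false           # reachable only inside the VPC subnets
    AvailabilityZone: !Ref AZ1
    DBInstanceClass: db.r3.xlarge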

Then we have the last two resources, the RDSDBClusterParameterGroup and the RDSDBParameterGroup. Both specify parameters, for the whole cluster and for the instances respectively; the difference is that parameters specified at the cluster level cover global settings, like the time zone, while the instance-level ones are more related to the instances, for example the SQL mode. In both we can specify a description and also the engine family, in our case aurora5.6.

With this we finish our template. We are not going to add any outputs for this template, but as homework you can add a DB cluster URL output to use in your application's DB connection setup.

In the next lecture we are going to create our stack using the AWS Console and analyze all the modules created.

Aurora Database (RDS) Cluster Cloudformation Definition
09:53
+
Cloudformation Execution and Details
1 Lecture 09:26

Deep Dive in the Cloudformation Magic. (Following the Cloudformation Execution)

In the past lecture we set up the cloudformation service, and started the building process. Lets now follow the cloudformation building process...

Transcript.

In the past lecture we set up the cloudformation service, and started the building process. Lets now follow the cloudformation building process...

In the Stacks screen we are able to see the whole process. First it starts with the CREATE_IN_PROGRESS status for our main architecture template, and the first module CloudFormation is going to create is our VPC stack. We can check everything happening in the Events tab, selecting the stack we want to explore.

For each stack we can check the parameters, the template they are using, the events, and the outputs.

Lets start checking the resources already created.

First, the VPC, go to the Network Section and click the VPC service.

We can verify the VPC name and the CIDR block we specified in our template file, then we can check the subnets associated:

These are easier to find if we sort the subnets by name:

There are two important things to notice here: first, the two subnets fall under the VPC CIDR block and show the number of IPs available; second, each of these subnets was created in a different Availability Zone,

us-east-1a and us-east-1b.

We can also check our route tables with the local and the internet gateway routes, and of course the subnets associated with this route table.

Then we check that our internet gateway is attached, and finally the default security group; let's remember this identifier, vpc-

As you can see there is only one security group allowing incoming traffic only from the same Security Group.

Going back to our cloudformation stack screen we can check the outputs created from the VPC stack.

We see that our load balancer is already created; let's go to the Compute section and click EC2 to check the load balancer configuration.

Lets check now our Load balancer. The LoadBalancer is inside the EC2 service.

Inside the cicd loadbalancer details we are able to see:

1.- The vpc associated is the one we just created before

2.- The Availability zones for this load balancer are related to the subnets we created in the last step.

3.- This load balancer is an internet facing Load Balancer

4.- Let's check the security group associated with this load balancer, it's the number... We can see in the details section, under Inbound Rules, that only traffic on port 80 is allowed.

5.- In the Listener tab, we can verify our port 80 listener and of course the tag StackName we just created during the Stack setup.

Going Back to the stack screen, lets check the ECS cluster details.

The ECS cluster was created; you can verify that there is 1 service running and 2 container instances. Going into the cluster detail, we can see in the ECS Instances tab both instances running, the CPU available, the memory available and of course the Docker version on these instances.

Its nice to notice that our instances are running one in the AZ1 and the other in the AZ2.

We won't see anything in the Tasks tab, because we are not running any Docker image on these instances.

Let's go to the Task Definitions link; there we will see the task definition we defined in our template file. Let's click it to see the family name and the family revision.

If we click it again we can start checking all the task definition detail we defined, CPU units, memory, port, etc.

The last thing to check here is the ECR repository created; remember this is the one with the DeletionPolicy attribute, so if we delete our stack this repository is not going to be deleted.

The last Module to check is the Database Cluster:

The first things to check are the Aurora engine on the two instances, the DB instance class type that we set up, the Multi-AZ setting and the replication role, as we explained before.

One important piece of data we will need from here is the cluster endpoint; this will be the URL we use to connect from our Docker containers.

Inside we can also check more details, like the parameters groups we set up, the subnet attached, the Availability zone, the port and more.

Going back to our cloudformation stack screen, we see that all our architecture is ready, and with this we finish the first part of the course. In the next part we are going to explore and implement our deployment pipeline, tweak a little bit our architecture to verify the change set actions and of course, see our application and deployment pipeline in action.

The Cloudformation Magic
09:26
+
Content & Coding - Deployment Pipeline
6 Lectures 47:45

Deployment Pipeline with CodePipeline

In part 1 of the course we created our base architecture, based on an ECS cluster, a DB cluster and an Application Load Balancer, backed by Infrastructure as Code thanks to the CloudFormation templates.

Transcript.

In part 1 of the course we created our base architecture, based on an ECS cluster, a DB cluster and an Application Load Balancer, backed by Infrastructure as Code thanks to the CloudFormation templates.

In part 2 we are going to focus on the developer side: the continuous integration and continuous deployment implementation, and of course the tools we need to do it in an efficient way.

We want every code change our developers push to the CodeCommit repo we defined in the first part to be automatically built, tested, and prepared for release to production.

The tool we are going to learn and use is AWS CodePipeline. AWS CodePipeline is a continuous integration and continuous delivery service for fast and reliable application and infrastructure updates. CodePipeline builds, tests, and deploys your code every time there is a code change, based on the release process models you define. This enables you to rapidly and reliably deliver features and updates.

Benefits

Rapid Delivery

AWS CodePipeline automates your software release process, allowing you to rapidly release new features to users. With CodePipeline, you can quickly iterate on feedback and get new features to customers faster.

Improved Quality

Automating your build, test, and release process allows you to easily test each code change and catch bugs while they are small and simple to fix. You can assure the quality of your application or infrastructure code by running each change through your standardized release process.

Configurable Workflow

AWS CodePipeline allows you to model the different stages of your software release process through a graphical user interface. You can specify the tests to run and the steps to deploy your application and its dependencies.

Get Started Fast

With AWS CodePipeline, you can immediately begin to model your software release process. There are no servers to provision or set up. CodePipeline is a fully managed continuous delivery service that connects to your existing tools and systems.

As usual we want our CodePipeline definition in a CloudFormation template, and that's what we are going to build in the next video.

Deployment Pipeline - CodePipeline Overview
03:20

CodePipeline definition over cloudformation

Now that we understand the whole idea behind CodePipeline, we want this to be part of our Infrastructure as Code templates.

Let's go now to the main architecture template file we already built; we want our pipeline to be in this file as a module. So under DatabaseCluster we add the resource DeploymentPipeline.

Transcript.

Workflow Modeling

A pipeline defines your release process workflow, and describes how a new code change progresses through your release process. A pipeline comprises a series of stages (e.g., build, test, and deploy), which act as logical divisions in your workflow. Each stage is made up of a sequence of actions, which are tasks such as building code or deploying to test environments. AWS CodePipeline provides you with a graphical user interface to create, configure, and manage your pipeline and its various stages and actions, allowing you to easily visualize and model your release process workflow.

Now that we understand the whole idea behind CodePipeline, we want this to be part of our Infrastructure as Code templates.

Let's go now to the main architecture template file we already built; we want our pipeline to be in this file as a module. So under DatabaseCluster we add the resource DeploymentPipeline.

This one will be an AWS::CloudFormation::Stack type.

Properties. Here we will have the TemplateURL lets type in https://s3.amazonaws.com/cicdoveraws/deployment-pipeline.yaml .

Next, the parameters we are going to need for this module. Because CodePipeline will need to change and deploy different parts of the app, we will need:

The ECS Cluster name:

The codeCommit repository:

The repository branch:

The LoadBalancer targetgroup:

The ECR, or the Continainer Repository:

The Stackname

and finally The template bucket

With this we finish the parameters needed for the DeploymentPipeline module; in the next lecture we are going to build the module itself, with 3 stages in it, to deliver automatically from source to production.
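As a reference, the module entry in the main architecture template could be sketched like this; the nested-stack output names (ClusterName, TargetGroup, Repository) are assumptions standing in for whatever your other modules actually export:

DeploymentPipeline:
  Type: AWS::CloudFormation::Stack
  Properties:
    TemplateURL: https://s3.amazonaws.com/cicdoveraws/deployment-pipeline.yaml
    Parameters:
      Cluster: !GetAtt Cluster.Outputs.ClusterName
      CodeCommitRepo: !Ref CodeCommitRepo
      RepositoryBranch: !Ref RepositoryBranch
      TargetGroup: !GetAtt LoadBalancer.Outputs.TargetGroup
      Repository: !GetAtt Service.Outputs.Repository
      StackName: !Ref AWS::StackName
      TemplateBucket: cicdoveraws          # the S3 bucket holding the template files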

CodePipeline Main Architecture Reference Cloudformation Definition
03:10

Cloudformation Execution - Building our Stack.

After we finish all our architecture templates, it's now time to build our stack using the CloudFormation service.

Transcript.

After we finish all our architecture templates, it's now time to build our stack using the CloudFormation service.

Lets check first our requirements & considerations for this cloudformation architecture to build successfully:

1.- Taking a look at our main architecture template, Parameters section, we are declaring a CodeCommit repository for our future deployment pipeline. You probably noticed we are not using this repo anywhere yet, so for this step we could remove it, but let's keep it for now, because these parameters are going to be used in the second part of the course, when we define the deployment pipeline.

2.- Let's now check the VPC resource in the same file. We are declaring a CIDR block for the VPC and its subnets. It's important to know that this CIDR block must not be in use in our account; remember that CloudFormation is going to create this VPC and subnets and won't be able to do it if we already have this CIDR in use in our account.

3.- As you can see, all our TemplateURL values point to an S3 bucket; we need to create this S3 bucket in our account, upload our template files there, and give it versioning and static website hosting capability.

4.- Let's go now to the ECS-Cluster YAML file; remember we defined a Mappings property to hold the ECS-optimized AMIs. It's always nice to keep this list updated with the latest AWS list for this purpose.

5.- In the same file there is a property called KeyName, owned by the Auto Scaling resource, that specifies the name of an SSH key pair. We need to have an SSH key pair with that name in the same region we are deploying our stack.

6.- It's also important to note that if we want to access the EC2 instances inside the ECS cluster, we will need a bastion instance in the same VPC with the same security group. This bastion instance needs to be reachable on port 22 from our PC, and from it we connect to our EC2 instances. Remember that our EC2 instances only allow traffic from the ALB on port 80 and from other instances in the same security group; there is no way to access the instances from outside.

With all this in mind, lets now build our Architecture using the clouformation service:

First login to your AWS account and go to the Cloudformation service under the Management Tool Section.

Once you are here, you need to click on the Create Stack button,

Next, in the Select Template screen we click the option Specify an Amazon S3 template URL, and there we type in our S3 bucket URL containing our templates and the name of the main architecture file:

https://s3.amazonaws.com/cicdoveraws/main-arch.yaml

You can always use my S3 bucket, but I really recommend building your own to host your architecture templates.

If we want to check all the dependencies and modify in a visual way our architecture, we can click the View/Edit template in Designer link

Here we see a really cool perspective of the architecture we are building. In this screen we have our 5 stacks, or modules, to be built and the relations between them.

Following the arrows we notice:

- The load balancer needs outputs from the VPC.

- The same happens with the database cluster.

- The ECS cluster needs outputs from the VPC and from the load balancer stack.

- The ECS service needs outputs from the load balancer and of course from the ECS cluster.

- Lastly, the VPC can be created on its own.

With this analysis we can figure out the creation order used by the CloudFormation service for our architecture template.

We are also able to check our template in both Yaml and json format, and of course, verify its properties, metadata,  Deletion Policy, Dependencies and Conditions.

Continuing with our build, we click the Next button and get to the screen where we define our setup parameters. The first thing to add is a name for our stack; let's fill in the Stack name field with: continuous integration and continuous delivery over Amazon Web Services.

Next the parameters we declare in our main architecture template:

Checking our main architecture yaml file we should have the code commit parameters group with the label CodeCommit Repository Configuration and the labels:

- CodeCommit Repository Name

- Code Repository Branch Name (master)

And outside from this parameter group we should have the DB password field.

Going back to our CloudFormation Create Stack screen, we can see that everything is perfect.

Let's now add our CodeCommit repository name. If you haven't created a CodeCommit repo, just open the CodeCommit AWS service in a new tab,

go to the Create repository button, give the repository a name and a description and click the Create repository button.

Once you have your repo ready, type the name of the repository we just created into the CodeCommit Repository Name field.

The next field is the CodeCommit Repository Branch Name; here we just type in master, because we want to work with our master branch.

The last parameter is the password for our database; we can use a really strong password here. Let's type in a nice password, for example $ASD123asdAdmin-$, and click the Next button.

I like the idea of adding tags to our CloudFormation resources. You can use the AWS CloudFormation Resource Tags property to apply tags to resources, which can help you identify and categorize those resources. You can tag only resources for which AWS CloudFormation supports tagging.

So the key will be StackName and the value will be cicdoveraws. We are not going to add any IAM role here, or any advanced option, so we click the Next button.

We get to the review screen, where we can check everything we set in the previous screens. If we made a mistake, we can click the title of a section and go directly back to edit it. We check that everything is correct and go to the Capabilities section; there we check "I acknowledge that AWS CloudFormation might create IAM resources with custom names" and click the Create button to start the magic.

In the next lecture we are going to deep dive into the building magic: the resources created, the order of creation and of course all the services we use in our architecture.

Preview 10:18

Deployment Pipeline Template file. Part 1.

In the last lecture we defined the parameters needed for our deployment pipeline template, lets now build the core functionality.

Transcript.

In the last lecture we defined the parameters needed for our deployment pipeline template, lets now build the core functionality.

This template file is longer than the others we defined before, so we are going to define it across 2 lectures.

First our 4 main parts:

Description, Parameters, Resources and Outputs.

Let's type 'Codepipeline Deployment Pipeline template' into the Description section.

Then the parameters: checking the main architecture file, we are passing seven parameters to this module, and we will include one more in our template.

So we have in our template file:

CodeCommitRepo, RepositoryBranch, TargetGroup, StackName, Repository, Cluster, TemplateBucket, all this seven parameters will be String type.


We will add one more parameter called TestSemaphore; it will be a String type with an empty default value, and we will add a description for readability: 'This parameter is used during the build stage, to set the variable for each test we need to build.' This is because it will be used as a variable during the build process.

Next the Resources section. We want our pipeline to be able to read, create and modify our main architecture, so:

1.- Our deploy stage needs to be able to modify ECS, ECR, IAM, CodeCommit, Application Auto Scaling and CloudWatch.

2.- Our build stage needs to be able to create and add log events, access the ECR, and get and put objects in the artifact bucket.

3.- Our pipeline needs to be able to get and put objects in the S3 artifact bucket and the template bucket, and of course access the services CodeCommit, CodeBuild and CloudFormation, plus IAM to wrap everything together.

So we will need to assume 3 roles:

Let's start with the CloudFormationExecutionRole; this is the special one we are going to need in the deploy stage. Remember we want the deploy stage to be able to modify and get info from ECR, ECS, Auto Scaling, CloudWatch, etc.

This resource will be an AWS::IAM::Role type; we also want this role to be preserved in case of stack deletion, so we use DeletionPolicy: Retain.

Let's start now with the resource properties. We want our role to have a name, so we use the RoleName key and, using the Sub function, concatenate the cfn string with the stack name.

Next for the path user group we are going to use the slash.

Now we are going to declare the assume role action, as we used in other templates.

Lastly the policies for this resource, first we are going to add the PolicyName property, to give a name to the policy.

Then the PolicyDocument: here we are going to allow all actions on the following services:

ECS, ECR, IAM, CODECOMMIT, APPLICATION AUTOSCALING and CLOUDWATCH.
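As a hedged sketch, the role could look like this; the policy name, the assume-role principal and the broad action wildcards are assumptions, and you would normally tighten them for real workloads:

CloudFormationExecutionRole:
  Type: AWS::IAM::Role
  DeletionPolicy: Retain
  Properties:
    RoleName: !Sub cfn-${AWS::StackName}
    Path: /
    AssumeRolePolicyDocument:
      Statement:
        - Effect: Allow
          Principal:
            Service: cloudformation.amazonaws.com   # assumed by CloudFormation during the Deploy stage
          Action: sts:AssumeRole
    Policies:
      - PolicyName: deploy-stage-access
        PolicyDocument:
          Statement:
            - Effect: Allow
              Resource: "*"
              Action:
                - ecs:*
                - ecr:*
                - iam:*
                - codecommit:*
                - application-autoscaling:*
                - cloudwatch:*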

Let's now declare the second role, the CodeBuildServiceRole; the initial structure with the AssumeRolePolicyDocument is the same, so let's go directly to the Policies.

The first Actions we are going to allow are:

- logs:CreateLogGroup

                 - logs:CreateLogStream

                 - logs:PutLogEvents  and

                 - ecr:GetAuthorizationToken

Now let's specify the actions allowed on the S3 artifact bucket resource:

- s3:GetObject

                 - s3:PutObject

                 - s3:GetObjectVersion

Finally, our last resource is the ECR repository we will need access to:

- ecr:GetDownloadUrlForLayer

                 - ecr:BatchGetImage

                 - ecr:BatchCheckLayerAvailability

                 - ecr:PutImage

                 - ecr:InitiateLayerUpload

                 - ecr:UploadLayerPart

                 - ecr:CompleteLayerUpload

The last role resource we are going to set up will be the CodePipelineServiceRole; the initial statements will be the same as in the other two.

The resources we want to have access to are the artifact and template buckets; for these we will need:

- s3:PutObject

                 - s3:GetObject

                 - s3:GetObjectVersion

                 - s3:GetBucketVersioning

The next will be to give access to:

- codecommit:*

                 - codebuild:StartBuild

                 - codebuild:BatchGetBuilds

                 - cloudformation:*

                 - iam:PassRole

With this we finish the roles; let's continue with our artifact bucket. When you create your first pipeline, AWS CodePipeline creates, or uses, a single Amazon S3 bucket you specify in the same region as the pipeline. CodePipeline uses it to store artifacts for your pipeline as the automated release process runs.

You are probably wondering whats an artifact: AWS CodePipeline copies files or changes that will be worked upon by the actions and stages in the pipeline to the Amazon S3 bucket. These objects are referred to as artifacts, and might be the source for an action (input artifacts) or the output of an action (output artifacts). An artifact can be worked upon by more than one action.

Every action has a type. Depending on the type, the action might have an input artifact, which is the artifact it consumes or works on over the course of the action run; an output artifact, which is the output of the action; or both. Every output artifact must have a unique name within the pipeline. Every input artifact for an action must match the output artifact of an action earlier in the pipeline, whether that action is immediately prior to the action in a stage or runs in a stage several stages earlier.

Let's now define our ArtifactBucket: this will be an AWS::S3::Bucket type, and we want it to have a Retain DeletionPolicy.

Because we are going to use the AWS CodeBuild service in the build stage of our pipeline, we are going to define a CodeBuild project resource; let's call it CodeBuildProject, with the type AWS::CodeBuild::Project. Let's define the properties needed for this CodeBuild project.

The first property will be the Artifacts, here we will need to specify a location for our S3 bucket, here we are going to reference our recently created Artifact bucket and will add a S3 type.

Then we will add the Source, as usual, the first thing to add is the location, here we will use sub function to concatenate the artifact bucket and the source.zip string, and will add a S3 type.

Every CodeBuild project needs build specifications, or directions, called a buildspec. A buildspec is a collection of build commands and related settings, in YAML format, that AWS CodeBuild uses to run a build. You can include a buildspec as part of the source code, or you can define a buildspec when you create a build project.

The first thing to do is add phases to our buildspec. The first phase will be pre_build: it represents the commands that AWS CodeBuild will run before the build. For example, you might use this phase to log in to Amazon ECR, or to install npm dependencies.

The first command we are going to execute is an echo command that writes the CodeBuild ID to a file in the temporary folder of the CodeBuild machine.

The next command will add the repository URL to the build tag file in the temporary folder.

Next we are going to add the tag and a json build file in the temporary folder.

Lastly we want to print, and run, the command that we can use to log in to our default Amazon ECR registry.

Let's now define our build phase. Because we need to build a Docker image from our CodeCommit repository, we use the docker build command: docker build --tag "$(cat /tmp/build_tag${TestSemaphore}.out)" --file Dockerfile${TestSemaphore} . (note the trailing dot, which is the build context).

The last phase will be our post_build commands; here we take the image we built and push it to our ECR registry.

The last property we are going to specify in our buildspec will be the artifacts. It represents information about where AWS CodeBuild can find the build output and how AWS CodeBuild will prepare it for uploading to the Amazon S3 output bucket.

Next we define the environment for our build machine: we are going to use a general-purpose compute type, the Docker 1.12 build image and a Linux container, and in the environment variables we will have the AWS default region and the repository URL.

To finish with our CodeBuild project we specify a name and of course the service role we defined at the beginning, which this CodeBuild project needs in order to work.
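A condensed sketch of the CodeBuild project, loosely following the description above; the buildspec is simplified (it skips the TestSemaphore/Dockerfile-suffix handling), and the file names under /tmp and the hard-coded latest tag are illustrative:

CodeBuildProject:
  Type: AWS::CodeBuild::Project
  Properties:
    Name: !Ref AWS::StackName
    ServiceRole: !Ref CodeBuildServiceRole
    Artifacts:
      Type: S3
      Location: !Ref ArtifactBucket
    Source:
      Type: S3
      Location: !Sub ${ArtifactBucket}/source.zip
      BuildSpec: |
        version: 0.1
        phases:
          pre_build:
            commands:
              - printf "%s" "$CODEBUILD_BUILD_ID" > /tmp/build_id.out
              - printf "%s:latest" "$REPOSITORY_URI" > /tmp/build_tag.out
              - printf '{"tag":"latest"}' > /tmp/build.json
              - $(aws ecr get-login)                     # prints and runs the docker login command
          build:
            commands:
              - docker build --tag "$(cat /tmp/build_tag.out)" .
          post_build:
            commands:
              - docker push "$(cat /tmp/build_tag.out)"
        artifacts:
          files:
            - /tmp/build.json
          discard-paths: yes
    Environment:
      ComputeType: BUILD_GENERAL1_SMALL
      Type: LINUX_CONTAINER
      Image: aws/codebuild/docker:1.12.1
      EnvironmentVariables:
        - Name: AWS_DEFAULT_REGION
          Value: !Ref AWS::Region
        - Name: REPOSITORY_URI
          Value: !Sub ${AWS::AccountId}.dkr.ecr.${AWS::Region}.amazonaws.com/${Repository}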

In the next video we are going to finish our template

Codepipeline Cloudformation Definition - Part 1
12:14

Deployment Pipeline Template file. Part 2.

Let's continue with our CodePipeline template file. In the past lecture we defined our Description and Parameters sections and started with the Resources section.

Transcript.

Let's continue with our CodePipeline template file. In the past lecture we defined our Description and Parameters sections and started with the Resources section.

In resources we defined the 3 roles we are going to use in our Codepipeline implementation:

CloudFormationExecutionRole: needed in the Deploy stage.

CodeBuildServiceRole: needed in the Codebuild project execution.

CodePipelineServiceRole: needed in the pipeline definition.

Lets continue with our Codepipeline template file.

Its time now to define our Pipeline with the 3 stages we need in this course:

We are going to name the resource Pipeline And give it a AWS::CodePipeline::Pipeline type

this one creates a pipeline that describes how software changes go through a release process.

Next the Properties:

RoleArn is the first property to define. Here we reference the role we created above and extract its ARN attribute: !GetAtt CodePipelineServiceRole.Arn

Next the ArtifactStore:

As we explained in the last lecture this one specifies The S3 bucket location where AWS CodePipeline stores pipeline artifacts.

First we define the type S3 and reference the Artifact Location

Next we are going to define the core of the pipeline, the stages

As we talked, we are going to implement here 3 stages: Source, Build and Deploy

Lets start with the Stage Source

We are going to use the name Source. The first thing to define is the Actions. In AWS CodePipeline, an action is part of the sequence in a stage of a pipeline; it is a task performed on the artifact in that stage.

Pipeline actions occur in a specified order, in sequence or in parallel, as determined in the configuration of the stage.

Lets give the action a Name: CodeCommitRepoSource

Next we will need the ActionTypeId property; this one specifies the action type and provider for an AWS CodePipeline action.

Now we define the ActionTypeId Category: the category specifies which action type is going to be performed, and constrains the provider for this action.

In our case will be Source, so the provider must be a Source Provider, CodeCommit and the owner of the Provider is AWS.

Next we specify the Version 1

Let's now go to the Configuration for our Source action; you can probably guess what we will need here.

The repository name and the branch name both will be referenced from our parameters section

Then the OutputArtifacts property this specifies an artifact that is the result of an AWS CodePipeline action.

We need to give a name to the OutputArtifacts So we type in Name CodeCommitRepoSource

Next we want to specify a running order with the RunOrder property. The default RunOrder value for an action is 1, and the value must be a positive integer. To specify a serial sequence of actions, use the smallest number for the first action and larger numbers for each of the rest of the actions in sequence. To specify parallel actions, use the same integer for each action you want to run in parallel.

At this Source stage we also want to specify an S3 bucket containing our template files.

This is going to be used in the deploy stage.

First let's give it a name; we can call it Architecture-Template.

So we define the ActionTypeId the same way we just did, with the Category, the Owner and the Version, but for the Provider we need to use the S3 value. Of course, our OutputArtifact will be Architecture-Template. We want this to run in parallel with the Source action, so it will have RunOrder 1, and the Configuration will be S3Bucket, with a reference to the TemplateBucket parameter, and S3ObjectKey, with templates.zip.

We are going to upload this file, containing all our templates, to the templates folder.
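A sketch of this Source stage with its two parallel actions might look like:

Stages:
  - Name: Source
    Actions:
      - Name: CodeCommitRepoSource
        ActionTypeId:
          Category: Source
          Owner: AWS
          Provider: CodeCommit
          Version: 1
        Configuration:
          RepositoryName: !Ref CodeCommitRepo
          BranchName: !Ref RepositoryBranch
        OutputArtifacts:
          - Name: CodeCommitRepoSource
        RunOrder: 1
      - Name: Architecture-Template
        ActionTypeId:
          Category: Source
          Owner: AWS
          Provider: S3
          Version: 1
        Configuration:
          S3Bucket: !Ref TemplateBucket
          S3ObjectKey: templates.zip
        OutputArtifacts:
          - Name: Architecture-Template
        RunOrder: 1                      # same RunOrder, so both actions run in parallel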

Next stage will be the Build stage

The properties will be really similar to the ones we specified in the source stage

The Name will be Build the ActionTypeId property has:

Category: Build

Owner: AWS

Version: 1

Our Provider will be CodeBuild

In the Configuration we will reference the CodeBuild project resource we already defined. This stage will have an InputArtifacts property, precisely the artifact we output before:

Name: CodeCommitRepoSource

The OutputArtifacts for this stage will be BuildOutput

This will be the only one running so we will have RunOrder 1

We are also going to define a BuildTest action here; this one is not going to be used in our course, but it's nice to have it ready in case you need it in your projects.

We will use almost the same configuration we used in the Build action declared before; the only difference will be the ParameterOverrides specifying the TestSemaphore.

The last stage will be our Deploy stage as we did on the other two stages

we will have:

Category: Deploy

Owner: AWS

Version: 1

Provider: CloudFormation

Lets check the Configuration details.

We are going to use the ChangeSetName property here; this property lets us give the name of an existing change set, or of a new change set that we want to create for the specified stack.

With this we are giving a name to the changeset performed by our deploy stage

We want our deploy stage to create the stack if it does not exist or, as in our case, update the stack when it already exists.

ActionMode: CREATE_UPDATE. Knowing this, the next property is the stack name affected by our deploy stage; as usual we use the reference to our stack name, StackName: !Ref StackName.

Next we want to deal with the permissions and use the IAM resources or roles we created; for this we use the Capabilities property with the value CAPABILITY_IAM.

Now we need to specify the location of the template file affected; for this we use the TemplatePath property with the value Architecture-Template::templates/service.yaml.

Next we need the role to assume, RoleArn: !GetAtt CloudFormationExecutionRole.Arn; this is the one we declared first in our resources.

We want our deploy stage to be able to change parameters in our template. I'm going to include an example here that changes the DesiredCount variable we specified in our service template file, while leaving the Cluster and TargetGroup variables the same.

Lastly the inputartifacts for this stage will be

- Name: Architecture-Template

- Name: BuildOutput

With the run order of 1.
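Putting the Deploy stage together as a sketch (the exact ParameterOverrides values, such as the DesiredCount of 2 and the tag read from build.json, are illustrative):

- Name: Deploy
  Actions:
    - Name: Deploy
      ActionTypeId:
        Category: Deploy
        Owner: AWS
        Provider: CloudFormation
        Version: 1
      Configuration:
        ChangeSetName: Deploy
        ActionMode: CREATE_UPDATE
        StackName: !Ref StackName
        Capabilities: CAPABILITY_IAM
        TemplatePath: Architecture-Template::templates/service.yaml
        RoleArn: !GetAtt CloudFormationExecutionRole.Arn
        ParameterOverrides: !Sub |
          {
            "Tag": { "Fn::GetParam": [ "BuildOutput", "build.json", "tag" ] },
            "DesiredCount": "2",
            "Cluster": "${Cluster}",
            "TargetGroup": "${TargetGroup}"
          }
      InputArtifacts:
        - Name: Architecture-Template
        - Name: BuildOutput
      RunOrder: 1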

The final section in our template is the Outputs section; here we are going to output the URL of the pipeline we created. As we did in other template files, we are going to use the Sub function to concatenate strings and build the URL.

Our deployment and architecture are finished and ready to use. In the next lecture we are going to build our Dockerfile with a simple PHP app and see how everything works together.

Codepipeline Cloudformation Definition - Part 2
09:31

Defining our app & Dockerfile

Lets build a simple php app to use with our Pipeline and Architecture. And explore how to use Docker with our Architecture and Pipeline.

Transcript.

This one will be a simple to-do list app able to connect to the database cluster, insert a new task into the to-do list, mark a task as done, and of course list the to-do items.

We want this app to run in our ECS cluster so we will need to define a docker configuration.

We want Docker to build the image with the parameters specified in that docker configuration document.

This docker configuration is called Dockerfile

A Dockerfile is a text document that contains all the commands a user could call on the command line to assemble an image.

Besides our app and its dependencies, I want to be able to check and administer the database through a web view; for that purpose I'm going to use Adminer.

Adminer is a full-featured database management tool written in PHP.

It consists of a single file ready to deploy to the target server.

So our app will have Adminer and the files needed for our app to work.

Let's do a brief check of our app.

Everything is in one file

First we see all the functions and database interactions defined in php language

Then we have some html forms and some php function calls

As you can see, we defined the database connection parameters at the beginning of our file.

We are going to set these parameters manually, but in the next lecture we are going to set them automatically during the build stage, using shell commands.

So the first parameter is DB_SERVER: we go to the DB cluster service, copy the endpoint value and paste it here.

We know the DB password, and just type it in here.

With this our app is ready to connect to our DB cluster.

In this lecture we are going to use the adminer to add all the tables needed for our app

In the next lecture we are going to add this validation and creation in our PHP application.

Now we want to build our dockerfile

The first thing we need is the official PHP image; this one includes the Apache web server and PHP version 7. Then we want apt-get to update and install git, enable the Apache mod_rewrite module and the pdo_mysql extension, and grant the necessary privileges to the Apache user.

Next we want to copy our source code to the /var/www/html directory; this is the default Apache root folder.

Next we want to be able to handle user sessions in our app. The current app is not using user login, but this will be great for your own applications, so we create a folder outside the html folder and give it write permissions.

Lastly, we want the default Apache user to be the owner of everything inside /var/www/html.

So now let's check the Docker magic and run this app locally.

First we build the image from the Dockerfile, then we run the container; at this point we can check our app locally,

opening a browser on localhost pointing to port 8080.

You will be able to see Adminer running locally, but the app will try to connect to our private DB cluster and will eventually give you an error.

Let's appreciate the magic here: we are using Docker to run this container like a kind of virtual machine on our Linux box, and we will have exactly the same container running in our AWS ECS cluster.

Lets create a change set for our stack and add our Deployment pipeline to our architecture

lets go to our Cloudformation service screen and there click the Action Create a Change set for our stack

We already uploaded our main-arch and deployment pipeline YAML files to the S3 bucket that contains all our architecture files.

In this screen we just type in our S3 URL pointing to the main architecture file and click Next until we get to the review screen.

here we can check all the changes applied and click the execute button

Here you will start checking the changes and will appear the new Deployment Pipeline module

Ok, now we have our Deployment pipeline ready for our application

Lets push the app to our codecommit repository

You can check the application folder structure

The dockerfile is on the root and the application files are inside the sources folder.

To push our code we use the command

git add -A

git commit -m "Initial commit"

git push origin

We previously cloned this CodeCommit repo onto our local machine.

After some minutes

You will see our Codepipeline working

First, the source stage: this one prepares our Dockerfile, source code and architecture template files and passes them to the build stage.

Once in the build stage, CodeBuild will generate our image from the Dockerfile and push it to the ECR repository.

Next the deploy stage takes place; here our app is added to the ECS cluster. Once it finishes, we are ready to test our app.

Just open the browser pointing to the load balancer URL and start adding tasks to our to-do list.

So our app is ready and running in our architecture

In the next lecture we are going to tweak our app and architecture for a better performance and nicer parameters handling.

Defining our Application & Using the Dockerfile
09:12
+
Using Docker, Tweaking - Tunning the Application & the Architecture
1 Lecture 13:50

Tweaking the app and architecture.

So we already built our architecture and pipeline. Let's now tweak it a little bit to get better performance and nicer parameter handling in our app.

Transcript.

So we already built our architecture and pipeline. Let's now tweak it a little bit to get better performance and nicer parameter handling in our app.

Let's open our app index file and check the 3rd and 5th lines; these define our DB cluster URL and our DB password.

Let's delete both values and set up our YAML files to fill in these two values during the pipeline execution.

The first thing to set up our yaml files:

Let's open our database cluster YAML file and add the Outputs section at the end of the file. We will need to output our DB cluster endpoint, so we type in dbClusterURL:, set a description, Database Cluster URL Endpoint, and finally use the GetAtt function with the attribute Endpoint.Address.
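Which, as a sketch, is just:

Outputs:
  dbClusterURL:
    Description: Database Cluster URL Endpoint
    Value: !GetAtt RDSCluster.Endpoint.Address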

With this we will have access to our Db Cluster endpoint URL in our main architecture yaml file

Lets go there and add the new parameters to our CodePipeline module

First we will need to add the DbPassword parameter, which we already have in our Parameters section, and the DbClusterEndpoint parameter, referenced from the DatabaseCluster output we just created.

It's time for our deployment pipeline YAML file; this is the one that does the actual setup we need.

First we are going to add our two new parameters in the Parameters section: let's add DbClusterEndpoint with the String type, and do the same with the DbPassword parameter.

Lets go now to the CodeBuildProject resource

We want to inject these two new parameters into our index file, replacing lines 3 and 5.

Let's do this using the commands in the pre_build phase.

We are going to use the EnvironmentVariables property to set our DB cluster URL and DB password, and use them in our commands.

So let's type in the commands first, and then we will set up the environment variable property needed.

We are going to use a simple sed command to replace the whole line 3 in our index.php file

We are going to add a new command after the second printf and type in:

sed -i "3s/.*/ define('DB_SERVER', '"$DB_CLUSTER_URL"'); /" sources/index.php

With this command we replace the third line with the string define('DB_SERVER', followed by the environment variable DB_CLUSTER_URL, in the file index.php inside the sources folder.

we will do the same with the 5th line

sed -i "5s/.*/ define('DB_PASS', '"$DB_CLUSTER_PASS"'); /" sources/index.php

but of course the environment variable will be a different one.

We are almost done; we just need to set up our two environment variables. To achieve this,

we go to the EnvironmentVariables property and add DB_CLUSTER_URL, referencing the DbClusterEndpoint declared in the Parameters section, and DB_CLUSTER_PASS, referencing the DbPassword also declared in our Parameters section.
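In the CodeBuildProject resource this boils down to two extra pre_build commands and two extra environment variables; a sketch of just those additions:

# inside the buildspec pre_build phase of CodeBuildProject
- sed -i "3s/.*/ define('DB_SERVER', '"$DB_CLUSTER_URL"'); /" sources/index.php
- sed -i "5s/.*/ define('DB_PASS', '"$DB_CLUSTER_PASS"'); /" sources/index.php

# and inside the Environment section of the same resource
EnvironmentVariables:
  - Name: DB_CLUSTER_URL
    Value: !Ref DbClusterEndpoint
  - Name: DB_CLUSTER_PASS
    Value: !Ref DbPassword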

Lets now test everything in our stack

First we upload the three modified files to the S3 bucket; do not forget to also upload the zip file with all the files in it.

Then we go to our CloudFormation service screen, click Actions and select the Create Change Set option.

Then we type in our main architecture url and click next

Once here we add a name and a description for this newly created change set and click next until we get to the review screen

here we can check all the modules affected and click the execute button

After some minutes we will have everything ready; we go to our shell console and push our changes.

After some seconds our codepipeline will start to work

Passing first for the Source stage

Then the Build stage; here we want to check the log details to see what's happening.

The first thing we notice is the pre-build commands executing, then the docker build execution running each step we declared in our Dockerfile,

and finally everything being pushed to the ECR repository.

Lets go back to our codepipeline and check the Deploy stage

We can also go in details here

Let's go to the EC2 Container Service screen, click our cluster, then the service name, and open the Events tab.

Here we can watch our deployment working: starting 2 new tasks, registering 2 targets in the target group, deregistering the 2 old targets, draining the connections to those old targets,

stopping the 2 old tasks and finally reaching a steady state with the new tasks.

We can now open our load balancer URL from the outputs in our CloudFormation service screen and see our application running smoothly.

Let go back to the ECS screen and open the metrics tab

Here we can monitor our cluster execution with the CPU and memory utilization graph

We can also go to the events tab and click in our target group link above

This will open our target group definition

Here open the Monitoring tab and you can verify  all the functioning details for our application

Healthy and unhealthy host registered

Average latency

Requests

HTTP 2xx and 3xx responses

HTTP 4xx and 5xx errors, and more

This is a pretty cool diagnostic of our app running

Lets check more metrics

Go to the Cluster link and check the cluster metrics

here we can check the cpu utilization and reservation and the memory utilization and reservation

Lets get an even better memory reservation definition

In our service definition YAML file we defined a Memory property with a value of 490; this tells CloudFormation to create the task reserving 490 MiB of memory for the container, but if the container attempts to exceed the allocated memory, the container is terminated.

We don't want our container to be terminated; instead, we want to be able to use all the memory available on the instance, because our architecture is not scaling inside the container, it is scaling out with more EC2 instances.

For this reason we need our container to be able to take everything it needs from the machine hosting it.

To achieve this, we just change this property to MemoryReservation; with this we are establishing a memory reservation and not a memory limit.
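In the container definition of the service template the change is a one-line swap; a sketch, with an illustrative container name:

ContainerDefinitions:
  - Name: simple-app
    # Memory: 490              # hard limit: the container is killed if it exceeds 490 MiB
    MemoryReservation: 490     # soft limit: 490 MiB reserved, the container may use more if available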

Lets save everything into our S3 bucket and update the Service module through the cfn service screen

We follow the same process we just did before...

And it's now ready: our app and architecture have been tweaked for better performance and parameter handling.

We also checked how easy it is to modify a module in our architecture definition and of course how easy it is to modify our pipeline definition.

The next lecture will be our final lecture; there we will talk about the conclusions, upcoming courses and more...

Tweaking - Tunning the Application & Architecture
13:50
+
Conclusions
1 Lecture 01:55

Conclusion.

In this course we were able to implement an Infrastructure as Code architecture using CloudFormation.

The architecture implemented was backed by a CodePipeline definition able to implement CI & CD, exposing a highly available and scalable ECS cluster in the compute layer.

During the course we used all these services offered by AWS:

- ECS

- RDS

- Elastic Load Balancer

- CodeCommit

- CodePipeline

- VPC

- ECR

- Security Group

- Availability zones

- etc

and of course, we learned how to implement and use Docker containers with this pipeline from a developer point of view.

We also checked how easy it is to modify our architecture modules from our architecture definition files, and of course this opens a huge automation window on the QA and developer side.

In the next course we are going to add more features to this base architecture: EFS for persistent data in our Docker containers, EC2 autoscaling configuration inside the cluster, and many more.

Here you can check my social networks for next launches and courses.

In my github you can check the course transcript, the yaml files and everything you need for this course and the next ones.

Course Conclusions
01:55
About the Instructor
Alberto Eduardo
4.2 Average rating
25 Reviews
315 Students
1 Course
CTO and Cloud Consultant Expert. AWS enthusiastic and Guru.

My name is Eng. Alberto Eduardo, I'm a Software Engineer with more than 12 years of rich experience in Software Development and Software Architecture Design and Implementation. 

With more than 6 years of experience using the AWS console and API, I come to bring you all my knowledge in an easy and practical way, with tips and techniques acquired over all this time designing and developing apps and web apps for millions of users.