- 11 hours on-demand video
- Full lifetime access
- Access on mobile and TV
- Certificate of Completion
- Learn about various cloud services provided by Amazon Web Services (AWS)
- Learn how to create and configure AWS services
- Learn how to integrate various AWS services together to present a holistic cloud solution
- Learn the use cases of AWS services, the dos and don'ts while using them, and best practices
- Basic knowledge of Linux commands
- Common IT knowledge
***UPDATE: New content alert! 50+ new videos added to the course!***
AWS is a public cloud and a pioneer in the cloud computing domain. It provides every kind of service model, including IaaS, PaaS, and SaaS. In this course, I will discuss the various cloud services provided by Amazon Web Services (AWS).
Here we will elaborate on the features of AWS services, their use cases, how to create those services, how to configure them and how to integrate various services together to present a holistic cloud solution.
This series is useful for anyone starting their cloud journey as well as for cloud practitioners. However, it is especially useful for learners who are preparing for AWS certifications such as AWS Solutions Architect, AWS SysOps Administrator Associate, and AWS Developer Associate. We discuss the use cases of the services, the dos and don'ts while using them, and best practices.
- Beginners who want to start their journey in cloud domain
- IT professionals who want to become cloud practitioners
- Developers who want to deploy an application on AWS
- Candidates preparing for AWS certifications such as AWS Solutions Architect, AWS SysOps Administrator Associate, and AWS Developer Associate
This video describes the scope of this series on AWS services and answers basic questions about the course, such as:
Who is the target audience?
What are the technical prerequisites for this course?
What do you need to begin this course?
This video describes the basics of cloud computing. It explains:
The need for the cloud
Examples of cloud applications
Various service models, such as IaaS, PaaS, and SaaS
Benefits of the cloud
Career options in cloud computing
Launched in 2006, AWS is provided by Amazon Inc., a pioneer of the cloud solution concept. AWS grew out of Amazon's internal IT resource management and expanded into an innovative and cost-effective cloud solution provider.
AWS resides on the same infrastructure that hosts Amazon's other web properties, such as its webstore.
Amazon packages AWS with scalable and virtually unlimited computing, storage and bandwidth resources. AWS uses the subscription pricing model of pay-as-you-go or pay-for-what-you-use.
AWS services include, but are not limited to:
Amazon Elastic Compute Cloud (EC2)
Amazon Simple Storage Service (Amazon S3)
Amazon Relational Database Service (Amazon RDS)
Amazon Simple Notification Service (Amazon SNS)
Amazon Simple Queue Service (Amazon SQS)
Amazon Virtual Private Cloud (Amazon VPC)
In this video, we explore the position of AWS in the overall cloud market, the range of services it provides and what to expect going forward.
In this video I demonstrate how to create a Free Tier AWS account from scratch. AWS provides a free account for one year with access to a wide range of services. Many services, but not all, are free, and the free services are free only up to certain limits. Before using these services, it is advised that you visit www.aws.amazon.com/free to find out which services are free and to what extent. Before using this account, also have a look at another video in this series on best practices for a Free Tier AWS account.
Any public cloud service provider needs to maintain a large physical infrastructure. AWS, a pioneer in the public cloud, provides a globally distributed infrastructure.
In this video, we talk about the Global Physical infrastructure laid out by AWS, its hierarchy, and its components. It explains the concepts of:
What is a Region?
What is an Availability Zone?
What is an edge location?
Amazon S3 is object storage built to store and retrieve any amount of data from anywhere – websites and mobile apps, corporate applications, and data from IoT sensors or devices. It is designed to deliver 99.999999999% durability, and stores data for millions of applications used by market leaders in every industry.
S3 provides comprehensive security and compliance capabilities that meet even the most stringent regulatory requirements. It gives customers flexibility in the way they manage data for cost optimization, access control, and compliance. S3 provides query-in-place functionality, allowing you to run powerful analytics directly on your data at rest in S3. And Amazon S3 is the most supported cloud storage service available, with integration from the largest community of third-party solutions, systems integrator partners, and other AWS services.
In this video, I will introduce you to the most fundamental storage service in AWS, called S3 (Simple Storage Service), its use cases, and how it differs from other storage services.
In this video, I have explained how to get started with S3 in your account. It is a hands-on video, so be ready with your own AWS account. You can follow the steps.
Create a bucket
Upload a file
Check the properties
Select the storage class of objects
Discuss Various operations on bucket and files
... along with discussing various concepts about S3.
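The same getting-started steps can be sketched with the AWS CLI as well (the bucket and file names below are placeholders — substitute your own; bucket names must be globally unique):

```shell
# Create a bucket (placeholder name)
aws s3 mb s3://my-demo-bucket-12345

# Upload a local file into the bucket
aws s3 cp ./notes.txt s3://my-demo-bucket-12345/notes.txt

# Upload another object with a specific storage class
aws s3 cp ./archive.zip s3://my-demo-bucket-12345/archive.zip --storage-class STANDARD_IA

# List the bucket contents
aws s3 ls s3://my-demo-bucket-12345/
```

These commands assume a configured AWS CLI (`aws configure`) with permissions on S3.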
Amazon S3 offers a range of storage classes designed for different use cases. These include S3 Standard for general-purpose storage of frequently accessed data, S3 Standard-Infrequent Access and S3 One Zone-Infrequent Access for long-lived, but less frequently accessed data, and Amazon Glacier for long-term archive. Amazon S3 also offers configurable lifecycle policies for managing your data throughout its lifecycle. Once a policy is set, your data will automatically migrate to the most appropriate storage class without any changes to your application.
In this video, we discuss the various storage classes S3 provides. These storage classes differ in pricing and intended use. For an AWS Solutions Architect, it is necessary to know these options and when to use them.
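As an illustration of the lifecycle policies mentioned above (the rule ID, prefix, and day counts here are made up for the sketch), a configuration that moves objects to Standard-IA after 30 days and to Glacier after 90 days looks roughly like this:

```json
{
  "Rules": [
    {
      "ID": "archive-old-objects",
      "Status": "Enabled",
      "Filter": { "Prefix": "logs/" },
      "Transitions": [
        { "Days": 30, "StorageClass": "STANDARD_IA" },
        { "Days": 90, "StorageClass": "GLACIER" }
      ]
    }
  ]
}
```

Once applied to a bucket, S3 transitions matching objects automatically with no change to your application.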
Versioning is a means of keeping multiple variants of an object in the same bucket. You can use versioning to preserve, retrieve, and restore every version of every object stored in your Amazon S3 bucket. With versioning, you can easily recover from both unintended user actions and application failures. You can upload multiple versions of your file on S3 without even changing its name.
In this video, we will work hands-on and play with this awesome feature. Open your AWS console before starting the video and then follow along.
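If you want to try versioning from the command line instead of the console, a minimal sketch (bucket name is a placeholder) would be:

```shell
# Turn on versioning for an existing bucket
aws s3api put-bucket-versioning \
    --bucket my-demo-bucket-12345 \
    --versioning-configuration Status=Enabled

# Upload the same key twice; each upload becomes a new version
aws s3 cp ./report.txt s3://my-demo-bucket-12345/report.txt
aws s3 cp ./report.txt s3://my-demo-bucket-12345/report.txt

# List every version of every object in the bucket
aws s3api list-object-versions --bucket my-demo-bucket-12345
```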
AWS S3 provides a way to track access to your S3 bucket using its logging feature. It provides bucket-level as well as object-level logging.
Server access logging provides detailed records for the requests that are made to a bucket. Server access logs are useful for many applications. For example, access log information can be useful in security and access audits.
You can log the object-level API operations on your S3 buckets. Before Amazon CloudWatch Events can match these events, you must use AWS CloudTrail to set up a trail configured to receive these events.
In this video, we explore its pros/cons and how to enable and use it effectively.
Data Security is always one of the prime concerns of enterprises when using cloud or otherwise.
Amazon S3 provides comprehensive security and compliance capabilities that meet even the most stringent regulatory requirements. It gives you flexibility in the way you manage data for cost optimization, access control, and compliance. However, because the service is flexible, a user could accidentally configure buckets in a manner that is not secure.
In this video, we talk about various access control mechanisms, such as access control lists (ACLs) and bucket policies, to secure your bucket and its content.
You can host a static website on Amazon Simple Storage Service (Amazon S3). On a static website, individual webpages include static content. They might also contain client-side scripts. By contrast, a dynamic website relies on server-side processing, including server-side scripts such as PHP, JSP, or ASP.NET. Amazon S3 does not support server-side scripting.
In this video, we show you how to host a static website using AWS S3.
Again a hands-on lab, so be ready with your own AWS account and follow along.
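For reference, the console steps can also be sketched with the CLI (bucket name and file paths are placeholders; note that public reads additionally require an appropriate bucket policy):

```shell
# Enable static website hosting on the bucket
aws s3 website s3://my-site-bucket-12345/ \
    --index-document index.html --error-document error.html

# Upload the site content
aws s3 sync ./site/ s3://my-site-bucket-12345/

# The site is then served at a region-specific website endpoint, for example:
# http://my-site-bucket-12345.s3-website-us-east-1.amazonaws.com
```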
Uploading and accessing data on the cloud is pretty seamless, but once you have a lot of data on the cloud, or in a bucket specifically, how do you manage it?
In this video, we discuss how to organize and manage your buckets more effectively using S3's built-in features. We cover:
Lifecycle rules for S3 storage classes
Inventory management on S3
Cross-region replication for S3 objects
After covering a lot of S3 properties, it is now time to look at other awesome properties of S3. AWS very actively keeps on updating new features, so you may be surprised with some more properties in your S3 bucket that were launched after recording this video.
In this video we cover:
AWS Identity and Access Management (IAM) enables you to manage access to AWS services and resources securely. Using IAM, you can create and manage AWS users and groups, and use permissions to allow and deny their access to AWS resources.
IAM is a feature of your AWS account offered at no additional charge.
IAM is one of the most important security and identity management services in AWS. Let's get started.
You can create individual IAM users within your account that correspond to users in your organization. IAM users are not separate accounts; they are users within your account. Each user can have its own password for access to the AWS Management Console. You can also create an individual access key for each user so that the user can make programmatic requests to work with resources in your account.
In this video we will walk through various ways to create, configure and manage IAM users, and discuss various aspects of IAM users.
An IAM group is a collection of IAM users. Groups let you specify permissions for multiple users, which can make it easier to manage the permissions for those users. For example, you could have a group called Admins and give that group the types of permissions that administrators typically need. Any user in that group automatically has the permissions that are assigned to the group.
If a new user joins your organization and needs administrator privileges, you can assign the appropriate permissions by adding the user to that group. Similarly, if a person changes jobs in your organization, instead of editing that user's permissions, you can remove him or her from the old groups and add him or her to the appropriate new groups.
In this video, we will see how to create groups, how to assign IAM policies to them, and how to add and remove users in these groups.
A policy is an object in AWS that, when associated with an entity or resource, defines their permissions. AWS evaluates these policies when a principal, such as a user, makes a request. Permissions in the policies determine whether the request is allowed or denied. Most policies are stored in AWS as JSON documents.
IAM policies define permissions for an action regardless of the method that you use to perform the operation.
In this video we discuss the structure of a policy, how to write one, and how to use it.
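To make that structure concrete, here is a minimal identity-based policy that grants read-only access to a single S3 bucket (the bucket name is a placeholder for illustration):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowReadOnMyBucket",
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:::my-demo-bucket-12345",
        "arn:aws:s3:::my-demo-bucket-12345/*"
      ]
    }
  ]
}
```

Each statement names an effect (Allow or Deny), the actions it covers, and the resources (as ARNs) it applies to.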
An IAM role is similar to a user, in that it is an AWS identity with permission policies that determine what the identity can and cannot do in AWS. However, instead of being uniquely associated with one person, a role is intended to be assumable by anyone who needs it. Also, a role does not have standard long-term credentials (password or access keys) associated with it. Instead, if a user assumes a role, temporary security credentials are created dynamically and provided to the user.
In this video we go through the various types of roles and their use cases.
Amazon Elastic Compute Cloud (Amazon EC2) is a web service that provides secure, resizable compute capacity in the cloud. It is designed to make web-scale cloud computing easier for developers.
Amazon EC2’s simple web service interface allows you to obtain and configure capacity with minimal friction. It provides you with complete control of your computing resources and lets you run on Amazon’s proven computing environment. Amazon EC2 reduces the time required to obtain and boot new server instances to minutes, allowing you to quickly scale capacity, both up and down, as your computing requirements change.
Amazon EC2 changes the economics of computing by allowing you to pay only for capacity that you actually use. Amazon EC2 provides developers the tools to build failure-resilient applications and isolate them from common failure scenarios. EC2 is the most fundamental compute service provided by AWS. This service provides you with VMs in the AWS cloud.
In this video, we lay the foundation for more advanced concepts in the AWS compute space.
There are four ways to pay for Amazon EC2 instances: On-Demand, Reserved Instances, and Spot Instances. You can also pay for Dedicated Hosts which provide you with EC2 instance capacity on physical servers dedicated for your use.
In this video we talk about the various types of EC2 instances based on payment options and tenancy.
Amazon EC2 provides a wide selection of instance types optimized to fit different use cases. Instance types comprise varying combinations of CPU, memory, storage, and networking capacity and give you the flexibility to choose the appropriate mix of resources for your applications. Each instance type includes one or more instance sizes, allowing you to scale your resources to the requirements of your target workload.
This video answers:
How to configure the size of your VM (EC2 instance)?
What various instance families mean?
What are the chief use cases?
How to pick an instance type?
... and many more.
AWS security groups act like firewalls around compute units, or instances, in the AWS Cloud. They have inbound and outbound rules that govern what kind of traffic is allowed in and out of the system.
In this video we will talk all about security groups and go through a hands-on lab to see how to create, configure, and modify them:
Introduction to Security groups
How to create AWS Security groups
How to select Security groups for EC2 instances
Creating Inbound rules and outbound rules
Editing Rules in security groups
Best practices for using Security groups
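The console steps above can also be sketched from the CLI (the VPC ID and admin IP are placeholders — a real setup would use your own values, and SSH should be restricted as narrowly as possible):

```shell
# Create a security group in a VPC and capture its ID
SG_ID=$(aws ec2 create-security-group \
    --group-name web-sg \
    --description "Allow HTTP and SSH" \
    --vpc-id vpc-0abc1234 \
    --query GroupId --output text)

# Inbound rule: allow HTTP from anywhere
aws ec2 authorize-security-group-ingress \
    --group-id "$SG_ID" --protocol tcp --port 80 --cidr 0.0.0.0/0

# Inbound rule: allow SSH only from a single admin IP (placeholder)
aws ec2 authorize-security-group-ingress \
    --group-id "$SG_ID" --protocol tcp --port 22 --cidr 203.0.113.10/32
```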
This video describes the step-by-step process to connect to your EC2 instance:
Get the public IP of the instance
Convert the key pair from .pem format to .ppk format
Use PuTTY to connect to the EC2 instance
Troubleshoot connectivity issues with EC2
Here I also explain various frequently asked connectivity issues for EC2 instances and how to resolve them.
EC2 instances can run any OS type.
We have seen how to launch and connect to a Unix-based EC2 instance. In this video I will show you the step-by-step process to launch and connect to a Windows-based EC2 instance.
As opposed to PuTTY, we shall use RDP to connect to a Windows-based instance.
An Amazon Machine Image (AMI) provides the information required to launch an instance, which is a virtual server in the cloud. You must specify a source AMI when you launch an instance. You can launch multiple instances from a single AMI when you need multiple instances with the same configuration. You can use different AMIs to launch instances when you need instances with different configurations.
An AMI includes the following:
A template for the root volume for the instance (for example, an operating system, an application server, and applications)
Launch permissions that control which AWS accounts can use the AMI to launch instances
A block device mapping that specifies the volumes to attach to the instance when it is launched
In this hands-on lab we will see how to create an AMI, move it, and use it.
When you launch an instance in Amazon EC2, you have the option of passing user data to the instance, which can be used to perform common automated configuration tasks and even run scripts after the instance starts. You can pass two types of user data to Amazon EC2: shell scripts and cloud-init directives. You can pass this data into the launch wizard as plain text or as a file. This is called a bootstrap script.
Bootstrap scripts open up a lot of scope for automation. They are a way to configure your EC2 instances so that they are ready to perform as soon as they boot up.
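As an illustration, a typical bootstrap script for an Amazon Linux instance might install and start a web server (package and service names are specific to Amazon Linux and may differ on other distributions):

```shell
#!/bin/bash
# User-data script: runs as root on first boot of the instance
yum update -y
yum install -y httpd

# Start the web server now and on every reboot
systemctl enable httpd
systemctl start httpd

# Drop a simple page so you can verify the bootstrap worked
echo "<h1>Bootstrapped by user data on $(hostname -f)</h1>" > /var/www/html/index.html
```

Paste this into the "User data" field of the launch wizard, or pass it with `--user-data file://bootstrap.sh` when launching via the CLI.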
Amazon Elastic Block Store (Amazon EBS) provides persistent block storage volumes for use with Amazon EC2 instances in the AWS Cloud. Each Amazon EBS volume is automatically replicated within its Availability Zone to protect you from component failure, offering high availability and durability.
EBS volumes can be imagined as the internal or external hard disks for your servers in the cloud. As opposed to S3, which is object-based storage, EBS volumes can be attached to EC2 instances, and the storage space can be used as a boot volume as well.
In this video we describe the features of EBS volumes, their use cases, how block storage differs from object storage, and other properties of EBS volumes.
At a high level, Amazon provides the following EBS volume types, which differ in performance characteristics and price, so that you can tailor your storage performance and cost to the needs of your applications. The volume types fall into two categories:
SSD-backed volumes optimized for transactional workloads involving frequent read/write operations with small I/O size, where the dominant performance attribute is IOPS. Under this category it provides General Purpose SSD (gp2) and Provisioned IOPS SSD (io1)
HDD-backed volumes optimized for large streaming workloads where throughput (measured in MiB/s) is a better performance measure than IOPS. Under this category it provides Throughput Optimized HDD (st1) and Cold HDD (sc1)
In this video we discuss the various types of EBS volumes available on AWS, their use cases, and their pros and cons.
To work with an EC2 instance you need EBS, and most of the time you will need to increase the storage space attached to your instances. To do that, you create EBS volumes. You can create an Amazon EBS volume and then attach it to any EC2 instance within the same Availability Zone. You can also choose to create an encrypted EBS volume.
In this video we will have a hands-on lab on how to create an EBS volume, both with and without an EC2 instance.
EBS volumes can be created independently and then attached to an EC2 instance. A volume can also be detached and moved to a different EC2 instance.
In this hands-on lab we will go through the following process:
Create an external EBS volume
Attach the volume to an EC2 instance
Mount it on the EC2 instance
To remove the EBS volume:
Unmount the volume
Detach the volume
Delete the volume or attach it to another EC2 instance
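On the instance itself, the mount and unmount steps look roughly like this (the device name and mount point are placeholders — run `lsblk` to find the actual device, which is often `/dev/xvdf` or `/dev/nvme1n1`):

```shell
# See the attached block devices and identify the new, unformatted volume
lsblk

# Create a filesystem on the new volume
# WARNING: only do this on an empty volume - it erases any existing data
sudo mkfs -t ext4 /dev/xvdf

# Create a mount point and mount the volume
sudo mkdir -p /data
sudo mount /dev/xvdf /data

# Before detaching the volume in the console, unmount it cleanly
sudo umount /data
```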
In this video I discuss the various mechanisms other than the AWS console to access, create, and manage your AWS resources. We talk about:
What is the AWS CLI tool?
What is an AWS SDK?
What is a CloudFormation template?
Subsequent videos discuss these mechanisms in detail.
The AWS Command Line Interface (CLI) is a unified tool to manage your AWS services. With just one tool to download and configure, you can control multiple AWS services from the command line and automate them through scripts.
New file commands make it easy to manage your Amazon S3 objects. Using familiar syntax, you can view the contents of your S3 buckets in a directory-based listing. For example:
$ aws s3 cp myfolder s3://mybucket/myfolder --recursive
upload: myfolder/file1.txt to s3://mybucket/myfolder/file1.txt
upload: myfolder/subfolder/file1.txt to s3://mybucket/myfolder/subfolder/file1.txt
$ aws s3 sync myfolder s3://mybucket/myfolder --exclude '*.tmp'
upload: myfolder/newfile.txt to s3://mybucket/myfolder/newfile.txt
Amazon Elastic File System (Amazon EFS) provides simple, scalable, elastic file storage for use with AWS Cloud services and on-premises resources. It is easy to use and offers a simple interface that allows you to create and configure file systems quickly and easily. Amazon EFS is built to elastically scale on demand without disrupting applications, growing and shrinking automatically as you add and remove files, so your applications have the storage they need, when they need it. It is designed to provide massively parallel shared access to thousands of Amazon EC2 instances, enabling your applications to achieve high levels of aggregate throughput and IOPS that scale as a file system grows, with consistent low latencies. As a regional service, Amazon EFS is designed for high availability and durability storing data redundantly across multiple Availability Zones.
In this video we talk about AWS Elastic File System (EFS).
AWS Storage Gateway is a hybrid storage service that enables your on-premises applications to seamlessly use AWS cloud storage. You can use the service for backup and archiving, disaster recovery, cloud data processing, storage tiering, and migration. Your applications connect to the service through a virtual machine or hardware gateway appliance using standard storage protocols, such as NFS, SMB and iSCSI. The gateway connects to AWS storage services, such as Amazon S3, Amazon Glacier, and Amazon EBS.
In this video we explore various Storage Gateway types and their use cases, such as:
Amazon Web Services provides fully managed relational and NoSQL database services, as well as fully managed in-memory caching as a service and a fully managed petabyte-scale data-warehouse service. Or, you can operate your own database in the cloud on Amazon EC2 and Amazon EBS.
This lecture gives you an overall introduction to the various database services offered by AWS and their use cases.
The database services discussed are:
In this video we discuss all about the AWS relational database service (RDS), taking the MySQL DB engine as an example.
Please go through the previous video if you want to know how to launch an RDS instance.
1. Install the MySQL client on an EC2 instance.
2. Connect to an RDS instance from an EC2 instance.
3. Edit the security group of the RDS instance to allow access for database clients.
4. Explore the RDS dashboard.
5. Modify RDS instances.
6. Take a snapshot of an RDS database.
7. Restore an RDS instance from a snapshot.
8. Move an RDS instance to another region, i.e. copy an RDS database to another AWS region.
9. Create your own parameter group.
10. The difference between automated backups and manual snapshots of an RDS instance.
11. Delete an AWS RDS instance.
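The first two steps can be sketched as shell commands on the EC2 instance (the endpoint, port, and user below are placeholders — copy the real endpoint from the RDS console, and make sure the RDS security group allows inbound traffic on port 3306 from the instance):

```shell
# Install the MySQL client on an Amazon Linux EC2 instance
sudo yum install -y mysql

# Connect to the RDS instance using its endpoint (placeholder values)
mysql -h mydb.abcdefgh1234.us-east-1.rds.amazonaws.com \
      -P 3306 -u admin -p
```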
In this video we describe what makes a good infrastructure for web applications in general. We further discuss how various AWS services help us provide a fault-tolerant, scalable, highly available, and robust infrastructure for a 3-tier web application.
Elastic Load Balancing automatically distributes incoming application traffic across multiple targets, such as Amazon EC2 instances, containers, IP addresses, and Lambda functions. It can handle the varying load of your application traffic in a single Availability Zone or across multiple Availability Zones. Elastic Load Balancing offers three types of load balancers that all feature the high availability, automatic scaling, and robust security necessary to make your applications fault tolerant. Types of load balancers are:
Classic Load Balancer
Network Load Balancer
Application Load Balancer
In this video we first talk about scaling in general and the types of scaling, such as horizontal and vertical scaling. We then see how to create a fault-tolerant and scalable web application, and discuss the need for a load balancer and the types of load balancers.
A load balancer distributes incoming application traffic across multiple EC2 instances in multiple Availability Zones. This increases the fault tolerance of your applications. Elastic Load Balancing detects unhealthy instances and routes traffic only to healthy instances.
Your load balancer serves as a single point of contact for clients. This increases the availability of your application. You can add and remove instances from your load balancer as your needs change, without disrupting the overall flow of requests to your application. Elastic Load Balancing scales your load balancer as traffic to your application changes over time. Elastic Load Balancing can scale to the vast majority of workloads automatically.
Along with a discussion of the concepts, in this video we start from scratch and go through the following steps:
Create a Classic ELB
Configure it properly
Attach 2 EC2 instances to this ELB
Deploy a demo webpage on both servers
Test out the fault tolerance
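For reference, the same setup can be sketched with the CLI (all IDs below are placeholders for resources you would have created beforehand):

```shell
# Create a Classic Load Balancer listening on HTTP port 80
aws elb create-load-balancer \
    --load-balancer-name demo-elb \
    --listeners "Protocol=HTTP,LoadBalancerPort=80,InstanceProtocol=HTTP,InstancePort=80" \
    --subnets subnet-0aaa1111 subnet-0bbb2222 \
    --security-groups sg-0abc1234

# Register two EC2 instances with the load balancer
aws elb register-instances-with-load-balancer \
    --load-balancer-name demo-elb \
    --instances i-0123456789abcdef0 i-0fedcba9876543210

# Configure a health check so unhealthy instances stop receiving traffic
aws elb configure-health-check \
    --load-balancer-name demo-elb \
    --health-check Target=HTTP:80/index.html,Interval=30,Timeout=5,UnhealthyThreshold=2,HealthyThreshold=2
```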
An Application Load Balancer functions at the application layer, the seventh layer of the Open Systems Interconnection (OSI) model. After the load balancer receives a request, it evaluates the listener rules in priority order to determine which rule to apply, and then selects a target from the target group for the rule action. You can configure listener rules to route requests to different target groups based on the content of the application traffic. Routing is performed independently for each target group, even when a target is registered with multiple target groups.
You can add and remove targets from your load balancer as your needs change, without disrupting the overall flow of requests to your application. Elastic Load Balancing scales your load balancer as traffic to your application changes over time. Elastic Load Balancing can scale to the vast majority of workloads automatically.
In this video we create an Application ELB and discuss how to:
Create Target Groups
Amazon EC2 Auto Scaling helps you ensure that you have the correct number of Amazon EC2 instances available to handle the load for your application. You create collections of EC2 instances, called Auto Scaling groups. You can specify the minimum number of instances in each Auto Scaling group, and Amazon EC2 Auto Scaling ensures that your group never goes below this size. You can specify the maximum number of instances in each Auto Scaling group, and Amazon EC2 Auto Scaling ensures that your group never goes above this size. If you specify the desired capacity, either when you create the group or at any time thereafter, Amazon EC2 Auto Scaling ensures that your group has this many instances. If you specify scaling policies, then Amazon EC2 Auto Scaling can launch or terminate instances as demand on your application increases or decreases.
In this video we discuss various concepts associated with autoscaling.
In this video we take a hands-on approach to see how to bring auto scaling to our infrastructure.
We see how to create a launch configuration, an Auto Scaling group, CloudWatch alarms, a scaling policy, a termination policy, scheduled actions, etc. Be ready with your own AWS account and follow the steps to make full use of this video.
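The core of the hands-on steps can be sketched with the CLI as follows (the AMI ID, key name, and subnet IDs are placeholders):

```shell
# Create a launch configuration describing the instances to launch
aws autoscaling create-launch-configuration \
    --launch-configuration-name web-lc \
    --image-id ami-0abcd1234example \
    --instance-type t2.micro \
    --key-name my-keypair

# Create an Auto Scaling group spanning two subnets (two AZs)
aws autoscaling create-auto-scaling-group \
    --auto-scaling-group-name web-asg \
    --launch-configuration-name web-lc \
    --min-size 2 --max-size 6 --desired-capacity 2 \
    --vpc-zone-identifier "subnet-0aaa1111,subnet-0bbb2222"

# Add a simple scaling policy that adds one instance when triggered
# (e.g. by a CloudWatch alarm on high CPU)
aws autoscaling put-scaling-policy \
    --auto-scaling-group-name web-asg \
    --policy-name scale-out-by-1 \
    --scaling-adjustment 1 --adjustment-type ChangeInCapacity
```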
A virtual private cloud (VPC) is a virtual network dedicated to your AWS account. It is logically isolated from other virtual networks in the AWS Cloud. You can launch your AWS resources, such as Amazon EC2 instances, into your VPC. You can specify an IP address range for the VPC, add subnets, associate security groups, and configure route tables.
In this video we start exploring AWS VPC, further videos will go deeper into the various components of VPC.
A VPC spans all the Availability Zones in the region. After creating a VPC, you can add one or more subnets in each Availability Zone. When you create a subnet, you specify the CIDR block for the subnet, which is a subset of the VPC CIDR block. Each subnet must reside entirely within one Availability Zone and cannot span zones. Availability Zones are distinct locations that are engineered to be isolated from failures in other Availability Zones.
By launching instances in separate Availability Zones, you can protect your applications from the failure of a single location. AWS assigns a unique ID to each subnet.
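The VPC-and-subnet relationship described above can be sketched with the CLI (CIDR ranges and Availability Zones are illustrative — note each subnet CIDR is a subset of the VPC CIDR):

```shell
# Create a VPC with a /16 CIDR block and capture its ID
VPC_ID=$(aws ec2 create-vpc --cidr-block 10.0.0.0/16 \
    --query Vpc.VpcId --output text)

# Carve one /24 subnet out of the VPC range in each of two AZs
aws ec2 create-subnet --vpc-id "$VPC_ID" \
    --cidr-block 10.0.1.0/24 --availability-zone us-east-1a
aws ec2 create-subnet --vpc-id "$VPC_ID" \
    --cidr-block 10.0.2.0/24 --availability-zone us-east-1b
```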
The configuration for this scenario includes a virtual private cloud (VPC) with a single public subnet, and an Internet gateway to enable communication over the Internet. This configuration is recommended if you need to run a single-tier, public-facing web application, such as a blog or a simple website.
An internet gateway is a horizontally scaled, redundant, and highly available VPC component that allows communication between instances in your VPC and the internet. It therefore imposes no availability risks or bandwidth constraints on your network traffic.
An internet gateway serves two purposes: to provide a target in your VPC route tables for internet-routable traffic, and to perform network address translation (NAT) for instances that have been assigned public IPv4 addresses.
An egress-only Internet gateway is a horizontally scaled, redundant, and highly available VPC component that allows outbound communication over IPv6 from instances in your VPC to the Internet, and prevents the Internet from initiating an IPv6 connection with your instances.
You control how the instances that you launch into a VPC access resources inside and outside the VPC.
In this video we talk about how the network packets flow inside the VPC. What are the various routing and access control mechanism in the way.
Your default VPC includes an internet gateway, and each default subnet is a public subnet. Each instance that you launch into a default subnet has a private IPv4 address and a public IPv4 address. These instances can communicate with the internet through the internet gateway. An internet gateway enables your instances to connect to the internet through the Amazon EC2 network edge.
You can enable internet access for an instance launched into a nondefault subnet by attaching an internet gateway to its VPC (if its VPC is not a default VPC) and associating an Elastic IP address with the instance.
A network access control list (NACL) is an optional layer of security for your VPC that acts as a firewall for controlling traffic in and out of one or more subnets. You can create a custom network ACL and associate it with a subnet. A network ACL contains a numbered list of rules that we evaluate in order, starting with the lowest numbered rule, to determine whether traffic is allowed in or out of any subnet associated with the network ACL. You might set up network ACLs with rules similar to your security groups in order to add an additional layer of security to your VPC. Your VPC automatically comes with a modifiable default network ACL.
In this video we will talk about the significance of NACL, how to use it, manage it and troubleshoot it.
A route table contains a set of rules, called routes, that are used to determine where network traffic is directed.
Each subnet in your VPC must be associated with a route table; the table controls the routing for the subnet. A subnet can only be associated with one route table at a time, but you can associate multiple subnets with the same route table.
Your VPC has an implicit router that reads route tables and behaves accordingly. VPC comes with a main route table that you can modify. You can create additional custom route tables for your VPC. Each subnet must be associated with a route table, which controls the routing for the subnet. If you don't explicitly associate a subnet with a particular route table, the subnet is implicitly associated with the main route table.
In this video we will work with route tables in detail.
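The implicit router described above picks the most specific route (longest prefix) that matches the destination. A rough sketch with the standard library, using an illustrative main route table (the CIDRs and the `igw-12345` target id are made up):

```python
# Sketch of VPC route selection: among matching routes, the one with
# the longest prefix (most specific CIDR) wins.
import ipaddress

def lookup_route(route_table, dest_ip):
    """Return the target of the most specific matching route, or None."""
    dest = ipaddress.ip_address(dest_ip)
    best = None
    for cidr, target in route_table:
        net = ipaddress.ip_network(cidr)
        if dest in net and (best is None or net.prefixlen > best[0].prefixlen):
            best = (net, target)
    return best[1] if best else None

main_route_table = [
    ("10.0.0.0/16", "local"),    # traffic inside the VPC stays local
    ("0.0.0.0/0",  "igw-12345"), # everything else: internet gateway
]

print(lookup_route(main_route_table, "10.0.1.5"))       # local
print(lookup_route(main_route_table, "93.184.216.34"))  # igw-12345
```

Both routes match the second destination, but the `/16` local route only matches VPC-internal addresses, so internet-bound traffic falls through to the `0.0.0.0/0` default route.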
Amazon Virtual Private Cloud (VPC) lets you provision a logically isolated section of the AWS Cloud where you can launch AWS resources in a virtual network that you define. You have complete control over your virtual networking environment, including selection of your own IP address range, creation of subnets, and configuration of route tables and network gateways. You can use both IPv4 and IPv6 in your VPC for secure and easy access to resources and applications.
Amazon VPC enables you to launch AWS resources into a virtual network that you've defined.
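The subnet planning mentioned above is plain CIDR arithmetic, which the Python standard library can do for you. A small sketch, assuming an illustrative 10.0.0.0/16 VPC split into /24 subnets:

```python
# Sketch: carving subnets out of a VPC's IPv4 CIDR block.
# The VPC range and subnet size below are illustrative choices.
import ipaddress

vpc_cidr = ipaddress.ip_network("10.0.0.0/16")   # the VPC's address range
subnets = list(vpc_cidr.subnets(new_prefix=24))  # split into /24 subnets

print(len(subnets))   # 256 possible /24 subnets in a /16
print(subnets[0])     # 10.0.0.0/24
# A /24 has 256 addresses, but AWS reserves 5 per subnet (network
# address, VPC router, DNS, future use, broadcast), leaving 251 usable.
print(subnets[0].num_addresses - 5)  # 251
```

The five reserved addresses per subnet are easy to forget when sizing small subnets, so it helps to account for them up front.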
A VPC peering connection is a networking connection between two VPCs that enables you to route traffic between them using private IPv4 addresses or IPv6 addresses. Instances in either VPC can communicate with each other as if they are within the same network. You can create a VPC peering connection between your own VPCs, or with a VPC in another AWS account.
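Because peered VPCs route to each other by private address, a peering connection cannot be created between VPCs with overlapping CIDR blocks. That precondition is easy to check locally; a sketch with illustrative CIDRs:

```python
# Sketch: VPC peering requires the two VPCs' IPv4 CIDR ranges to be
# non-overlapping, since routing between them uses private addresses.
import ipaddress

def can_peer(cidr_a, cidr_b):
    """True if the two VPC CIDR blocks do not overlap."""
    return not ipaddress.ip_network(cidr_a).overlaps(ipaddress.ip_network(cidr_b))

print(can_peer("10.0.0.0/16", "10.1.0.0/16"))  # True: distinct ranges
print(can_peer("10.0.0.0/16", "10.0.1.0/24"))  # False: second sits inside the first
```

Planning non-overlapping CIDR ranges across accounts up front avoids having to re-address a VPC later just to enable peering.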
Amazon Simple Notification Service (SNS) is a highly available, durable, secure, fully managed pub/sub messaging service that enables you to decouple microservices, distributed systems, and serverless applications. Amazon SNS provides topics for high-throughput, push-based, many-to-many messaging. Using Amazon SNS topics, your publisher systems can fan out messages to a large number of subscribers.
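The fan-out pattern an SNS topic implements can be modeled in a few lines: one publish delivers a copy of the message to every subscriber. This is an in-memory sketch of the pattern only, not the SNS API; the topic name and subscribers are made up:

```python
# Sketch of pub/sub fan-out: a topic pushes each published message
# to all of its subscribers, decoupling publishers from consumers.
class Topic:
    def __init__(self, name):
        self.name = name
        self.subscribers = []  # callables standing in for SQS queues, Lambdas, etc.

    def subscribe(self, handler):
        self.subscribers.append(handler)

    def publish(self, message):
        # Fan out: every subscriber receives its own copy.
        for handler in self.subscribers:
            handler(message)

received = []
orders = Topic("orders")
orders.subscribe(lambda m: received.append(("billing", m)))
orders.subscribe(lambda m: received.append(("shipping", m)))
orders.publish("order-42")
print(received)  # both subscribers saw the same message
```

The publisher never knows who the subscribers are, which is exactly the decoupling the description above refers to.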
Amazon Simple Queue Service (SQS) is a fully managed message queuing service that enables you to decouple and scale microservices, distributed systems, and serverless applications. SQS eliminates the complexity and overhead associated with managing and operating message oriented middleware, and empowers developers to focus on differentiating work. Using SQS, you can send, store, and receive messages between software components at any volume, without losing messages or requiring other services to be available. SQS offers two types of message queues. Standard queues offer maximum throughput, best-effort ordering, and at-least-once delivery. SQS FIFO queues are designed to guarantee that messages are processed exactly once, in the exact order that they are sent.
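The FIFO guarantees described above (strict ordering, duplicates dropped by deduplication id) can be sketched with an in-memory model. This models the semantics only, not the SQS API; the message bodies and dedup ids are made up:

```python
# Sketch of FIFO-queue semantics: messages come out in send order, and a
# retried send with a previously seen deduplication id is dropped, which
# is how exactly-once processing is achieved.
from collections import deque

class FifoQueue:
    def __init__(self):
        self.messages = deque()
        self.seen_dedup_ids = set()

    def send(self, body, dedup_id):
        if dedup_id in self.seen_dedup_ids:
            return  # duplicate retry: silently dropped
        self.seen_dedup_ids.add(dedup_id)
        self.messages.append(body)

    def receive(self):
        return self.messages.popleft() if self.messages else None

q = FifoQueue()
q.send("charge card", dedup_id="order-42")
q.send("charge card", dedup_id="order-42")  # network retry of the same send
q.send("send receipt", dedup_id="order-43")
print(q.receive())  # 'charge card' — strictly in send order
print(q.receive())  # 'send receipt'
print(q.receive())  # None — the duplicate was never enqueued
```

A standard queue, by contrast, would be modeled with at-least-once delivery and only best-effort ordering, trading those guarantees for maximum throughput.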
Amazon CloudWatch is a monitoring and management service built for developers, system operators, site reliability engineers (SRE), and IT managers. CloudWatch provides you with data and actionable insights to monitor your applications, understand and respond to system-wide performance changes, optimize resource utilization, and get a unified view of operational health.
CloudWatch collects monitoring and operational data in the form of logs, metrics, and events, providing you with a unified view of AWS resources, applications and services that run on AWS, and on-premises servers.
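A core CloudWatch concept is the alarm: it enters the ALARM state when a metric breaches a threshold for a configured number of consecutive evaluation periods. A simplified sketch of that evaluation (the metric values, threshold, and period count are illustrative):

```python
# Simplified model of CloudWatch alarm evaluation: ALARM only if the
# last `periods` datapoints ALL breach the threshold.
def alarm_state(datapoints, threshold, periods):
    recent = datapoints[-periods:]
    if len(recent) == periods and all(v > threshold for v in recent):
        return "ALARM"
    return "OK"

cpu = [41.0, 55.2, 78.9, 92.1, 95.4]              # e.g. CPUUtilization samples
print(alarm_state(cpu, threshold=90, periods=2))  # ALARM: last 2 exceed 90
print(alarm_state(cpu, threshold=90, periods=3))  # OK: 78.9 breaks the run
```

Requiring several consecutive breaching periods is how alarms avoid firing on a single transient spike.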
AWS CloudTrail is a service that enables governance, compliance, operational auditing, and risk auditing of your AWS account. With CloudTrail, you can log, continuously monitor, and retain account activity related to actions across your AWS infrastructure. CloudTrail provides event history of your AWS account activity, including actions taken through the AWS Management Console, AWS SDKs, command line tools, and other AWS services. This event history simplifies security analysis, resource change tracking, and troubleshooting.
Snowball is a petabyte-scale data transport solution that uses devices designed to be secure to transfer large amounts of data into and out of the AWS Cloud. Using Snowball addresses common challenges with large-scale data transfers, including high network costs, long transfer times, and security concerns. Customers today use Snowball to migrate analytics data, genomics data, video libraries, image repositories, and backups, and to archive data as part of data center shutdowns, tape replacement, or application migration projects. Transferring data with Snowball is simple, fast, more secure, and can be as little as one-fifth the cost of transferring data via high-speed Internet.
Amazon CloudFront is a fast content delivery network (CDN) service that securely delivers data, videos, applications, and APIs to customers globally with low latency and high transfer speeds, all within a developer-friendly environment. CloudFront is integrated with AWS - both physical locations that are directly connected to the AWS global infrastructure, as well as other AWS services.
Amazon Route 53 is a highly available and scalable cloud Domain Name System (DNS) web service. It is designed to give developers and businesses an extremely reliable and cost effective way to route end users to Internet applications by translating names (like www.example.com) into the numeric IP addresses that computers use to connect to each other.
In this video we talk about Route53 features and Routing policies.
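One of those routing policies, weighted routing, sends each record a share of traffic proportional to its weight. A sketch of that selection logic (Route 53 itself randomizes the roll; here we pass it in so the behavior is deterministic, and the record names and 90/10 split are made up):

```python
# Sketch of Route 53's weighted routing policy: each record receives
# weight/total of the traffic. `roll` stands in for the random draw.
def weighted_pick(records, roll):
    """records: list of (target, weight); roll: float in [0, 1)."""
    total = sum(w for _, w in records)
    cumulative = 0.0
    for target, weight in records:
        cumulative += weight / total
        if roll < cumulative:
            return target
    return records[-1][0]

# A 90/10 canary: most users hit blue, a small slice hits green.
records = [("blue.example.com", 90), ("green.example.com", 10)]
print(weighted_pick(records, 0.50))  # blue.example.com
print(weighted_pick(records, 0.95))  # green.example.com
```

This weighted split is the mechanism behind gradual rollouts and A/B tests at the DNS layer; other policies (latency-based, failover, geolocation) change the selection rule rather than the mechanism.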
AWS CloudFormation enables you to create and provision AWS infrastructure deployments predictably and repeatedly. AWS CloudFormation enables you to use a template file to create and delete a collection of resources together as a single unit, also known as a stack. It helps you leverage AWS products such as Amazon EC2, Amazon Elastic Block Store, Amazon SNS, Elastic Load Balancing, and Auto Scaling to build highly reliable, highly scalable, cost-effective applications in the cloud without worrying about creating and configuring the underlying AWS infrastructure.
In this video we will look at the components of CloudFormation and its template files.
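A template is just structured data describing the stack's resources. As a minimal sketch, here is a template built as a Python dict and serialized to JSON; the stack it describes creates a single S3 bucket, and the logical id `MyBucket` is an illustrative choice:

```python
# Sketch of a minimal CloudFormation template: one resource plus an
# output. Serialized to JSON, this is the unit a stack is created from.
import json

template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Description": "Minimal stack: one S3 bucket",
    "Resources": {
        # A template needs at least one resource; the logical id
        # ("MyBucket") is how the rest of the template refers to it.
        "MyBucket": {"Type": "AWS::S3::Bucket"},
    },
    "Outputs": {
        # Ref resolves to the created bucket's name at deploy time.
        "BucketName": {"Value": {"Ref": "MyBucket"}},
    },
}

print(json.dumps(template, indent=2))
```

Because the whole stack is created and deleted as one unit, adding a second resource is just another entry under `Resources`, and CloudFormation works out the creation order.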
AWS Developer Tools are a set of services designed to enable developers and IT operations professionals practicing DevOps to rapidly and safely deliver software. Together, these services help you securely store and version control your application's source code and automatically build, test, and deploy your application to AWS. You can use AWS CodePipeline to orchestrate an end-to-end software release workflow using these services and third-party tools, or integrate each service independently with your existing tools.
Serverless is the native architecture of the cloud that enables you to shift more of your operational responsibilities to AWS, increasing your agility and innovation. Serverless allows you to build and run applications and services without thinking about servers. It eliminates infrastructure management tasks such as server or cluster provisioning, patching, operating system maintenance, and capacity provisioning.
You can build serverless applications for nearly any type of application or backend service, and everything required to run and scale your application with high availability is handled for you.