
AWS CLOUD VIRTUAL INTERNSHIP

A report submitted in partial fulfillment of the requirements for the Award of the
Degree of
BACHELOR OF TECHNOLOGY
In

COMPUTER SCIENCE AND ENGINEERING


Submitted by

V Vijayendra Prasad

208W1A05C7

DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING

V.R Siddhartha Engineering College


(Autonomous)

Approved by AICTE, NAAC A+, NBA Accredited

Affiliated to Jawaharlal Nehru Technological University, Kakinada

Vijayawada 520007

June, 2023
VELAGAPUDI RAMAKRISHNA SIDDHARTHA
ENGINEERING COLLEGE
(Autonomous, Accredited with ‘A+’ grade by NAAC)
Department of Computer Science and Engineering

CERTIFICATE

This is to certify that the Internship report entitled “AWS Cloud Virtual
Internship” is being submitted by

V Vijayendra Prasad (208W1A05C7)

in partial fulfillment of the requirements for the award of the degree of
BACHELOR OF TECHNOLOGY in COMPUTER SCIENCE AND ENGINEERING,
for the internship carried out from May 2023 to July 2023.

Dr. G. Anuradha, M.Tech, Ph.D
Department Internship Coordinator

Dr. D. Rajeswara Rao, M.Tech, Ph.D
Professor & HOD, CSE

DECLARATION

I hereby declare that the dissertation entitled “AWS Cloud Virtual Intern-
ship” submitted for the B.Tech Degree is my own work and that the dissertation
has not formed the basis for the award of any degree, associateship, fellowship, or
any other similar title.

Place: Vijayawada
Date: 03-11-2023

V Vijayendra Prasad
208W1A05C7

ACKNOWLEDGEMENT

Behind every achievement lies an unfathomable sea of gratitude to those who
made it possible, without whom it would never have come into existence. To them
I offer these words of gratitude.
I owe my sincere thanks to our respected Principal, Dr. A. V. Ratna Prasad,
for providing us the necessary training and for his immense support in shaping us
to be industry ready.
I would like to thank Dr. D. Rajeswara Rao, Head of the Department,
Computer Science and Engineering, for the endless support that encouraged
us to challenge ourselves and take up such valuable opportunities.
I would like to extend my gratitude to the department faculty, especially Dr.
G. Anuradha, Associate Professor & Internship Co-Ordinator, and Dr. K.
Srinivas, Professor, for this opportunity and for their support and guidance
throughout the internship.
I would also like to extend my gratitude to the team for providing me the
opportunity for this internship, and to the many mentors who helped me across
the various use cases throughout the internship.

Place: Vijayawada
Date: 03-11-2023

V Vijayendra Prasad
208W1A05C7

INTERNSHIP CERTIFICATE BY AWS ACADEMY

COMPANY PROFILE AND EXTERNAL
GUIDE DETAILS

The AWS Academy is an educational program launched by Amazon Web Services


(AWS) to meet the growing demand for skilled cloud computing professionals.
Established in 2016, the AWS Academy offers educational institutions, educators,
and students a wide range of resources and training materials designed to cultivate
expertise in cloud computing, AWS services, and associated technical skills. The
program provides participants with the knowledge and hands-on experience needed
to leverage AWS technologies effectively and earn valuable AWS certifications,
preparing them for careers in the constantly expanding field of cloud computing.
The AWS Academy provides a wide variety of courses that cater to different
domains and skill levels. These courses are carefully designed to offer students
and professionals practical knowledge and expertise in AWS services and cloud
computing concepts. Students can choose from a range of courses such as Cloud
Foundations, Cloud Architecting, Cloud Operations, Cloud Security, and Cloud
Data Analytics. These courses cover a wide range of topics, from basic cloud
computing principles to advanced specialized domains like security and data ana-
lytics, which can be adapted to different career paths within the cloud computing
ecosystem.
The AWS Academy goes beyond just offering courses: it equips educational
institutions with the necessary tools and support to integrate AWS technologies
into their programs. This ensures that students receive a contemporary
and hands-on education, equipping them with the skills required to excel in a
highly competitive job market. Moreover, the AWS Academy collaborates with
accredited institutions worldwide, promoting the integration of AWS services into
academic programs, and fostering a new generation of professionals who are pro-
ficient in cloud technology.

ABSTRACT

The AWS Cloud Virtual Internship is a comprehensive and structured training


program that offers a deep dive into the world of cloud computing, with a specific
focus on Amazon Web Services (AWS). This internship equips participants with
the knowledge, skills, and hands-on experience necessary to excel in AWS cloud
technologies. This internship covers a wide range of topics such as cloud con-
cepts, cloud economics, security, networking, compute, storage, databases, cloud
architecture, auto scaling, monitoring, and disaster recovery. Each module in-
cludes video content, hands-on labs, practical demonstrations, and assessments to
ensure a well-rounded learning experience. The program aims to empower participants with a
strong foundation in AWS services, architecture best practices, and operational
techniques. It emphasizes the practical application of cloud computing concepts,
making it highly relevant to real-world scenarios. As cloud technology contin-
ues to reshape the IT landscape, this internship provides a valuable platform for
individuals seeking to advance their careers in cloud architecture, development,
and operations. With the increasing demand for cloud expertise, this internship
offers a gateway to success in cloud computing, ensuring that participants are
well-prepared for the challenges and opportunities in the cloud era.

Keywords: AWS Services, Cloud Computing, Amazon EC2, Amazon S3.

Table of Contents

1 AWS Academy Cloud Foundations 1


1.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.1.1 Purpose . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.1.2 Scope . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.2 Amazon Web Services . . . . . . . . . . . . . . . . . . . . . . . . 2
1.2.1 Cloud Computing . . . . . . . . . . . . . . . . . . . . . . . . 2
1.2.2 Benefits of Cloud . . . . . . . . . . . . . . . . . . . . . . . . 3
1.2.3 Introduction to AWS . . . . . . . . . . . . . . . . . . . . . . 4
1.2.4 Benefits of AWS . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.3 Cloud Economics and Billing . . . . . . . . . . . . . . . . . . . 5
1.3.1 AWS Pricing Model . . . . . . . . . . . . . . . . . . . . . . . 6
1.3.2 Total Cost of Ownership . . . . . . . . . . . . . . . . . . . . 6
1.3.3 AWS Billing & Cost Management . . . . . . . . . . . . . . . 7
1.4 AWS Global Infrastructure . . . . . . . . . . . . . . . . . . . . 7
1.4.1 Regions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
1.4.2 Data Centers . . . . . . . . . . . . . . . . . . . . . . . . . . 8
1.4.3 Points of Presence . . . . . . . . . . . . . . . . . . . . . . . 9
1.5 AWS Services . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
1.5.1 Categories of Services . . . . . . . . . . . . . . . . . . . . . . 9
1.5.2 Storage services . . . . . . . . . . . . . . . . . . . . . . . . . 9
1.5.3 Compute Services . . . . . . . . . . . . . . . . . . . . . . . . 11
1.5.4 Database Services . . . . . . . . . . . . . . . . . . . . . . . . 13
1.5.5 Networking and content delivery services . . . . . . . . . . . 14
1.5.6 Security, identity, and compliance services . . . . . . . . . . 16
1.5.7 Management and governance services . . . . . . . . . . . . . 18

2 AWS Academy Cloud Architecting 20


2.1 Introduction to Cloud Architecting . . . . . . . . . . . . . . . 20
2.2 Adding a Storage Layer using Amazon S3 . . . . . . . . . . . 20
2.2.1 Bucket . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
2.2.2 Storing Data in Amazon S3 . . . . . . . . . . . . . . . . . . 20
2.3 Adding a Compute Layer using Amazon EC2 . . . . . . . . 21
2.3.1 Choosing an AMI to Launch an EC2 Instance: . . . . . . . . 21
2.3.2 Selecting an EC2 Instance Type: . . . . . . . . . . . . . . . 21
2.3.3 Demo Configuring an EC2 Instance with User Data: . . . . 22

2.4 Amazon EC2 Pricing Options . . . . . . . . . . . . . . . . . . . 22
2.5 Adding a Database Layer . . . . . . . . . . . . . . . . . . . . . 23
2.6 Creating a Network Environment . . . . . . . . . . . . . . . . 23
2.6.1 Creating an AWS networking environment: . . . . . . . . . . 24
2.7 Connecting Networks with AWS . . . . . . . . . . . . . . . . . 24
2.7.1 Connecting to Your Remote Network with AWS Site-to-Site
VPN: . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
2.7.2 Connecting to Your Remote Network with AWS Direct Con-
nect: . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
2.7.3 Connecting VPCs in AWS with VPC Peering: . . . . . . . . 25
2.8 Securing User Application Access . . . . . . . . . . . . . . . . 25
2.8.1 Account users and IAM: . . . . . . . . . . . . . . . . . . . . 25
2.9 Implementing Elasticity, High Availability, and Monitor-
ing in AWS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
2.9.1 Elasticity . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
2.9.2 High Availability: . . . . . . . . . . . . . . . . . . . . . . . . 27
2.10 Automating Your Architecture . . . . . . . . . . . . . . . . . 27
2.10.1 Automating Your Infrastructure: . . . . . . . . . . . . . . . 28
2.11 Caching Content in AWS . . . . . . . . . . . . . . . . . . . . . . 28
2.11.1 Overview Of Caching: . . . . . . . . . . . . . . . . . . . . . 28
2.11.2 Edge Caching: . . . . . . . . . . . . . . . . . . . . . . . . . . 28
2.11.3 Caching Web Sessions: . . . . . . . . . . . . . . . . . . . . . 29
2.12 Building Decoupled Architectures: . . . . . . . . . . . . . . . . 29
2.12.1 Decoupling Your Architecture: . . . . . . . . . . . . . . . . 29
2.12.2 Decoupling with Amazon SQS: . . . . . . . . . . . . . . . . 29
2.12.3 Decoupling with Amazon SNS: . . . . . . . . . . . . . . . . 29
2.12.4 Sending Messages Between Cloud Applications and On-Premises
with Amazon MQ: . . . . . . . . . . . . . . . . . . . . . . . 30
2.13 Planning for Disaster: . . . . . . . . . . . . . . . . . . . . . . . . 30
2.13.1 Disaster Planning Strategies: . . . . . . . . . . . . . . . . . . 30
2.13.2 Disaster Recovery Patterns: . . . . . . . . . . . . . . . . . . 30

3 Implementation 31
3.1 Aim . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
3.2 Procedure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
3.3 Result . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34

4 Conclusion 38

REFERENCES 39

List of Figures

1.1 AWS pricing model . . . . . . . . . . . . . . . . . . . . . . . . . . . 6


1.2 AWS Global infrastructure . . . . . . . . . . . . . . . . . . . . . . . 7
1.3 AWS Storage Services . . . . . . . . . . . . . . . . . . . . . . . . . 9
1.4 AWS Database Services . . . . . . . . . . . . . . . . . . . . . . . . 13
1.5 Amazon Route 53 . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
1.6 AWS Identity Access Management . . . . . . . . . . . . . . . . . . 17
1.7 Amazon CloudWatch . . . . . . . . . . . . . . . . . . . . . . . . . . 19

3.1 Amazon EC2 console . . . . . . . . . . . . . . . . . . . . . . . . . . 31


3.2 Amazon Machine Image . . . . . . . . . . . . . . . . . . . . . . . . 32
3.3 Amazon EC2 Instance type . . . . . . . . . . . . . . . . . . . . . . 33
3.4 Amazon EC2 security groups . . . . . . . . . . . . . . . . . . . . . 34
3.5 Amazon EC2 Key pair . . . . . . . . . . . . . . . . . . . . . . . . . 35
3.6 Amazon EC2 state . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
3.7 Public IP address . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
3.8 Public IP address . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
3.9 Working with EC2 Instance . . . . . . . . . . . . . . . . . . . . . . 37
3.10 Output . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37

Chapter 1
AWS Academy Cloud Foundations

1.1 Introduction
Amazon has a long history of using a decentralized IT infrastructure. This arrange-
ment enabled our development teams to access compute and storage resources on
demand, and it has increased overall productivity and agility. By 2005, Ama-
zon had spent over a decade and millions of dollars building and managing the
large-scale, reliable, and efficient IT infrastructure that powered one of the world’s
largest online retail platforms. Amazon launched Amazon Web Services (AWS) so
that other organizations could benefit from Amazon’s experience and investment
in running a large-scale distributed, transactional IT infrastructure. AWS has
been operating since 2006, and today serves hundreds of thousands of customers
worldwide. Today Amazon.com runs a global web platform serving millions of
customers and managing billions of dollars’ worth of commerce every year. Using
AWS, you can requisition compute power, storage, and other services in minutes
and have the flexibility to choose the development platform or programming model
that makes the most sense for the problems you’re trying to solve. You pay only
for what you use, with no up-front expenses or long-term commitments, making
AWS a cost-effective way to deliver applications. Here are some examples of
how organizations, from research firms to large enterprises, use AWS today: A
large enterprise quickly and economically deploys new internal applications, such
as HR solutions, payroll applications, inventory management solutions, and on-
line training to its distributed workforce. An e-commerce website accommodates
sudden demand for a “hot” product caused by viral buzz from Facebook and Twit-
ter without having to upgrade its infrastructure. A pharmaceutical research firm
executes large-scale simulations using computing power provided by AWS. Me-
dia companies serve unlimited video, music, and other media to their worldwide
customer base.

1.1.1 Purpose
AWS offers low, pay-as-you-go pricing with no up-front expenses or long-term
commitments. We are able to build and manage a global infrastructure at scale,
and pass the cost saving benefits onto you in the form of lower prices. With the
efficiencies of our scale and expertise, we have been able to lower our prices on

15 different occasions over the past four years. AWS provides a massive global
cloud infrastructure that allows you to quickly innovate, experiment and iterate.
Instead of waiting weeks or months for hardware, you can instantly deploy new
applications, instantly scale up as your workload grows, and instantly scale down
based on demand. Whether you need one virtual server or thousands, whether you
need them for a few hours or 24/7, you still only pay for what you use. AWS is
a language and operating system agnostic platform. You choose the development
platform or programming model that makes the most sense for your business. You
can choose which services you use, one or several, and choose how you use them.
This flexibility allows you to focus on innovation, not infrastructure. AWS is a
secure, durable technology platform with industry-recognized certifications and
audits: PCI DSS Level 1, ISO 27001, FISMA Moderate, FedRAMP, HIPAA, and
SOC 1 (formerly referred to as SAS 70 and/or SSAE 16) and SOC 2 audit reports.
Our services and data centers have multiple layers of operational and physical
security to ensure the integrity and safety of your data.

1.1.2 Scope
IDC has forecast that IoT purpose-built platforms are growing at a CAGR of
17.7%, from $37.2 billion in 2014 to $84.1 billion in 2019. During this time, the
IoT purpose-built platform market will continue to see considerable consolidation
through partnerships and acquisitions, and new entrants will need to work
exceptionally hard to differentiate their IoT platforms from those of incumbent
vendors. AWS could leverage its worldwide presence and its knowledge of operational
and physical security across industry verticals to provide multiple layers of
protection for its infrastructure and for end users, addressing one of the major
hurdles in the adoption of IoT, particularly at large scale where the threat surface
could be potentially unbounded.

1.2 Amazon Web Services


1.2.1 Cloud Computing
Cloud computing represents a transformative paradigm shift in the way businesses
and individuals approach IT infrastructure and services. It allows users to access
and use computing resources, including servers, storage, databases, networking,
software, analytics, and more, over the internet. Here are some key points to
consider in the introduction to cloud computing:
Cloud computing is a model for delivering on-demand computing resources and
services. It offers significant flexibility, scalability, and cost-efficiency compared to

traditional on-premises IT infrastructure. Users can access cloud services from
anywhere with an internet connection, reducing geographical constraints. Cloud
computing service models include Infrastructure as a Service (IaaS), Platform as
a Service (PaaS), and Software as a Service (SaaS). Cloud computing deployment
models include public cloud, private cloud, and hybrid cloud.

1.2.2 Benefits of Cloud


The adoption of cloud computing offers several compelling advantages for busi-
nesses and individuals alike:

• Cost Efficiency: Cloud services often operate on a pay-as-you-go or sub-


scription model, eliminating the need for significant upfront hardware and
software investments. This leads to cost savings and predictable budgeting.

• Scalability: Cloud resources can be quickly scaled up or down based on


demand. This elasticity allows organizations to adapt to changing workloads
and avoid overprovisioning.

• Flexibility: Cloud services provide a wide range of tools and platforms, al-
lowing users to choose the best-fit solutions for their specific needs. This
flexibility extends to the selection of operating systems, databases, program-
ming languages, and more.

• Accessibility: Cloud services can be accessed from anywhere, enabling remote


work, collaboration, and access to data and applications on a global scale.

• Reliability: Leading cloud providers invest heavily in redundancy and failover


mechanisms, ensuring high availability and data durability. This reduces the
risk of data loss and service disruptions.

• Security: Cloud providers implement robust security measures, including


data encryption, identity and access management, and compliance certifica-
tions. These measures often surpass what many organizations can achieve
on their own.

• Innovation: Cloud providers continuously introduce new features and tech-


nologies, allowing users to stay up-to-date with the latest advances in the IT
industry.

1.2.3 Introduction to AWS
Amazon Web Services (AWS) is a global leader in cloud computing, established
in 2006 as a subsidiary of Amazon.com. Since its inception, AWS has been at
the forefront of revolutionizing the IT landscape by offering a vast array of cloud
services that cater to the needs of organizations of all sizes worldwide. One of the
key features that sets AWS apart is its extensive global reach. AWS operates in
multiple geographic regions, and each region comprises several Availability Zones,
which are essentially data centers equipped with redundant power, cooling, and
networking. This global presence ensures high availability and reliability for the
services and applications hosted on AWS.

AWS’s service offerings span various categories, each designed to address specific
computing and infrastructure needs. In the realm of computing services, Amazon
Elastic Compute Cloud (EC2) allows users to run virtual machines (instances) in
the cloud, offering flexibility in terms of instance types, operating systems, and
configurations. For storage solutions, AWS provides scalable and highly available
options, including Amazon S3 (Simple Storage Service) for object storage and
Amazon EBS (Elastic Block Store) for block storage, making data storage and
retrieval straightforward. In the domain of databases, AWS offers fully managed
services like Amazon RDS (Relational Database Service) for relational databases
and Amazon DynamoDB for NoSQL databases, simplifying database management.

AWS’s networking services encompass Amazon VPC (Virtual Private Cloud) for
isolating resources, Amazon Route 53 for domain name system (DNS) manage-
ment, and AWS Direct Connect, which facilitates dedicated network connections.
Furthermore, AWS boasts a wide array of application services, such as AWS
Lambda for serverless computing, Amazon API Gateway for API creation, and
AWS Elastic Beanstalk for application deployment and management.

1.2.4 Benefits of AWS


Amazon Web Services (AWS) has become a leading force in the cloud computing
industry, offering a wide range of cloud services and solutions that empower or-
ganizations to scale, innovate, and transform their businesses. AWS is known for
its global infrastructure, extensive service offerings, and commitment to security,
reliability, and innovation.

One of the key reasons organizations choose AWS is its global reach, with data

centers and regions strategically located around the world. This enables businesses
to deploy their applications and services in regions that are geographically closer
to their users, reducing latency and improving the overall user experience. AWS’s
extensive network of Availability Zones within regions provides redundancy and
high availability, ensuring applications remain resilient and operational even in the
face of hardware failures or unexpected disruptions.

The breadth of AWS services is another compelling reason for its popularity. AWS
offers over 200 fully featured services, including computing, storage, databases, ma-
chine learning, analytics, and Internet of Things (IoT), to name just a few. These
services provide organizations with the flexibility to choose the right tools for their
specific needs, whether they are building a simple website, running complex data
analytics, or deploying machine learning models.

Security is paramount in the cloud, and AWS has invested heavily in its secu-
rity and compliance measures. The AWS shared responsibility model ensures that
while AWS is responsible for the security of the cloud, customers are responsible
for the security in the cloud. This model, along with a vast array of security ser-
vices and features, allows organizations to build secure and compliant applications
and environments.

AWS’s focus on innovation is evident through its continuous release of new services
and features. AWS is at the forefront of emerging technologies such as serverless
computing, containers, and artificial intelligence. With AWS, organizations can
stay ahead in the competitive landscape by quickly adopting the latest technolog-
ical advancements.

1.3 Cloud Economics and Billing


Cloud economics and billing are essential aspects of AWS services, and understand-
ing them is crucial for optimizing cloud spending. In this module, participants gain
a foundational understanding of the key principles and concepts related to cloud
economics and billing. They learn how cloud services are charged, the different
pricing models, and the significance of cost optimization. This knowledge sets the
stage for effectively managing costs and resources in the cloud.

Figure 1.1: AWS pricing model

1.3.1 AWS Pricing Model


The fundamentals of pricing in AWS cover various aspects, including on-demand
pricing, reserved instances, and spot instances. On-demand pricing allows users
to pay for cloud resources on an hourly or per-second basis with no upfront costs.
Reserved instances offer significant cost savings when users commit to a specific
instance type and term. Spot instances enable users to take advantage of unused
AWS capacity at a lower price, making it a cost-effective option for certain work-
loads.
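
As a rough illustration of how these pricing models differ in cost, the short Python
sketch below compares the monthly cost of running one instance continuously under
On-Demand and Reserved pricing. The hourly rates are assumed placeholder values
chosen only for illustration; actual prices vary by instance type and Region.

# Illustrative comparison of On-Demand vs. Reserved Instance pricing.
# The rates below are assumed placeholder values, not real AWS prices.
HOURS_PER_MONTH = 730  # average number of hours in a month

on_demand_rate = 0.10           # assumed USD per hour, On-Demand
reserved_effective_rate = 0.06  # assumed effective USD per hour with a 1-year commitment

on_demand_monthly = on_demand_rate * HOURS_PER_MONTH
reserved_monthly = reserved_effective_rate * HOURS_PER_MONTH
savings_percent = (1 - reserved_monthly / on_demand_monthly) * 100

print(f"On-Demand: ${on_demand_monthly:.2f} per month")
print(f"Reserved:  ${reserved_monthly:.2f} per month")
print(f"Savings:   {savings_percent:.0f}%")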

The AWS Free Tier is a generous offering by Amazon Web Services designed
to allow customers to explore and experiment with AWS services at no cost for
a limited time. It provides a risk-free way for individuals, startups, and small
businesses to become familiar with AWS’s vast array of cloud services, without
incurring any initial charges.

1.3.2 Total Cost of Ownership


Total Cost of Ownership (TCO) is a cornerstone concept for organizations contem-
plating cloud adoption. Understanding the total cost of ownership is crucial for
making informed decisions about moving to the cloud. TCO considers not only the
direct costs associated with AWS services but also indirect costs like maintenance,
support, and operational expenses. Participants learn to calculate and analyze
TCO, helping them compare the cost-effectiveness of running workloads on AWS
versus on-premises infrastructure.

The AWS Pricing Calculator is a valuable tool provided by Amazon Web Ser-

vices to help customers estimate and plan their AWS expenses effectively. This
online tool allows users to model and estimate the costs associated with deploying
and operating applications on AWS.

1.3.3 AWS Billing & Cost Management


AWS Billing and Cost Management is a set of tools and services provided by
Amazon Web Services to help customers monitor, analyze, and manage their AWS
spending and usage. It plays a crucial role in enabling organizations to understand
their cloud costs, allocate expenses, and optimize resource utilization. It provides
detailed cost and usage reports. These reports offer insights into how AWS re-
sources are being utilized and how much they cost. They help organizations track
spending by service, account, and region, allowing for informed decision-making.
To facilitate internal cost tracking, AWS Billing and Cost Management offers cost
allocation tagging. Users can assign custom tags to resources, and AWS will gen-
erate cost allocation reports based on these tags. This enables organizations to
attribute costs to specific projects, departments, or teams. The service allows users
to set budgets for their AWS spending. They can define spending thresholds, and
AWS will send alerts when spending approaches or exceeds these limits. Budgets
help organizations manage their expenses proactively.

1.4 AWS Global Infrastructure


The AWS Global Infrastructure is designed and built to deliver a flexible, reliable,
scalable, and secure cloud computing environment with high-quality global net-
work performance. AWS continually updates its global infrastructure footprint.

Figure 1.2: AWS Global infrastructure

1.4.1 Regions
The AWS Cloud infrastructure is built around Regions. AWS has 22 Regions world-
wide. An AWS Region is a physical geographical location with one or more Avail-
ability Zones. Availability Zones in turn consist of one or more data centers. To
achieve fault tolerance and stability, Regions are isolated from one another. Re-
sources in one Region are not automatically replicated to other Regions.

Availability Zones (AZs) are isolated data centers within AWS Regions that are
designed to be highly available and fault-tolerant. AWS typically has multiple AZs
within each Region. These AZs are physically separated from each other and have
their own power, cooling, and networking infrastructure. Deploying resources across
multiple AZs within the same Region is a best practice to ensure redundancy and
minimize downtime in the event of failures.

1.4.2 Data Centers


The foundation for the AWS infrastructure is the data centers. Customers do not
specify a data center for the deployment of resources. Instead, an Availability Zone
is the most granular level of specification that a customer can make. However, a
data center is the location where the actual data resides. Amazon operates state-
of-the-art, highly available data centers. Although rare, failures can occur that
affect the availability of instances in the same location.
Data centers are securely designed with several factors in mind. Each location
is carefully evaluated to mitigate environmental risk.

• Data centers have a redundant design that anticipates and tolerates failure
while maintaining service levels.

• To ensure availability, critical system components are backed up across mul-


tiple Availability Zones.

• To ensure capacity, AWS continuously monitors service usage to deploy in-


frastructure to support availability commitments and requirements.

• Data center locations are not disclosed and all access to them is restricted.

• In case of failure, automated processes move data traffic away from the af-
fected area.

1.4.3 Points of Presence
AWS Points of Presence are located in most of the major cities around the world.
By continuously measuring internet connectivity, performance, and computing to
find the best way to route requests, the Points of Presence deliver a better near real-
time user experience. They are used by many AWS services, including Amazon
CloudFront, Amazon Route 53, AWS Shield, and AWS Web Application Firewall
(AWS WAF). Regional edge caches are used by default with Amazon CloudFront.
They hold content that is not accessed frequently enough to remain in an edge
location, providing an alternative to fetching that content again from the origin
server.

1.5 AWS Services


AWS offers a broad set of global cloud-based products that can be used as building
blocks for common cloud architectures.

1.5.1 Categories of Services


AWS offers a broad set of cloud-based services. There are 23 different product
or service categories, and each category consists of one or more services. Some of
the important services include Compute, Cost Management, Database, Manage-
ment and Governance, Networking and Content Delivery, Security, Identity, and
Compliance, and Storage.

1.5.2 Storage services

Figure 1.3: AWS Storage Services

Amazon Simple Storage Service (Amazon S3)

Amazon S3 is storage for the Internet. It is designed to make web-scale computing


easier for developers. Amazon S3 provides a simple web services interface that
can be used to store and retrieve any amount of data, at any time, from anywhere
on the web. The container for objects stored in Amazon S3 is called an Amazon
S3 bucket. Amazon S3 gives any developer access to the same highly scalable,
reliable, secure, fast, inexpensive infrastructure that Amazon uses to run its own
global network of websites. The service aims to maximize benefits of scale and to
pass those benefits on to developers.
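
As a minimal sketch of how a developer might use this web services interface, the
Python snippet below (using the boto3 SDK) creates a bucket, stores an object, and
reads it back. The bucket name and Region are assumed placeholders; bucket names
must be globally unique, and valid AWS credentials are assumed to be configured.

import boto3

# Assumed placeholder names; an S3 bucket name must be globally unique.
BUCKET = "example-internship-report-bucket"
REGION = "us-east-1"

s3 = boto3.client("s3", region_name=REGION)

# Create the bucket (in us-east-1 no LocationConstraint is needed).
s3.create_bucket(Bucket=BUCKET)

# Store an object: any amount of data, addressed by a key.
s3.put_object(Bucket=BUCKET, Key="hello.txt", Body=b"Hello from Amazon S3")

# Retrieve the object again, from anywhere, given the right permissions.
response = s3.get_object(Bucket=BUCKET, Key="hello.txt")
print(response["Body"].read().decode("utf-8"))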

Amazon Elastic Block Storage (EBS)

Amazon Elastic Block Store (EBS) provides block level storage volumes for use
with Amazon EC2 instances. Amazon EBS volumes are network-attached, and
persist independently from the life of an instance. Amazon EBS provides highly
available, highly reliable, predictable storage volumes that can be attached to
a running Amazon EC2 instance and exposed as a device within the instance.
Amazon EBS is particularly suited for applications that require a database, file
system, or access to raw block level storage.

Amazon Elastic File System (EFS)

Amazon Elastic File System (EFS) is a cloud-based file storage service provided
by Amazon Web Services (AWS) designed to provide scalable, elastic, concurrent,
and encrypted file storage for use with both AWS cloud services and on-premises
resources. Amazon EFS is built to be able to grow and shrink automatically as
files are added and removed. It supports Network File System (NFS) versions 4.0
and 4.1 (NFSv4) protocols, and controls access to files through Portable Operat-
ing System Interface (POSIX) permissions. According to Amazon, use cases for
this file system service typically include content repositories, development envi-
ronments, web server farms, home directories, and big data applications. Amazon
EFS provides open-after-close consistency semantics that applications expect from
NFS. It is designed to be highly available and durable for thousands of EC2 in-
stances that are connected to the service. Amazon EFS stores each file system
object in multiple availability zones (AZs); an IT professional can access each file
system from different AZs in the region it is located. The service also supports
periodic backups from on-premises storage services to EFS for disaster recovery.
Amazon EFS includes default General Purpose performance mode and Max I/O
performance mode. An admin can opt for the latter performance mode, which
scales to higher throughput levels at the expense of latency for applications with

many attached instances. Pricing for EFS is based on the storage capacity that
the file system service uses.

Amazon Simple Storage Service Glacier

Amazon Glacier is an extremely low-cost storage service that provides secure and
durable storage for data archiving and backup. In order to keep costs low, Amazon
Glacier is optimized for data that is infrequently accessed and for which retrieval
times of several hours are suitable. With Amazon Glacier, customers can reliably
store large or small amounts of data for as little as $0.01 per gigabyte per month,
a significant savings compared to on-premises solutions. It only takes a few clicks
in the AWS Management Console to set up Amazon Glacier, and then you can
upload any amount of data you choose.

1.5.3 Compute Services


Amazon Elastic Compute Cloud (Amazon EC2)

Amazon Elastic Compute Cloud (Amazon EC2) is a web service that provides
resizable compute capacity in the cloud. It is designed to make web-scale com-
puting easier for developers and system administrators. Amazon EC2’s simple
web service interface allows you to obtain and configure capacity with minimal
friction. It provides you with complete control of your computing resources and
lets you run on Amazon’s proven computing environment. Amazon EC2 reduces
the time required to obtain and boot new server instances to minutes, allowing
you to quickly scale capacity, both up and down, as your computing requirements
change. Amazon EC2 changes the economics of computing by allowing you to pay
only for capacity that you actually use. Amazon EC2 provides developers and
system administrators the tools to build failure resilient applications and isolate
themselves from common failure scenarios.
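
A minimal boto3 sketch of requisitioning this compute capacity is shown below. The
AMI ID, key pair name, and security group ID are assumed placeholders that would
be replaced with values from your own account and Region.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Launch a single small instance. The AMI ID, key pair name, and
# security group ID below are assumed placeholders.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",            # placeholder Amazon Machine Image
    InstanceType="t2.micro",
    MinCount=1,
    MaxCount=1,
    KeyName="my-key-pair",                      # placeholder key pair
    SecurityGroupIds=["sg-0123456789abcdef0"],  # placeholder security group
)

instance_id = response["Instances"][0]["InstanceId"]
print("Launched instance:", instance_id)

# Pay only while the instance runs; terminate it when it is no longer needed.
ec2.terminate_instances(InstanceIds=[instance_id])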

Amazon EC2 Auto Scaling

Auto Scaling allows you to scale your Amazon EC2 capacity up or down automat-
ically according to the conditions you define. With Auto Scaling, you can ensure
that the number of Amazon EC2 instances you’re using increases seamlessly dur-
ing demand spikes to maintain performance and decreases automatically during
demand lulls to minimize costs. Auto Scaling is particularly well suited for appli-
cations that experience hourly, daily, or weekly variability in usage. Auto Scaling
is enabled by Amazon CloudWatch and available at no additional charge beyond
Amazon CloudWatch fees.
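
As a hedged illustration of the scaling behaviour described above, the boto3 calls
below create an Auto Scaling group from an existing launch template and attach a
target-tracking policy that keeps average CPU utilization near 50 percent. The
launch template name and subnet IDs are assumed placeholders.

import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

# Create a group of 1-4 instances from an existing launch template
# (the template name and subnet IDs are assumed placeholders).
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="demo-asg",
    LaunchTemplate={"LaunchTemplateName": "demo-template", "Version": "$Latest"},
    MinSize=1,
    MaxSize=4,
    DesiredCapacity=1,
    VPCZoneIdentifier="subnet-0123456789abcdef0,subnet-0fedcba9876543210",
)

# Scale with demand: track average CPU utilization around 50%.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="demo-asg",
    PolicyName="cpu-target-tracking",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {"PredefinedMetricType": "ASGAverageCPUUtilization"},
        "TargetValue": 50.0,
    },
)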

AWS Elastic Beanstalk

AWS Elastic Beanstalk is a Platform as a Service (PaaS) offering by Amazon


Web Services (AWS) that simplifies the deployment and management of web ap-
plications and services. It provides a platform where developers can focus on
their code, while AWS takes care of the underlying infrastructure, including com-
pute resources, load balancing, scaling, and application health monitoring. Elas-
tic Beanstalk supports multiple programming languages, application stacks, and
frameworks, making it a versatile choice for developers.

Amazon Elastic Container Service (Amazon ECS)

Amazon Elastic Container Service (ECS) is a fully managed container orches-


tration service provided by Amazon Web Services (AWS). It simplifies the de-
ployment, management, and scaling of containerized applications using popular
container technologies such as Docker. Amazon ECS is designed to help develop-
ers and organizations build and run microservices architectures efficiently, taking
advantage of the flexibility and portability that containers offer.

AWS Fargate

AWS Fargate is a serverless compute engine provided by Amazon Web Services


(AWS) for containers. It is designed to simplify the management of containerized
applications by abstracting the underlying infrastructure, enabling developers to
focus solely on their application code. With Fargate, users can run containers
without the need to provision or manage virtual machines, making it a powerful
choice for serverless container deployments.

AWS Lambda

AWS Lambda is a serverless compute service provided by Amazon Web Services


(AWS) that enables developers to run code without provisioning or managing
servers. It is designed to simplify the process of building and deploying applications
by allowing users to focus on writing code while AWS takes care of the operational
aspects, such as server management, scaling, and monitoring. AWS Lambda is
a fundamental component of serverless computing, which abstracts infrastructure
complexities and offers a pay-as-you-go model for executing code.
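
A Lambda function is essentially code plus a handler entry point. The sketch below
is a minimal Python handler of the kind that could be deployed to Lambda; the
"name" field in the incoming event is an assumed example input, not a required
format.

# lambda_function.py -- a minimal AWS Lambda handler.
# Lambda invokes lambda_handler(event, context) for each request;
# the "name" field used here is an assumed example input.
import json


def lambda_handler(event, context):
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }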

Figure 1.4: AWS Database Services

1.5.4 Database Services


Amazon Relational Database Service (Amazon RDS)

Amazon RDS is a managed service provided by Amazon Web Services (AWS)


that makes it easy to set up, operate, and scale a relational database in the cloud.
It provides cost-effective and resizable capacity while automating time-consuming
administration tasks such as hardware provisioning, database setup, patching, and
backups. It supports six different commercial and open-source database engines:
MySQL, MariaDB, Oracle, SQL Server, PostgreSQL, and Amazon Aurora. This
means that the code, applications, and tools you already use today with your
existing databases should work seamlessly with Amazon RDS. Amazon RDS allows
users to easily scale the compute resources or storage capacity associated with
their relational database instance. It also makes it easy to use replication to
enhance database availability, improve data durability, or scale beyond the capacity
constraints of a single database instance for read-heavy database workloads.
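
A hedged boto3 sketch of provisioning a small managed MySQL instance with
Amazon RDS follows. The identifier, credentials, and sizes are assumed placeholder
values, and in practice the password would come from a secrets store rather than
source code.

import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Provision a small managed MySQL database.
# Identifier, user name, and password are assumed placeholders;
# never hard-code real credentials like this.
rds.create_db_instance(
    DBInstanceIdentifier="demo-mysql-db",
    DBInstanceClass="db.t3.micro",
    Engine="mysql",
    MasterUsername="admin",
    MasterUserPassword="change-me-please-123",
    AllocatedStorage=20,   # size in GiB
    MultiAZ=False,         # set True for automatic failover across AZs
)

# Deleting the instance when finished avoids ongoing charges:
# rds.delete_db_instance(DBInstanceIdentifier="demo-mysql-db",
#                        SkipFinalSnapshot=True)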

Amazon DynamoDB

Amazon DynamoDB is a fast, fully managed NoSQL database service that makes
it simple and cost-effective to store and retrieve any amount of data, and serve
any level of request traffic.
All data items are stored on Solid State Drives (SSDs), and are replicated
across 3 Availability Zones for high availability and durability. With DynamoDB,
you can offload the administrative burden of operating and scaling a highly avail-
able distributed database cluster, while paying a low price for only what you use.
Amazon DynamoDB is designed to address the core problems of database manage-

ment, performance, scalability, and reliability. Developers can create a database
table that can store and retrieve any amount of data, and serve any level of re-
quest traffic. DynamoDB automatically spreads the data and traffic for the table
over a sufficient number of servers to handle the request capacity specified by
the customer and the amount of data stored, while maintaining consistent, fast
performance. All data items are stored on solid state drives (SSDs) and are au-
tomatically replicated across multiple Availability Zones in a Region to provide
built-in high availability and data durability.
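
A minimal boto3 sketch of creating a DynamoDB table and writing and reading a
single item follows; the table name, key attribute, and item values are assumed for
illustration.

import boto3

dynamodb = boto3.resource("dynamodb", region_name="us-east-1")

# Create a table keyed on "username" (table and attribute names are assumed).
table = dynamodb.create_table(
    TableName="demo-users",
    KeySchema=[{"AttributeName": "username", "KeyType": "HASH"}],
    AttributeDefinitions=[{"AttributeName": "username", "AttributeType": "S"}],
    BillingMode="PAY_PER_REQUEST",  # no capacity planning needed
)
table.wait_until_exists()

# Write an item and read it back.
table.put_item(Item={"username": "student1", "role": "intern"})
item = table.get_item(Key={"username": "student1"})["Item"]
print(item)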

Amazon Aurora

Amazon Aurora is a fully managed, high-performance, and cost-effective relational


database service provided by Amazon Web Services (AWS). It is designed to ad-
dress the challenges associated with traditional relational database systems, offer-
ing a more scalable, reliable, and efficient database solution for a wide range of ap-
plications. Amazon Aurora is compatible with MySQL and PostgreSQL database
engines and is known for its exceptional performance and high availability.

Amazon Redshift

Amazon Redshift is a fully managed data warehousing service provided by Ama-


zon Web Services (AWS). It is designed to handle large-scale data analytics and
reporting workloads, making it an ideal choice for organizations seeking a cost-
effective, high-performance, and scalable solution for data warehousing and busi-
ness intelligence (BI) tasks. It stores data in a columnar format, which optimizes
query performance for analytical workloads. It reduces I/O and speeds up query
execution by only reading the columns needed for a query.

1.5.5 Networking and content delivery services


Amazon Virtual Private Cloud (Amazon VPC)

Amazon Virtual Private Cloud lets you provision a logically isolated section of
the Amazon Web Services (AWS) Cloud where you can launch AWS resources in
a virtual network that you define. You have complete control over your virtual
networking environment, including selection of your own IP address range, creation
of subnets, and configuration of route tables and network gateways. You can easily
customize the network configuration for your Amazon VPC. For example, you can
create a public-facing subnet for your webservers that has access to the Internet,
and place your backend systems such as databases or application servers in a
private-facing subnet with no Internet access. You can leverage multiple layers of

Figure 1.5: Amazon Route 53

security (including security groups and network access control lists) to help control
access to Amazon EC2 instances in each subnet. Additionally, you can create a
hardware virtual private network (VPN) connection between your corporate data
center and your VPC and leverage the AWS cloud as an extension of your corporate
data center.

Amazon Route 53

Amazon Route 53 is a highly available and scalable Domain Name System (DNS)
web service. It is designed to give developers and businesses an extremely reliable
and cost-effective way to route end users to Internet applications by translating
human readable names, such as www.example.com, into the numeric IP addresses,
such as 192.0.2.1, that computers use to connect to each other.
Amazon Route 53 is designed to be fast, easy to use, and cost effective. It
answers DNS queries with low latency by using a global network of DNS servers.
Queries for your domain are automatically routed to the nearest DNS server, and
thus are answered with the best possible performance.
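
As a hedged sketch of this name-to-address translation, the boto3 call below upserts
an A record mapping a hostname to an IP address. The hosted zone ID is an assumed
placeholder for a zone you own, and the name and address reuse the documentation-
style www.example.com and 192.0.2.1 values mentioned above.

import boto3

route53 = boto3.client("route53")

# Hosted zone ID is an assumed placeholder for a zone you own.
HOSTED_ZONE_ID = "Z0123456789ABCDEFGHIJ"

# Create or update (UPSERT) an A record translating a name to an IP address.
route53.change_resource_record_sets(
    HostedZoneId=HOSTED_ZONE_ID,
    ChangeBatch={
        "Comment": "Point www at the web server",
        "Changes": [
            {
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "www.example.com",
                    "Type": "A",
                    "TTL": 300,
                    "ResourceRecords": [{"Value": "192.0.2.1"}],
                },
            }
        ],
    },
)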

AWS Direct Connect

AWS Direct Connect makes it easy to establish a dedicated network connection


from your premises to AWS. Using AWS Direct Connect, you can establish private
connectivity between AWS and your data center, office, or co-location environ-
ment, which in many cases can reduce your network costs, increase bandwidth

throughput, and provide a more consistent network experience than Internet-based
connections. AWS Direct Connect lets you establish a dedicated network connec-
tion between your network and one of the AWS Direct Connect locations. Using
industry-standard 802.1Q virtual LANs (VLANs), this dedicated connection can
be partitioned into multiple logical connections. This allows you to use the same
connection to access public resources such as objects stored in Amazon S3 using
public IP address space, and private resources such as Amazon EC2 instances run-
ning within an Amazon VPC using private IP space, while maintaining network
separation between the public and private environments. Logical connections can
be reconfigured at any time to meet your changing needs.

Elastic Load Balancing

Elastic Load Balancing automatically distributes incoming application traffic across


multiple Amazon EC2 instances. It enables you to achieve even greater fault tol-
erance in your applications, seamlessly providing the amount of load balancing
capacity needed in response to incoming application traffic. Elastic Load Bal-
ancing detects unhealthy instances and automatically reroutes traffic to healthy
instances until the unhealthy instances have been restored. Customers can enable
Elastic Load Balancing within a single Availability Zone or across multiple zones
for even more consistent application performance.

Amazon Virtual Private Network (AWS VPN)

Amazon Virtual Private Network (VPN) is a service provided by Amazon Web


Services (AWS) that allows users to establish secure and encrypted network con-
nections between their on-premises data centers, remote offices, or remote users
and AWS resources. It facilitates secure access to AWS resources over the pub-
lic internet while maintaining the confidentiality and integrity of data in transit.
AWS VPN ensures secure connectivity between on-premises networks and AWS
Virtual Private Clouds (VPCs). It uses industry-standard encryption protocols
to protect data in transit, safeguarding it from eavesdropping and unauthorized
access.

1.5.6 Security, identity, and compliance services


AWS Identity and Access Management (IAM)

AWS Identity and Access Management (IAM) is a comprehensive security service


provided by Amazon Web Services (AWS) that allows users to control access to
AWS resources and services. IAM enables organizations to manage user identities,

permissions, and authentication in a secure and scalable manner, helping to ensure
the confidentiality and integrity of their AWS resources. AWS IAM enables the
creation and management of user identities and user groups. This allows organi-
zations to define who can access AWS resources, and it simplifies access control
by grouping users with similar permissions.
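
The boto3 sketch below illustrates this pattern: it creates a group with read-only
S3 access and adds a user to it. The user and group names are assumed placeholders;
AmazonS3ReadOnlyAccess is a standard AWS managed policy.

import boto3

iam = boto3.client("iam")

# Group users with similar permissions (names are assumed placeholders).
iam.create_group(GroupName="s3-readers")
iam.attach_group_policy(
    GroupName="s3-readers",
    PolicyArn="arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess",
)

# Create a user identity and place it in the group.
iam.create_user(UserName="intern-user")
iam.add_user_to_group(GroupName="s3-readers", UserName="intern-user")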

Figure 1.6: AWS Identity Access Management

AWS Organizations

AWS Organizations is a service provided by Amazon Web Services (AWS) that


allows users to centrally manage multiple AWS accounts as an organization. It
is designed to simplify the management of AWS resources and accounts within
complex and large-scale enterprise environments. AWS Organizations enables or-
ganizations to have a unified view of their AWS accounts, optimize costs, and
establish consistent security and compliance policies. One of the key features of
AWS Organizations is consolidated billing. It allows organizations to combine mul-
tiple AWS accounts into a single payer account. This simplifies billing and cost
management, as all accounts’ charges are aggregated into one bill. AWS Organiza-
tions provides the capability to create organizational units (OUs) to logically group
AWS accounts. This aids in organizing accounts based on department, project, or
any other criteria.

Amazon Cognito

Amazon Cognito is a fully managed service provided by Amazon Web Services


(AWS) that simplifies the process of adding user identity and access management
to web and mobile applications. It is designed to handle user registration, au-
thentication, authorization, and user management, allowing developers to focus
on building their applications rather than handling complex identity management
tasks. Amazon Cognito enables developers to manage user identities and profiles,
including registration, sign-in, and account recovery.

AWS Artifact

AWS Artifact is a service provided by Amazon Web Services (AWS) that offers on-
demand access to compliance documentation, reports, and other resources related
to AWS’s security and compliance posture. It is designed to help AWS customers
meet their auditing and compliance requirements by providing a central repository
for obtaining the necessary documentation and reports. Users can access AWS
Artifact on-demand, allowing them to retrieve compliance documents and reports
whenever they are needed, rather than having to wait for periodic updates.

1.5.7 Management and governance services


AWS Management Console

The AWS Management Console is a web-based user interface provided by Amazon


Web Services (AWS) that allows users to interact with and manage their AWS
resources and services. It serves as a central control panel for AWS, enabling users
to configure, monitor, and administer their cloud infrastructure, applications, and
services. Users can view and manage a wide array of AWS resources, including
virtual machines, databases, storage, networking, security, analytics, and more, all
in one place.

AWS Config

AWS Config is a fully managed service provided by Amazon Web Services (AWS)
that allows users to assess, audit, and evaluate the configuration of their AWS re-
sources. It provides continuous monitoring and recording of resource configurations
and changes, helping organizations maintain compliance, security, and governance
while gaining insights into their AWS infrastructure. AWS Config continuously
records the configuration of AWS resources. This includes details about resource
properties, relationships, and configuration history.

Figure 1.7: Amazon CloudWatch

Amazon CloudWatch

Amazon CloudWatch is a monitoring and observability service provided by Ama-


zon Web Services (AWS) that allows users to collect, store, and analyze operational
data and log files from AWS resources and applications. It provides valuable in-
sights into the performance, health, and operational state of AWS environments,
enabling users to ensure the reliability and availability of their applications and
infrastructure. CloudWatch collects and stores metrics from AWS resources, such
as Amazon EC2 instances, Amazon RDS databases, and AWS Lambda functions.
Users can monitor these metrics in real-time and set alarms based on thresholds.
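
A minimal boto3 sketch of the metrics-and-alarms workflow described here: it
creates an alarm that fires when an EC2 instance's average CPU utilization stays
above 80 percent. The instance ID and SNS topic ARN are assumed placeholders.

import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

# Alarm when average CPU of one instance exceeds 80% for two 5-minute periods.
# The instance ID and SNS topic ARN are assumed placeholders.
cloudwatch.put_metric_alarm(
    AlarmName="high-cpu-demo",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    Statistic="Average",
    Period=300,
    EvaluationPeriods=2,
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:ops-alerts"],
)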

AWS Auto Scaling

AWS Auto Scaling is a service provided by Amazon Web Services (AWS) that
enables users to automatically adjust the capacity of their AWS resources to ac-
commodate varying workloads. It helps organizations optimize the performance,
cost, and availability of their applications by dynamically scaling resources up or
down based on demand. Users can define scaling policies that determine when and
how resources should be scaled. These policies can be based on metrics like CPU
utilization, network traffic, or custom application metrics.

Chapter 2
AWS Academy Cloud Architecting

2.1 Introduction to Cloud Architecting


Cloud architecting is the process of designing and implementing cloud-based solu-
tions that meet the needs of an organization. A cloud architect is an IT professional
who is responsible for overseeing a company’s cloud computing strategy. This in-
cludes cloud adoption plans, cloud application design, and cloud management and
monitoring. Cloud architects oversee application architecture and deployment in
cloud environments – including public cloud, private cloud, and hybrid cloud. Ad-
ditionally, they act as consultants to their organization and need to stay current
on the latest trends and issues.

2.2 Adding a Storage Layer using Amazon S3


Amazon Simple Storage Service (Amazon S3) is an object storage service that
offers industry-leading scalability, data availability, security, and performance.

2.2.1 Bucket
• In Amazon S3, a bucket is a container for objects stored in the cloud. Each
object is contained in a bucket, and bucket names must be unique across all
AWS accounts in all the AWS Regions within a partition

• In general, a bucket is a term used in data analysis and aggregation to group


data into discrete categories or ranges. Bucket aggregations categorize sets
of documents as buckets, and the type of bucket aggregation determines
whether a given document falls into a bucket or not

2.2.2 Storing Data in Amazon S3


• Amazon S3 provides virtually unlimited scalability at low cost for storing
and retrieving any amount of data, at any time, from anywhere.

• Amazon S3 offers a range of storage classes with different features, durability,


availability, and pricing.

• Amazon S3 provides a simple web service interface that can be used to store
and retrieve any amount of data, at any time, from anywhere, making it easy
to build applications that make use of cloud-native storage.

2.3 Adding a Compute Layer using Amazon EC2


Amazon Elastic Compute Cloud (Amazon EC2) is a fundamental service pro-
vided by Amazon Web Services (AWS) that allows organizations to deploy virtual
servers, known as instances, in the cloud. This service plays a pivotal role in
expanding an organization’s computing capacity, supporting a wide range of use
cases from web hosting and application development to high-performance com-
puting. The steps involved in adding a compute layer with Amazon EC2 are as
follows.

2.3.1 Choosing an AMI to Launch an EC2 Instance:


The process begins by selecting an Amazon Machine Image (AMI) that serves as
the template for the EC2 instance. AMIs are pre-configured with an operating
system and software. Users can choose from a wide range of publicly available
AMIs or create custom ones tailored to their requirements.

2.3.2 Selecting an EC2 Instance Type:


Next, users select an EC2 instance type that suits their workload. AWS provides
a variety of instance types optimized for different use cases, including general-
purpose, compute-optimized, memory-optimized, and storage-optimized instances.
The choice depends on factors like computational power, memory, and storage
capacity.

• Using User Data to Configure an EC2 Instance: User data is a feature that
allows users to customize the configuration of an EC2 instance during launch.
This is useful for tasks such as installing applications, running scripts, and
configuring settings to meet specific needs.

• Adding Storage to an Amazon EC2 Instance: Storage options are a criti-


cal consideration when configuring EC2 instances. Users can attach various
types of storage volumes, such as Amazon Elastic Block Store (EBS) vol-
umes for persistent block storage or Amazon Simple Storage Service (S3) for
scalable object storage. The choice depends on performance, durability, and
use case requirements.

2.3.3 Demo Configuring an EC2 Instance with User Data:
This step often involves a demonstration or hands-on practice to show how user
data can be utilized to configure an EC2 instance based on specific use cases. Users
can learn how to automate tasks and set up instances with desired configurations.
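
A hedged sketch of the kind of configuration demonstrated here: the boto3 call
passes a small shell script as user data so the instance installs and starts a web
server on first boot. The AMI ID is a placeholder, and the script assumes an Amazon
Linux-style image where yum and httpd are available.

import boto3

# Shell script run by the instance on first boot (assumes an
# Amazon Linux-style AMI where yum and httpd are available).
USER_DATA = """#!/bin/bash
yum update -y
yum install -y httpd
systemctl enable --now httpd
echo "<h1>Configured by user data</h1>" > /var/www/html/index.html
"""

ec2 = boto3.client("ec2", region_name="us-east-1")
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder AMI ID
    InstanceType="t2.micro",
    MinCount=1,
    MaxCount=1,
    UserData=USER_DATA,               # boto3 base64-encodes this for you
)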

2.4 Amazon EC2 Pricing Options


Amazon Elastic Compute Cloud (Amazon EC2) offers a variety of pricing options
to meet the diverse needs of AWS customers. These pricing options provide flex-
ibility and cost optimization, allowing organizations to choose the most suitable
model based on their specific workloads and budget considerations. Below are the
key Amazon EC2 pricing options:

• On-Demand Instances:
On-Demand Instances are the most flexible pricing option. Users can pay for
compute capacity by the hour or second, with no upfront costs or long-term
commitments. This option is ideal for workloads with variable or unpre-
dictable demand, allowing users to scale up or down as needed.

• Reserved Instances (RIs):


Reserved Instances provide cost savings in exchange for a one- or three-year
commitment. Users can choose between Standard RIs, which offer capacity
reservation in a specific Availability Zone, and Convertible RIs, which allow
flexibility in changing instance types. RIs are recommended for workloads
with predictable or steady-state usage.

• Spot Instances:
Spot Instances allow users to take advantage of spare EC2 capacity at sig-
nificantly reduced prices. Spot Instances are suitable for workloads with
flexible start and end times, such as batch processing, data analysis, and
testing. Users bid on available capacity, and when the spot price is lower
than the bid, the instance is provisioned.

• Dedicated Hosts:
Dedicated Hosts provide physical EC2 servers dedicated to a specific user.
They are ideal for workloads with specific licensing requirements or compli-
ance needs. Users pay for the host and have control over the placement of
instances.

• Capacity Reservations:
Capacity Reservations allow users to reserve capacity for specific instance
types in specific Availability Zones for a one- or three-year term. This en-
sures that capacity is always available, making it suitable for critical and
predictable workloads.

2.5 Adding a Database Layer


When creating an architecture that uses AWS database services, there are several
considerations to keep in mind. Here are two key points about database layer
considerations:

• When choosing a database, it is important to consider scalability, storage
requirements, the type and size of objects to be stored, and durability re-
quirements. Relational databases have strict schema rules, provide data in-
tegrity, and support SQL, while non-relational databases scale horizontally,
provide higher scalability and flexibility, and work for semi-structured and
unstructured data.

• Amazon RDS is a managed AWS database service that supports Microsoft
SQL Server, Oracle, MySQL, PostgreSQL, Aurora, and MariaDB. Amazon
RDS Multi-AZ deployments provide high availability with automatic failover.
Amazon DynamoDB is a fully managed non-relational key-value and document
NoSQL database service. DynamoDB is serverless, and provides extreme
horizontal scaling and low latency. DynamoDB global tables ensure that data
is replicated to multiple Regions.

When designing an architecture that uses AWS database services, it is important
to consider the specific needs of the application or service, and choose a database
solution that meets those needs. Amazon RDS and DynamoDB are two popular
options that provide managed database services with different features and
benefits.
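
As an example of how little infrastructure management DynamoDB requires, the
following minimal sketch creates a simple key-value table with the AWS CLI; the
table and attribute names are hypothetical:

  # Create an on-demand (pay-per-request) table keyed on OrderId
  aws dynamodb create-table --table-name Orders \
      --attribute-definitions AttributeName=OrderId,AttributeType=S \
      --key-schema AttributeName=OrderId,KeyType=HASH \
      --billing-mode PAY_PER_REQUEST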

2.6 Creating a Network Environment


In this module, we will learn how to design a network on AWS and build a Virtual
Private Cloud (VPC) with subnets. We will also learn how to connect instances
in our public and private subnets to the internet. Key points are summarized below.

2.6.1 Creating an AWS networking environment:
• Amazon VPC enables us to provision VPCs, which are logically isolated
sections of the AWS Cloud where we can launch our AWS resources. A VPC
belongs to only one Region and is divided into subnets. A subnet belongs
to one Availability Zone or Local Zone, and it is a subset of the VPC CIDR
block. We can create multiple VPCs within the same Region or in different
Regions, and in the same account or different accounts.

• When creating subnets for our VPC, we must specify an IPv4 CIDR block
for the subnet from the range of our VPC. We can optionally specify an IPv6
CIDR block for a subnet if there is an IPv6 CIDR block associated with the
VPC. Depending on the connectivity that we need, we might also need to
add gateways and route tables.

• We can launch AWS resources, such as EC2 instances, in specific subnets.
We can connect a subnet to the internet, other VPCs, and our own data
centers, and route traffic to and from our subnets using route tables.

Creating an AWS networking environment involves designing and building a VPC
with subnets, and connecting instances in our public and private subnets to the
internet. By following best practices and considering factors such as scalability,
availability, and security, we can create a robust and flexible network architecture
that meets the needs of our applications and services.
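
The following AWS CLI sketch outlines those steps for a simple VPC with one
public subnet. All IDs are placeholders returned by the earlier commands, and a
production design would add private subnets, NAT, and tighter routing.

  # Create the VPC and a subnet carved out of its CIDR block
  aws ec2 create-vpc --cidr-block 10.0.0.0/16
  aws ec2 create-subnet --vpc-id vpc-0123456789abcdef0 \
      --cidr-block 10.0.1.0/24 --availability-zone us-east-1a

  # Make the subnet public: attach an internet gateway and add a default route
  aws ec2 create-internet-gateway
  aws ec2 attach-internet-gateway --vpc-id vpc-0123456789abcdef0 \
      --internet-gateway-id igw-0123456789abcdef0
  aws ec2 create-route --route-table-id rtb-0123456789abcdef0 \
      --destination-cidr-block 0.0.0.0/0 --gateway-id igw-0123456789abcdef0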

2.7 Connecting Networks with AWS


In the context of Amazon Web Services (AWS), connecting networks is a funda-
mental aspect of building a robust and scalable cloud infrastructure. AWS pro-
vides several solutions for connecting networks, including Site-to-Site VPN, Direct
Connect, and VPC Peering. Each of these options serves specific networking re-
quirements, and understanding how to leverage them is essential for optimizing
network connectivity within AWS.

2.7.1 Connecting to Your Remote Network with AWS Site-to-Site VPN:
AWS Site-to-Site VPN allows organizations to establish secure connections be-
tween their on-premises networks and their Virtual Private Cloud (VPC) in AWS.
This solution enables encrypted communication over the public internet, extending
the corporate network into the AWS cloud. Site-to-Site VPN is ideal for hybrid
cloud architectures, remote access, and secure data transfer.

2.7.2 Connecting to Your Remote Network with AWS Direct Connect:
AWS Direct Connect provides a dedicated and private network connection be-
tween an organization’s data center or colocation facility and an AWS region.
This dedicated link offers consistent network performance, low latency, and higher
security. It is suitable for enterprises with stringent network requirements, large
data transfer volumes, and the need for a dedicated connection to AWS.

2.7.3 Connecting VPCs in AWS with VPC Peering:


VPC Peering is a feature that allows organizations to connect multiple Virtual
Private Clouds within AWS. It enables seamless and private communication between
VPCs, facilitating resource sharing and data transfer. VPC Peering is a scalable
solution for organizations that need to isolate workloads in separate VPCs while
maintaining connectivity.

These networking solutions are essential for building complex and interconnected
cloud architectures. They empower organizations to establish secure and efficient
network connections between on-premises infrastructure and AWS resources, as well
as between multiple VPCs within the AWS cloud. Choosing the right networking
approach depends on the specific needs of the organization, including network
performance, security, and scalability requirements.
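
A minimal AWS CLI sketch of setting up peering between two VPCs; the VPC,
peering-connection, and route-table IDs are placeholders:

  # Request a peering connection from VPC A to VPC B
  aws ec2 create-vpc-peering-connection --vpc-id vpc-0aaaaaaaaaaaaaaaa \
      --peer-vpc-id vpc-0bbbbbbbbbbbbbbbb

  # Accept the request (run by the owner of the peer VPC)
  aws ec2 accept-vpc-peering-connection \
      --vpc-peering-connection-id pcx-0123456789abcdef0

  # Route traffic destined for the peer VPC's CIDR block over the peering connection
  aws ec2 create-route --route-table-id rtb-0123456789abcdef0 \
      --destination-cidr-block 10.1.0.0/16 \
      --vpc-peering-connection-id pcx-0123456789abcdef0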

2.8 Securing User Application Access


The cafe needs to define the level of access that users and systems should have
across their cloud resources and put these access controls in place across their
AWS account. As the cafe grows, team members who build, maintain, or access
applications on AWS are specializing into roles, such as developer or database
administrator. However, they have not made an effort to clearly define what level
of access each user should have based on their roles and responsibilities.

2.8.1 Account users and IAM:


• AWS Identity and Access Management (IAM) is a service that allows us
to configure fine-grained access control to AWS resources. IAM enables se-
curity best practices by allowing us to grant unique security credentials to
users and groups. These credentials specify which AWS service application
programming interfaces (APIs) and resources they can access. IAM is secure
by default, and users have no access to AWS resources until permissions are
explicitly granted.

• IAM is integrated into most AWS services, and we can define access controls
from one place in the AWS Management Console, and they will take effect
throughout our AWS environment. We can use IAM to define what a prin-
cipal entity is allowed to do in an AWS environment, and AWS evaluates
these policies when a principal uses an IAM entity (user or role) to make a
request. Permissions in the policies determine whether the request is allowed
or denied. Most policies are stored in AWS as JSON documents.
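
As a minimal sketch of such a JSON policy created through the AWS CLI, the
example below grants read-only access to a single hypothetical S3 bucket; the
policy and bucket names are illustrative only:

  aws iam create-policy --policy-name CafeReadOnlyS3 \
      --policy-document '{
        "Version": "2012-10-17",
        "Statement": [{
          "Effect": "Allow",
          "Action": ["s3:GetObject", "s3:ListBucket"],
          "Resource": ["arn:aws:s3:::cafe-assets", "arn:aws:s3:::cafe-assets/*"]
        }]
      }'

The resulting policy can then be attached to an IAM user, group, or role, which
is how role-based access for the cafe's developers and database administrators
would be expressed.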

2.9 Implementing Elasticity, High Availability, and Monitoring in AWS
2.9.1 Elasticity
Elasticity refers to the ability of infrastructure to expand and contract when ca-
pacity needs change. We can acquire resources when we need them and release
resources when we do not. Here are key points about elasticity:

• Elasticity is important in cloud computing because it allows us to scale
resources up or down as needed, without having to worry about the underlying
infrastructure.

• This means we can increase the number of web servers when traffic to our
application spikes, and lower the write capacity on our database when traffic
goes down.

• AWS provides several services that support elasticity, including Amazon S3,
Amazon SQS, Amazon SNS, Amazon SES, Amazon Aurora, Amazon EC2,
Amazon ECS, AWS Fargate, Amazon EKS, and Amazon DynamoDB. Some
services require vertical scaling, while others integrate with AWS Auto Scal-
ing.

• To implement elasticity, we need to identify the workloads that have variable
load, identify each workload's expected load range, and identify the application
limitations that may limit elasticity. We can then implement elasticity using
AWS Auto Scaling or Application Auto Scaling for the aspects of our service
that are not elastic by design.
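
A minimal sketch of this approach with the AWS CLI, assuming a launch template
already exists; all names, IDs, and thresholds are placeholders:

  # Create an Auto Scaling group spread across two subnets
  aws autoscaling create-auto-scaling-group \
      --auto-scaling-group-name web-asg \
      --launch-template LaunchTemplateName=web-template,Version='$Latest' \
      --min-size 2 --max-size 6 --desired-capacity 2 \
      --vpc-zone-identifier "subnet-0123456789abcdef0,subnet-0fedcba9876543210"

  # Keep average CPU utilization near 50 percent with a target tracking policy
  aws autoscaling put-scaling-policy \
      --auto-scaling-group-name web-asg \
      --policy-name cpu-target-50 \
      --policy-type TargetTrackingScaling \
      --target-tracking-configuration '{"PredefinedMetricSpecification":{"PredefinedMetricType":"ASGAverageCPUUtilization"},"TargetValue":50.0}'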

2.9.2 High Availability:
High availability refers to the ability of a system to remain available even when
some components fail. In a highly available system, downtime is minimized as
much as possible, and minimal human intervention is required.

• A highly available system enables resiliency in a reactive architecture. A
resilient workload can recover when it is stressed by load (more requests for
service), attacks, or component failure. A resilient workload recovers from a
failure, or it rolls over to a secondary source, within an acceptable amount
of degraded-performance time.

• AWS provides several services and infrastructure to build reliable, fault-
tolerant, and highly available systems in the cloud. Fault tolerance defines
the ability for a system to remain in operation even if some of the components
used to build the system fail. Most of the higher-level services, such as
Amazon S3, SimpleDB, SQS, and ELB, have been built with fault tolerance
and high availability in mind.

• Services that provide basic infrastructure, such as EC2 and EBS, provide
specific features, such as Availability Zones, Elastic IP addresses, and snap-
shots, that a fault-tolerant and highly available system must take advantage
of and use correctly.

2.10 Automating Your Architecture


Automation is a core principle in Amazon Web Services (AWS) that empowers
organizations to streamline and optimize their cloud infrastructure. By automating
various aspects of architecture, organizations can reduce manual tasks, enhance
efficiency, and ensure consistency in their cloud environments. AWS offers a wide
array of services and tools for automation, and understanding the reasons for
automation and the methodologies involved is crucial.

• Reasons to Automate: Before diving into the technical details of automation,
it's essential to understand why automation is a critical component of cloud
architecture. Automation reduces human error, accelerates resource provi-
sioning, and enables scaling based on demand. It also helps organizations
achieve cost savings and maintain compliance through consistent configura-
tions.

2.10.1 Automating Your Infrastructure:
Automating infrastructure in AWS involves creating, provisioning, and managing
resources programmatically. Tools like AWS CloudFormation, Terraform, and
AWS Elastic Beanstalk enable organizations to define infrastructure as code (IaC),
making it possible to version, deploy, and replicate infrastructure components
reliably and consistently.

The first part delves into the process of automating infrastructure through
scripting and IaC. This includes defining resources, creating templates, and using
orchestration tools to deploy and manage infrastructure efficiently. The second
part focuses on more advanced automation techniques, such as automating the
management of infrastructure based on application needs, leveraging configuration
management tools, and handling updates and rollbacks seamlessly.

Automation is not limited to infrastructure provisioning but extends to various
other areas, including application deployment, scaling, monitoring, and security.
By implementing automation, organizations can respond to changes and demands
more rapidly, reduce the risk of errors, and free up human resources to focus on
more strategic tasks.
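
As a minimal sketch of infrastructure as code with AWS CloudFormation, the
template below declares a single S3 bucket and is deployed with one CLI command.
The stack and bucket names are hypothetical, and bucket names must be globally
unique.

  cat > template.yaml <<'EOF'
  AWSTemplateFormatVersion: '2010-09-09'
  Resources:
    AssetsBucket:
      Type: AWS::S3::Bucket
      Properties:
        BucketName: my-automation-demo-assets
  EOF

  # Create or update the stack; rerunning the command applies template changes
  aws cloudformation deploy --template-file template.yaml --stack-name automation-demo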

2.11 Caching Content in AWS


Architectural need: In this module, you will learn how to implement caching in
your networking environment. The final components of the architecture diagram,
Amazon ElastiCache and Amazon CloudFront, are introduced in this module.
This module also covers database caching with Amazon DynamoDB.

2.11.1 Overview Of Caching:


Caching is a technique used to store and reuse frequently accessed data in memory,
providing high throughput and low latency access to commonly accessed applica-
tion data. Caches can be applied and used throughout various layers of technology,
including operating systems, networking layers, web applications, and databases.
When deciding what data to cache, we should consider speed and expense, data
and access patterns, and our application’s tolerance for stale data.
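
For example, an in-memory cache layer such as the Amazon ElastiCache cluster
shown in the module's architecture can be provisioned with a single CLI call.
This is only a sketch, and the cluster ID and node type are placeholders:

  # Provision a single-node Redis cache cluster for frequently accessed data
  aws elasticache create-cache-cluster \
      --cache-cluster-id session-cache \
      --engine redis \
      --cache-node-type cache.t3.micro \
      --num-cache-nodes 1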

2.11.2 Edge Caching:


Amazon CloudFront is a global Content Delivery Network (CDN) service that
accelerates the delivery of content, including static assets and video, to users,
with no minimum usage commitments. CloudFront uses a global network that
comprises edge locations and regional edge caches to deliver content to our users.
To use CloudFront to deliver our content, we specify an origin server and
configure a CloudFront distribution.

2.11.3 Caching Web Sessions:


Sticky sessions are a feature of Elastic Load Balancing (ELB) that allows us to
continue to serve legacy applications that store session data on the instance. By
using sticky sessions, we can bind a user's session to a specific instance without
having to modify the application's code. This ensures that all requests from the
user during the session are sent to the same instance.
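
Assuming an Application Load Balancer is in front of the instances, stickiness
can be enabled as an attribute on the target group; the ARN below is a placeholder
and the cookie duration is illustrative:

  # Turn on load-balancer-generated cookie stickiness for one hour
  aws elbv2 modify-target-group-attributes \
      --target-group-arn arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/web/0123456789abcdef \
      --attributes Key=stickiness.enabled,Value=true \
                   Key=stickiness.type,Value=lb_cookie \
                   Key=stickiness.lb_cookie.duration_seconds,Value=3600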

2.12 Building Decoupled Architectures:


Decoupling is a fundamental concept in cloud architecture, and it plays a crucial
role in designing scalable, reliable, and flexible systems. By decoupling components
and services, organizations can reduce interdependencies, enhance fault tolerance,
and improve the overall efficiency of their cloud-based applications. Amazon Web
Services (AWS) offers various services and tools for building decoupled architec-
tures.

2.12.1 Decoupling Your Architecture:


The initial step in building decoupled architectures involves understanding the
concept of decoupling and its significance. Decoupling helps isolate different com-
ponents of a system, allowing them to operate independently and asynchronously.
This section explores the benefits and strategies of decoupling within AWS.

2.12.2 Decoupling with Amazon SQS:


Amazon Simple Queue Service (SQS) is a fully managed message queuing service
provided by AWS. It enables decoupling by allowing different parts of an appli-
cation to communicate asynchronously. Messages can be placed in a queue and
processed independently, making it suitable for various use cases, such as event-
driven architectures and workload decoupling.
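
A minimal AWS CLI sketch of this pattern, with a hypothetical queue name and
account ID in the queue URL:

  # Create a queue, then send and receive a message asynchronously
  aws sqs create-queue --queue-name orders-queue

  aws sqs send-message \
      --queue-url https://sqs.us-east-1.amazonaws.com/123456789012/orders-queue \
      --message-body '{"orderId": 101, "item": "espresso"}'

  aws sqs receive-message \
      --queue-url https://sqs.us-east-1.amazonaws.com/123456789012/orders-queue \
      --wait-time-seconds 10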

2.12.3 Decoupling with Amazon SNS:


Amazon Simple Notification Service (SNS) is another service for decoupling that
facilitates the publish-subscribe pattern. It allows components of an application
to send and receive notifications as messages, enhancing communication and co-
ordination. SNS is often used for fan-out scenarios and real-time communication.

2.12.4 Sending Messages Between Cloud Applications and On-Premises with Amazon MQ:
Amazon MQ is a managed message broker service that enables decoupling between
cloud applications and on-premises systems. It provides support for multiple mes-
saging protocols, including Advanced Message Queuing Protocol (AMQP) and
Message Queuing Telemetry Transport (MQTT). This service simplifies the inte-
gration of applications across different environments.

Decoupled architectures are essential for building modern, scalable applications
that can adapt to changing demands. By breaking down monolithic systems into
loosely coupled components, organizations can achieve greater resilience and
flexibility.

2.13 Planning for Disaster:


Disaster planning and recovery are integral aspects of cloud architecture to en-
sure business continuity and data protection. AWS offers various strategies and
patterns to plan for and recover from disasters effectively. Understanding these
strategies and patterns is critical for organizations to safeguard their applications
and data.

2.13.1 Disaster Planning Strategies:


Disaster planning involves a proactive approach to identifying potential risks and
developing strategies to mitigate their impact. This section explores the strate-
gies for disaster planning, which may include data backup and recovery, system
redundancy, data replication, and the creation of disaster recovery plans.

2.13.2 Disaster Recovery Patterns:


Disaster recovery patterns are predefined approaches to recovering from disasters
and outages. These patterns provide guidelines for structuring the recovery process
and ensuring minimal downtime. Common recovery patterns include backup and
restore, pilot light, warm standby, and multi-site recovery. Organizations must
assess their risk tolerance and the criticality of their applications and data to
determine the appropriate disaster planning and recovery strategies. By leveraging
AWS services and best practices, they can ensure that their systems can withstand
unexpected events and continue to operate without significant disruption.
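
As a small example of the backup and restore pattern, EBS snapshots can be taken
and copied to a second Region with the AWS CLI; the volume and snapshot IDs and
the Regions below are placeholders:

  # Take a point-in-time snapshot of a volume
  aws ec2 create-snapshot --volume-id vol-0123456789abcdef0 \
      --description "nightly backup"

  # Copy the snapshot to another Region for disaster recovery
  aws ec2 copy-snapshot --region us-west-2 --source-region us-east-1 \
      --source-snapshot-id snap-0123456789abcdef0 \
      --description "cross-Region copy for disaster recovery"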

Chapter 3
Implementation

3.1 Aim
Create a virtual machine (an Amazon EC2 instance), connect to it, and work with it.

3.2 Procedure
1. Open the EC2 dashboard by logging in to your AWS Management Console and
then opening the Amazon EC2 console.

2. Find Compute under Services in the top left corner, and then find EC2 under
Compute.

3. Now that we are in the Amazon EC2 console, click the Launch Instance button.

Figure 3.1: Amazon EC2 console

4. With Amazon EC2, we can specify the software and specifications of the
instance we want to use. On this screen, we are shown options to choose
an Amazon Machine Image (AMI), which is a template that contains the
software configuration required to launch the instance.

Figure 3.2: Amazon Machine Image

5. We will now choose an instance type. Instance types comprise varying
combinations of CPU, memory, storage, and networking capacity, so we can
choose the appropriate mix for our applications. For more information,
see Amazon EC2 Instance Types. Select the default option of t2.micro; this
instance type is covered under the free tier. Click Review and Launch at
the bottom of the page.

6. AWS security groups (SGs) are associated with EC2 instances and provide
security at the protocol and port access level. Each security group – working
much the same way as a firewall – contains a set of rules that filter traffic
coming into and out of an EC2 instance. Each security group must have a
name, allowing you to easily identify it from account menus. It’s always a
good idea to choose a descriptive name that will quickly tell you this group’s
purpose. In fact, you would be well served to define and use a consistent
convention for naming all objects in your AWS account. Security groups
exist within individual VPCs. When you create a new group, make sure
that it’s in the same VPC as the resources it’s meant to protect.

7. To connect to your virtual machine, you need a key pair. A key pair is used to
log into your Instance (just like your house key is used to enter your home).
In the popover, select Create a new key pair and name it Demo. Then click
Download Key Pair. Demo.pem will be downloaded to your computer—make
sure to save this key pair in a safe location on your computer.

8. We can now see our instance state as Running.

Figure 3.3: Amazon EC2 Instance type

9. We can log in to our EC2 instance by running the following command:
ssh -i your.pem ec2-user@PublicIPAddress
We can run this command from a system terminal, or connect to the instance
directly from the AWS Console.

10. Now we will host a website on our EC2 instance. For that, we run the
following commands in the terminal where we connected to our EC2 instance:
sudo su switches to the root user.
yum install -y httpd installs the Apache web server.
chkconfig httpd on enables the httpd service so that it starts at boot.
service httpd start starts the web server.

11. Now create simple HTML files called index.html and home.html using echo
commands:
echo "We are CSE 1" >> /var/www/html/index.html
Now change to the html directory and verify:
cd /var/www/html changes the directory.
ls lists the files.

12. In order to check the output, open a web browser and enter the required URL:
copy the public DNS of the launched instance from the AWS Console and paste
it in as the URL.

Figure 3.4: Amazon EC2 security groups

13. Now we have to terminate the EC2 instance if we no longer need it; otherwise
we get charged for the usage, so it is good practice to terminate the instance
after use.

14. Back on the EC2 console, select the box next to the instance we created. Then
click the Actions button, navigate to Instance State, and click Terminate.

15. You will be asked to confirm the termination; select Yes, Terminate. This
process can take several seconds to complete. Once the instance has been
terminated, the Instance State will change to terminated on the EC2 Con-
sole.
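
Alternatively, assuming the AWS CLI is configured, the same instance could be
terminated from the terminal; the instance ID below is a placeholder:

  aws ec2 terminate-instances --instance-ids i-0123456789abcdef0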

3.3 Result
We successfully created an EC2 instance, configured it for our requirements, hosted
a website using the instance, and finally terminated it after implementation and
demonstration.

Figure 3.5: Amazon EC2 Key pair

Figure 3.6: Amazon EC2 state

Figure 3.7: Public IP address

Figure 3.8: Public IP address

Figure 3.9: Working with EC2 Instance

Figure 3.10: Output

Chapter 4
Conclusion

The Amazon Web Services (AWS) Cloud Virtual Internship offered a compre-
hensive exploration of cloud computing and AWS services, covering fundamental
concepts to advanced topics like automation and disaster recovery. The hands-on
experience of creating and managing Elastic Compute Cloud (EC2) instances fur-
ther enriched participants’ learning, providing practical insights into provisioning
virtual servers and deploying applications. This practical dimension, particularly
with EC2 instances, enhanced interns’ abilities to leverage AWS effectively for
diverse organizational needs, solidifying their understanding of cloud architecture
and resource management.

