
AWS Practitioner


Module 1: Introduction to AWS

What is a client-server model?


A client can be a web browser or desktop application that a person interacts with to make requests to
computer servers.

A server can be services such as Amazon Elastic Compute Cloud (Amazon EC2), a type of virtual
server.

Deployment models for cloud computing


The three cloud computing deployment models:

Cloud-based deployment:
- Run all parts of the application in the cloud.
- Migrate existing applications to the cloud.
- Design and build new applications in the cloud.

On-premises deployment:
- Deploy resources by using virtualization and resource management tools.
- Increase resource utilization by using application management and virtualization technologies.

Hybrid deployment:
- Connect cloud-based resources to on-premises infrastructure.
- Integrate cloud-based resources with legacy IT applications.

Benefits of cloud computing


Trade upfront expense for variable expense
Upfront expense refers to data centers, physical servers, and other resources that you would need to
invest in before using them. Variable expense means you only pay for computing resources you
consume instead of investing heavily in data centers and servers before you know how you’re going to
use them.

Stop spending money to run and maintain data centers

Stop guessing capacity

Benefit from massive economies of scale


By using cloud computing, you can achieve a lower variable cost than you can get on your own.
Because usage from hundreds of thousands of customers can aggregate in the cloud, providers such as AWS can achieve higher economies of scale. These economies of scale translate into lower pay-as-you-go prices.

Increase speed and agility


When computing in data centers, it may take weeks to obtain new resources that you need. By comparison, cloud computing enables you to access new resources within minutes.

Amazon Elastic Compute Cloud (Amazon EC2)

Amazon EC2 provides secure, resizable compute capacity in the cloud as Amazon EC2 instances.
- You can provision and launch an Amazon EC2 instance within minutes.
- You can stop using it when you have finished running a workload.
- You pay only for the compute time you use when an instance is running, not when it is stopped or terminated.
- You can save costs by paying only for server capacity that you need or want.
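As a hypothetical illustration (not part of the course material), launching and stopping an instance with the AWS SDK for Python (boto3) might look like the following sketch; the AMI ID and instance type are placeholder values:

```python
# Hypothetical sketch using boto3 (AWS SDK for Python).
# The AMI ID and instance type are placeholder values.
import boto3

ec2 = boto3.client("ec2")

# Provision and launch an instance within minutes.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder AMI ID
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
)
instance_id = response["Instances"][0]["InstanceId"]

# Stop the instance when the workload is finished; you are not billed
# for compute time while the instance is stopped.
ec2.stop_instances(InstanceIds=[instance_id])
```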
Module 2: Compute in the Cloud

Amazon EC2 instance types


General purpose instances provide a balance of compute, memory, and networking resources.
You can use them for a variety of workloads, such as:
- application servers
- gaming servers
- backend servers for enterprise applications
- small and medium databases

Compute optimized instances are ideal for compute-bound applications that benefit from high-performance processors.
They are ideal for high-performance web servers, compute-intensive application servers, and dedicated gaming servers. You can also use compute optimized instances for batch processing workloads that require processing many transactions in a single group.

Memory optimized instances are designed to deliver fast performance for workloads that process
large datasets in memory.
Suppose that you have a workload that requires large amounts of data to be preloaded before running
an application. This scenario might be a high-performance database or a workload that involves
performing real-time processing of a large amount of unstructured data.

Accelerated computing instances use hardware accelerators, or coprocessors, to perform some functions more efficiently than is possible in software running on CPUs.
Accelerated computing instances are ideal for workloads such as graphics applications, game streaming, and application streaming.

Storage optimized instances are designed for workloads that require high, sequential read and
write access to large datasets on local storage. Examples of workloads suitable for storage optimized
instances include distributed file systems, data warehousing applications, and high-frequency online
transaction processing (OLTP) systems.

Amazon EC2 pricing


On-Demand Instances are ideal for short-term, irregular workloads that cannot be interrupted. No upfront costs or minimum contracts apply. The instances run continuously until you stop them, and you pay for only the compute time you use. Examples include developing and testing applications and running applications that have unpredictable usage patterns.

Amazon EC2 Savings Plans enable you to reduce your compute costs by committing to a
consistent amount of compute usage for a 1-year or 3-year term. This term commitment results in
savings of up to 72% over On-Demand costs.

Any usage up to the commitment is charged at the discounted Savings Plan rate (for example, $10 an
hour). Any usage beyond the commitment is charged at regular On-Demand rates.

Reserved Instances are a billing discount applied to the use of On-Demand Instances in your
account. You can purchase Standard Reserved and Convertible Reserved Instances for a 1-year or 3-
year term, and Scheduled Reserved Instances for a 1-year term. You realize greater cost savings with
the 3-year option.

Spot Instances are ideal for workloads with flexible start and end times, or that can withstand interruptions. Spot Instances use unused Amazon EC2 computing capacity and offer cost savings of up to 90% off On-Demand prices. An example is a background processing job.

Dedicated Hosts are physical servers with Amazon EC2 instance capacity that is fully dedicated to your use. Of all the Amazon EC2 options that were covered, Dedicated Hosts are the most expensive.

Scalability
Amazon EC2 Auto Scaling enables you to automatically add or remove Amazon EC2 instances in response to changing application demand. By automatically scaling your instances in and out as needed, you can maintain application availability.
Within Amazon EC2 Auto Scaling, you can use two approaches: dynamic scaling and predictive scaling.
- Dynamic scaling responds to changing demand.
- Predictive scaling automatically schedules the right number of Amazon EC2 instances based on predicted demand.
When you create an Auto Scaling group, you can set the minimum number of Amazon EC2 instances. The minimum capacity is the number of Amazon EC2 instances that launch immediately after you have created the Auto Scaling group. In this example, the Auto Scaling group has a minimum capacity of one Amazon EC2 instance.
If you do not specify the desired number of Amazon EC2 instances in an Auto Scaling group, the desired capacity defaults to your minimum capacity.
The third configuration that you can set in an Auto Scaling group is the maximum capacity. For example, you might configure the Auto Scaling group to scale out in response to increased demand, but only to a maximum of four Amazon EC2 instances.
Because Amazon EC2 Auto Scaling uses Amazon EC2 instances, you pay for only the instances you use, when you use them. You now have a cost-effective architecture that provides the best customer experience while reducing expenses.
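As a hypothetical boto3 sketch of the minimum/desired/maximum settings described above; the group name, launch template, and subnet ID are placeholders:

```python
# Hypothetical boto3 sketch; names and IDs are placeholders.
import boto3

autoscaling = boto3.client("autoscaling")

autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="web-app-asg",                         # placeholder
    LaunchTemplate={"LaunchTemplateName": "web-app-template"},  # placeholder
    MinSize=1,          # minimum capacity: one instance launches immediately
    DesiredCapacity=2,  # defaults to MinSize if omitted
    MaxSize=4,          # scale out to at most four instances
    VPCZoneIdentifier="subnet-0123456789abcdef0",               # placeholder
)
```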

Elastic Load Balancing is the AWS service that automatically distributes incoming application
traffic across multiple resources, such as Amazon EC2 instances. 
Although Elastic Load Balancing and Amazon EC2 Auto Scaling are separate services, they work
together to help ensure that applications running in Amazon EC2 can provide high performance and
availability. 

Messaging and queuing

monolithic application
Applications are made of multiple components. The components communicate with each other to transmit data, fulfill requests, and keep the application running.
Suppose that you have an application with tightly coupled components. These components might include databases, servers, the user interface, business logic, and so on. This type of architecture can be considered a monolithic application.
In this approach to application architecture, if a single component fails, other components fail, and possibly the entire application fails.
microservices
In a microservices approach, application components are loosely coupled. In this case, if a single component fails, the other components continue to work because they are communicating with each other. The loose coupling prevents the entire application from failing.
When designing applications on AWS, you can take a microservices approach with services and components that fulfill different functions. Two services facilitate application integration: Amazon Simple Notification Service (Amazon SNS) and Amazon Simple Queue Service (Amazon SQS).

Amazon Simple Notification Service (Amazon SNS) is a publish/subscribe service. Using Amazon SNS topics, a publisher publishes messages to subscribers.
In Amazon SNS, subscribers can be web servers, email addresses, AWS Lambda functions, or several other options.
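For illustration, publishing a message with boto3 might look like this sketch; the topic ARN is a placeholder:

```python
# Hypothetical boto3 sketch; the topic ARN is a placeholder.
import boto3

sns = boto3.client("sns")

# The publisher sends one message to the topic; Amazon SNS fans it out
# to every subscriber (web servers, email addresses, Lambda functions).
sns.publish(
    TopicArn="arn:aws:sns:us-east-1:123456789012:orders",  # placeholder
    Message="Order #1234 has been placed.",
)
```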

Amazon Simple Queue Service (Amazon SQS) is a message queuing service.
Using Amazon SQS, you can send, store, and receive messages between software components, without losing messages or requiring other services to be available. In Amazon SQS, an application sends messages into a queue. A user or service retrieves a message from the queue, processes it, and then deletes it from the queue.
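The send/receive/delete cycle might look like the following hypothetical boto3 sketch; the queue URL is a placeholder:

```python
# Hypothetical boto3 sketch of the send/receive/delete cycle.
import boto3

sqs = boto3.client("sqs")
queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/orders"  # placeholder

# One component sends a message into the queue...
sqs.send_message(QueueUrl=queue_url, MessageBody="Order #1234")

# ...another component retrieves it, processes it, and then deletes it
# from the queue so it is not processed twice.
messages = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=1)
for message in messages.get("Messages", []):
    print("Processing:", message["Body"])
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=message["ReceiptHandle"])
```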

Serverless computing
The term “serverless” means that your code runs on servers, but you do not need to provision or
manage these servers. With serverless computing, you can focus more on innovating new products
and features instead of maintaining servers.
Another benefit of serverless computing is the flexibility to scale serverless applications automatically. Serverless computing can adjust the applications' capacity by modifying the units of consumption, such as throughput and memory.
An AWS service for serverless computing is AWS Lambda.

AWS Lambda is a service that lets you run code without needing to provision or manage servers.

For example, a simple Lambda function might automatically resize images uploaded to the AWS Cloud. In this case, the function triggers when a new image is uploaded.
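A Lambda function is essentially a handler that AWS invokes with an event. A minimal Python sketch for the image-upload example above (the event shape assumes an S3 trigger, and the resizing step is left as a stub):

```python
# Hypothetical sketch of a Lambda handler for the image-upload example.
def lambda_handler(event, context):
    # An S3 upload event triggers the function; each record describes
    # one uploaded object.
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        print(f"New image uploaded: s3://{bucket}/{key}")
        # ...download the object, resize it, and write the result back...
    return {"status": "done"}
```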

Containers
Containers provide you with a standard way to package your application's code and dependencies into
a single object. You can also use containers for processes and workflows in which there are essential
requirements for security, reliability, and scalability.

Amazon Elastic Container Service (Amazon ECS) is a highly scalable, high-performance
container management system that enables you to run and scale containerized applications on AWS.
Amazon ECS supports Docker containers. Docker is a software platform that enables you to build,
test, and deploy applications quickly. AWS supports the use of open-source Docker Community
Edition and subscription-based Docker Enterprise Edition. With Amazon ECS, you can use API calls
to launch and stop Docker-enabled applications.

Amazon Elastic Kubernetes Service (Amazon EKS) is a fully managed service that you can use
to run Kubernetes on AWS.
Kubernetes is open-source software that enables you to deploy and manage containerized applications
at scale. A large community of volunteers maintains Kubernetes, and AWS actively works together
with the Kubernetes community. As new features and functionalities release for Kubernetes
applications, you can easily apply these updates to your applications managed by Amazon EKS.

AWS Fargate is a serverless compute engine for containers. It works with both Amazon ECS and
Amazon EKS.

Module 3: AWS global infrastructure and Reliability

An Availability Zone is a single data center or a group of data centers within a Region. Availability Zones are located tens of miles apart from each other. This is close enough to have low latency (the time between when content is requested and when it is received) between Availability Zones. However, if a disaster occurs in one part of the Region, they are distant enough to reduce the chance that multiple Availability Zones are affected.

Edge locations
An edge location is a site that Amazon CloudFront uses to store cached copies of your content closer
to your customers for faster delivery.

How to provision AWS resources
The AWS Management Console is a web-based interface for accessing and managing AWS
services. You can quickly access recently used services and search for other services by name,
keyword, or acronym. The console includes wizards and automated workflows that can simplify the
process of completing tasks.

You can also use the AWS Console mobile application to perform tasks such as monitoring resources,
viewing alarms, and accessing billing information. Multiple identities can stay logged into the AWS
Console mobile app at the same time.

The AWS Command Line Interface (AWS CLI)


To save time when making API requests, you can use the AWS Command Line Interface (AWS
CLI). AWS CLI enables you to control multiple AWS services directly from the command line within
one tool. AWS CLI is available for users on Windows, macOS, and Linux. 

By using AWS CLI, you can automate the actions that your services and applications perform through
scripts. For example, you can use commands to launch an Amazon EC2 instance, connect an Amazon
EC2 instance to a specific Auto Scaling group, and more.

Software Development Kits (SDKs)
Another option for accessing and managing AWS services is the software development kits
(SDKs). SDKs make it easier for you to use AWS services through an API designed for your
programming language or platform. SDKs enable you to use AWS services with your existing
applications or create entirely new applications that will run on AWS.

To help you get started with using SDKs, AWS provides documentation and sample code for each
supported programming language. Supported programming languages include C++, Java, .NET, and
more.

AWS Elastic Beanstalk

With AWS Elastic Beanstalk, you provide code and configuration settings, and Elastic Beanstalk deploys the resources necessary to perform the following tasks:
- Capacity adjustment
- Load balancing
- Automatic scaling
- Application health monitoring

AWS CloudFormation
With AWS CloudFormation, you can treat your infrastructure as code. This means that you can
build an environment by writing lines of code instead of using the AWS Management Console to
individually provision resources.
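As a hypothetical sketch of infrastructure as code: the boto3 call below creates a stack from an inline template that declares a single S3 bucket. The stack and bucket names are placeholders:

```python
# Hypothetical boto3 sketch; stack and bucket names are placeholders.
import json
import boto3

template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "ExampleBucket": {
            "Type": "AWS::S3::Bucket",
            "Properties": {"BucketName": "example-cf-bucket-123456"},  # placeholder
        }
    },
}

cloudformation = boto3.client("cloudformation")
cloudformation.create_stack(
    StackName="example-stack",  # placeholder
    TemplateBody=json.dumps(template),
)
```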

Module 4: Global networking


Connectivity to AWS
Amazon Virtual Private Cloud (Amazon VPC)
Amazon VPC enables you to provision an isolated section of the AWS Cloud. In this isolated section,
you can launch resources in a virtual network that you define. Within a virtual private cloud (VPC),
you can organize your resources into subnets. A subnet is a section of a VPC that can contain
resources such as Amazon EC2 instances.
Internet gateway
To allow public traffic from the internet to access your VPC, you attach an internet gateway to the
VPC.
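A hypothetical boto3 sketch tying these pieces together (all CIDR blocks are placeholders): create a VPC, carve out a subnet, and attach an internet gateway for public traffic:

```python
# Hypothetical boto3 sketch; CIDR blocks are placeholders.
import boto3

ec2 = boto3.client("ec2")

# Provision an isolated section of the AWS Cloud...
vpc = ec2.create_vpc(CidrBlock="10.0.0.0/16")
vpc_id = vpc["Vpc"]["VpcId"]

# ...carve out a subnet for resources such as EC2 instances...
ec2.create_subnet(VpcId=vpc_id, CidrBlock="10.0.1.0/24")

# ...and attach an internet gateway so public traffic can reach the VPC.
igw = ec2.create_internet_gateway()
ec2.attach_internet_gateway(
    InternetGatewayId=igw["InternetGateway"]["InternetGatewayId"],
    VpcId=vpc_id,
)
```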

Virtual private gateway

To access private resources in a VPC, you can use a virtual private gateway.

A virtual private gateway enables you to establish a virtual private network (VPN) connection between
your VPC and a private network, such as an on-premises data center or internal corporate network. A
virtual private gateway allows traffic into the VPC only if it is coming from an approved network.

AWS Direct Connect is a service that enables you to establish a dedicated private connection
between your data center and a VPC.

Subnets
A subnet is a section of a VPC in which you can group resources based on security or operational needs. Subnets can be public or private.
Public subnets contain resources that need to be accessible by the public, such as an online store’s website.
Private subnets contain resources that should be accessible only through your private network, such as a database that contains customers’ personal information and order histories.

Network access control lists (ACLs)

A network access control list (ACL) is a virtual firewall that controls inbound and outbound traffic at the subnet level.
By default, your account’s default network ACL allows all inbound and outbound traffic, but you can modify it by adding your own rules. For custom network ACLs, all inbound and outbound traffic is denied until you add rules to specify which traffic to allow.

Stateless packet filtering
Network ACLs perform stateless packet filtering. They remember nothing and check packets that cross the subnet border each way: inbound and outbound.

Security groups
A security group is a virtual firewall that controls inbound and outbound traffic for an Amazon EC2 instance.
By default, a security group denies all inbound traffic and allows all outbound traffic. You can add custom rules to configure which traffic to allow or deny.

Stateful packet filtering
Security groups perform stateful packet filtering. They remember previous decisions made for incoming packets.
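Adding a custom inbound rule might look like this hypothetical boto3 sketch; the security group ID is a placeholder:

```python
# Hypothetical boto3 sketch; the security group ID is a placeholder.
import boto3

ec2 = boto3.client("ec2")

# Allow inbound HTTPS traffic from anywhere; all other inbound traffic
# remains denied by default.
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",  # placeholder
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 443,
        "ToPort": 443,
        "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
    }],
)
```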

Domain Name System (DNS)

You can think of DNS as being the phone book of the internet. DNS resolution is the process of translating a domain name to an IP address. DNS resolution involves a customer DNS resolver communicating with a company DNS server.

Amazon Route 53
Amazon Route 53 is a DNS web service. It gives developers and businesses a reliable way to route end
users to internet applications hosted in AWS. 
Another feature of Route 53 is the ability to manage the DNS records for domain names. You can
register new domain names directly in Route 53. You can also transfer DNS records for existing
domain names managed by other domain registrars. This enables you to manage all of your domain
names within a single location.

Module 5: Storage and Database

Instance stores and Amazon Elastic Block Store (Amazon EBS)

Block-level storage
Block-level storage volumes behave like physical hard drives, with data written and read in fixed-size blocks.

Instance stores
An instance store provides temporary block-level storage for an Amazon EC2 instance. An instance
store is disk storage that is physically attached to the host computer for an EC2 instance, and
therefore has the same lifespan as the instance. When the instance is terminated, you lose any data in
the instance store.

Amazon Elastic Block Store (Amazon EBS) is a service that provides block-level storage
volumes that you can use with Amazon EC2 instances. If you stop or terminate an Amazon EC2
instance, all the data on the attached EBS volume remains available.
To create an EBS volume, you define the configuration (such as volume size and type) and provision it. After you create an EBS volume, you can attach it to an Amazon EC2 instance.
Because EBS volumes are for data that needs to persist, it’s important to back up the data. You can
take incremental backups of EBS volumes by creating Amazon EBS snapshots.

Amazon EBS snapshots

An EBS snapshot is an incremental backup. This means that the first backup taken of a volume
copies all the data. For subsequent backups, only the blocks of data that have changed since the most
recent snapshot are saved. 
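Creating a snapshot might look like this hypothetical boto3 sketch; the volume ID is a placeholder:

```python
# Hypothetical boto3 sketch; the volume ID is a placeholder.
import boto3

ec2 = boto3.client("ec2")

# Each snapshot is incremental: only blocks that changed since the most
# recent snapshot are saved.
ec2.create_snapshot(
    VolumeId="vol-0123456789abcdef0",  # placeholder
    Description="Nightly backup of the data volume",
)
```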

Object storage

In object storage, each object consists of data, metadata, and a key. The data might be an image, video, text document, or any other type of file. Metadata contains information about what the data is, how it is used, the object size, and so on. An object’s key is its unique identifier.

Amazon Simple Storage Service (Amazon S3) is a service that provides object-level storage. Amazon S3 stores data as objects in buckets.
You can upload any type of file to Amazon S3, such as images, videos, text files, and so on. For example, you might use Amazon S3 to store backup files, media files for a website, or archived documents. Amazon S3 offers unlimited storage space. The maximum file size for an object in Amazon S3 is 5 TB.
When you upload a file to Amazon S3, you can set permissions to control visibility and access to it. You can also use the Amazon S3 versioning feature to track changes to your objects over time.
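Uploading an object, and choosing one of the storage classes covered next, might look like this hypothetical boto3 sketch; the bucket name and file are placeholders:

```python
# Hypothetical boto3 sketch; bucket name and file are placeholders.
import boto3

s3 = boto3.client("s3")

# Upload an object; the key is its unique identifier in the bucket.
s3.upload_file(
    Filename="backup-2024-01-01.zip",           # placeholder local file
    Bucket="example-backup-bucket",             # placeholder bucket
    Key="backups/backup-2024-01-01.zip",
    ExtraArgs={"StorageClass": "STANDARD_IA"},  # choose a storage class
)
```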
Amazon S3 storage classes
Amazon S3 Standard
- Designed for frequently accessed data
- Stores data in a minimum of three Availability Zones
Amazon S3 Standard provides high availability for objects. This makes it a good choice for a wide
range of use cases, such as websites, content distribution, and data analytics. Amazon S3 Standard
has a higher cost than other storage classes intended for infrequently accessed data and archival
storage.

Amazon S3 Standard-Infrequent Access (S3 Standard-IA)
- Ideal for infrequently accessed data
- Similar to Amazon S3 Standard but has a lower storage price and higher retrieval price

Amazon S3 Standard-IA is ideal for data that is accessed infrequently but requires high availability when needed. Both Amazon S3 Standard and Amazon S3 Standard-IA store data in a minimum of three Availability Zones.

Amazon S3 One Zone-Infrequent Access (S3 One Zone-IA)
- Stores data in a single Availability Zone
- Has a lower storage price than Amazon S3 Standard-IA

This makes it a good storage class to consider if the following conditions apply:
- You want to save costs on storage.
- You can easily reproduce your data in the event of an Availability Zone failure.

Amazon S3 Intelligent-Tiering
- Ideal for data with unknown or changing access patterns
- Requires a small monthly monitoring and automation fee per object
In the Amazon S3 Intelligent-Tiering storage class, Amazon S3 monitors objects’ access patterns. If
you haven’t accessed an object for 30 consecutive days, Amazon S3 automatically moves it to the
infrequent access tier, Amazon S3 Standard-IA. If you access an object in the infrequent access tier,
Amazon S3 automatically moves it to the frequent access tier, Amazon S3 Standard.

Amazon S3 Glacier Instant Retrieval
- Works well for archived data that requires immediate access
- Can retrieve objects within a few milliseconds

When you decide between the options for archival storage, consider how quickly you must retrieve the archived objects. You can retrieve objects stored in the Amazon S3 Glacier Instant Retrieval storage class within milliseconds, with the same performance as Amazon S3 Standard.

Amazon S3 Glacier Flexible Retrieval
- Low-cost storage designed for data archiving
- Able to retrieve objects within a few minutes to hours

For example, you might use this storage class to store archived customer records or older photos and video files.

Amazon S3 Glacier Deep Archive
- Lowest-cost object storage class ideal for archiving
- Able to retrieve objects within 12 hours

All objects from this storage class are replicated and stored across at least three geographically dispersed Availability Zones.

Amazon S3 Outposts
- Creates S3 buckets on AWS Outposts
- Makes it easier to retrieve, store, and access data on AWS Outposts

Amazon S3 Outposts delivers object storage to your on-premises AWS Outposts environment.
Amazon S3 Outposts is designed to store data durably and redundantly across multiple devices and
servers on your Outposts. It works well for workloads with local data residency requirements that
must satisfy demanding performance needs by keeping data close to on-premises applications.

Amazon Elastic File System (Amazon EFS)

File Storage
File storage, also called file-level or file-based storage, stores data in a hierarchical structure. The data
is saved in files and folders, and presented to both the system storing it and the system retrieving it in
the same format.

Amazon Elastic File System (Amazon EFS) is a scalable file system used with AWS Cloud
services and on-premises resources. As you add and remove files, Amazon EFS grows and shrinks
automatically. It can scale on demand to petabytes without disrupting applications. 

Amazon Relational Database Service (Amazon RDS)


Amazon Relational Database Service (Amazon RDS) is a service that enables you to run relational
databases in the AWS Cloud.
Amazon RDS is a managed service that automates tasks such as hardware provisioning, database
setup, patching, and backups. With these capabilities, you can spend less time completing
administrative tasks and more time using data to innovate your applications. You can integrate
Amazon RDS with other services to fulfill your business and operational needs, such as using AWS
Lambda to query your database from a serverless application.
Amazon RDS provides a number of different security options. Many Amazon RDS database engines
offer encryption at rest (protecting data while it is stored) and encryption in transit (protecting data
while it is being sent and received).

Amazon RDS database engines

Amazon RDS is available on six database engines, which optimize for memory, performance, or input/output (I/O). Supported database engines include:
- Amazon Aurora
- PostgreSQL
- MySQL
- MariaDB
- Oracle Database
- Microsoft SQL Server

Amazon Aurora is an enterprise-class relational database. It is compatible with MySQL and
PostgreSQL relational databases. It is up to five times faster than standard MySQL databases and up
to three times faster than standard PostgreSQL databases.
Amazon Aurora helps to reduce your database costs by reducing unnecessary input/output (I/O)
operations, while ensuring that your database resources remain reliable and available. 
Consider Amazon Aurora if your workloads require high availability. It replicates six copies of your
data across three Availability Zones and continuously backs up your data to Amazon S3.

Amazon DynamoDB is a key-value database service. It delivers single-digit millisecond performance at any scale.
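Writing and reading a key-value item might look like this hypothetical boto3 sketch; the table name and key are placeholders:

```python
# Hypothetical boto3 sketch; table name and item are placeholders.
import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("Orders")  # placeholder table with key "order_id"

# Write and read a key-value item.
table.put_item(Item={"order_id": "1234", "status": "shipped"})
item = table.get_item(Key={"order_id": "1234"})["Item"]
print(item["status"])
```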

Amazon Redshift is a data warehousing service that you can use for big data analytics. It offers the
ability to collect data from many sources and helps you to understand relationships and trends across
your data.

AWS Database Migration Service (AWS DMS)


AWS Database Migration Service (AWS DMS) enables you to migrate relational databases,
nonrelational databases, and other types of data stores.
With AWS DMS, you move data between a source database and a target database. The source and
target databases can be of the same type or different types. During the migration, your source
database remains operational, reducing downtime for any applications that rely on the database. 
For example, suppose that you have a MySQL database stored on premises, in an Amazon EC2 instance, or in Amazon RDS. Consider the MySQL database to be your source database. Using AWS DMS, you could migrate your data to a target database, such as an Amazon Aurora database.

Additional database services


Amazon DocumentDB is a document database service that supports MongoDB workloads.
(MongoDB is a document database program.)

Amazon Neptune is a graph database service. 


You can use Amazon Neptune to build and run applications that work with highly connected datasets,
such as recommendation engines, fraud detection, and knowledge graphs.

Amazon Quantum Ledger Database (Amazon QLDB) is a ledger database service. 

You can use Amazon QLDB to review a complete history of all the changes that have been made to
your application data.

Amazon Managed Blockchain is a service that you can use to create and manage blockchain
networks with open-source frameworks. 

Blockchain is a distributed ledger system that lets multiple parties run transactions and share data
without a central authority.

Amazon ElastiCache is a service that adds caching layers on top of your databases to help improve
the read times of common requests. 
It supports two types of data stores: Redis and Memcached.

Amazon DynamoDB Accelerator (DAX) is an in-memory cache for DynamoDB. It helps improve response times from single-digit milliseconds to microseconds.

Module 6: Security
The AWS shared responsibility model
Customers: Security in the cloud
Customers are responsible for the security of everything that they create and put in the AWS Cloud.

When using AWS services, you, the customer, maintain complete control over your content. You are
responsible for managing security requirements for your content, including which content you choose
to store on AWS, which AWS services you use, and who has access to that content. You also control
how access rights are granted, managed, and revoked.

AWS: Security of the cloud


AWS is responsible for the security of the cloud.

AWS operates, manages, and controls the components at all layers of infrastructure. This includes
areas such as the host operating system, the virtualization layer, and even the physical security of the
data centers from which services operate.

AWS is responsible for protecting the global infrastructure that runs all of the services offered in the
AWS Cloud. This infrastructure includes AWS Regions, Availability Zones, and edge locations.

AWS manages the security of the cloud, specifically the physical infrastructure that hosts your resources, which includes:
- Physical security of data centers
- Hardware and software infrastructure
- Network infrastructure
- Virtualization infrastructure

AWS Identity and Access Management (IAM)


AWS Identity and Access Management (IAM) enables you to manage access to AWS services and
resources securely.

When you first create an AWS account, you begin with an identity known as the root user.
The root user is accessed by signing in with the email address and password that you used to create
your AWS account. You can think of the root user as being similar to a business owner: it has complete access to all the AWS services and resources in the account.

IAM users
An IAM user is an identity that you create in AWS. It represents the person or application that
interacts with AWS services and resources. It consists of a name and credentials.

By default, when you create a new IAM user in AWS, it has no permissions associated with it. To allow
the IAM user to perform specific actions in AWS, such as launching an Amazon EC2 instance or
creating an Amazon S3 bucket, you must grant the IAM user the necessary permissions.
An IAM policy is a document that allows or denies permissions to AWS services and resources.
IAM policies enable you to customize users’ levels of access to resources. For example, you can allow
users to access all of the Amazon S3 buckets within your AWS account, or only a specific bucket.
Follow the security principle of least privilege when granting permissions.
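A least-privilege policy granting read access to a single bucket might look like this hypothetical boto3 sketch; the policy name and bucket ARNs are placeholders:

```python
# Hypothetical boto3 sketch; names and ARNs are placeholders.
import json
import boto3

policy_document = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject", "s3:ListBucket"],
        "Resource": [
            "arn:aws:s3:::example-bucket",    # placeholder bucket
            "arn:aws:s3:::example-bucket/*",
        ],
    }],
}

iam = boto3.client("iam")
iam.create_policy(
    PolicyName="ExampleBucketReadOnly",  # placeholder
    PolicyDocument=json.dumps(policy_document),
)
```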

IAM groups
An IAM group is a collection of IAM users. When you assign an IAM policy to a group, all users in the
group are granted permissions specified by the policy.

IAM roles
An IAM role is an identity that you can assume to gain temporary access to permissions.

IAM roles are ideal for situations in which access to services or resources needs to be granted
temporarily, instead of long-term.
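Assuming a role to obtain temporary credentials might look like this hypothetical boto3 sketch; the role ARN is a placeholder:

```python
# Hypothetical boto3 sketch; the role ARN is a placeholder.
import boto3

sts = boto3.client("sts")

# Assume a role to obtain temporary credentials.
response = sts.assume_role(
    RoleArn="arn:aws:iam::123456789012:role/ExampleRole",  # placeholder
    RoleSessionName="temporary-access",
)
credentials = response["Credentials"]  # temporary key, secret, and token
```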

Multi-factor authentication
First, when a user signs into an AWS website, they enter their IAM user ID and password.
Next, the user is prompted for an authentication response from their AWS MFA device. This device
could be a hardware security key, a hardware device, or an MFA application on a device such as a
smartphone.

AWS Organizations
Suppose that your company has multiple AWS accounts. You can use AWS Organizations to
consolidate and manage multiple AWS accounts within a central location.

In AWS Organizations, you can centrally control permissions for the accounts in your organization by
using service control policies (SCPs). SCPs enable you to place restrictions on the AWS services,
resources, and individual API actions that users and roles in each account can access.

Organizational units
In AWS Organizations, you can group accounts into organizational units (OUs) to make it easier to
manage accounts with similar business or security requirements. When you apply a policy to an OU,
all the accounts in the OU automatically inherit the permissions specified in the policy.

Compliance
AWS Artifact is a service that provides on-demand access to AWS security and compliance reports
and select online agreements. AWS Artifact consists of two main sections: AWS Artifact Agreements
and AWS Artifact Reports.

AWS Artifact Agreements


Suppose that your company needs to sign an agreement with AWS regarding your use of certain types
of information throughout AWS services. You can do this through AWS Artifact Agreements.

AWS Artifact Reports
Next, suppose that a member of your company’s development team is building an application and
needs more information about their responsibility for complying with certain regulatory standards.
You can advise them to access this information in AWS Artifact Reports.

AWS Artifact Reports provide compliance reports from third-party auditors.

Customer Compliance Center


The Customer Compliance Center contains resources to help you learn more about AWS compliance.
In the Customer Compliance Center, you can read customer compliance stories to discover how
companies in regulated industries have solved various compliance, governance, and audit challenges.

Denial-of-service attacks
For example, an attacker might flood a website or application with excessive network traffic until the
targeted website or application becomes overloaded and is no longer able to respond. If the website or
application becomes unavailable, this denies service to users who are trying to make legitimate
requests.

Distributed denial-of-service attacks


In a distributed denial-of-service (DDoS) attack, multiple sources are used to start an attack that aims
to make a website or application unavailable. This can come from a group of attackers, or even a
single attacker. The single attacker can use multiple infected computers (also known as “bots”) to
send excessive traffic to a website or application.

AWS Shield
AWS Shield is a service that protects applications against DDoS attacks. AWS Shield provides two
levels of protection:

AWS Shield Standard automatically protects all AWS customers at no cost. It protects your AWS
resources from the most common, frequently occurring types of DDoS attacks.

As network traffic comes into your applications, AWS Shield Standard uses a variety of analysis
techniques to detect malicious traffic in real time and automatically mitigates it.

AWS Shield Advanced is a paid service that provides detailed attack diagnostics and the ability to
detect and mitigate sophisticated DDoS attacks.

It also integrates with other services such as Amazon CloudFront, Amazon Route 53, and Elastic Load
Balancing. Additionally, you can integrate AWS Shield with AWS WAF by writing custom rules to
mitigate complex DDoS attacks.

AWS Key Management Service (AWS KMS) enables you to perform encryption operations
through the use of cryptographic keys to ensure that your applications’ data is secure while in
storage (encryption at rest) and while it is transmitted, known as encryption in transit. A
cryptographic key is a random string of digits used for locking (encrypting) and unlocking
(decrypting) data. You can use AWS KMS to create, manage, and use cryptographic keys. You can also
control the use of keys across a wide range of services and in your applications.
With AWS KMS, you can choose the specific levels of access control that you need for your keys. For
example, you can specify which IAM users and roles are able to manage keys. Alternatively, you can
temporarily disable keys so that they are no longer in use by anyone. Your keys never leave AWS KMS,
and you are always in control of them.
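Encrypting and decrypting data might look like this hypothetical boto3 sketch; the key alias is a placeholder:

```python
# Hypothetical boto3 sketch; the key alias is a placeholder.
import boto3

kms = boto3.client("kms")

# Encrypt data with a KMS key; the key itself never leaves AWS KMS.
encrypted = kms.encrypt(
    KeyId="alias/example-key",  # placeholder key alias
    Plaintext=b"sensitive data",
)

# Decrypt later through the same service.
decrypted = kms.decrypt(CiphertextBlob=encrypted["CiphertextBlob"])
print(decrypted["Plaintext"])
```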

AWS WAF
AWS WAF is a web application firewall that lets you monitor network requests that come into your
web applications.

AWS WAF works together with Amazon CloudFront and an Application Load Balancer. Recall the
network access control lists that you learned about in an earlier module. AWS WAF works in a similar
way to block or allow traffic. However, it does this by using a web access control list (ACL) to
protect your AWS resources.

Amazon Inspector
Amazon Inspector helps to improve the security and compliance of applications by running
automated security assessments. It checks applications for security vulnerabilities and deviations
from security best practices, such as open access to Amazon EC2 instances and installations of
vulnerable software versions.

Amazon GuardDuty
Amazon GuardDuty is a service that provides intelligent threat detection for your AWS infrastructure
and resources. It identifies threats by continuously monitoring the network activity and account
behavior within your AWS environment.

After you have enabled GuardDuty for your AWS account, GuardDuty begins monitoring your
network and account activity. You do not have to deploy or manage any additional security software.
GuardDuty then continuously analyzes data from multiple AWS sources, including VPC Flow Logs
and DNS logs.

Module 7: AWS tools for monitoring and analytics


Amazon CloudWatch
Amazon CloudWatch is a web service that enables you to monitor and manage various metrics and
configure alarm actions based on data from those metrics.
CloudWatch uses metrics to represent the data points for your resources. AWS services send metrics
to CloudWatch. CloudWatch then uses these metrics to create graphs automatically that show how
performance has changed over time.

CloudWatch alarms
With CloudWatch, you can create alarms that automatically perform actions if the value of your
metric has gone above or below a predefined threshold.
You could create a CloudWatch alarm that automatically stops an Amazon EC2 instance when the
CPU utilization percentage has remained below a certain threshold for a specified period. When
configuring the alarm, you can specify to receive a notification whenever this alarm is triggered.
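The low-CPU alarm described above might look like this hypothetical boto3 sketch; the alarm name, instance ID, and Region in the action ARN are placeholders:

```python
# Hypothetical boto3 sketch; names, IDs, and Region are placeholders.
import boto3

cloudwatch = boto3.client("cloudwatch")

cloudwatch.put_metric_alarm(
    AlarmName="stop-idle-instance",  # placeholder
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    Statistic="Average",
    Period=300,            # evaluate in 5-minute intervals...
    EvaluationPeriods=12,  # ...over one hour
    Threshold=5.0,         # below 5% CPU utilization
    ComparisonOperator="LessThanThreshold",
    AlarmActions=["arn:aws:automate:us-east-1:ec2:stop"],  # stop the instance
)
```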

CloudWatch dashboard
The CloudWatch dashboard feature enables you to access all the metrics for your resources from a
single location. For example, you can use a CloudWatch dashboard to monitor the CPU utilization of
an Amazon EC2 instance, the total number of requests made to an Amazon S3 bucket, and more. You
can even customize separate dashboards for different business purposes, applications, or resources.

AWS CloudTrail
AWS CloudTrail records API calls for your account. The recorded information includes the identity of
the API caller, the time of the API call, the source IP address of the API caller, and more. You can
think of CloudTrail as a “trail” of breadcrumbs (or a log of actions) that someone has left behind
them.
Recall that you can use API calls to provision, manage, and configure your AWS resources. With
CloudTrail, you can view a complete history of user activity and API calls for your applications and
resources.

CloudTrail Insights
Within CloudTrail, you can also enable CloudTrail Insights. This optional feature allows CloudTrail to
automatically detect unusual API activities in your AWS account.
For example, CloudTrail Insights might detect that a higher number of Amazon EC2 instances than
usual have recently been launched in your account. You can then review the full event details to
determine which actions you need to take next.

AWS Trusted Advisor


AWS Trusted Advisor is a web service that inspects your AWS environment and provides real-time
recommendations in accordance with AWS best practices.
Trusted Advisor compares its findings to AWS best practices in five categories: cost optimization,
performance, security, fault tolerance, and service limits. For the checks in each category, Trusted
Advisor offers a list of recommended actions and additional resources to learn more about AWS best
practices.
The guidance provided by AWS Trusted Advisor can benefit your company at all stages of
deployment. For example, you can use AWS Trusted Advisor to assist you while you are creating new
workflows and developing new applications. Or you can use it while you are making ongoing
improvements to existing applications and resources.

AWS Trusted Advisor dashboard

When you access the Trusted Advisor dashboard on the AWS Management Console, you can review
completed checks for cost optimization, performance, security, fault tolerance, and service limits.
For each category:
- The green check indicates the number of items for which it detected no problems.
- The orange triangle represents the number of recommended investigations.
- The red circle represents the number of recommended actions.

Module 8: AWS pricing and support


AWS Free Tier
Always Free
These offers do not expire and are available to all AWS customers.
For example, AWS Lambda allows 1 million free requests and up to 3.2 million seconds of compute
time per month. Amazon DynamoDB allows 25 GB of free storage per month.

12 Months Free
These offers are free for 12 months following your initial sign-up date to AWS.
Examples include specific amounts of Amazon S3 Standard Storage, thresholds for monthly hours of
Amazon EC2 compute time, and amounts of Amazon CloudFront data transfer out.

Trials
Short-term free trial offers start from the date you activate a particular service. The length of each
trial might vary by number of days or the amount of usage in the service.
For example, Amazon Inspector offers a 90-day free trial. Amazon Lightsail (a service that enables
you to run virtual private servers) offers 750 free hours of usage over a 30-day period.

How AWS pricing works


Pay for what you use.
For each service, you pay for exactly the amount of resources that you actually use, without requiring
long-term contracts or complex licensing.

Pay less when you reserve.


Some services offer reservation options that provide a significant discount compared to On-Demand
Instance pricing.

For example, suppose that your company is using Amazon EC2 instances for a workload that needs to
run continuously. You might choose to run this workload on Amazon EC2 Instance Savings Plans,
because the plan allows you to save up to 72% over the equivalent On-Demand Instance capacity.

Pay less with volume-based discounts when you use more.


Some services offer tiered pricing, so the per-unit cost is incrementally lower with increased usage.

For example, the more Amazon S3 storage space you use, the less you pay for it per GB.

AWS Pricing Calculator

Suppose that your company is interested in using Amazon EC2. However, you are not yet sure which AWS Region or instance type would be the most cost-efficient for your use case. In the AWS Pricing Calculator, you can enter details such as the kind of operating system you need, memory requirements, and input/output (I/O) requirements. By using the AWS Pricing Calculator, you can review an estimated comparison of different EC2 instance types across AWS Regions.

When you have created an estimate, you can save it and generate a link to share it with others.

AWS Billing & Cost Management dashboard

Use the AWS Billing & Cost Management dashboard to pay your AWS bill, monitor your usage, and analyze and control your costs.
- Compare your current month-to-date balance with the previous month and get a forecast for the next month based on current usage.
- View month-to-date spend by service.
- View Free Tier usage by service.
- Access Cost Explorer and create budgets.
- Purchase and manage Savings Plans.
- Publish AWS Cost and Usage Reports.

Consolidated billing
The consolidated billing feature of AWS Organizations enables you to receive a single bill for all AWS
accounts in your organization. By consolidating, you can easily track the combined costs of all the
linked accounts in your organization. The default maximum number of accounts allowed for an
organization is 4, but you can contact AWS Support to increase your quota, if needed.

Another benefit of consolidated billing is the ability to share bulk discount pricing, Savings Plans, and
Reserved Instances across the accounts in your organization. For instance, one account might not
have enough monthly usage to qualify for discount pricing. However, when multiple accounts are
combined, their aggregated usage may result in a benefit that applies across all accounts in the
organization.

AWS Budgets
Suppose that you have set a budget for Amazon EC2. You want to ensure that your company’s usage of
Amazon EC2 does not exceed $200 for the month.

In AWS Budgets, you could set a custom budget to notify you when your usage has reached half of this
amount ($100). This setting would allow you to receive an alert and decide how you would like to
proceed with your continued use of Amazon EC2.
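The $200 budget with an alert at 50% might look like this hypothetical boto3 sketch; the account ID and email address are placeholders, and scoping the budget to Amazon EC2 with cost filters is omitted for brevity:

```python
# Hypothetical boto3 sketch; account ID and email are placeholders.
import boto3

budgets = boto3.client("budgets")

budgets.create_budget(
    AccountId="123456789012",  # placeholder account ID
    Budget={
        "BudgetName": "ec2-monthly-budget",
        "BudgetType": "COST",
        "TimeUnit": "MONTHLY",
        "BudgetLimit": {"Amount": "200", "Unit": "USD"},
    },
    NotificationsWithSubscribers=[{
        "Notification": {
            "NotificationType": "ACTUAL",
            "ComparisonOperator": "GREATER_THAN",
            "Threshold": 50.0,  # alert at 50% of the budget ($100)
            "ThresholdType": "PERCENTAGE",
        },
        "Subscribers": [{"SubscriptionType": "EMAIL", "Address": "admin@example.com"}],
    }],
)
```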

AWS Cost Explorer
AWS Cost Explorer is a tool that enables you to visualize, understand, and manage your AWS costs
and usage over time.

AWS Cost Explorer includes a default report of the costs and usage for your top five cost-accruing
AWS services. You can apply custom filters and groups to analyze your data. For example, you can
view resource usage at the hourly level.
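Querying monthly costs grouped by service might look like this hypothetical boto3 sketch; the dates are placeholders:

```python
# Hypothetical boto3 sketch; the dates are placeholders.
import boto3

ce = boto3.client("ce")  # Cost Explorer

response = ce.get_cost_and_usage(
    TimePeriod={"Start": "2024-01-01", "End": "2024-02-01"},  # placeholders
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "DIMENSION", "Key": "SERVICE"}],  # costs per service
)

for group in response["ResultsByTime"][0]["Groups"]:
    print(group["Keys"][0], group["Metrics"]["UnblendedCost"]["Amount"])
```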

AWS Support
Basic Support is free for all AWS customers. It includes access to whitepapers, documentation, and
support communities. With Basic Support, you can also contact AWS for billing questions and service
limit increases.
With Basic Support, you have access to a limited selection of AWS Trusted Advisor checks.
Additionally, you can use the AWS Personal Health Dashboard, a tool that provides alerts and
remediation guidance when AWS is experiencing events that may affect you.

Developer Support
Customers in the Developer Support plan have access to features such as:
- Best practice guidance
- Client-side diagnostic tools
- Building-block architecture support, which consists of guidance for how to use AWS offerings, features, and services together

Business Support
Customers in the Business Support plan have access to additional features, including:
- Use-case guidance to identify AWS offerings, features, and services that can best support your specific needs
- All AWS Trusted Advisor checks
- Limited support for third-party software, such as common operating systems and application stack components

Enterprise On-Ramp Support

In addition to all the features included in the Basic, Developer, and Business Support plans, customers with an Enterprise On-Ramp Support plan have access to:
- A pool of Technical Account Managers to provide proactive guidance and coordinate access to programs and AWS experts
- A Cost Optimization workshop (one per year)
- A Concierge support team for billing and account assistance
- Tools to monitor costs and performance through Trusted Advisor and Health API/Dashboard

The Enterprise On-Ramp Support plan also provides access to a specific set of proactive support services, which are provided by a pool of Technical Account Managers:
- Consultative review and architecture guidance (one per year)
- Infrastructure Event Management support (one per year)
- Support automation workflows
- 30 minutes or less response time for business-critical issues

Enterprise Support
In addition to all features included in the Basic, Developer, Business, and Enterprise On-Ramp support plans, customers with Enterprise Support have access to:
- A designated Technical Account Manager to provide proactive guidance and coordinate access to programs and AWS experts
- A Concierge support team for billing and account assistance
- Operations Reviews and tools to monitor health
- Training and Game Days to drive innovation
- Tools to monitor costs and performance through Trusted Advisor and Health API/Dashboard

The Enterprise plan also provides full access to proactive services, which are provided by a designated Technical Account Manager:
- Consultative review and architecture guidance
- Infrastructure Event Management support
- Cost Optimization Workshop and tools
- Support automation workflows
- 15 minutes or less response time for business-critical issues
Technical Account Manager (TAM)
The Enterprise On-Ramp and Enterprise Support plans include access to a Technical Account
Manager (TAM).

The TAM is your primary point of contact at AWS. If your company subscribes to Enterprise Support
or Enterprise On-Ramp, your TAM educates, empowers, and evolves your cloud journey across the
full range of AWS services. TAMs provide expert engineering guidance, help you design solutions that
efficiently integrate AWS services, assist with cost-effective and resilient architectures, and provide
direct access to AWS programs and a broad community of experts.

For example, suppose that you are interested in developing an application that uses several AWS
services together. Your TAM could provide insights into how to best use the services together. They
achieve this while aligning with the specific needs that your company is hoping to address through the
new application.

AWS Marketplace
AWS Marketplace is a digital catalog that includes thousands of software listings from independent
software vendors. You can use AWS Marketplace to find, test, and buy software that runs on AWS.
For each listing in AWS Marketplace, you can access detailed information on pricing options,
available support, and reviews from other AWS customers.


Module 9: Migration and Innovation in the AWS Cloud


Six core perspectives of the Cloud Adoption Framework
At the highest level, the AWS Cloud Adoption Framework (AWS CAF) organizes guidance into six
areas of focus, called Perspectives. Each Perspective addresses distinct responsibilities. The planning
process helps the right people across the organization prepare for the changes ahead.

In general, the Business, People, and Governance Perspectives focus on business capabilities,
whereas the Platform, Security, and Operations Perspectives focus on technical capabilities.
1. Business Perspective: This perspective helps organizations understand the business value of
cloud adoption, and provides guidance on how to prioritize and plan investments in cloud services.

2. People Perspective: This perspective helps organizations understand the skills and capabilities
required for successful cloud adoption, and provides guidance on how to develop and retain the right
talent.

3. Governance Perspective: This perspective helps organizations understand the governance
requirements for cloud adoption, and provides guidance on how to design and implement effective
governance policies.

4. Security Perspective: This perspective helps organizations understand the security implications
of cloud adoption, and provides guidance on how to design and implement effective security controls.

5. Platform Perspective: This perspective helps organizations understand the architectural implications of cloud adoption, and provides guidance on how to design and implement effective cloud architectures.

6. Operations Perspective: This perspective helps organizations understand the operational implications of cloud adoption, and provides guidance on how to run, monitor, and continuously improve workloads in the cloud.

6 strategies for migration


Rehosting, also known as “lift-and-shift”, involves moving applications without changes.
In the scenario of a large legacy migration, in which the company is looking to implement its
migration and scale quickly to meet a business case, the majority of applications are rehosted.

Replatforming, also known as “lift, tinker, and shift,” involves making a few cloud optimizations to
realize a tangible benefit. Optimization is achieved without changing the core architecture of the
application.

Refactoring (also known as re-architecting) involves reimagining how an application is architected and developed by using cloud-native features. Refactoring is driven by a strong business need to add features, scale, or performance that would otherwise be difficult to achieve in the application’s existing environment.

Repurchasing involves moving from a traditional license to a software-as-a-service model.


For example, a business might choose to implement the repurchasing strategy by migrating from a
customer relationship management (CRM) system to Salesforce.com.

Retaining consists of keeping applications that are critical for the business in the source
environment. This might include applications that require major refactoring before they can be
migrated, or work that can be postponed until a later time.

Retiring is the process of removing applications that are no longer needed.

AWS Snow Family members


The AWS Snow Family is a collection of physical devices that help to physically transport up to
exabytes of data into and out of AWS.
AWS Snow Family is composed of AWS Snowcone, AWS Snowball, and AWS Snowmobile.

AWS Snowcone is a small, rugged, and secure edge computing and data transfer device.
It features 2 CPUs, 4 GB of memory, and 8 TB of usable storage.

Snowball Edge Storage Optimized devices are well suited for large-scale data migrations and
recurring transfer workflows, in addition to local computing with higher capacity needs.
Snowball Edge Compute Optimized provides powerful computing resources for use cases such
as machine learning, full motion video analysis, analytics, and local computing stacks.

AWS Snowmobile is a data transport solution that you can use to transfer up to 100 petabytes of data to the AWS Cloud.

Innovate with AWS Services


Consider some of the paths you might explore in the future as you continue on your cloud journey.

Serverless applications
With AWS, serverless refers to applications that don’t require you to provision, maintain, or
administer servers. You don’t need to worry about fault tolerance or availability. AWS handles these
capabilities for you.

Artificial intelligence
AWS offers a variety of services powered by artificial intelligence (AI).
For example, you can perform the following tasks:
- Convert speech to text with Amazon Transcribe.
- Discover patterns in text with Amazon Comprehend.
- Identify potentially fraudulent online activities with Amazon Fraud Detector.
- Build voice and text chatbots with Amazon Lex.

Machine learning
Traditional machine learning (ML) development is complex, expensive, time consuming, and
error prone. AWS offers Amazon SageMaker to remove the difficult work from the process and
empower you to build, train, and deploy ML models quickly.
You can use ML to analyze data, solve complex problems, and predict outcomes before they happen.

The AWS Well-Architected Framework and benefits of the AWS Cloud

The Well-Architected Framework is a set of best practices designed to help organizations design and
implement systems that are secure, reliable, efficient, and cost-effective. It is made up of six pillars:

Operational excellence is the ability to run and monitor systems to deliver business value and to
continually improve supporting processes and procedures.

The Security pillar is the ability to protect information, systems, and assets while delivering business
value through risk assessments and mitigation strategies.

Reliability includes testing recovery procedures, scaling horizontally to increase aggregate system
availability, and automatically recovering from failure.

Performance efficiency is the ability to use computing resources efficiently to meet system
requirements and to maintain that efficiency as demand changes and technologies evolve.
Evaluating the performance efficiency of your architecture includes experimenting more often, using
serverless architectures, and designing systems to be able to go global in minutes.

Cost optimization is the ability to run systems to deliver business value at the lowest price point.
Cost optimization includes adopting a consumption model, analyzing and attributing expenditure,
and using managed services to reduce the cost of ownership.

Sustainability is the ability to continually improve sustainability impacts by reducing energy consumption and increasing efficiency across all components of a workload by maximizing the benefits from the provisioned resources and minimizing the total resources required.

Advantages of cloud computing:


Trade upfront expense for variable expense.
Upfront expenses include data centers, physical servers, and other resources that you would need to
invest in before using computing resources. Instead of investing heavily in data centers and servers
before you know how you’re going to use them, you can pay only when you consume computing
resources.

Benefit from massive economies of scale.


By using cloud computing, you can achieve a lower variable cost than you can get on your own.
Because usage from hundreds of thousands of customers aggregates in the cloud, providers such as
AWS can achieve higher economies of scale. Economies of scale translate into lower pay-as-you-go
prices.

Stop guessing capacity.


With cloud computing, you don’t have to predict how much infrastructure capacity you will need
before deploying an application.
For example, you can launch Amazon Elastic Compute Cloud (Amazon EC2) instances when needed
and pay only for the compute time you use. Instead of paying for resources that are unused or dealing
with limited capacity, you can access only the capacity that you need, and scale in or out in response
to demand.

Increase speed and agility.


The flexibility of cloud computing makes it easier for you to develop and deploy applications.
This flexibility also provides your development teams with more time to experiment and innovate.

Stop spending money running and maintaining data centers.


Computing in data centers often requires you to spend more money and time managing infrastructure and servers.
A benefit of cloud computing is the ability to focus less on these tasks and more on your applications
and customers.

Go global in minutes.
The AWS Cloud global footprint enables you to quickly deploy applications to customers around the
world, while providing them with low latency.
Exam domains
Domain                               Percentage of exam
Domain 1: Cloud Concepts             26%
Domain 2: Security and Compliance    25%
Domain 3: Technology                 33%
Domain 4: Billing and Pricing        16%
Total                                100%

You are encouraged to use these benchmarks to help you determine how to allocate your time
studying for the exam.

Exam strategies
Read the full question.
First, make sure that you read each question in full. Key words or phrases in the question, if left unread, could result in you selecting an incorrect response option.

Predict the answer before reviewing the response options.


Next, try to predict the correct answer before looking at any of the response options.
This strategy helps you to draw directly from your knowledge and skills without distraction from
incorrect response options. If your prediction turns out to be one of the response options, this can be
helpful for knowing whether you’re on the right track. However, make sure that you review all the
other response options for that question.

Eliminate incorrect response options.


Before selecting your response to a question, eliminate any options that you believe to be incorrect.
