
KT AWS


Section - Fundamentals of Cloud Computing

Module 1: Introduction to Cloud Computing

Data Center Approach:

Requirement: Your company wants to host its website.

Solution -

It is the System Administrator's responsibility to arrange everything:

i) Choose the Data Center / Hosting Provider.
ii) Send them an enquiry about your requirements.
iii) They will contact you and negotiate the price.

Whenever there is an issue, the system administrator has to run around to fix it.

1.2 Challenges with Data Center Model

Example 1:-

Due to a big promotion, server capacity needs to be increased from 4 GB RAM to 32 GB
RAM.

Data Center Provider Way:-

Buy a 32 GB RAM stick & install it in your server yourself.

Hosting Provider Way:-

Raise a support ticket and wait anywhere from 15 minutes to 12 hours for a response.
Then get the DC staff to resize your server.

Cloud Way:-

Stop the Server & change the instance size.
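For example, on AWS this resize can be scripted end to end. A minimal, illustrative boto3 sketch (the region, instance ID, and target instance type are placeholder assumptions, not values from the course):

import boto3

ec2 = boto3.client("ec2", region_name="ap-south-1")  # assumed region
instance_id = "i-0123456789abcdef0"                   # placeholder instance ID

# Stop the instance and wait until it is fully stopped
ec2.stop_instances(InstanceIds=[instance_id])
ec2.get_waiter("instance_stopped").wait(InstanceIds=[instance_id])

# Change the instance size (for example, to a 32 GB RAM instance type)
ec2.modify_instance_attribute(
    InstanceId=instance_id,
    InstanceType={"Value": "m5.2xlarge"},  # assumed target type with 32 GB RAM
)

# Start it again with the new size
ec2.start_instances(InstanceIds=[instance_id])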


1.3 Introduction to Cloud Computing

Cloud Computing is a model in which computing resources are made available as a service.

3 important characteristics of Cloud Computing:

● On-demand & self-serviced [ Launch any time without manual intervention ]
● Elasticity [ Can scale up and down any time ]
● Measured Service [ Pay for what you use ]

Module 2: Cloud Computing Models


There are 3 types of Cloud Computing models:

● Software as a Service (SaaS) [ Google Docs, Office 365 ]
● Platform as a Service (PaaS) [ Google App Engine ]
● Infrastructure as a Service (IaaS) [ AWS, Linode, DigitalOcean ]

It is very important to choose the right cloud service provider based on your use-case.

AWS is one of the most comprehensive Cloud providers.

It provides all three types of cloud models:

● Software as a Service
● Platform as a Service
● Infrastructure as a Service

But if you depend on AWS alone for everything, you can end up spending a lot more money.
Hence, many organizations opt for a multi-cloud approach.

Module 3: Architecture of Cloud Environments


Behind the scenes, the cloud is still just a data center.

Virtualization Technology plays a very important role in Cloud Computing.

Virtualization allows us to run multiple operating systems (OS) on a single piece of hardware.

There are many virtualization software options available, such as:

● VMware Workstation / vSphere
● KVM
● XEN
● VirtualBox
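As a small illustration (Linux-specific, and not part of the original course notes), hypervisors such as KVM rely on hardware virtualization support, which can be checked by looking for the vmx (Intel) or svm (AMD) CPU flags:

# Check for hardware virtualization support on Linux (illustrative sketch)
with open("/proc/cpuinfo") as f:
    cpuinfo = f.read()

if "vmx" in cpuinfo or "svm" in cpuinfo:
    print("CPU supports hardware virtualization (Intel VT-x or AMD-V)")
else:
    print("No hardware virtualization flags found")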

Module 4: On-Demand & Self Service - Characteristics of Cloud

A person can provision resources in the cloud whenever needed, without requiring any human
interaction with a service provider.

On-demand access, combined with automation, makes self-service possible in a seamless way.
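For instance, launching a server on AWS requires nothing more than an API call. A minimal, illustrative boto3 sketch (the region, AMI ID, and instance type are placeholder assumptions):

import boto3

ec2 = boto3.client("ec2", region_name="ap-south-1")  # assumed region

# Provision a server on demand, with no human interaction with the provider
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder AMI ID
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
)
print("Launched instance:", response["Instances"][0]["InstanceId"])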

Challenges with On-Demand Model:

On-Demand does not always mean that you will be able to launch instances at any given point
of time.

Even a cloud provider has limits; though they might be high, these limits can still be reached.

Module 4: Elasticity

Elasticity deals with adding and removing capacity whenever it is needed in the environment.

Capacity generally refers to processing power & memory.

It is like a rubber band: it stretches and shrinks as needed.

4.1 Overview of Scalability

Horizontal Scalability: Adding or removing instances from a pool, such as a cluster or server farm.
Vertical Scalability: Adding or removing resources (e.g. CPU, RAM) on an existing server.
4.2 Overview of Auto Scaling:

Automatically scaling servers on demand is the real deal.

It can be achieved through the Auto Scaling functionality.

Use Case Scenario:

● Whenever CPU load > 70%, scale out by adding two more servers.
● Whenever CPU load < 30%, scale in by removing two servers.

Here is a sample auto-scaling configuration for this scenario:
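This sketch uses boto3 and assumes an existing Auto Scaling group; the group name, region, and alarm periods are illustrative assumptions, not values from the course:

import boto3

autoscaling = boto3.client("autoscaling", region_name="ap-south-1")  # assumed region
cloudwatch = boto3.client("cloudwatch", region_name="ap-south-1")
asg_name = "web-asg"  # placeholder Auto Scaling group name

# Policy: add 2 servers when triggered
scale_out = autoscaling.put_scaling_policy(
    AutoScalingGroupName=asg_name,
    PolicyName="scale-out-by-2",
    PolicyType="SimpleScaling",
    AdjustmentType="ChangeInCapacity",
    ScalingAdjustment=2,
    Cooldown=300,
)

# Alarm: average CPU > 70% for two 5-minute periods triggers the policy
cloudwatch.put_metric_alarm(
    AlarmName="cpu-high",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Statistic="Average",
    Period=300,
    EvaluationPeriods=2,
    Threshold=70,
    ComparisonOperator="GreaterThanThreshold",
    Dimensions=[{"Name": "AutoScalingGroupName", "Value": asg_name}],
    AlarmActions=[scale_out["PolicyARN"]],
)

# A mirror-image policy (ScalingAdjustment=-2) and a "cpu-low" alarm with
# Threshold=30 and ComparisonOperator="LessThanThreshold" handle scale-in.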

Module 2: AWS Global Infrastructure


A single data center typically has thousands of servers.
What if the data center goes down?

Real-World Scenario:

● In Mumbai, there were very heavy rains in 2005.
● A lot of people were affected.
● A lot of Data Centers were also affected.

2.1 Availability Zone

AWS Data Centers are organized into Availability Zones (AZs).

Each Availability Zone is located at a lower-risk location.

A region contains multiple AZs, and each AZ is geographically separated from the others within the region.

Each AZ is designed to be an independent failure zone.

Thus, they are physically separated.

The AZs are interconnected with high-speed private links.

2.2 AWS Region

Each region contains two or more availability zones.

AWS has 22 regions worldwide and the number keeps increasing.


2.3 AWS Global Infrastructure

AWS currently operates 22 regions across the world with 69 Availability Zones.

A firewall is a network security system that monitors and controls the incoming and outgoing
network traffic based on predetermined security rules.

There can be both hardware-based firewalls and software-based firewalls.


At a high level, a firewall allows connections from trusted users and blocks connections from hackers.

Firewalls achieve this by letting the administrator whitelist specific ports and IP addresses.
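In AWS, security groups act as a virtual firewall for instances. A minimal, illustrative boto3 sketch that whitelists one port for one trusted IP range (the region, security group ID, and CIDR are placeholder assumptions):

import boto3

ec2 = boto3.client("ec2", region_name="ap-south-1")  # assumed region

# Allow HTTPS (port 443) only from a trusted network range
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",  # placeholder security group ID
    IpPermissions=[
        {
            "IpProtocol": "tcp",
            "FromPort": 443,
            "ToPort": 443,
            "IpRanges": [{"CidrIp": "203.0.113.0/24", "Description": "Trusted users"}],
        }
    ],
)
# Any traffic that does not match an allow rule is blocked by default.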

Module 11: Simple Storage Service (S3)


AWS S3 is an object storage service designed to store and retrieve any amount of data from anywhere.

It is designed for 99.999999999% durability and 99.99% availability.

What makes AWS S3 so powerful is the rich set of features it comes preloaded with.

Let’s understand this with a use-case:

Large Corp is a payments organization with more than 1000 servers. Being PCI DSS
compliant, they must retain their logs for 1 year. It has been found that every day
the payment servers generate 200 GB of logs. How can this use case be met, in terms
of storage capacity, in a cost-effective manner?
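To put the requirement in perspective, a quick back-of-the-envelope calculation: at 200 GB of logs per day, one year of retention means roughly 200 GB × 365 = 73,000 GB, i.e. about 73 TB of log data kept in storage, so the choice of storage class has a direct impact on cost.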
11.2 S3 Terminology

There are two important terms in AWS S3:

● Buckets
● Objects

Buckets are like “Folders” where you can store multiple files (objects)
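A minimal, illustrative boto3 sketch of both concepts (the bucket name and region are placeholder assumptions; bucket names must be globally unique):

import boto3

s3 = boto3.client("s3", region_name="ap-south-1")  # assumed region

# Create a bucket (the "folder")
s3.create_bucket(
    Bucket="largecorp-payment-logs",  # placeholder, must be globally unique
    CreateBucketConfiguration={"LocationConstraint": "ap-south-1"},
)

# Upload an object (the "file") into the bucket
s3.put_object(
    Bucket="largecorp-payment-logs",
    Key="logs/2024-01-01/payment-server-01.log",
    Body=b"sample log line\n",
)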

Module 12: S3 Storage Classes


Buckets are the containers for objects. You can have one or more buckets. For each bucket,
you can control access to it (who can create, delete, and list objects in the bucket), view
access logs for it and its objects, and choose the geographical region where Amazon S3 will
store the bucket and its contents.

S3 offers various kinds of storage classes for different use cases:-

● Standard
● Intelligent-Tiering
● Standard-IA
● One Zone-IA
● Glacier
● Glacier Deep Archive
● Reduced Redundancy

12.1 Durability vs Availability

Durability is the percentage (%), over a one-year period, that a file stored in S3
will not be lost.

Availability is the percentage (%), over a one-year period, that a file stored in S3 will
be available.

Example:-

For servers, availability is one of the key metrics, and any minute of downtime is a loss.
However, what happens if a component of the server itself fails and the server goes down?
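As a rough illustration of what these numbers mean: 99.99% availability still allows up to 0.01% of the year as downtime, i.e. 0.0001 × 365 × 24 × 60 ≈ 53 minutes of unavailability per year.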

12.2 Understanding Every S3 Storage Class

12.2.1 S3 Standard:

Amazon S3 Standard offers high durability, availability and performance for objects stored.

Designed for durability of 99.999999999% of objects ( eleven nines )


Designed for 99.99% availability over a given year

Example:-

If we have 10,000 files stored in S3 ( 11 nines durability ) then you can expect to lose one file
every ten million years.
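This follows directly from the durability figure: eleven nines of durability implies an expected annual loss probability of about 10^-11 per object, so 10,000 objects × 10^-11 ≈ 10^-7 expected losses per year, i.e. roughly one lost object every 10,000,000 (ten million) years.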

12.2.2 S3 Standard Infrequent Access:


Amazon S3 Standard - Infrequent Access is for data that is accessed less frequently but
requires rapid access when needed.

Designed for durability of 99.999999999% of objects

Designed for 99.90% availability over a given year

12.2.3 S3 Reduced Redundancy Storage (RRS)

AWS S3 Reduced Redundancy storage enables customers to reduce their costs by storing non-
critical, reproducible data at lower levels of redundancy than Amazon S3’s standard storage

Designed for durability of 99.99% of objects


Designed for 99.99% availability over a given year

12.2.4 Glacier

● AWS Glacier is meant to be for archiving and for storing long-term backups.
● It may take several hours for the object to get restored.

● 99.999999999% durability of object.
● It is much cheaper than S3 ( very low cost )

Example Use Case:-

Backups of application logs older than 1 year can be moved to Glacier.
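One common way to implement this is with an S3 lifecycle rule that transitions old objects automatically. An illustrative boto3 sketch (the region, bucket name, and prefix are placeholder assumptions):

import boto3

s3 = boto3.client("s3", region_name="ap-south-1")  # assumed region

# Transition log objects to Glacier once they are older than 365 days
s3.put_bucket_lifecycle_configuration(
    Bucket="largecorp-payment-logs",  # placeholder bucket name
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-old-logs",
                "Filter": {"Prefix": "logs/"},
                "Status": "Enabled",
                "Transitions": [{"Days": 365, "StorageClass": "GLACIER"}],
            }
        ]
    },
)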

12.2.5 AWS S3 Intelligent Tiering

The S3 Intelligent Tiering is primarily designed to optimize cost by automatically moving data to
the most cost-effective tier.

General Purpose - Standard S3
Infrequent Access - Standard IA

1 TB of data stored in Standard S3 = $22.88
1 TB of data stored in Standard IA = $12.50

Suppose an organization stores terabytes of data in S3.

It would be great if a solution automatically moved infrequently accessed data to Standard IA.

The S3 Intelligent Tiering works by storing data in one of the two access tiers:
● Frequent Access Tier (Costly)
● Infrequent Access Tier (Much cheaper)

In this storage class, objects are automatically moved between the frequent and infrequent
access tiers based on their access patterns.

Amazon S3 monitors access patterns of the objects in S3 Intelligent-Tiering and moves the
ones that have not been accessed for 30 consecutive days to the infrequent access tier.

If an object in the infrequent access tier is accessed, it is automatically moved back to the
frequent access tier.

This type of storage class is preferable for long-lived data with access patterns that are
unknown or unpredictable.

S3 Intelligent-Tiering like other storage classes is configured at the object level.
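For example, the storage class can be set when an object is uploaded. An illustrative boto3 sketch (the region, bucket name, and key are placeholder assumptions):

import boto3

s3 = boto3.client("s3", region_name="ap-south-1")  # assumed region

# Upload an object directly into the Intelligent-Tiering storage class
s3.put_object(
    Bucket="largecorp-payment-logs",          # placeholder bucket name
    Key="reports/usage-unknown-pattern.csv",  # placeholder key
    Body=b"sample data\n",
    StorageClass="INTELLIGENT_TIERING",
)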

● 1 TB of data stored in Standard S3 = $22.88
● 1 TB of data stored in Standard IA = $12.50
● 1 TB of data stored in S3 Intelligent-Tiering = $23

12.2.6 One Zone Infrequent Access (One Zone IA)

Storage classes such as S3 Standard and Standard IA store data in a minimum of 3 Availability
Zones.

Due to this, the overall cost of storage is higher with such an architecture.

S3 One Zone-IA stores data in a single AZ and costs 20% less than S3 Standard-IA.

It’s a good choice for storing secondary backup copies of on-premises data or easily recreatable
data.

Data will be lost in case the Availability Zone is destroyed.

Overview of Pricing comparison between storage classes:

● 1 TB of data stored in Standard S3 = $22.88
● 1 TB of data stored in Standard IA = $12.50
● 1 TB of data stored in One Zone IA = $10

12.2.7 Glacier Deep Archive

S3 Glacier Deep Archive is Amazon S3’s lowest-cost storage class and supports long-term
retention and digital preservation for data that may be accessed once or twice in a year.

All data stored in S3 Glacier Deep Archive can be restored within 12 hours.

In contrast, Glacier is ideal for archives where data is retrieved regularly and some of it
may be needed within minutes.

Pricing Comparison:

1 TB of data stored in Glacier: $14

1 TB of data stored in Glacier Deep Archive: $10.99
