

UNIT-1(Cloud Computing)

What is Cloud Computing


The term cloud refers to a network or the Internet. Cloud computing is a technology
that uses remote servers on the internet to store, manage, and
access data online rather than on local drives. The data can be
anything, such as files, images, documents, audio, video, and more.

We can perform the following operations using cloud computing:

o Developing new applications and services
o Storage, backup, and recovery of data
o Hosting blogs and websites
o Delivery of software on demand
o Analysis of data
o Streaming video and audio

Why Cloud Computing?


Small as well as large IT companies traditionally follow the same method
of providing IT infrastructure: every IT company needs a server room,
which is a basic requirement.

In that server room, there should be a database server, a mail server,
networking, firewalls, routers, modems, switches, QPS (Queries Per
Second, i.e., how many queries or how much load the server can
handle), a configurable system, a high-speed network, and maintenance
engineers.

To establish such IT infrastructure, we need to spend a lot of money.
To overcome these problems and to reduce the IT infrastructure
cost, cloud computing came into existence.
Characteristics of Cloud Computing
The characteristics of cloud computing are given below:

1) Agility

The cloud works in a distributed computing environment. It
shares resources among users and works very fast.

2) High availability and reliability

The availability of servers is high and more reliable because
the chances of infrastructure failure are minimal.

3) High Scalability

Cloud offers "on-demand" provisioning of resources on a large
scale, without requiring engineers to plan for peak loads.
4) Multi-Sharing

With the help of cloud computing, multiple users and
applications can work more efficiently, with cost reductions, by
sharing common infrastructure.

5) Device and Location Independence

Cloud computing enables users to access systems using a web browser regardless of their
location or the device they use, e.g., a PC or a mobile phone. As the infrastructure is off-site
(typically provided by a third party) and accessed via the Internet, users
can connect from anywhere.

6) Maintenance

Maintenance of cloud computing applications is easier, since
they do not need to be installed on each user's computer and
can be accessed from different places. This also reduces costs.

7) Low Cost

By using cloud computing, costs are reduced because an IT company
need not set up its own infrastructure and pays only for the
resources it uses.

8) Services in the pay-per-use mode

Application Programming Interfaces (APIs) are provided to
users so that they can access services on the cloud through
these APIs and pay charges according to their usage of the services.
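The pay-per-use idea can be sketched in a few lines of Python. This is only an illustration: the service names and per-unit rates below are invented, not any real provider's prices.

```python
# Hypothetical metered billing: each service's usage is counted, and the
# bill is simply usage multiplied by an assumed per-unit rate.
RATES = {"storage_gb_hour": 0.002, "api_call": 0.0001}  # illustrative rates

def bill(usage: dict) -> float:
    """Compute a pay-per-use bill from metered usage counts."""
    return round(sum(RATES[service] * amount for service, amount in usage.items()), 6)

# A month where 50 GB was stored for 720 hours and 10,000 API calls were made:
monthly = bill({"storage_gb_hour": 50 * 720, "api_call": 10_000})
```

The key point is that there is no upfront cost in this model: the customer is charged only for what the meters record.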
Advantages and Disadvantages of Cloud Computing

Advantages of Cloud Computing

As we all know, cloud computing is a trending technology. Almost
every company has moved its services to the cloud to accelerate its
growth.

Here, we are going to discuss some important advantages of Cloud
Computing-
1) Back-up and restore data

Once data is stored in the cloud, it is easier to back up and
restore that data using the cloud.

2) Improved collaboration

Cloud applications improve collaboration by allowing groups of
people to quickly and easily share information in the cloud via
shared storage.

3) Excellent accessibility

Cloud allows us to quickly and easily access stored information
anywhere in the world, at any time, using an internet connection.
An internet cloud infrastructure increases organizational productivity
and efficiency by ensuring that our data is always accessible.

4) Low maintenance cost

Cloud computing reduces both hardware and software maintenance
costs for organizations.

5) Mobility

Cloud computing allows us to easily access all cloud data via mobile.

6) Services in the pay-per-use model

Cloud computing offers Application Programming Interfaces (APIs) to
users for accessing services on the cloud, and users pay charges
according to their usage of the service.

7) Unlimited storage capacity

Cloud offers us a huge amount of storage capacity for storing our
important data, such as documents, images, audio, and video, in one
place.
8) Data security

Data security is one of the biggest advantages of cloud computing.
Cloud offers many advanced features related to security and
ensures that data is securely stored and handled.

Disadvantages of Cloud Computing

A list of the disadvantages of cloud computing is given below -

1) Internet Connectivity

As you know, in cloud computing all data (images, audio, video,
etc.) is stored on the cloud, and we access this data through the
internet connection. If you do not have good internet connectivity,
you cannot access this data, and there is no other way to access
data from the cloud.

2) Vendor lock-in

Vendor lock-in is the biggest disadvantage of cloud computing.
Organizations may face problems when transferring their services
from one vendor to another. As different vendors provide different
platforms, this can cause difficulty in moving from one cloud to
another.

3) Limited Control

As we know, cloud infrastructure is completely owned, managed,
and monitored by the service provider, so cloud users have less
control over the function and execution of services within a cloud
infrastructure.

4) Security

Although cloud service providers implement the best security
standards to store important information, before adopting cloud
technology you should be aware that you will be sending all your
organization's sensitive information to a third party, i.e., a cloud
computing service provider. While sending data to the cloud,
there is a chance that your organization's information could be
hacked.

History of Cloud Computing

Before cloud computing emerged, there was client/server
computing, which is basically centralized storage in which all the
software applications, all the data, and all the controls reside on
the server side.

If a single user wants to access specific data or run a program,
he or she needs to connect to the server, gain appropriate
access, and can then do business.

Later, distributed computing came into the picture, where all the
computers are networked together and share their resources when
needed.

On the basis of these computing models, the concept of cloud
computing emerged and was later implemented.

Around 1961, John McCarthy suggested in a speech at MIT
that computing could be sold like a utility, just like water or
electricity. It was a brilliant idea, but like all brilliant ideas, it was
ahead of its time; for the next few decades, despite interest in the
model, the technology simply was not ready for it.

But of course, time passed, the technology caught up with that idea,
and a few years later:

In 1999, Salesforce.com started delivering applications to users
through a simple website. The applications were delivered to
enterprises over the Internet, and in this way the dream of computing
sold as a utility came true.

In 2002, Amazon started Amazon Web Services, providing services
like storage, computation, and even human intelligence. However,
only with the launch of the Elastic Compute Cloud in 2006 did a
truly commercial service open to everybody exist.
In 2009, Google Apps also started to provide cloud computing
enterprise applications.

Of course, all the big players are present in the cloud computing
evolution; some came earlier, some later. In
2009, Microsoft launched Windows Azure, and companies like
Oracle and HP have since joined the game. This shows that today,
cloud computing has become mainstream.

What are the Security Risks of Cloud Computing?

Cloud computing provides various advantages, such as improved
collaboration, excellent accessibility, mobility, storage capacity, etc. But
there are also security risks in cloud computing.

Some most common Security Risks of Cloud Computing are given below-

Data Loss
Data loss is the most common security risk of cloud computing. It is
also known as data leakage. Data loss is the process in which data is
deleted or corrupted, or becomes unreadable by a user, software, or
application. In a cloud computing environment, data loss occurs when
sensitive data falls into somebody else's hands, when one or more data
elements can no longer be utilized by the data owner, when a hard disk
is not working properly, or when software is not updated.

Hacked Interfaces and Insecure APIs

As we all know, cloud computing completely depends on the Internet, so it is
compulsory to protect the interfaces and APIs that are used by external users.
APIs are the easiest way to communicate with most cloud services. In
cloud computing, a few services are available in the public domain. These
services can be accessed by third parties, so there is a chance that
they can be harmed and hacked by hackers.
Data Breach
A data breach is the process in which confidential data is viewed,
accessed, or stolen by a third party without any authorization, so the
organization's data is hacked by attackers.

Vendor lock-in
Vendor lock-in is one of the biggest security risks in cloud computing.
Organizations may face problems when transferring their services from one
vendor to another. As different vendors provide different platforms, this can
cause difficulty in moving from one cloud to another.

Increased complexity strains IT staff

Migrating, integrating, and operating cloud services is complex for IT
staff. IT staff need extra capabilities and skills to manage,
integrate, and maintain data in the cloud.

Spectre & Meltdown

Spectre and Meltdown allow programs to view and steal data that is
currently being processed on a computer. They can run on personal computers,
mobile devices, and in the cloud. They can expose passwords and personal
information, such as images, emails, and business documents, held in the
memory of other running programs.

Denial of Service (DoS) attacks

Denial of Service (DoS) attacks occur when a system receives more
traffic than the server can buffer. Mostly, DoS attackers target the web
servers of large organizations such as banks, media companies, and
government organizations. Recovering the lost data after a DoS attack
costs a great deal of time and money.
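The core symptom described above, more traffic than the server can buffer, can be sketched as a simple sliding-window check. The window size and threshold here are assumed values for illustration, not a real defense mechanism.

```python
from collections import deque

# Toy flood detection: a client is flagged when it sends more requests in a
# sliding time window than an assumed buffering limit.
WINDOW_SECONDS = 10    # illustrative window length
MAX_REQUESTS = 100     # illustrative buffering limit

def is_flooding(timestamps: deque, now: float) -> bool:
    """Drop request timestamps outside the window, then compare the count
    of remaining requests to the threshold."""
    while timestamps and now - timestamps[0] > WINDOW_SECONDS:
        timestamps.popleft()
    return len(timestamps) > MAX_REQUESTS
```

Real DoS mitigation is far more involved (distributed sources, upstream filtering), but the sketch shows why a fixed buffer is overwhelmed by a burst of requests.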

Account hijacking
Account hijacking is a serious security risk in cloud computing. It is the
process in which an individual user's or an organization's cloud account (a bank
account, e-mail account, or social media account) is stolen by hackers, who
then use the stolen account to perform unauthorized activities.
Types of Cloud
There are the following 4 types of cloud that you can deploy according to the
organization's needs-

o Public Cloud
o Private Cloud
o Hybrid Cloud
o Community Cloud

Public Cloud
Public cloud is open to all to store and access information via the Internet
using the pay-per-usage method.

In the public cloud, computing resources are managed and operated by the
Cloud Service Provider (CSP).

Example: Amazon Elastic Compute Cloud (EC2), IBM SmartCloud Enterprise,
Microsoft Windows Azure Services Platform, and Google App Engine.
Advantages of Public Cloud
There are the following advantages of Public Cloud -

o Public cloud is owned at a lower cost than the private and hybrid clouds.
o Public cloud is maintained by the cloud service provider, so you do not need
to worry about maintenance.
o Public cloud is easier to integrate. Hence, it offers better flexibility
to consumers.
o Public cloud is location independent because its services are delivered
through the internet.
o Public cloud is highly scalable as per the requirement of computing
resources.
o It is accessible by the general public, so there is no limit to the number of
users.

Disadvantages of Public Cloud

o Public cloud is less secure because resources are shared publicly.
o Performance depends upon the high-speed internet link to the cloud
provider.
o The client has no control over the data.

Private Cloud
Private cloud is also known as an internal cloud or corporate cloud. It is
used by organizations to build and manage their own data centers, either
internally or through a third party. It can be deployed using open-source
tools such as OpenStack and Eucalyptus.

Based on location and management, the National Institute of Standards and
Technology (NIST) divides private cloud into the following two parts-

o On-premise private cloud
o Outsourced private cloud

Advantages of Private Cloud

There are the following advantages of the Private Cloud -
o Private cloud provides a high level of security and privacy to the users.
o Private cloud offers better performance with improved speed and space
capacity.
o It allows the IT team to quickly allocate and deliver on-demand IT resources.
o The organization has full control over the cloud because it is managed by the
organization itself. So, there is no need for the organization to depend on
anybody.
o It is suitable for organizations that require a separate cloud for their personal
use and data security is the first priority.

Disadvantages of Private Cloud

o Skilled people are required to manage and operate cloud services.
o Skilled people are required to manage and operate cloud services.
o Private cloud is accessible within the organization, so the area of operations
is limited.
o Private cloud is not suitable for organizations that have a high user base, or
for organizations that do not have the prebuilt infrastructure or sufficient
manpower to maintain and manage the cloud.

Hybrid Cloud
Hybrid Cloud is a combination of the public cloud and the private cloud. We
can say:

Hybrid Cloud = Public Cloud + Private Cloud

Hybrid cloud is partially secure because the services which are running on
the public cloud can be accessed by anyone, while the services which are
running on a private cloud can be accessed only by the organization's users.

Example: Google Application Suite (Gmail, Google Apps, and Google Drive),
Office 365 (MS Office on the Web and One Drive), Amazon Web Services.
Advantages of Hybrid Cloud
There are the following advantages of Hybrid Cloud -

o Hybrid cloud is suitable for organizations that require more security than the
public cloud.
o Hybrid cloud helps you to deliver new products and services more quickly.
o Hybrid cloud provides an excellent way to reduce the risk.
o Hybrid cloud offers flexible resources because of the public cloud and secure
resources because of the private cloud.

Disadvantages of Hybrid Cloud

o In the hybrid cloud, the security features are not as good as in the private cloud.
o Managing a hybrid cloud is complex because it is difficult to manage more
than one type of deployment model.
o In the hybrid cloud, the reliability of the services depends on cloud service
providers.
Community Cloud
Community cloud allows systems and services to be accessible to a group of
several organizations in order to share information between those organizations
and a specific community. It is owned, managed, and operated by one or more
organizations in the community, a third party, or a combination of them.

Example: Health Care community cloud

Advantages of Community Cloud

There are the following advantages of Community Cloud -

o Community cloud is cost-effective because the whole cloud is shared
by several organizations or communities.
o Community cloud is suitable for organizations that want to have a
collaborative cloud with more security features than the public cloud.
o It provides better security than the public cloud.
o It provides a collaborative and distributed environment.
o Community cloud allows us to share cloud resources, infrastructure, and
other capabilities among various organizations.

Disadvantages of Community Cloud

o Community cloud is not a good choice for every organization.
o Community cloud is not a good choice for every organization.
o Security features are not as good as the private cloud.
o It is not suitable if there is no collaboration.
o The fixed amount of data storage and bandwidth is shared among all
community members.

Difference between public cloud, private cloud, hybrid cloud, and community
cloud -

The below table shows the difference between public cloud, private cloud,
hybrid cloud, and community cloud.

Parameter   Public Cloud       Private Cloud              Hybrid Cloud               Community Cloud

Host        Service provider   Enterprise (Third party)   Enterprise (Third party)   Community (Third party)

Users       General public     Selected users             Selected users             Community members

Access      Internet           Internet, VPN              Internet, VPN              Internet, VPN

Owner       Service provider   Enterprise                 Enterprise                 Community
Cloud Deployment Model
Today, organizations have many exciting opportunities to reimagine,
repurpose and reinvent their businesses with the cloud. The last decade has
seen even more businesses rely on it for quicker time to market, better
efficiency, and scalability. It helps them achieve long-term digital goals as
part of their digital strategy.

The answer to which cloud model is an ideal fit for a business
depends on your organization's computing and business needs. Choosing the
right one from the various types of cloud deployment models is
essential. It ensures your business is equipped with the performance,
scalability, privacy, security, compliance, and cost-effectiveness it requires. It is
important to learn and explore what different deployment types can offer
and what particular problems each can solve.

Read on as we cover the various cloud computing deployment and service
models to help discover the best choice for your business.

What Is A Cloud Deployment Model?

A cloud deployment model works as your virtual computing environment, with the
choice of model depending on how much data you want to store and who has access
to the infrastructure.

Different Types Of Cloud Computing Deployment Models

Most cloud hubs have tens of thousands of servers and storage devices to
enable fast loading. It is often possible to choose a geographic area to put
the data "closer" to users. Thus, deployment models for cloud computing are
categorized based on their location. To know which model would best fit the
requirements of your organization, let us first learn about the various types.
Public Cloud
The name says it all. It is accessible to the public. Public deployment models
in the cloud are perfect for organizations with growing and fluctuating
demands. It also makes a great choice for companies with low-security
concerns. Thus, you pay a cloud service provider for networking services,
compute virtualization & storage available on the public internet. It is also a
great delivery model for the teams with development and testing. Its
configuration and deployment are quick and easy, making it an ideal choice
for test environments.
Benefits of Public Cloud

o Minimal Investment - As a pay-per-use service, there is no large upfront cost,
and it is ideal for businesses that need quick access to resources
o No Hardware Setup - The cloud service providers fully fund the entire
Infrastructure
o No Infrastructure Management - This does not require an in-house team to
utilize the public cloud.

Limitations of Public Cloud

o Data Security and Privacy Concerns - Since it is accessible to all, it does not
fully protect against cyber-attacks and could lead to vulnerabilities.
o Reliability Issues - Since the same server network is open to a wide range of
users, it can lead to malfunction and outages
o Service/License Limitation - While there are many resources you can
exchange with tenants, there is a usage cap.

Private Cloud
Now that you understand what the public cloud could offer you, of course,
you are keen to know what a private cloud can do. Companies that look for
cost efficiency and greater control over data & resources will find the private
cloud a more suitable choice.

It means that it will be integrated with your data center and managed by
your IT team. Alternatively, you can also choose to host it externally. The
private cloud offers bigger opportunities that help meet specific
organizations' requirements when it comes to customization. It's also a wise
choice for mission-critical processes that may have frequently changing
requirements.
Benefits of Private Cloud

o Data Privacy - It is ideal for storing corporate data where only authorized
personnel gets access
o Security - Segmentation of resources within the same Infrastructure can help
with better access and higher levels of security.
o Supports Legacy Systems - This model supports legacy systems that cannot
access the public cloud.

Limitations of Private Cloud

o Higher Cost - With the benefits you get, the investment will also be larger
than the public cloud. Here, you will pay for software, hardware, and
resources for staff and training.
o Fixed Scalability - The hardware you choose will accordingly help you scale in
a certain direction
o High Maintenance - Since it is managed in-house, the maintenance costs also
increase.

Community Cloud
The community cloud operates in a way that is similar to the public cloud.
There's just one difference - it allows access to only a specific set of users
who share common objectives and use cases. This type of deployment model
of cloud computing is managed and hosted internally or by a third-party
vendor. However, you can also choose a combination of all three.

Benefits of Community Cloud

o Smaller Investment - A community cloud is much cheaper than the private &
public cloud and provides great performance
o Setup Benefits - The protocols and configuration of a community cloud must
align with industry standards, allowing customers to work much more
efficiently.

Limitations of Community Cloud

o Shared Resources - Due to restricted bandwidth and storage capacity,
community resources often pose challenges.
o Not as Popular - Since this is a recently introduced model, it is not that
popular or available across industries

Hybrid Cloud
As the name suggests, a hybrid cloud is a combination of two or more cloud
architectures. While each model in the hybrid cloud functions differently, it is
all part of the same architecture. Further, as part of this deployment of the
cloud computing model, the internal or external providers can offer
resources.

Let's understand the hybrid model better. A company with critical data will
prefer storing on a private cloud, while less sensitive data can be stored on a
public cloud. The hybrid cloud is also frequently used for 'cloud bursting':
if an organization runs an application on-premises, then under heavy
load it can burst into the public cloud.
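The cloud-bursting idea can be sketched as a routing decision: requests run on-premises until the private capacity is exhausted, and only the overflow goes to the public cloud. The capacity number is an assumption for illustration.

```python
# Toy sketch of 'cloud bursting'. ON_PREM_CAPACITY is an assumed figure for
# how many concurrent requests the private infrastructure can handle.
ON_PREM_CAPACITY = 100

def route(active_requests: int) -> str:
    """Decide where the next incoming request should run."""
    if active_requests < ON_PREM_CAPACITY:
        return "on-premises"     # normal load stays on the private side
    return "public-cloud"        # overflow bursts into the public cloud
```

This is why the hybrid model suits workloads with occasional spikes: the organization sizes its private cloud for average load and rents public capacity only during peaks.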

Limitations of Hybrid Cloud

o Complexity - It is complex to set up a hybrid cloud since it needs to
integrate two or more cloud architectures
o Specific Use Case - This model makes more sense for organizations that have
multiple use cases or need to separate critical and sensitive data

A Comparative Analysis of Cloud Deployment Models

With the below table, we have attempted to analyze the key models with an
overview of what each one can do for you:

Factors to Consider           Public            Private                         Community                           Hybrid

Setup and ease of use         Easy              Requires professional IT team   Requires professional IT team       Requires professional IT team

Data security and privacy     Low               High                            Very high                           High

Scalability and flexibility   High              High                            Fixed requirements                  High

Cost-effectiveness            Most affordable   Most expensive                  Cost is distributed among members   Cheaper than private but more expensive than public

Reliability                   Low               High                            Higher                              High

Making the Right Choice for Cloud Deployment Models

There is no one-size-fits-all approach to picking a cloud deployment model.
Instead, organizations must select a model workload by workload.
Start with assessing your needs and consider what type of support your
application requires. Here are a few factors you can consider before making
the call:

o Ease of Use - How savvy and trained are your resources? Do you have the
time and the money to put them through training?
o Cost - How much are you willing to spend on a deployment model? How much
can you pay upfront on subscription, maintenance, updates, and more?
o Scalability - What is your current activity status? Does your system run into
high demand?
o Compliance - Are there any specific laws or regulations in your country that
can impact the implementation? What are the industry standards that you
must adhere to?
o Privacy - Have you set strict privacy rules for the data you gather?
Each cloud deployment model has a unique offering and can immensely add
value to your business. For small to medium-sized businesses, a public cloud
is an ideal model to start with. And as your requirements change, you can
switch over to a different deployment model. An effective strategy can be
designed depending on your needs using the cloud deployment models
mentioned above.

Cloud delivery model

A cloud delivery model represents a specific, pre-packaged combination of IT resources
offered by a cloud provider. Three common cloud delivery models have become widely
established and formalized:
 Infrastructure-as-a-Service (IaaS)
 Platform-as-a-Service (PaaS)
 Software-as-a-Service (SaaS)

Infrastructure as a Service (IaaS)

IaaS is also known as Hardware as a Service (HaaS). It is a computing
infrastructure managed over the internet. The main advantage of using IaaS
is that it helps users avoid the cost and complexity of purchasing and
managing physical servers.
Characteristics of IaaS
There are the following characteristics of IaaS -

o Resources are available as a service
o Services are highly scalable
o Dynamic and flexible
o GUI and API-based access
o Automated administrative tasks

Example: DigitalOcean, Linode, Amazon Web Services (AWS), Microsoft
Azure, Google Compute Engine (GCE), Rackspace, and Cisco Metacloud.

Platform as a Service (PaaS)

The PaaS cloud computing platform is created for programmers to develop,
test, run, and manage applications.

Characteristics of PaaS
There are the following characteristics of PaaS -

o Accessible to various users via the same development application.
o Integrates with web services and databases.
o Builds on virtualization technology, so resources can easily be scaled up or
down as per the organization's need.
o Supports multiple languages and frameworks.
o Provides the ability to "auto-scale".

Example: AWS Elastic Beanstalk, Windows Azure, Heroku, Force.com,
Google App Engine, Apache Stratos, Magento Commerce Cloud, and
OpenShift.
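The "auto-scale" characteristic listed above can be sketched as a simple control rule: add instances when load is high, remove them when load is low. The thresholds and instance bounds here are illustrative assumptions, not any platform's real policy.

```python
# Hedged sketch of an auto-scaling decision. The 80%/20% thresholds and the
# 1..20 instance bounds are assumed values for illustration.
MIN_INSTANCES, MAX_INSTANCES = 1, 20

def scale(instances: int, cpu_utilization: float) -> int:
    """Return the new instance count given the observed average CPU load."""
    if cpu_utilization > 0.80 and instances < MAX_INSTANCES:
        return instances + 1      # scale out under heavy load
    if cpu_utilization < 0.20 and instances > MIN_INSTANCES:
        return instances - 1      # scale in when mostly idle
    return instances              # otherwise hold steady
```

A real PaaS evaluates a rule like this continuously against metrics it collects, which is what lets resources "easily be scaled up or down as per the organization's need".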

Software as a Service (SaaS)

SaaS is also known as "on-demand software". It is software in which the
applications are hosted by a cloud service provider. Users can access these
applications with the help of an internet connection and a web browser.
Characteristics of SaaS
There are the following characteristics of SaaS -

o Managed from a central location
o Hosted on a remote server
o Accessible over the internet
o Users are not responsible for hardware and software updates; updates are
applied automatically.
o The services are purchased on a pay-as-per-use basis

Example: BigCommerce, Google Apps, Salesforce, Dropbox, ZenDesk, Cisco
WebEx, Slack, and GoToMeeting.

Virtualization in Cloud Computing

Virtualization is the "creation of a virtual (rather than actual) version of something, such as
a server, a desktop, a storage device, an operating system or network resources".

In other words, virtualization is a technique that allows sharing a single physical instance of
a resource or an application among multiple customers and organizations. It does this by
assigning a logical name to a physical resource and providing a pointer to that physical
resource when demanded.
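The "logical name plus pointer" indirection described above can be sketched in a few lines. The disk names and locations are made up for illustration; real virtualization layers do this mapping inside the hypervisor or storage controller.

```python
# Minimal sketch of virtualization's indirection: customers only ever use a
# logical name; the layer resolves it to a physical resource on demand.
physical = {"disk-07": "array-B/slot-3"}                # physical resources (assumed names)
logical_to_physical = {"customer-a/data": "disk-07"}    # logical name -> pointer

def resolve(logical_name: str) -> str:
    """Follow the pointer from a logical name to the physical location."""
    return physical[logical_to_physical[logical_name]]
```

Because customers see only logical names, the provider can move or share the underlying physical resource without the customer noticing, which is exactly what makes multi-tenant sharing possible.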

What is the concept behind Virtualization?

Creation of a virtual machine over an existing operating system and hardware is known as
Hardware Virtualization. A virtual machine provides an environment that is logically separated
from the underlying hardware.

The machine on which the virtual machine is created is known as the Host Machine, and
the virtual machine is referred to as the Guest Machine.

Types of Virtualization:
1. Hardware Virtualization.
2. Operating system Virtualization.
3. Server Virtualization.
4. Storage Virtualization.
1) Hardware Virtualization:
When the virtual machine software or virtual machine manager (VMM) is
directly installed on the hardware system, this is known as hardware
virtualization.

The main job of the hypervisor is to control and monitor the processor,
memory, and other hardware resources.

After virtualizing the hardware system, we can install different operating
systems on it and run different applications on those OSs.

Usage:

Hardware virtualization is mainly done for server platforms, because
controlling virtual machines is much easier than controlling a physical server.

2) Operating System Virtualization:

When the virtual machine software or virtual machine manager (VMM) is
installed on the host operating system instead of directly on the hardware
system, this is known as operating system virtualization.

Usage:

Operating system virtualization is mainly used for testing applications on
different OS platforms.

3) Server Virtualization:
When the virtual machine software or virtual machine manager (VMM) is
directly installed on the server system, this is known as server virtualization.

Usage:

Server virtualization is done because a single physical server can be divided
into multiple servers on demand and for load balancing.

4) Storage Virtualization:
Storage virtualization is the process of grouping the physical storage from
multiple network storage devices so that it looks like a single storage device.

Storage virtualization is also implemented by using software applications.

Usage:

Storage virtualization is mainly done for back-up and recovery purposes.
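The grouping described above can be sketched as a pool that presents several physical devices as one logical device. The device names and capacities are illustrative assumptions.

```python
# Sketch of storage virtualization: multiple physical devices are grouped so
# they appear as a single storage device whose capacity is the sum of its parts.
class VirtualStore:
    def __init__(self, devices: dict):
        self.devices = devices  # device name -> capacity in GB (assumed values)

    def capacity(self) -> int:
        """Total capacity presented to users as one logical device."""
        return sum(self.devices.values())

# Three physical devices presented as one 3000 GB logical store:
pool = VirtualStore({"nas-1": 500, "nas-2": 500, "san-1": 2000})
```

Because users see only the pooled device, data can be backed up to, or recovered from, any physical device in the pool without changing how it is addressed.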

How does virtualization work in cloud computing?

Virtualization plays a very important role in cloud computing technology.
Normally, in cloud computing, users share the data present in the cloud,
such as applications, but with the help of virtualization, users actually
share the infrastructure.

The main use of virtualization technology is to provide applications
in their standard versions to cloud users. Suppose the next version of an
application is released; then the cloud provider has to provide the latest
version to its cloud users, and practically this is not feasible because it is
more expensive.

To overcome this problem, we basically use virtualization technology. With
virtualization, all the servers and software applications required by other
cloud providers are maintained by third parties, and the cloud providers pay
them on a monthly or annual basis.

Conclusion
Mainly, virtualization means running multiple operating systems on a single
machine while sharing all the hardware resources. It helps us provide a pool
of IT resources that we can share in order to gain benefits in the business.
Hardware Virtualization
Previously, there was a "one-to-one relationship" between physical servers and
operating systems. Only low capacities of CPU, memory, and networking were
available. So, under this model, the costs of doing business increased. The
physical space, the amount of power, and the hardware required meant that
costs kept adding up.

The hypervisor manages the shared physical resources of the hardware
between the guest operating systems and the host operating system. The
physical resources become abstracted versions in standard formats
regardless of the hardware platform. The abstracted hardware is represented
as if it were actual hardware, and the virtualized operating system treats
these resources as physical entities.

Virtualization means abstraction. Hardware virtualization is


accomplished by abstracting the physical hardware layer by use of a
hypervisor or VMM (Virtual Machine Monitor).

When the virtual machine software, virtual machine manager (VMM), or hypervisor is installed directly on the hardware system, it is known as hardware virtualization.
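The resource sharing described above can be sketched as a toy model. All the numbers and names below are invented for illustration; a real hypervisor also schedules, overcommits, and isolates, none of which is shown here:

```python
# Toy model of a hypervisor carving physical resources into
# abstracted allocations for guest VMs (illustrative only).
class Hypervisor:
    def __init__(self, total_cpus, total_mem_gb):
        self.free_cpus = total_cpus
        self.free_mem = total_mem_gb
        self.guests = {}

    def create_vm(self, name, vcpus, mem_gb):
        # The guest sees what looks like real hardware; physically
        # it is a slice of the host's resources.
        if vcpus > self.free_cpus or mem_gb > self.free_mem:
            raise RuntimeError("insufficient physical resources")
        self.free_cpus -= vcpus
        self.free_mem -= mem_gb
        self.guests[name] = {"vcpus": vcpus, "mem_gb": mem_gb}

hv = Hypervisor(total_cpus=16, total_mem_gb=64)
hv.create_vm("web", vcpus=4, mem_gb=8)
hv.create_vm("db", vcpus=8, mem_gb=32)
print(hv.free_cpus, hv.free_mem)   # 4 24
```

Each guest's allocation is just bookkeeping in the hypervisor, which is what makes consolidation of many servers onto one physical host possible.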

Advantages of Hardware Virtualization


The main benefits of hardware virtualization are more efficient resource
utilization, lower overall costs as well as increased uptime and IT flexibility.

1) More Efficient Resource Utilization:

Physical resources can be shared among virtual machines. Unused resources allocated to one virtual machine can be used by other virtual machines when the need exists.

2) Lower Overall Costs Because Of Server Consolidation:

It is now possible for multiple operating systems to co-exist on a single hardware platform, so the number of servers, rack space, and power consumption drop significantly.
3) Increased Uptime Because Of Advanced Hardware Virtualization
Features:

Modern hypervisors provide highly orchestrated operations that maximize the abstraction of the hardware and help ensure maximum uptime. These functions can migrate a running virtual machine from one host to another dynamically, as well as maintain a running copy of a virtual machine on another physical host in case the primary host fails.

4) Increased IT Flexibility:

Hardware virtualization enables quick deployment of server resources in a managed and consistent way. As a result, IT can adapt quickly and provide the business with the resources it needs in good time.

Software Virtualization
Managing applications and their distribution is a typical task for IT departments. The installation mechanism differs from application to application: some programs require certain helper applications or frameworks, and these may conflict with existing applications.

Software virtualization is similar, but it abstracts the software installation procedure and creates virtual software installations.

Virtualized software is an application that will be "installed" into its own self-contained
unit.

Examples of software virtualization are VMware Workstation, VirtualBox, etc. In the next pages, we are going to see how to install Linux and Windows on the VMware application.

Advantages of Software Virtualization


1) Client Deployments Become Easier:

By copying a file to a workstation or linking a file on a network, we can easily install virtual software.

2) Easy to manage:

Managing updates becomes a simpler task: you update in one place and deploy the updated virtual application to all clients.
3) Software Migration:

Without software virtualization, moving from one software platform to another takes much time to deploy and impacts end-user systems. With a virtualized software environment, the migration becomes easier.

What is a hypervisor?
A hypervisor is a virtualization layer that enables multiple operating systems to share a hardware host. Each operating system or VM is allocated physical resources such as memory, CPU, and storage.

OR

A hypervisor is a virtualization layer that converts the physical Hardware into


virtual Hardware and controls the resource sharing between virtual
Hardware.

Types of Hypervisor

Type 1/Native/Bare-metal hypervisor

It runs directly on the hardware of the host and manages the hardware and the guest operating systems.

Examples: VMware ESXi, Microsoft Hyper-V, Citrix XenServer.

Type 2/Hosted hypervisor

It runs on a conventional operating system.

Examples: VMware Workstation, Microsoft Virtual PC, etc.

Multi-tenant cloud
A multi-tenant cloud is a cloud computing architecture that allows customers
to share computing resources in a public or private cloud. Each tenant's data
is isolated and remains invisible to other tenants.
In a multi-tenant cloud system, users have individualized space for storing
their projects and data. Each section of a cloud network with multi-tenant
architecture includes complex permissions with the intention of allowing each
user access to only their stored information along with security from other
cloud tenants. Within the cloud infrastructure, each tenant's data is
inaccessible to all other tenants, and can only be reached with the cloud
provider's permissions.

In a private cloud, the customers, or tenants, may be different individuals or


groups within a single company, while in a public cloud, entirely different
organizations may safely share their server space. Most public cloud
providers use the multi-tenancy model. It allows them to run servers with
single instances, which is less expensive and helps to streamline updates.

Benefits of multi-tenant cloud


Multi-tenant cloud networks provide increased storage and improved access compared
to single-tenancy clouds that include limited access and security parameters. Multi-
tenancy in cloud computing makes a greater pool of resources available to a larger
group of people without sacrificing privacy and security or slowing down
applications. The virtualization of storage locations in cloud computing allows for
flexibility and ease of access from almost any device or location.

Example of multi-tenancy
Multi-tenant clouds can be compared to the structure of an apartment building. Each
resident has access to their own apartment within the agreement of the entire building
and only authorized individuals can enter the specific units. However, the entire
building shares resources such as water, electricity and common areas.

This is similar to a multi-tenant cloud in that the provider sets overarching quotas,
rules and performance expectations for customers but each individual customer has
private access to their information.
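The apartment-building analogy can be sketched in code. The example below is a hypothetical in-memory store, not any real cloud API; it shows how per-tenant permission checks keep one tenant's data invisible to another even though the underlying storage is shared:

```python
# Sketch of multi-tenant isolation: one shared store, with per-tenant
# keys so each tenant can reach only its own objects (illustrative).
class MultiTenantStore:
    def __init__(self):
        self._data = {}          # (tenant_id, key) -> value

    def put(self, tenant_id, key, value):
        self._data[(tenant_id, key)] = value

    def get(self, tenant_id, key):
        try:
            return self._data[(tenant_id, key)]
        except KeyError:
            # A tenant cannot even observe that another tenant's
            # key exists -- access is scoped to its own namespace.
            raise PermissionError("no such object for this tenant")

store = MultiTenantStore()
store.put("acme", "report", "Q1 numbers")
store.put("globex", "report", "launch plan")
print(store.get("acme", "report"))     # Q1 numbers
# store.get("acme", "secret") would raise PermissionError
```

Both tenants share the same physical dictionary (the "building"), but every lookup carries the tenant identity, so the shared instance stays cheaper to run without leaking data across tenants.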
The Five Levels of Implementing
Virtualization
Virtualization is not that easy to implement. A computer runs an OS that is configured for its particular hardware, and running a different OS on the same hardware is not entirely feasible.

To tackle this, there exists the hypervisor. The hypervisor acts as a bridge between the virtual OS and the hardware to enable the smooth functioning of the instance.

There are five levels of virtualizations available that are most commonly
used in the industry. These are as follows:

Instruction Set Architecture Level (ISA)

In ISA, virtualization works through ISA emulation. This helps run heaps of legacy code that was originally written for a different hardware configuration.

These codes can be run on the virtual machine through an ISA.

A binary that might otherwise need additional layers to run can now run on an x86 machine or, with some tweaking, even on x64 machines. ISA emulation makes this a hardware-agnostic virtual machine.

The basic emulation, though, requires an interpreter, which reads the source instructions and converts them into a hardware-readable form for processing.
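A minimal sketch of such an interpreter is shown below, assuming a made-up two-register legacy ISA; a real ISA emulator handles a full instruction set, memory, and traps:

```python
# Minimal sketch of ISA emulation: an interpreter for an invented
# legacy instruction set, executed one instruction at a time on the host.
def run(program):
    regs = {"r0": 0, "r1": 0}
    for op, *args in program:
        if op == "LOAD":       # LOAD reg, immediate
            regs[args[0]] = args[1]
        elif op == "ADD":      # ADD dst, src  (dst += src)
            regs[args[0]] += regs[args[1]]
        elif op == "HALT":
            break
        else:
            raise ValueError(f"unknown opcode {op}")
    return regs

legacy_binary = [("LOAD", "r0", 40), ("LOAD", "r1", 2),
                 ("ADD", "r0", "r1"), ("HALT",)]
print(run(legacy_binary)["r0"])   # 42
```

Because the "binary" is interpreted instruction by instruction, the same program runs unchanged on any host that can run the interpreter, which is exactly the hardware-agnostic property ISA-level virtualization aims for.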

Hardware Abstraction Level (HAL)


As the name suggests, this level helps perform virtualization at the hardware
level. It uses a bare hypervisor for its functioning.

This level helps form the virtual machine and manages the hardware through
virtualization.

It enables virtualization of each hardware component such as I/O devices,


processors, memory, etc.

This way multiple users can use the same hardware with numerous instances
of virtualization at the same time.

IBM first implemented this approach in the 1960s, work that culminated in the IBM VM/370. It is well suited to cloud-based infrastructure.

Thus, it is no surprise that currently, Xen hypervisors are using HAL to run
Linux and other OS on x86 based machines.

Operating System Level

At the operating system level, the virtualization model creates an abstract


layer between the applications and the OS.

It is like an isolated container on the physical server and operating system that utilizes the hardware and software. Each of these containers functions like a real server.

When the number of users is high, and no one is willing to share hardware,
this level of virtualization comes in handy.

Here, every user gets their own virtual environment with dedicated virtual
hardware resources. This way, no conflicts arise.
Library Level

OS system calls are lengthy and cumbersome, which is why applications opt for APIs from user-level libraries.

Most of the APIs provided by systems are rather well documented, so library-level virtualization is preferred in such scenarios.

Library interfacing virtualization is made possible by API hooks. These API


hooks control the communication link from the system to the applications.

Some tools available today, such as vCUDA and WINE, have successfully
demonstrated this technique.
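A tiny Python sketch of the API-hook idea follows. It intercepts calls to `math.sqrt` by monkey-patching the module; tools such as WINE hook Windows library calls in an analogous (though far more involved) way:

```python
# Sketch of library-level virtualization via an API hook: calls to a
# library function are intercepted and routed through a wrapper,
# without changing the application code that makes the calls.
import math

def hook(module, name, wrapper):
    original = getattr(module, name)
    def hooked(*args, **kwargs):
        return wrapper(original, *args, **kwargs)
    setattr(module, name, hooked)
    return original

calls = []
def logging_sqrt(original, x):
    calls.append(x)          # the hook controls the communication link
    return original(x)       # ...then forwards to the real library

hook(math, "sqrt", logging_sqrt)
print(math.sqrt(9.0))        # 3.0 -- application code is unchanged
print(calls)                 # [9.0]
```

The application still believes it is calling the system library; the hook layer sitting between them is free to log, translate, or redirect the call, which is the essence of library-level virtualization.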

Application Level

Application-level virtualization comes in handy when you wish to virtualize only an application; it does not virtualize an entire platform or environment.

On an operating system, applications work as one process. Hence it is also


known as process-level virtualization.

It is generally useful when running virtual machines with high-level languages. Here, the application sits on top of the virtualization layer, which in turn runs above the operating system.

Programs written in high-level languages and compiled for an application-


level virtual machine can run fluently here.
Introduction to CPU Virtualization
CPU virtualization is a cloud-computing technique that makes a single CPU act like multiple machines working together. Virtualization has existed since the 1960s and became popular with hardware, or CPU, virtualization. CPU virtualization was invented so that all the computing resources could be utilized efficiently, with every OS running easily on one machine. Virtualization mainly focuses on efficiency and performance by saving time: the hardware resources are used when needed, and the underlying layer processes instructions to make the virtual machines work.

What is CPU Virtualization?


CPU virtualization runs programs and instructions through a virtual machine, giving the feeling of working on a physical workstation. The operations are handled by an emulator that controls how the software runs; the emulator behaves the same way a normal computer does, replicating the same data and generating the same output as a physical machine. Note, however, that CPU virtualization is not mere emulation: most instructions still execute on the real processor. The emulation function offers great portability and makes working on a single platform feel like working on multiple platforms.

With CPU virtualization, all the virtual machines act like physical machines and share the host's resources as if each had its own processors. Physical resources are shared out to each virtual machine as the hosting services receive requests. Finally, each virtual machine gets a share of the single CPU allocated to it, the single processor acting like a dual processor.

Types of CPU Virtualization


The various types of CPU virtualization available are as follows

1. Software-Based CPU Virtualization

With software-based CPU virtualization, guest application code runs directly on the processor, while guest privileged code is translated first and the translated code runs on the processor. This translation is known as Binary Translation (BT). The translated code is larger in size and also slower to execute. Guest programs with only a small privileged-code component run very smoothly and fast, whereas programs with a significant privileged-code component, such as system calls, run at a slower rate in the virtual environment.
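The binary translation idea can be sketched as follows, using an invented instruction list rather than real x86 opcodes; an actual VMM translates machine code at the basic-block level and caches the result:

```python
# Illustrative sketch of Binary Translation: privileged guest
# instructions are rewritten into safe sequences before execution;
# unprivileged instructions pass through untouched.
PRIVILEGED = {"OUT", "HLT", "CLI"}   # invented privileged opcodes

def translate(guest_code):
    translated = []
    for instr in guest_code:
        if instr in PRIVILEGED:
            # Replace the privileged instruction with a call into the
            # VMM, which emulates its effect on the virtual hardware.
            translated.append(f"VMM_EMULATE({instr})")
        else:
            translated.append(instr)  # runs directly on the processor
    return translated

print(translate(["MOV", "ADD", "OUT", "MOV"]))
# ['MOV', 'ADD', 'VMM_EMULATE(OUT)', 'MOV']
```

This also shows why privileged-heavy workloads are slower under BT: every privileged instruction becomes a detour through the VMM, while plain instructions keep native speed.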

2. Hardware-Assisted CPU Virtualization

Certain processors provide hardware assistance for CPU virtualization. Here, the guest uses a separate mode of execution known as guest mode, and guest code runs mainly in guest mode. The best part of hardware-assisted CPU virtualization is that no translation is required, so system calls run faster than under binary translation. However, workloads that require frequent updates of the page tables keep exiting from guest mode to root mode, which slows down the program's performance and efficiency.

3. Virtualization and Processor-Specific Behavior


Even though the CPU's behavior is virtualized in software, the virtual machine still detects the specific processor model on which the system runs. Processor models differ in the features they offer, and the applications running in the VM generally make use of those features. In such cases, vMotion cannot be used to migrate virtual machines running on feature-rich processors to hosts that lack those features; Enhanced vMotion Compatibility (EVC) handles this limitation.

4. Performance Implications of CPU Virtualization

CPU virtualization adds an amount of overhead that depends on the workload and the type of virtualization used. An application that depends mainly on CPU power must wait for its instructions to be executed. This overhead adds to the overall processing time and can result in an overall degradation of performance under CPU virtualization.

Why CPU Virtualization is Important?


CPU virtualization is important in many ways, and its use is widespread in the cloud computing industry. Its main advantages are stated below:

 Using CPU virtualization, overall performance and efficiency improve to a great extent, because the virtual machines work on a single CPU and share its resources as if using multiple processors at the same time. This saves cost and money.
 Because CPU virtualization runs separate operating systems in virtual machines on a single shared system, it also helps maintain security. The machines are kept separate from each other, so a cyber-attack or software glitch on one machine cannot damage or affect another.
 It works purely with virtual machines and hardware resources: a single server holds all the computing resources, and processing is done based on the CPU's instructions, which are shared among all the systems involved. Since the hardware requirement is lower and dedicated physical machines are not needed, cost and time are saved.
 It provides good backup of computing resources, since the data is stored in and shared from a single system. It gives users who depend on a single system reliability and greater data-retrieval options.
 It also offers fast deployment options so that software reaches the client without hassle, and it maintains atomicity. Virtualization ensures the desired data reaches the desired clients, checks for any constraints, and removes them quickly.

I/O Virtualization
I/O virtualization involves managing the routing of I/O requests between virtual devices and the shared physical hardware. At the time of this writing, there are three ways to implement I/O virtualization: full device emulation, para-virtualization, and direct I/O.

Full device emulation is the first approach for I/O virtualization. Generally, this approach emulates well-known, real-world devices. All the functions of a device or bus infrastructure, such as device enumeration, identification, interrupts, and DMA, are replicated in software. This software is located in the VMM and acts as a virtual device. The I/O access requests of the guest OS are trapped in the VMM, which interacts with the I/O devices. The full device emulation approach is shown in Figure 3.14. A single hardware device can be shared by multiple VMs that run concurrently. However, software emulation runs much slower than the hardware it emulates [10,15].

The para-virtualization method of I/O virtualization is typically used in Xen. It is also known as the split driver model, consisting of a frontend driver and a backend driver. The frontend driver runs in Domain U and the backend driver runs in Domain 0, and they interact with each other via a block of shared memory. The frontend driver manages the I/O requests of the guest OSes, while the backend driver is responsible for managing the real I/O devices and multiplexing the I/O data of the different VMs. Although para-I/O-virtualization achieves better device performance than full device emulation, it comes with a higher CPU overhead.

Direct I/O virtualization lets the VM access devices directly. It can achieve close-to-native performance without high CPU costs. However, current direct I/O virtualization implementations focus on networking for mainframes, and there are many challenges for commodity hardware devices. For example, when a physical device is reclaimed (required by workload migration) for later reassignment, it may have been left in an arbitrary state (e.g., DMA to some arbitrary memory locations) that can function incorrectly or even crash the whole system.

Since software-based I/O virtualization requires a very high overhead of device emulation, hardware-assisted I/O virtualization is critical. Intel VT-d supports the remapping of I/O DMA transfers and device-generated interrupts. The architecture of VT-d provides the flexibility to support multiple usage models that may run unmodified, special-purpose, or “virtualization-aware” guest OSes.

Another way to help I/O virtualization is via self-virtualized I/O (SV-IO) [47]. The key idea of SV-IO is to harness the rich resources of a multicore processor. All tasks associated with virtualizing an I/O device are encapsulated in SV-IO. It provides virtual devices and an associated access API to VMs, and a management API to the VMM. SV-IO defines one virtual interface (VIF) for every kind of virtualized I/O device, such as virtual network interfaces, virtual block devices (disk), virtual camera devices, and others. The guest OS interacts with the VIFs via VIF device drivers. Each VIF consists of two message queues: one for outgoing messages to the devices and the other for incoming messages from the devices. In addition, each VIF has a unique ID for identifying it in SV-IO. Figure 3.15 shows the functional blocks used in sending and receiving packets in SV-IO.

VMware Workstation for I/O Virtualization: The VMware Workstation runs as an application. It leverages the I/O device support in guest OSes, host OSes, and the VMM to implement I/O virtualization. The application portion (VMApp) uses a driver loaded into the host operating system (VMDriver) to establish the privileged VMM, which runs directly on the hardware. A given physical processor executes in either the host world or the VMM world, with the VMDriver facilitating the transfer of control between the two worlds. The VMware Workstation employs full device emulation to implement I/O virtualization.
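The split driver model can be sketched with an in-process queue standing in for the shared memory block (all names here are illustrative; real Xen uses grant tables, ring buffers, and event channels):

```python
# Toy model of Xen's split driver: a frontend driver in the guest posts
# I/O requests into a shared ring; a backend driver in Domain 0
# services them against the real device.
from collections import deque

shared_ring = deque()   # stands in for the shared memory block
disk = {}               # the "real" device, owned by Domain 0

def frontend_write(sector, payload):
    # Guest side (Domain U): queue the request, do not touch hardware.
    shared_ring.append(("write", sector, payload))

def backend_poll():
    # Domain 0 side: drain the ring, multiplexing guest requests
    # onto the physical device.
    while shared_ring:
        op, sector, payload = shared_ring.popleft()
        if op == "write":
            disk[sector] = payload

frontend_write(7, b"hello")
backend_poll()
print(disk[7])          # b'hello'
```

The guest never drives the device directly; everything flows through the shared ring, which is why this model performs better than full emulation but still burns CPU in Domain 0.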

VIRTUAL CLUSTERS AND RESOURCE MANAGEMENT

A physical cluster is a collection of servers (physical machines) interconnected by a physical network such as a LAN. In Chapter 2, we studied various clustering techniques on physical machines. Here, we introduce virtual clusters, study their properties, and explore their potential applications. In this section, we will study three critical design issues of virtual clusters: live migration of VMs, memory and file migrations, and dynamic deployment of virtual clusters.
When a traditional VM is initialized, the administrator needs to
manually write configuration information or specify the
configuration sources. When more VMs join a network, an
inefficient configuration always causes problems with overloading
or underutilization. Amazon’s Elastic Compute Cloud (EC2) is a
good example of a web service that provides elastic computing
power in a cloud. EC2 permits customers to create VMs and to
manage user accounts over the time of their use. Most
virtualization platforms, including XenServer and VMware ESX
Server, support a bridging mode which allows all domains to
appear on the network as individual hosts. By using this mode,
VMs can communicate with one another freely through the virtual
network interface card and configure the network automatically.
3.4.1 Physical versus Virtual Clusters

Virtual clusters are built with VMs installed at distributed servers from one or more physical clusters. The VMs in a virtual cluster are interconnected logically by a virtual network across several physical networks. Figure 3.18 illustrates the concepts of virtual clusters and physical clusters. Each virtual cluster is formed with physical machines or a VM hosted by multiple physical clusters, and the virtual cluster boundaries are shown as distinct boundaries. The provisioning of VMs to a virtual cluster is done dynamically, giving the following interesting properties:

• The virtual cluster nodes can be either physical or virtual machines. Multiple VMs running different OSes can be deployed on the same physical node.
• A VM runs with a guest OS, which is often different from the host OS that manages the resources of the physical machine on which the VM is implemented.
• The purpose of using VMs is to consolidate multiple functionalities on the same server. This greatly enhances server utilization and application flexibility.
• VMs can be colonized (replicated) in multiple servers to promote distributed parallelism, fault tolerance, and disaster recovery.
• The size (number of nodes) of a virtual cluster can grow or shrink dynamically, similar to the way an overlay network varies in size in a peer-to-peer (P2P) network.
• The failure of any physical node may disable some VMs installed on the failing node, but the failure of VMs will not pull down the host system.

Since system virtualization is widely used, it is necessary to effectively manage the VMs running on a mass of physical computing nodes (also called virtual clusters) and consequently build a high-performance virtualized computing environment. This involves virtual cluster deployment, monitoring and management over large-scale clusters, as well as resource scheduling, load balancing, server consolidation, fault tolerance, and other techniques.

The different node colors in Figure 3.18 refer to different virtual clusters. In a virtual cluster system, it is quite important to store the large number of VM images efficiently. Figure 3.19 shows the concept of a virtual cluster based on application partitioning or customization; the different colors in that figure represent the nodes in different virtual clusters. As a large number of VM images might be present, the most important thing is to determine how to store those images in the system efficiently. There are common installations for most users or applications, such as operating systems or user-level programming libraries. These software packages can be preinstalled as templates (called template VMs). With these templates, users can build their own software stacks: new OS instances can be copied from the template VM, and user-specific components such as programming libraries and applications can then be installed into those instances.

Three physical clusters are shown on the left side of Figure 3.18, with four virtual clusters created on the right, over the physical clusters. The physical machines are also called host systems; in contrast, the VMs are guest systems. The host and guest systems may run with different operating systems.

What is data center automation?


Data center automation is the process by which routine workflows
and processes of a data center—scheduling, monitoring,
maintenance, application delivery, and so on—are managed and
executed without human administration. Data center automation
increases agility and operational efficiency. It reduces the time IT
needs to perform routine tasks and enables them to deliver
services on demand in a repeatable, automated manner. These
services can then be rapidly consumed by end users.

Why data center automation is important
The massive growth in data and the speed at which
businesses operate today mean that manual monitoring,
troubleshooting, and remediation is too slow to be
effective and can put businesses at risk. Automation can
make day-two operations almost autonomous. Ideally, the
data center provider would have API access to the
infrastructure, enabling it to inter-operate with public
clouds so that customers could migrate data or workloads
from cloud to cloud. Data center automation is
predominantly delivered through software solutions that
grant centralized access to all or most data center
resources. Traditionally, this access enables the
automation of storage, servers, network, and other data
center management tasks.
Data center automation is immensely valuable because it
frees up human computational time and:
 Delivers insight into server nodes and configurations
 Automates routine procedures like patching, updating, and
reporting
 Produces and programs all data center scheduling and
monitoring tasks
 Enforces data center processes and controls in agreement
with standards and policies
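The idea of replacing manual administration with repeatable, coded workflows can be sketched as below. Server names and tasks are made up for illustration; real tools such as Ansible or Puppet express the same idea declaratively rather than in a hand-written loop:

```python
# Hedged sketch of data center automation: routine tasks expressed as
# code and applied uniformly to a fleet, instead of run by hand.
servers = ["web-01", "web-02", "db-01"]   # hypothetical fleet

def patch(server):
    # Stand-in for patching/updating a node.
    return f"{server}: patched"

def report(server):
    # Stand-in for a monitoring/reporting check.
    return f"{server}: ok"

ROUTINE_TASKS = [patch, report]

def run_automation(fleet):
    log = []
    for server in fleet:
        for task in ROUTINE_TASKS:   # same steps on every node, repeatably
            log.append(task(server))
    return log

for line in run_automation(servers):
    print(line)
```

Because the workflow is code, it runs identically every time and on every node, which is the "repeatable, automated manner" the section describes.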

Tools for data center automation
APIs
An API provides a set of protocols for building and
integrating with application software. Infrastructure that
offers APIs for toolsets like configuration
management and OpenStack can save companies
resources, time, and money, and can deliver consistency
in developer environments.

Configuration management tools


 Ansible - Ansible Tower is Red Hat's automation platform for
Red Hat Linux and more. Ansible Tower is a software
framework that supports disciplines ranging from agile
development to DevOps to continuous delivery.
 Puppet - Puppet is a framework and language that systems
operations professionals use to define operations like software
deployment so that they can be automated. The Puppet
language creates the definitions and workflow that are
implemented by the Puppet framework. Puppet brings a
common language and compatibility across a broad range of
devices. IT departments use Puppet to automate intricate
processes involving many pieces of hardware and software.
 Chef - Chef is a suite of products that is open-source and
commercial. Chef is written in Ruby and provides a framework
in which users can write recipes. Those recipes can implement
processes that span an entire infrastructure or focus on a
single component. The three components of Chef are Chef,
InSpec, and Habitat. These components can be used
individually or together for a complete DevOps framework.
 OpenStack - controls large pools of compute, storage, and
networking resources throughout a data center, managed
through a dashboard or through the OpenStack API.
OpenStack is an operating system that helps build a cloud
infrastructure or manage local resources as though they were
a cloud. This means automating the building, teardown, and
management of virtual servers and other virtualized
infrastructure. It’s worth noting that Red Hat offers an open-
source enterprise edition of OpenStack for improved support.
