Cloud Computing Introduction

Introduction
Cloud refers to a network or the Internet; the cloud is something that is present at a remote location. The cloud can provide services over a network, that is, over public or private networks such as Wide Area Networks (WANs), Local Area Networks (LANs), or Virtual Private Networks (VPNs). Applications such as e-mail, web conferencing, and customer relationship management (CRM) all run in the cloud.

Figure 1: Examples of Cloud Computing

1.1 What is Cloud Computing?


Cloud computing has sparked a revolution in how information and computing are accessed, provisioned, and consumed in the ICT industry. It has emerged as a paradigm of high-performance, large-scale computing that relocates computing and data from desktops and personal computers to large data centers. The cloud is a construct (an infrastructure) that allows users to access applications that actually reside at a remote location on another Internet-connected device, most often a distant data center. Cloud computing takes technology, services, and applications similar to those on the Internet and turns them into a self-service utility (Figure 1).
Cloud provides an abstraction based on the notion of pooling physical resources and presenting
them as a virtual resource. It is a new model for provisioning resources, for staging applications,
and for platform-independent user access to services.
“Cloud computing is a model for enabling ubiquitous, convenient, on-demand network access to a
shared pool of configurable computer resources (networks, servers, storage, applications, and
services) that can be rapidly provisioned and released with minimal management effort or service
provider interaction” (Figure 2).
Figure 2: Cloud Scenario

Cloud computing refers to manipulating, configuring, and accessing applications online. It offers online data storage, infrastructure, and applications, and involves a combination of software- and hardware-based computing resources delivered as a network service.
Example: Suppose we want to install MS Word on our organization's computers. We would have to buy the CD/DVD and install it, or set up a software distribution server to install the application on every machine automatically. Every time Microsoft issues a new version, we have to perform the same task. If another company hosts the application instead, it handles the cost of the servers and manages the software updates, and the customers are charged according to their utilization, that is, as per their usage (Figure 3). In this model, the client does not need to pay for hardware; the service provider pays for the hardware and its maintenance. This reduces the cost of using that software along with the cost of installing heavy servers. Additionally, the cloud helps reduce electricity bills.
Figure 3: Cloud Client-Server Perspective

1.2 Cloud Properties: Google’s Perspective


Cloud computing is user-centric: Once you as a user are connected to the cloud, whatever is
stored there—documents, messages, images, applications, whatever—becomes yours. In addition,
not only is the data yours, but you can also share it with others. In effect, any device that accesses
your data in the cloud also becomes yours.
Cloud computing is task-centric: Instead of focusing on the application and what it can do, the focus is on what you need done and how the application can do it for you. Traditional applications—word processing, spreadsheets, email, and so on—are becoming less important than the documents they create.
Cloud computing is powerful: Connecting hundreds or thousands of computers together in a cloud creates a wealth of computing power impossible with a single desktop PC.
Cloud computing is accessible: Because data is stored in the cloud, users can instantly retrieve more information from multiple repositories. You're not limited to a single source of data, as you are with a desktop PC.
Cloud computing is intelligent: With all the various data stored on the computers in a cloud, data mining and analysis are necessary to access that information in an intelligent manner.
Cloud computing is programmable: Many of the tasks necessary with cloud computing must be automated. For example, to protect the integrity of the data, information stored on a single computer in the cloud must be replicated on other computers in the cloud. If that one computer goes offline, the cloud's programming automatically redistributes that computer's data to a new computer in the cloud.


1.3 History of Cloud Computing


Before cloud computing emerged, there was client/server computing, which is basically centralized storage in which all the software applications, all the data, and all the controls reside on the server side. If a user wants to access specific data or run a program, he or she needs to connect to the server, gain appropriate access, and only then carry out the task (Figure 4).

[Figure 4 shows three stages: client-server computing (centralized storage in which all the software applications, all the data, and all the controls reside on the server side), distributed computing (all the computers are networked together and share their resources when needed), and the emergence of the concept of cloud computing.]
Figure 4: Convergence to Cloud Computing

After that, distributed computing came into the picture, in which all the computers are networked together and share their resources when needed. Building on these models, the concept of cloud computing emerged.
The roots of cloud computing go back to the early 1960s and J.C.R. Licklider (Joseph Carl Robnett Licklider), an American psychologist and computer scientist. During his network research on ARPANET (the Advanced Research Projects Agency Network), trying to connect people and data all around the world, he introduced the ideas behind the cloud computing technique that we all know today.
Born on March 11, 1915, in St. Louis, Missouri, USA, J.C.R. Licklider completed his initial studies at Washington University in 1937, receiving a BA degree with three specializations: physics, mathematics, and psychology. In 1938, Licklider completed his MA in psychology, and he received his Ph.D. from the University of Rochester in 1942. His interest in information technology, together with his years of service and achievements in different areas, led to his appointment as head of IPTO at ARPA (the US Department of Defense Advanced Research Projects Agency) in 1962. His aim led to ARPANET, a forerunner of today's Internet.
Around 1961, John McCarthy suggested in a speech at MIT that computing could be sold like a utility, just like water or electricity. In 1999, Salesforce.com started delivering applications to users through a simple website. The applications were delivered to enterprises over the Internet, and in this way the dream of computing sold as a utility came true.
Cloud computing continued to develop throughout the early 21st century. In 2002, Amazon started Amazon Web Services, providing services like storage, computation, and even human intelligence. However, only with the launch of the Elastic Compute Cloud (EC2) in 2006 did a truly commercial service open to everybody exist.
In 2008, Google introduced the beta version of Google App Engine. In 2009, Google Apps also started to provide cloud computing enterprise applications. Microsoft announced its cloud computing service, Microsoft Azure, in 2008 and later released it for testing, deploying, and managing applications and services.
In 2012, Google Compute Engine was released in preview and was later rolled out to the public. By the end of December 2013, Oracle had introduced Oracle Cloud with three primary services for business (IaaS, PaaS, and SaaS). Today, Linux and Microsoft Azure operate side by side, with a large share of Azure workloads running on Linux.

1.4 Evolution of Cloud Computing


The growth of cloud computing was not instantaneous; it has passed through several intermediate stages. It began with the era of mainframe computing, in which huge and powerful mainframe systems supported many users connected through dumb terminals and running data management applications. There are five intermediary stages from mainframe computing onward: the use of personal stand-alone computers running desktop applications gave rise to personal computing, and the influx of interconnected computers converged into network computing. This stage saw the growth of local networks connected to other networks, creating a globally interconnected network such as the Internet for utilizing remote applications and resources. The networked computers usually functioned in an autonomic fashion, resulting in autonomic computing, or followed client-server architectures, resulting in client-server computing. Figure 5 lists the evolution of cloud computing. The development of grid computing was followed by the rise of cloud computing, and these stages are characterized as follows:

 The development of grid computing offered sharing of computing power and resources spread across multiple geographical domains.
 The most recent stage involves the rise of cloud computing, where service-oriented, market-based computing applications are predominant.
 Virtualization meets the Internet.

Figure 5: Evolution of Cloud Computing

1.5 Essential Cloud Computing Concepts


“Cloud” refers to two essential concepts:

1. Abstraction: Cloud computing abstracts the details of system implementation from users
and developers. Applications run on physical systems that aren't specified, data is stored in
locations that are unknown, administration of systems is outsourced to others, and access
by users is ubiquitous (Present or found everywhere).
2. Virtualization: Cloud computing virtualizes systems by pooling and sharing resources.
Systems and storage can be provisioned as needed from a centralized infrastructure, costs are assessed on a metered basis, multi-tenancy is enabled, and resources are scalable with agility.

Advantages of Cloud Computing


Cloud computing is an emerging technology that almost every company is switching to from its on-premises technologies. Whether it is public, private, or hybrid, cloud computing has become an essential factor for companies to stay competitive. Let us find out why the cloud is so strongly preferred over on-premises technologies.
Cost Efficiency: The biggest reason behind companies shifting to cloud computing is that it costs considerably less than any on-premises technology. Companies no longer need to store data on their own disks, as the cloud offers enormous storage space, saving money and resources. CapEx and OpEx costs are reduced because resources are only acquired when needed and are only paid for when used.
High speed: Cloud computing lets us deploy the service quickly in fewer clicks. This quick deployment lets us get the
resources required for our system within minutes.
Excellent Accessibility: Storing information in the cloud allows us to access it anywhere and anytime regardless of the machine, making it a highly accessible and flexible technology of the present times.
Backup and Restore Data: Once data is stored in the cloud, it is easier to back it up and recover it, which is quite a time-consuming process with on-premises technology.
Manageability: Cloud computing eliminates the need for IT infrastructure updates and maintenance since the service
provider ensures timely, guaranteed, and seamless delivery of our services and also takes care of all the maintenance and
management of our IT services according to the service-level agreement (SLA).
Sporadic Batch Processing: Cloud computing lets us add or subtract resources and services
according to our needs. So, if the workload is not 24/7, we need not worry about the resources and
services getting wasted and we won’t end up stuck with unused services.
Strategic Edge: Cloud computing provides a company with a competitive edge over its competitors
when it comes to accessing the latest and mission-critical applications that it needs without having
to invest its time and money on their installations.

Disadvantages of Cloud Computing


Every technology has both positive and negative aspects that are highly important to be discussed
before implementing it.
Vulnerability to Attacks: Storing data in the cloud may pose serious challenges of information theft, since in the cloud all of a company's data is online. Security breaches are something that even the best organizations have suffered from, and they are a potential risk in the cloud as well. Although advanced security measures are deployed in the cloud, storing confidential data there can still be a risky affair.
Network Connectivity Dependency: Cloud computing is entirely dependent on the Internet. This
direct tie-up with the Internet means that a company needs to have reliable and consistent Internet
service as well as a fast connection and bandwidth to reap the benefits of cloud computing.
Downtime: Downtime is considered one of the biggest potential downsides of using cloud computing. Cloud providers may sometimes face technical outages that can happen due to various reasons, such as loss of power, low Internet connectivity, or data centers going out of service for maintenance. This can lead to temporary downtime in the cloud service.
Vendor Lock-In: When in need to migrate from one cloud platform to another, a company might
face some serious challenges because of the differences between vendor platforms. Hosting and
running the applications of the current cloud platform on some other platform may cause support
issues, configuration complexities, and additional expenses. The company data might also be left
vulnerable to security attacks due to compromises that might have been made during migrations.
Limited Control: Cloud customers may face limited control over their deployments. Cloud services
run on remote servers that are completely owned and managed by service providers, which makes
it hard for the companies to have the level of control that they would want over their back-end
infrastructure.

1.6 Components of Cloud Computing


A cloud computing solution is made up of several elements, and these elements make up the three components of a cloud computing solution (Figure 6).

a) Clients
b) The data center, and
c) Distributed servers.


Figure 6: Cloud Computing Components

A. Clients: Devices that end users interact with to manage their information on the cloud. There can be different types of clients, such as:

 Mobile Clients: Include PDAs or smartphones, like a Blackberry, Windows Mobile Smartphone, or an iPhone.
 Thin Clients: Computers that do not have internal hard drives, but rather let the servers do all the work and then display the information.
 Thick Clients: Regular computers that use a web browser like Firefox or Internet Explorer to connect to the cloud.
A thin client is a computing device that's connected to a network. Unlike a typical PC or “fat client,” that has the
memory, storage and computing power to run applications and perform computing tasks on its own, a thin client
functions as a virtual desktop, using the computing power residing on the networked servers.
Advantages of Using Thin Clients
Thin clients are becoming an increasingly popular solution, because of their price and effect on the environment.

 Lower hardware costs: Thin clients are cheaper than thick clients because they do not contain as much hardware. They also last longer before they need to be upgraded or become obsolete.
 Lower IT costs: Thin clients are managed at the server and there are fewer points of failure.
 Security: Since the processing takes place on the server and there is no hard drive, there’s less chance of malware invading the device. Also, since thin clients don’t work without a server, there’s less chance of them being physically stolen.
 Data security: Since data is stored on the server, there’s less chance for data to be lost if the client computer crashes or is stolen.
 Less power consumption: Thin clients consume less power than thick clients. This means you’ll pay less to power them, and you’ll also pay less to air-condition the office.
 Ease of repair or replacement: If a thin client dies, it’s easy to replace. The box is simply swapped out and the
user’s desktop returns exactly as it was before failure.
 Less noise: Without a spinning hard drive, less heat is generated and quieter fans can be used on the thin
client.
B. Datacenter: The datacenter is the collection of servers where the application to which you subscribe is housed. It could be a large room in the basement of your building or a room full of servers on the other side of the world that you access via the Internet. There is a growing trend in the IT world of virtualizing servers: software can be installed that allows multiple instances of virtual servers to be used, so there can be half a dozen virtual servers running on one physical server.
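To make the idea concrete, here is a minimal sketch, in Python, of how several virtual servers can be carved out of one physical server's capacity. The host capacity and virtual machine sizes are illustrative assumptions, not figures from these notes.

# Illustrative only: one physical host and six virtual servers defined on it.
PHYSICAL_CPUS, PHYSICAL_RAM_GB = 32, 128   # assumed capacity of one physical server

virtual_servers = [{"name": f"vm-{i}", "cpus": 4, "ram_gb": 16} for i in range(6)]

used_cpus = sum(vm["cpus"] for vm in virtual_servers)
used_ram = sum(vm["ram_gb"] for vm in virtual_servers)

# Half a dozen virtual servers still fit comfortably on the single host.
assert used_cpus <= PHYSICAL_CPUS and used_ram <= PHYSICAL_RAM_GB
print(f"{len(virtual_servers)} VMs use {used_cpus}/{PHYSICAL_CPUS} vCPUs "
      f"and {used_ram}/{PHYSICAL_RAM_GB} GB RAM on one physical server")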

C. Distributed Servers: The distributed servers are in geographically disparate locations. They
give the service provider more flexibility in options and security. For instance, Amazon has
their cloud solution in servers all over the world. If something were to happen at one site,
causing a failure, the service would still be accessed through another site.

Other Components of Cloud Computing


There are other components of cloud computing, such as cloud services, platforms, applications, storage, and infrastructure.
Cloud Services: Cloud services are products and solutions that are used and delivered in real time via the Internet. Example:

• Identity - OpenID, OAuth, etc.


• Integration - Amazon Simple Queue Service.
• Payments - PayPal, Google Checkout.
• Mapping - Google Maps, Yahoo! Maps.
Cloud Applications: Applications that use cloud computing in their software architecture, so that users do not need to install anything; they can use the application from any computer. Example:

• Peer-to-peer - BitTorrent, SETI, and others.


• Web Application - Facebook.
• SaaS - Google Apps, SalesForce.com, and others
Cloud Platform: A service in the form of a computing platform consisting of infrastructure hardware and software. Example:

• Web Application Frameworks - Python Django, Ruby on Rails, .NET


• Web Hosting
• Proprietary - Force.com
Cloud Storage: Cloud storage involves the process of storing data as a service (a brief usage sketch follows the examples below). Example:

• Database - Google BigTable, Amazon SimpleDB.


• Network Attached Storage - Nirvanix CloudNAS, MobileMe iDisk
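As an illustration of storing data as a service, here is a minimal sketch using the AWS SDK for Python (boto3) with Amazon S3; the bucket name, object key, and file names are hypothetical placeholders, and other providers expose similar object-storage APIs.

import boto3

s3 = boto3.client("s3")  # credentials and region are taken from the environment

# Upload a local file as an object in a (hypothetical) bucket.
s3.upload_file("report.pdf", "example-bucket", "backups/report.pdf")

# Download the same object back to a new local file.
s3.download_file("example-bucket", "backups/report.pdf", "report-copy.pdf")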
Cloud Infrastructure: Cloud infrastructure involves the delivery of computing infrastructure as a service (see the provisioning sketch after the examples below). Example:

• Grid Computing- Sun Grid.


• Full Virtualization- GoGrid, Skytap.
• Compute- Amazon Elastic Compute Cloud (Amazon EC2)
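As a sketch of how compute infrastructure is delivered as a service, the snippet below requests and later releases a single virtual machine through the AWS SDK for Python (boto3) and Amazon EC2; the AMI ID and instance type are placeholder assumptions, not values from the text.

import boto3

ec2 = boto3.resource("ec2")

# Request one small virtual server; the provider allocates it from its pooled
# physical resources, typically within minutes.
instances = ec2.create_instances(
    ImageId="ami-0123456789abcdef0",   # hypothetical machine image ID
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
)

# Release the resource when it is no longer needed, so charges stop accruing.
instances[0].terminate()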

1.7 Characteristics of Cloud Computing


Cloud computing harnesses the power of the Internet to allow organizations to remain productive despite the COVID-19 pandemic and work-from-home arrangements. The technology helps businesses maximize their resources because they don't need to buy their own physical servers. Everything is online; everything is in the cloud.
On-demand self-service – A user can provision computing capabilities, such as server time and storage, as needed without requiring human interaction.
Broad network access – Capabilities are available over a network and are typically accessed through the users’ mobile phones, tablets, laptops, and workstations.
Shared resource pooling – The provider’s computing resources are pooled to serve multiple users using a multi-tenant model, with different physical and virtual resources dynamically assigned and reassigned according to consumer demand. Examples of resources include storage, processing, memory, and network bandwidth.
Rapid elasticity– Capabilities can be elastically provisioned and released, in some cases
automatically, to scale rapidly outward and inward as needed. For the user, the capabilities
available for provisioning often appear to be unlimited and can be appropriated in any quantity at any time.
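A minimal sketch of the scale-out/scale-in decision behind elasticity is shown below; the CPU thresholds and one-instance step are illustrative assumptions, not any provider's actual autoscaling policy.

def desired_instances(current: int, avg_cpu_percent: float,
                      scale_out_at: float = 70.0, scale_in_at: float = 30.0) -> int:
    """Return the new instance count given the average CPU load (assumed thresholds)."""
    if avg_cpu_percent > scale_out_at:
        return current + 1            # scale outward under heavy load
    if avg_cpu_percent < scale_in_at and current > 1:
        return current - 1            # scale inward (release capacity) when idle
    return current

print(desired_instances(current=3, avg_cpu_percent=85.0))   # -> 4
print(desired_instances(current=3, avg_cpu_percent=20.0))   # -> 2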
Measured service – Cloud systems automatically control and optimize resource use by leveraging a metering capability appropriate to the type of service (e.g., storage, processing, bandwidth, and active user accounts). Resource usage can be monitored, controlled, and reported, providing transparency for both the provider and the user of the service. It follows a “pay as you grow” model and also allows internal IT departments to provide IT chargeback capabilities. The usage of cloud resources is measured, and the user is charged based on metrics such as the amount of CPU cycles used, the amount of storage space used, and the number of network I/O requests, which are used to calculate the usage charges for the cloud resources.
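To illustrate the metering idea, the sketch below multiplies measured usage by per-unit rates to produce a charge; the metric names and rates are purely illustrative assumptions, not a real price list.

RATES = {
    "cpu_hours": 0.05,         # assumed price per CPU-hour
    "storage_gb_month": 0.02,  # assumed price per GB-month of storage
    "io_requests": 0.000001,   # assumed price per network/storage I/O request
}

def monthly_charge(usage: dict) -> float:
    """Sum each metered metric multiplied by its per-unit rate."""
    return sum(amount * RATES[metric] for metric, amount in usage.items())

# Example metered usage for one small workload over a month.
print(monthly_charge({"cpu_hours": 720, "storage_gb_month": 50, "io_requests": 2_000_000}))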
Performance – Dynamic allocation of resources according to application workloads makes it easy to scale up or down and to maintain performance.
Reduced costs – Applications enjoy cost benefits because only as much computing and storage capacity as is required is provisioned dynamically, and upfront investment in computing assets sized for worst-case requirements is avoided.
Outsourced management – Cloud computing allows users to outsource their IT infrastructure requirements to external cloud providers and save upfront capital investment. This makes it easier to set up IT infrastructure and to pay only the operational expenses for the cloud resources used.

Figure 7: Service-Oriented Architecture

Multitenancy: Multitenancy allows multiple users to make use of the same shared resources. Modern applications such as banking, financial, social networking, e-commerce, and B2B applications are deployed in cloud environments that support multi-tenanted applications.
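The sketch below illustrates multitenancy at the application level: two tenants share one database, but every query is scoped by a tenant identifier so their rows stay logically separate. The table, column, and tenant names are assumptions for the example.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (tenant_id TEXT, item TEXT)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [("acme", "laptop"), ("globex", "phone"), ("acme", "monitor")])

def orders_for(tenant_id: str):
    # Every read is filtered by the tenant identifier, so one tenant never
    # sees another tenant's data even though the table is shared.
    return conn.execute("SELECT item FROM orders WHERE tenant_id = ?",
                        (tenant_id,)).fetchall()

print(orders_for("acme"))    # [('laptop',), ('monitor',)]
print(orders_for("globex"))  # [('phone',)]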
Service-Oriented Architecture (SOA): SOA is essentially a collection of services that communicate with each other. SOA provides a loosely integrated suite of services that can be used within multiple business domains (Figure 7). The approach is usually implemented using the web service model.
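The sketch below shows the web-service model in miniature: one service exposes an HTTP interface and another consumes it, depending only on the service contract rather than its implementation. The endpoint URL and the JSON field are hypothetical.

import json
from urllib.request import urlopen

def get_exchange_rate(currency: str) -> float:
    # Call a (hypothetical) rate service over HTTP and parse its JSON reply.
    with urlopen(f"https://rates.example.com/api/{currency}") as resp:
        return json.load(resp)["rate"]

# A billing service in a different business domain could reuse the same call:
# total = amount * get_exchange_rate("EUR")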

1.8 Issues of Cloud Computing


Security and Privacy: Security has indeed been a primary, and valid, concern from the start of cloud computing technology: you are unable to see the exact location where your data is stored or being processed. This increases the cloud computing risks that can arise during the implementation or management of the cloud. When we say security and privacy, we are talking about the user data that is stored in Cloud Service Providers' (CSP) data centers. A CSP should abide by the rule of not sharing confidential data or any data that matters to the users. The data centers must be secure, and the privacy of the data should be maintained by the CSP. There is always concern about the actual location of your data, where it is stored and processed. Before onboarding yourself onto a cloud computing platform, one should always check the data security and data recovery (in case of disaster) policies of the CSP.

Cost Management and Containment: Cloud computing can be expensive if you don't know how to manage your computing resources and take maximum advantage of them. Many times, organizations settle into a pay-as-you-go mindset and spend more on the cloud than they would have on on-premises infrastructure. One should always optimize costs through financial analytics and usage reporting for better cost monitoring. For the most part, cloud computing can save businesses money. In the cloud, an organization can easily ramp up its processing capabilities without making large investments in new hardware. Businesses can instead access extra processing through pay-as-you-go models from public cloud providers. However, the on-demand and scalable nature of cloud computing services sometimes makes it difficult to define and predict quantities and costs.
Lack of Resources/Expertise: The cloud challenges companies and enterprises. As the usage of cloud technologies increases and the tools to manage them grow more sophisticated, finding experts in cloud computing is becoming a bottleneck for many organizations. Organizations are increasingly placing more workloads in the cloud while cloud technologies continue to advance rapidly. Due to these factors, organizations are having a tough time keeping up with the tools, and the need for expertise continues to grow. Such challenges can be minimized through additional training of IT and development staff. Many companies are adopting automated cloud management technologies, but it is always better to train individuals to meet the needs of the time. Presently, DevOps tools like Chef and Puppet are heavily used in the IT industry.
Governance/Control: In cloud computing, infrastructure resources are under the CSP's control, and end users or companies have to abide by the CSP's governance policies. Traditional IT teams have no control over how and where their data is stored and processed. IT governance should assure how infrastructure assets from the CSP are being used. To overcome these downfalls and challenges when onboarding to the cloud, IT must adapt its orthodox way of governance and process control to incorporate the cloud. Now, IT is playing an important role in benchmarking cloud service requirements and policies. Thus, proper IT governance should ensure IT assets are implemented and used according to agreed-upon policies and procedures; ensure that these assets are properly controlled and maintained; and ensure that these assets support your organization's strategy and business goals.
Compliance: When organizations move their native data to a cloud, they need to comply with particular regulatory policies if the data comes from public sources. However, a cloud provider who will comply with these policies can be difficult to find, or one needs to negotiate on that front. Many CSPs are coming up with flexible compliance policies for data acquisition and cloud infrastructure. This is an issue for anyone using backup services or cloud storage: every time a company moves data from internal storage to a cloud, it is faced with being compliant with industry regulations and laws. Depending on the industry and requirements, every organization must ensure these standards are respected and carried out. This is one of the many challenges facing cloud computing, and although the procedure can take a certain amount of time, the data must be properly stored.
Managing Multiple Clouds: The challenges facing cloud computing aren't concentrated in one single cloud. The use of multi-cloud has grown exponentially in recent years, but managing multi-cloud infrastructure, as opposed to a single cloud, is very challenging given all the above data-driven challenges. Companies are shifting to or combining public and private clouds and, as mentioned earlier, tech giants like Alibaba and Amazon are leading the way. Approximately 81% of companies have multi-cloud strategies and a hybrid cloud structure (public and private clouds). Companies are opting for a multi-cloud scenario because some services are more cost-effective in the public cloud, and this model has been very successful at managing costs in recent years. However, managing such a highly networked architecture is a difficult task.
Performance: When a business moves to the cloud, it becomes dependent on the service providers, and the next prominent challenges of moving to cloud computing expand on this partnership. The performance of the organization's BI and other cloud-based systems is tied to the performance of the cloud provider and suffers when the provider falters: when your provider is down, you are also down. Cloud computing is an on-demand compute service and supports multitenancy, so performance should not suffer as new users are acquired. The CSP should maintain enough resources to serve all the users and any ad-hoc requests.
Building a Private Cloud: Creating an internal or private cloud brings a significant benefit: having all the data in-house. But IT managers and departments will need to build and glue it all together by themselves, which can make this one of the more difficult challenges of moving to cloud computing. Many tasks, such as obtaining an IP address from the cloud software layer, setting up a virtual local area network (VLAN), load balancing, setting firewall rules for the IP address, patching server software, and arranging the nightly backup queue, are quite complex for a private cloud. Although building a private cloud is not a top priority for many organizations, for those who are likely to implement such a solution it quickly becomes one of the main challenges facing cloud computing, and private solutions should be carefully addressed. Many companies plan to do so because the cloud will be on-premises and they will have full authority over the shared cloud resources.
Segmented Usage and Adoption: Most organizations did not have a robust cloud adoption strategy in place when
they started to move to the cloud. Instead, ad-hoc strategies sprouted, fueled by several components. One of them was
the speed of cloud adoption. Another one was the staggered expiration of data center contracts/equipment, which led to
intermittent cloud migration. Finally, there also were individual development teams using the public cloud for specific
applications or projects.
Migration: One of the main cloud computing industry challenges in recent years concerns migration, the process of moving an application to a cloud. Although moving a new application is a straightforward process, when it comes to moving an existing application to a cloud environment, many cloud challenges arise.

1.9 Challenges in Cloud Computing


Hosting and running the applications of the current cloud platform on some other platform may cause support issues,
configuration complexities, and additional expenses. The company data might also be left vulnerable to security attacks
due to compromises that might have been made during migrations. The various challenges being faced include:
Service Quality: Service quality should be good; it is a major concern of the end user. The whole ecosystem of cloud computing is presented in virtual environments, and thus the CSP should deliver what is promised in terms of service, be it compute resources or customer satisfaction.
Interoperability: A CSP's services should be flexible enough to integrate with platforms and services provided by other CSPs. The data pipeline should be easy to integrate and should drive improved performance. There are many challenges in cloud computing, such as big data analysis, long-haul transfer, and data transfer problems, but it is still the best computing resource available to date.
Availability and Reliability: Data and services from the CSP should be available at all times, irrespective of external conditions. Computing resources should be available to the users, and their operation should be reliable.
Portability: If users want to migrate from one CSP to another, the vendor should not lock in customer data or services, and the migration should be easy. There are different laws governing data in different countries.
Cloud Integration: Several companies, especially those with hybrid cloud environments, report issues with getting their on-premises apps and tools and the public cloud to work together. According to a survey, 62% of respondents cited integration of legacy systems as their biggest challenge in multi-cloud. Although combining new cloud-based apps and legacy systems needs resources, expertise, and time, several companies still consider that the perks of cloud computing outweigh the drawbacks of this technology.
Vendor Lock-in: Entering a cloud computing agreement is easier than leaving it. “Vendor lock-in” happens when changing providers is either excessively expensive or simply not possible. It could be that the service is nonstandard or that there is no viable vendor substitute. It is important to guarantee that the services you use are standard and portable to other providers and, above all, to understand the requirements. When in need of migrating from one cloud platform to another, a company might face some serious challenges because of the differences between vendor platforms.

1.10 Applications of Cloud Computing


Cloud Service Providers (CSPs) offer many types of cloud services, and it is fair to say that cloud computing has touched every sector by providing various cloud applications. The sharing and management of resources is easy in cloud computing, which is why it is one of the dominant fields of computing. Many of its properties have made it an active component in different fields.

1. Online Data Storage: Cloud computing allows data such as files, images, audio, and video to be stored on cloud storage. The organization need not set up physical storage systems to store a huge volume of business data, which is very costly nowadays. As organizations grow technologically, data generation also grows over time, and storing it becomes a problem. In that situation, cloud storage provides the ability to store and access data at any time, as required. Example: Google Drive, DropBox, iCloud, etc.
2. Backup and Recovery: Cloud vendors provide security by storing data safely as well as providing a backup facility for it. They offer various recovery applications for retrieving lost data. In the traditional way, backing up data is a very complex problem, and it is very difficult, sometimes impossible, to recover lost data. But cloud computing has made backup and recovery applications very easy, with no fear of running out of backup media or of losing data.
3. Big Data Analysis: The volume of big data is so high that storing it in a traditional data management system is impossible for an organization. Cloud computing has resolved that problem by allowing organizations to store their large volumes of data in cloud storage without worrying about physical storage. Next, analyzing the raw data and extracting insights or useful information from it is a big challenge, as it requires high-quality data analytics tools. Cloud computing provides the biggest facility to organizations in terms of storing and analyzing big data.
4. Anti-Virus Applications: Previously, organizations installed antivirus software within their own systems, just as individuals keep antivirus software on their personal systems for safety from outside cyber threats. Nowadays, however, cloud computing provides cloud antivirus software, which means the software is stored in the cloud and monitors your system or your organization's systems remotely. This antivirus software identifies security risks and fixes them. Sometimes it also provides a feature to download the software.
5. E-commerce Applications: Cloud-based e-commerce allows businesses to respond quickly to emerging opportunities. Users can respond to market opportunities and challenges more quickly than with traditional e-commerce. Cloud-based e-commerce gives a new approach to doing business at the minimum possible cost and in the minimum possible time. Customer data, product data, and other operational systems are managed in cloud environments.
6.Cloud computing in Education: Cloud computing in the education sector brings an
unbelievable change in learning by providing e-learning, online distance learning platforms, and
student information portals to the students. It is a new trend in education that provides an
attractive
environment for learning, teaching, experimenting, etc. to students, faculty members, and
researchers. Everyone associated with the field can connect to the cloud of their organization
and access data and information from there.
7. Technology-Enhanced Learning or Education as a Service (EaaS): The following education applications are offered by the cloud:

Example:
• Google Apps for Education: Google Apps for Education is the most widely used platform
for free web-based email, calendar, documents, and collaborative study.
• Chromebooks for Education: Chromebooks for Education is one of Google's most important projects. It is designed to enhance innovation in education.


• Tablets with Google Play for Education: It allows educators to quickly implement the latest technology solutions
into the classroom and make it available to their students.

8. Testing and Development: Setting up a platform for development and finally performing different types of testing to check the readiness of the product before delivery requires different types of IT resources and infrastructure. Cloud computing provides the easiest approach for development, testing, and even deployment, using its IT resources with minimal expense. Organizations find it helpful because they get scalable and flexible cloud services for product development, testing, and deployment.
9. E-Governance Applications: Cloud computing can provide its services to multiple activities conducted by the government. It can support the government in moving from traditional ways of managing and providing services to more advanced ones, by expanding the availability of the environment and making it more scalable and customizable. It can help the government reduce unnecessary costs in managing, installing, and upgrading applications, doing all this with the help of cloud computing and redirecting that money to public services.
10. Cloud Computing in Medical Fields: In the medical field, cloud computing is nowadays used for storing and accessing data, as it allows data to be stored and accessed through the Internet without worrying about any physical setup. It facilitates easier access to and distribution of information among the various medical professionals and individual patients. Similarly, with the help of cloud computing, information from offsite buildings and treatment facilities such as labs, doctors making emergency house calls, and ambulances can be easily accessed and updated remotely, instead of having to wait until a hospital computer is available.

11. Entertainment Applications: Many people get their entertainment from the Internet, and cloud computing is the perfect way to reach a varied consumer base. Therefore, different types of entertainment industries reach their target audience by adopting a multi-cloud strategy. Cloud-based entertainment provides various entertainment applications, such as online music and video, online games, video conferencing, and streaming services, and it can reach any device, be it a TV, mobile phone, set-top box, or any other form. This is a new form of entertainment called On-Demand Entertainment (ODE).
• Online games: Today, cloud gaming has become one of the most important entertainment media. It offers various online games that run remotely from the cloud. Well-known cloud gaming services include Shadow, GeForce Now, Vortex, Project xCloud, and PlayStation Now.
• Video conferencing apps: Video conferencing apps provide a simple and instantly connected experience. They allow us to communicate with our business partners, friends, and relatives using cloud-based video conferencing. The benefits of using video conferencing are that it reduces cost, increases efficiency, and removes interoperability issues.
12. Art Applications: Cloud computing offers various art applications for quickly and easily designing attractive cards, booklets, and images. Some of the most commonly used cloud art applications are given below:

• Moo: One of the best cloud art applications. It is used for designing & printing business
cards, postcards, & mini cards.
• Vistaprint: Vistaprint allows us to easily design various printed marketing products such as
business cards, Postcards, Booklets, and wedding invitations cards.
• Adobe Creative Cloud: Adobe Creative Cloud is made for designers, artists, filmmakers, and other creative professionals. It is a suite of apps that includes the Photoshop image editing program, Illustrator, InDesign, Typekit, and Dreamweaver.
13. Management Applications: Cloud computing offers various cloud management tools which help admins to manage all types of cloud activities, such as resource deployment, data integration, and disaster recovery. These management tools also provide administrative control over the platforms, applications, and infrastructure. Some important management applications are:
• Toggl: Toggl helps users to track the time allocated to a particular project.
• Evernote: Evernote allows you to sync and save your recorded notes, typed notes, and other notes in one convenient place. It is available in both a free and a paid version. It supports platforms like Windows, macOS, Android, iOS, browsers, and Unix.
• Outright: Outright is used by management users for accounting purposes. It helps to track income, expenses, profits, and losses in a real-time environment.
• GoToMeeting: GoToMeeting provides Video Conferencing and online meeting apps, which
allows you to start a meeting with your business partners from anytime, anywhere using
mobile phones or tablets. Using GoToMeeting app, you can perform the tasks related to the
management such as join meetings in seconds, view presentations on the shared screen, get
alerts for upcoming meetings, etc.
14. Social Applications: Social cloud applications allow a large number of users to connect with each other using social networking applications such as Facebook, Twitter, LinkedIn, etc. The following are cloud-based social applications:
• Facebook: Facebook is a social networking website which allows active users to share files, photos, videos, status updates, and more with their friends, relatives, and business partners using the cloud storage system. On Facebook, we always get notifications when our friends like and comment on our posts.
• Twitter: Twitter is a social networking site. It is a microblogging system. It allows users to
follow high profile celebrities, friends, relatives, and receive news. It sends and receives
short posts called tweets.
• LinkedIn: LinkedIn is a social network for students, freshers, and professionals.
The cloud market is growing rapidly, and it is providing more and more services day by day. So, in the future, cloud computing is going to touch many more sectors by providing more applications and services.

Summary
• Cloud computing offers various cloud management tools which help admins to manage all
types of cloud activities, such as resource deployment, data integration, and disaster recovery.
• Cloud computing refers to manipulating, configuring, and accessing the applications online.


• Cloud computing virtualizes systems by pooling and sharing resources. Systems and storage
can be provisioned as needed from a centralized infrastructure, costs are assessed on a metered
basis, multi-tenancy is enabled, and resources are scalable with agility.
• Cloud computing eliminates the need for IT infrastructure updates and maintenance since the
service provider ensures timely, guaranteed, and seamless delivery of our services and also
takes care of all the maintenance and management of our IT services according to the service-
level agreement (SLA).
• Cloud computing can be expensive if you don’t know how to manage your computing
resources and take maximum advantage of them.
• Cloud computing lets us deploy the service quickly in fewer clicks. This quick deployment lets
us get the resources required for our system within minutes.

Keywords
 Service Oriented Architecture (SOA): SOA is essentially a collection of services which communicate with each other. SOA provides a loosely-integrated suite of services that can be used within multiple business domains.
 Abstraction: Cloud computing abstracts the details of system implementation from users
and developers. Applications run on physical systems that aren't specified, data is stored in
locations that are unknown, administration of systems is outsourced to others, and access by
users is ubiquitous.
 Cloud: Cloud refers to a network or the Internet. A cloud is usually defined as a large group of interconnected computers. These computers include network servers or personal computers.
 Cloud computing: Cloud computing is a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computer resources (networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction.
 Cloud computing collaboration: The users from multiple locations within a corporation, and
from multiple organizations, desired to collaborate on projects that crossed company and
geographic boundaries. Projects had to be housed in the “cloud” of the Internet, and accessed
from any Internet-enabled location. Cloud-collaboration is also termed as Internet-based
group collaboration.
 Multitenancy: In cloud computing, multitenancy means that multiple customers of a cloud
vendor are using the same computing resources. Despite the fact that they share resources,
cloud customers aren't aware of each other, and their data is kept totally separate.
 Thick clients: Thick clients are regular computers, using a web browser like Firefox or Internet Explorer to connect to the cloud.

Cloud Computing Architecture and Models


Introduction
Cloud refers to a network or the Internet; the cloud is something that is present at a remote location. The cloud can provide services over a network, that is, over public or private networks such as Wide Area Networks (WANs), Local Area Networks (LANs), or Virtual Private Networks (VPNs). Applications such as e-mail, web conferencing, and customer relationship management (CRM) all run in the cloud.
Cloud computing is a subscription-based delivery model that provides scalability, fast delivery, and greater IT efficiency. The cloud has removed many physical and financial barriers to aligning IT needs with evolving business goals. With its promise to deliver better applications, platforms, and infrastructure quickly and cheaply, cloud computing has become a major force for business innovation across all industries.

2.1 Why Cloud Computing Matters


Technology has evolved fast over the last few years. People used to run businesses in different ways before the
invention of cloud computing. Like in other industries, technology advancement has an impact on the business as well.
Before businesses started relying on it, managers had to run applications from servers on their premises. Some even
had to hire extra IT experts to help them create their data centres. This reduced their profit margins because they had to
pay them a lot of cash. However, things have changed nowadays.
A major revolution has been brought to the business world by cloud technology. This technology has enabled businesses to run smoothly and to experience growth. Cloud technology leverages virtualization, and cloud services are maintained in remote data centres; Amazon Web Services is one example. This technology can be applied to all companies. So, how does cloud technology trigger business development? Let us find out below:
Cloud technology has changed the business world a lot. It has improved efficiency, productivity, and overall
performance. There are a few reasons why cloud technology matters for business growth:
Cost-Effectiveness:This technology is a game-changer because you do not have to hire extra IT experts or resources
to help you in data management. You will only pay for the services you need to remain competitive or enjoy growth in
your business. You do not pay for the services you no longer need. Businesses can grow fast when using this technology
because there will be no maintenance, labor, or purchasing costs. You only pay a monthly subscription fee for the
services your business needs from your cloud technology service provider.
Digitalizes Business:These days, businesses are switching to digital marketing strategies. There are many businesses
around the world. And for you to remain competitive, you have to embrace digital transformation. But this does not
mean going paperless only. Do you want your business to grow fast? Then, why don’t you make your business
operations digital? If you want to avoid losses during the digital migration, consider hiring a reliable cloud technology
company. Outsourcing the entire digital transformation process will cut the risk of improper migration. Do not stick to the traditional ways of operating, because they are costly, slow, and outdated.
Offers Better Data Backups and Recovery: Cloud technology offers a lot of benefits, but cloud storage tends to be
the greatest deal. Cloud storage can be accessed from anywhere, anytime, and with any device. Thus, businesses with
remote workers or offices can benefit a lot from this technology. Are you looking for a reliable business backup and
recovery process? Cloud technology is the ideal choice for you because it has an integrated process that keeps your
relevant business data safe all the time. You no longer need to worry about cyber-attacks, natural disasters, or physical theft, as your data is secured and protected. Your data is not stored in one place, because the service provider will distribute it to remote data centres. Even if a disaster hits your business premises, you will be able to retrieve the necessary data stored in the cloud and thus resume your operations normally, as if nothing had happened.
Seamless Scalability:This is another reason why you need cloud technology in your business. If your business uses
the IaaS cloud, it will be easier to subscribe to the whole IT infrastructure. Hence, you will not need to buy hardware
during the process. If there is a need to add or remove servers in your infrastructure, scaling up and down will be quick
and seamless.
Flexibility:Cloud-based services are ideal for businesses with growing or fluctuating bandwidth demands. If your
needs increase, it’s easy to scale up your cloud capacity, drawing on the service’s remote servers. Likewise, if you need
to scale down again, the flexibility is baked into the service. This level of agility can give businesses using cloud computing a real advantage over
competitors– it’s not surprising that businesses identify operational agility as a key reason for
cloud adoption.
Increasing Business Competitiveness:Many businesses are forced to be quick and efficient
when adapting to the marketplace changes because competition keeps rising up. They can benefit
a lot from the flexible and customizable cloud technology solutions. This may help them increase
their agility in their operations.
Geographical Dispersion:Cloud computing allows you to work anytime and from anywhere,
provided you have an internet connection. Since most established cloud services also offer mobile
apps, you’re not even restricted by which device you’ve got to hand.Businesses can offer more
flexible working perks to employees, so they can enjoy the work-life balance that suits them–
without productivity taking a hit. Home office is an attractive option for many employees and now,
thanks to cloud services, an increasingly accessible idea too.
For Developers:Cloud computing provides increased amounts of storage and processing power
to run the applications they develop. Cloud computing also enables new ways to access
information, process and analyze data, and connect people and resources from any location
anywhere in the world. In essence, it takes the lid off the box; with cloud computing, developers
are no longer boxed in by physical constraints.
For IT Departments:For IT departments, cloud computing offers more flexibility in computing
power, often at lower costs. With cloud computing, IT departments don't have to engineer for peak-
load capacity, because the peak load can be spread out among the external assets in the cloud. And,
because additional cloud resources are always at the ready, companies no longer have to purchase
assets (servers, workstations, and the like) for infrequent intensive computing tasks. If you need
more processing power, it's always there in the cloud—and accessible on a cost-efficient basis.
For End Users: For end users, cloud computing offers all these benefits and more. An individual using a web-based application isn't physically bound to a single PC, location, or network; his applications and documents can be accessed wherever he is, whenever he wants. Gone is the fear of losing data if a computer crashes. Documents hosted in the cloud always exist, no matter what happens to the user's machine.

2.2 Issues in Cloud Computing


Hosting and running the applications of the current cloud platform on some other platform may cause support issues,
configuration complexities, and additional expenses. The company data might also be left vulnerable to security attacks
due to compromises that might have been made during migrations. The various challenges being faced include:
Operational Complexity: One of the things that makes cloud services more difficult than traditional ones is the operational complexity that comes with the shift. Building the capabilities, software, and systems required to maintain a business in the cloud isn't always easy, which can make it difficult to scale and deliver products and services to an ever-growing market. This may not apply to those who just want to take advantage of cloud-based services, but it is a serious concern for those starting a cloud-based business.
Lack of Customization: While cloud-based software can be great for getting things done, it may lack one feature that
some businesses demand: customization. Generally, cloud-based software providers deliver a great product, but it’s also
a generic one meant to serve the needs of a wide range of businesses and individuals. For some companies who need
customized software to do business, this may simply not be an option.
Data Security and Privacy: Security has indeed been a primary, and valid, concern from the start of cloud computing
technology: you are unable to see the exact location where your data is stored or being processed. This increases the
cloud computing risks that can arise during the implementation or management of the cloud.When we say security and
privacy, we are talking about the user data that is stored on Cloud Service Providers (CSP) data centers. A CSP should
abide by the rules of not sharing confidential data or any data that matters to the users. The data centers must be secure
and privacy of the data should be maintained by a CSP. There is always concern about the actual location of your data,
where it is stored and processed. Before onboarding, yourself on the cloud computing platform one should always check
the data security and data recovery (in case of disaster) policy of the CSP.
The cloud can provide amazing security if done right, but not every vendor is of the same quality. Some may not offer services that can provide you with the level of security you need for sensitive data, and major leaks can occur anywhere, cloud or otherwise, if proper precautions are not taken. In terms of the security concerns of cloud technology, we do not find answers to some questions. Threats like website hacking and virus attacks are the biggest problems of cloud computing data security. Before using cloud computing technology for a business, entrepreneurs should think about these things. Once you transfer important data of your organization to a third party, you should make sure you have a cloud security and management system. Cybersecurity experts are more aware of cloud security than any other IT professionals. According to a Crowd Research Partners survey, 9 out of 10 cybersecurity experts are concerned about cloud security. They are also worried about violations of confidentiality, data privacy, and data leakage and loss.
Performance: When a business moves to the cloud, it becomes dependent on the service providers, and the next prominent
challenges of moving to cloud computing expand on this partnership. The performance of the organization's BI and other
cloud-based systems is tied to the performance of the cloud provider; when your provider is down,
you are also down. Cloud computing is an on-demand compute service and supports multitenancy, thus performance
should not suffer as new users are acquired. The CSP should maintain enough resources to serve all the users
and any ad-hoc requests.
Dealing with Multi-Cloud Environments: These days, most companies are not working
on only a single cloud. As per the RightScale report, nearly 84% of companies follow
a multi-cloud strategy and 58% already have a hybrid cloud tactic that combines
public and private clouds. Long-term predictions on the future of cloud computing
technology suggest even more difficulty for IT infrastructure teams. However,
professionals have also suggested best practices such as re-thinking procedures, training staff,
tooling, active vendor relationship management, and doing the homework.
Cloud Migration: Although releasing a new app in the cloud is a very simple procedure,
transferring an existing application to a cloud computing environment is tougher. According to one
report, 62% of respondents said that their cloud migration projects were tougher than they anticipated. Alongside
this, 64% of migration projects took more time than predicted and 55% went over budget.
In particular, organizations migrating their apps to the cloud reported downtime
during migration (37%), issues syncing data before cutover (40%), trouble getting migration
tools to work well (40%), slow data migration (44%), security configuration issues (46%), and time-
consuming troubleshooting (47%). To overcome these issues, nearly 42% of the IT experts
said they wished they had increased their budgets, around 45% wished they had employed an in-
house professional, 50% wanted to set a longer project duration, and 56% wished they had
performed more pre-migration testing.
Cloud Integration: Finally, several companies, especially those with hybrid cloud environments,
report issues in getting their on-premise apps and tools and the public cloud to work
together. According to one survey, 62% of respondents cited integration of legacy systems as their
biggest challenge in multi-cloud. Likewise, in a Software One report on cloud cost, 39% of those
assessed said integrating legacy systems was one of their biggest worries while utilizing the cloud.
Combining new cloud-based apps and legacy systems needs resources, expertise, and time.
Unauthorized Service Providers: Cloud is a new concept for most business organizations. A
normal businessman is not able to verify the genuineness of the service provider agency, and it is very
difficult to check whether the vendors meet the security standards or not. There is a
need for an ICT consultant to evaluate the vendors against worldwide criteria. It is necessary
to verify that the vendor has been operating this business for a sufficient time without any
negative record in the past, is continuing business without any data-loss complaint, and has a
number of satisfied clients. The market reputation of the vendor should be unblemished.
Hacking of Brand: Cloud involves some major risk factors like hacking. Some professional
hackers are able to hack applications by breaking through efficient firewalls and steal sensitive
information of the organizations. A cloud provider hosts numerous clients, and each can be affected by
actions taken against any one of them. When any threat comes into the main server, it affects all the
other clients as well, as in distributed denial-of-service attacks, where server requests inundate a
provider from widely distributed computers.
Cloud Management: Managing a cloud is not an easy task; it involves a lot of technical
challenges. Many dramatic predictions have been made about the impact of cloud computing. People
think that the traditional IT department will become outdated, but research supports the conclusion that
cloud impacts are likely to be more gradual and less linear. Cloud services can easily be changed and
updated by business users without any direct involvement of the IT department. It is the
service provider's responsibility to manage the information and spread it across the organization.
So, it is difficult to manage all the complex functionality of cloud computing.
Sustainability: Sustainability refers to minimizing the effect of cloud computing on the environment.
Indeed, citing the environmental effects of servers, data centers are increasingly sited in areas where the
climate favors natural cooling and renewable electricity is readily available. Countries with
favorable conditions, such as Finland, Sweden, and Switzerland, are trying to attract cloud
computing data centers. But beyond nature's favors, would these countries have enough
technical infrastructure to sustain high-end clouds?


Cost Management and Containment: Cloud computing can be expensive if you don't know how to manage your
computing resources and take maximum advantage of them. Many times, organizations dwell in a pay-as-you-go
mindset and spend more on cloud than they would have on on-premise infrastructure. One should always optimize the
cost through financial analytics and usage reporting for better monitoring of cost. For the most part, cloud computing can
save businesses money. In the cloud, an organization can easily ramp up its processing capabilities without making large
investments in new hardware. Businesses can instead access extra processing through pay-as-you-go models from
public cloud providers. However, the on-demand and scalable nature of cloud computing services sometimes makes it
difficult to define and predict quantities and costs.
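As a rough, purely illustrative sketch (the rates, hours, and fixed costs below are made-up assumptions, not real provider prices), the pay-as-you-go cost of a workload can be estimated and compared against a fixed on-premise cost before deciding where to run it:

# Hypothetical pay-as-you-go estimate vs. a fixed on-premise cost (all figures are assumptions).
def cloud_monthly_cost(vm_hours, rate_per_hour, storage_gb, rate_per_gb):
    # Metered cost: you pay only for what you actually consume.
    return vm_hours * rate_per_hour + storage_gb * rate_per_gb

def on_premise_monthly_cost(hardware_amortization, power_and_cooling, admin_cost):
    # Fixed cost: paid whether or not the capacity is used.
    return hardware_amortization + power_and_cooling + admin_cost

cloud = cloud_monthly_cost(vm_hours=300, rate_per_hour=0.05, storage_gb=200, rate_per_gb=0.02)
onprem = on_premise_monthly_cost(hardware_amortization=120.0, power_and_cooling=40.0, admin_cost=80.0)
print(f"Cloud: ${cloud:.2f}/month  On-premise: ${onprem:.2f}/month")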
Lack of Resources/Expertise: Cloud challenges companies and enterprises on the skills front. As the usage of cloud technologies
increases and the tools to manage them grow more sophisticated, finding experts in cloud computing is becoming a bottleneck for
many organizations. Organizations are placing more and more workloads in the cloud while cloud technologies
continue to advance rapidly. Due to these factors, organizations are having a tough time keeping up with the tools, and
the need for expertise continues to grow. Such challenges can be minimized through additional training of IT and
development staff. Many companies are adopting automated cloud management technologies, but it is always better to train
individuals to satisfy the need of the time. Presently, DevOps tools like Chef and Puppet are heavily used in the IT industry.
Governance/Control: In cloud computing, infrastructure resources are under the CSP's control and end-users or
companies have to abide by the governance policies of the CSP. The traditional IT teams have no control over how and
where their data is stored and processed. IT governance should assure how infrastructure assets from the CSP are being used. To
overcome the downfalls and challenges of onboarding to the cloud, IT must adapt its orthodox way of governance and
process control to include the cloud. Now, IT is playing an important role in benchmarking cloud service requirements
and policies. Thus, proper IT governance should ensure IT assets are implemented and used according to agreed-
upon policies and procedures, ensure that these assets are properly controlled and maintained, and ensure that these
assets are supporting your organization's strategy and business goals.
Compliance: When organizations move their native data to a cloud, they need to comply with particular general
body policies if the data is from public sources. However, a cloud provider who will comply with these policies is
difficult to find, or one needs to negotiate on that front. Many CSPs are coming up with flexible compliance policies for data
acquisition and cloud infrastructure. Compliance is an issue for anyone using backup services or cloud storage. Every time a company
moves data from internal storage to a cloud, it is faced with being compliant with industry regulations and
laws. Depending on the industry and requirements, every organization must ensure these standards are respected and
carried out. This is one of the many challenges facing cloud computing, and although the procedure can take a certain
amount of time, the data must be properly stored.
Building a Private Cloud: Creating an internal or private cloud brings a significant benefit: having all the data in-
house. But IT managers and departments will need to face building and gluing it all together by themselves, which can
make this one of the more difficult challenges of moving to cloud computing. Many tasks, such as grabbing an IP address
from the cloud software layer, setting up a virtual local area network (VLAN), load balancing, firewall rule-setting for the IP
address, patching server software, and arranging the nightly backup queue, are quite complex for a private
cloud. Although building a private cloud isn't a top priority for many organizations, for those who are likely to
implement such a solution, it quickly becomes one of the main challenges facing cloud computing, so private solutions
should be carefully addressed. Many companies plan to do so because the cloud will be on-premise and they will
have full authority over their data instead of sharing cloud resources.
Segmented Usage and Adoption: Most organizations did not have a robust cloud adoption strategy in place when they
started to move to the cloud. Instead, ad-hoc strategies sprouted, fueled by several components. One of them was the
speed of cloud adoption. Another was the staggered expiration of data center contracts/equipment, which led to
intermittent cloud migration. Finally, there were also individual development teams using the public cloud for specific
applications or projects.

2.3 Cloud Architecture


Cloud architecture indicates how individual technologies are integrated to create clouds: IT
environments that abstract, pool, and share scalable resources across a network. It describes how all the
components and capabilities necessary to build a cloud are connected in order to deliver an online
platform on which the applications can run.

A. Cloud Technologies
Certain technologies work behind the cloud computing platforms, making cloud
computing flexible, reliable, and usable. These technologies are listed below:

 Virtualization
 Service-Oriented Architecture (SOA)
 Grid Computing

 Utility Computing
Virtualization is a technique that allows sharing a single physical instance of an application or
resource among multiple organizations or tenants (customers). It does so by assigning a logical
name to a physical resource and providing a pointer to that physical resource when demanded (Figure 1).
The multitenant architecture offers virtual isolation among the multiple tenants, and therefore the organizations
can use and customize the application as though they each have their own instance running.
Figure 1: Virtualization Scenario
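A minimal sketch of this idea follows (purely illustrative, not the API of any real hypervisor or cloud platform): a pool maps logical names onto shared physical resources and returns the pointer to the physical resource only when it is demanded, so each tenant sees only its own logical view.

# Illustrative mapping of logical names to a shared physical resource (not a real virtualization API).
class ResourcePool:
    def __init__(self):
        self._physical = {}       # physical resources owned by the provider
        self._logical_map = {}    # logical name -> physical resource id

    def register_physical(self, resource_id, capacity_gb):
        self._physical[resource_id] = {"capacity_gb": capacity_gb}

    def assign_logical(self, logical_name, resource_id):
        # Each tenant is given a logical name; the physical resource stays hidden and shared.
        self._logical_map[logical_name] = resource_id

    def resolve(self, logical_name):
        # The "pointer" to the physical resource is provided on demand.
        return self._physical[self._logical_map[logical_name]]

pool = ResourcePool()
pool.register_physical("disk-01", capacity_gb=1000)
pool.assign_logical("tenant-a/storage", "disk-01")
pool.assign_logical("tenant-b/storage", "disk-01")   # same physical disk, isolated logical views
print(pool.resolve("tenant-a/storage"))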

Service-Oriented Architecture (SOA) helps to use applications as a service for other applications regardless of the type of
vendor, product or technology (Figure 2). Therefore, it is possible to exchange data between applications of different
vendors without additional programming or making changes to the services.
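For illustration, a consuming application only needs the agreed interface, here plain HTTP and JSON, to use another vendor's service; the endpoint URL below is a hypothetical example, so the call itself is left commented out.

# Illustrative SOA-style consumption: only the standard interface (HTTP + JSON) is shared,
# so no vendor-specific code is needed. The URL is hypothetical.
import json
import urllib.request

def get_customer(customer_id):
    url = f"https://services.example.com/crm/customers/{customer_id}"
    with urllib.request.urlopen(url) as response:       # standard transport
        return json.loads(response.read().decode())     # standard data format

# Any application, from any vendor, could consume the service the same way:
# customer = get_customer(42)
# print(customer["name"])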
Grid Computing is like distributed computing, in which a group of computers from multiple locations are connected with
each other to achieve a common objective. These computer resources are heterogeneous and geographically dispersed. Grid
computing breaks a complex task into smaller pieces, which are distributed to CPUs that reside within the
grid.

Figure 2: Service-Oriented Architecture (SOA)

Utility computing is based on a pay-per-use model. It offers computational resources on demand as a metered service. Cloud
computing, grid computing, and managed IT services are based on the concept of utility computing.
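A toy metering sketch (the unit rates and usage figures are illustrative assumptions, not any provider's actual pricing) captures the essence of the model: usage is measured, and the bill is proportional to consumption.

# Toy pay-per-use meter; rates and usage figures are illustrative assumptions only.
RATES = {"cpu_hours": 0.04, "storage_gb_month": 0.02, "egress_gb": 0.08}

def monthly_bill(usage):
    # The bill is metered usage multiplied by the unit rate for each resource, summed up.
    return sum(RATES[resource] * amount for resource, amount in usage.items())

usage = {"cpu_hours": 720, "storage_gb_month": 150, "egress_gb": 30}
print(f"Amount due: ${monthly_bill(usage):.2f}")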

B. Cloud Architecture Ends


Cloud computing architecture comprises many cloud components, each of which is loosely coupled. We can broadly
divide the cloud architecture into two parts:

 Front End
 Back End
Each of the ends is connected through a network, usually via Internet.


Front end refers to the client part of the cloud computing system. It consists of the interfaces and
applications that are required to access the cloud computing platforms, e.g., a web browser.
Back end refers to the cloud itself. It consists of all the resources required to provide cloud
computing services. It comprises huge data storage, virtual machines, security mechanisms,
services, deployment models, servers, etc.

C. Cloud Layered Architecture


Cloud architecture describes its working mechanism. It includes the dependencies on which it
works and the components that work over it. Cloud is a recent technology that is completely
dependent on the Internet for its functioning. Cloud architecture can be divided into four layers
based on the access of the cloud by the user (Figure 3).

Figure 3: Layered Cloud Architecture

a) Layer 1 (User/Client Layer)


 This layer is the lowest layer in the cloud architecture. All the users or clients belong to this
layer. This is the place where the client/user initiates the connection to the cloud. The client
can be any device such as a thin client, thick client, or mobile or any handheld device that
would support basic functionality to access a web application.
 Thin client here refers to a device that is completely dependent on some other system for its
complete functionality. In simple terms, they have very low processing capability.
 Similarly, thick clients are general computers that have adequate processing capability. They
have sufficient capability for independent work.
 Usually, a cloud application can be accessed in the same way as a web application. But
internally, the properties of cloud applications are significantly different. Thus, this layer
consists of client devices.
b) Layer 2 (Network Layer)
 This layer allows the users to connect to the cloud. The whole cloud infrastructure is
dependent on this connection, through which the services are offered to the customers. This is
primarily the Internet in the case of a public cloud. The public cloud usually exists in a specific
location, but the user would not know the location as it is abstracted. A public cloud can be
accessed all over the world.
 In the case of a private cloud, the connectivity may be provided by a local area network
(LAN).


 Even in this case, the cloud completely depends on the network that is used.
 Usually, when accessing the public or private cloud, the users require minimum bandwidth, which is sometimes defined
by the cloud providers.
 This layer does not come under the purview of Service Level Agreements (SLAs), that is, SLAs do not take into account the
Internet connection between the user and cloud for quality of service (QoS).
c) Layer 3 (Cloud Management Layer)

 Layer 3 consists of software that is used in managing the cloud. The software can be a cloud OS, software that acts as an
interface between the data center (actual resources) and the user, or management software that allows managing
resources. This software usually allows resource management (scheduling, provisioning, etc.), optimization (server
consolidation, storage workload consolidation), and internal cloud governance.
 This layer comes under the purview of SLAs, that is, the operations taking place in this layer
would affect the SLAs that are being decided upon between the users and the service providers.
 Any delay in processing or any discrepancy in service provisioning may lead to an SLA violation.
 As per rules, any SLA violation would result in a penalty to be given by the service provider.
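A minimal sketch of such a check follows; the threshold and penalty rule are illustrative assumptions, not terms from any actual SLA.

# Illustrative SLA check: the agreed limit and penalty are assumed values for demonstration.
SLA = {"max_provisioning_seconds": 120, "penalty_per_violation": 50.0}

def check_sla(provisioning_times_seconds):
    # Count requests that exceeded the agreed provisioning time and compute the penalty owed.
    violations = [t for t in provisioning_times_seconds if t > SLA["max_provisioning_seconds"]]
    return len(violations), len(violations) * SLA["penalty_per_violation"]

violations, penalty = check_sla([45, 80, 150, 110, 200])
print(f"{violations} violations; provider owes ${penalty:.2f}")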

d) Layer 4 (Hardware Resource Layer)

 Layer 4 consists of provisions for actual hardware resources. Usually, in the case of a public cloud, a data center is used in
the back end.
 Similarly, in a private cloud, it can be a data center, which is a huge collection of hardware resources interconnected to each
other that is present in a specific location or a high configuration system.
 This layer comes under the purview of SLAs. This is the most important layer that governs the SLAs; it affects the SLAs
most in the case of data centers.
 Whenever a user accesses the cloud, it should be available to the users as quickly as possible and should be within the time
that is defined by the SLAs.
 If there is any discrepancy in provisioning the resources or application, the service provider has to pay the penalty. Hence,
the datacenter consists of a high-speed network connection and a highly efficient algorithm to transfer the data from the
datacenter to the manager.
 There can be a number of datacenters for a cloud, and similarly, a number of clouds can share a datacenter.

D. Cloud Deployment Models


A cloud infrastructure may be operated in one of the following deployment models:

 public cloud,
 private cloud,
 community cloud, or
 hybrid cloud.


Figure 4: Public Cloud Scenarios

The differences are based on how exclusive the computing resources are made to a Cloud
Consumer.

• Public Cloud: Public cloud is one in which the cloud infrastructure and computing resources are
made available to the general public over a public network (Figure 4). A public cloud is owned by
an organization selling cloud services, and serves a diverse pool of clients.

• Private Cloud: Private cloud gives a single Cloud Consumer's organization the exclusive access
to and usage of the infrastructure and computational resources. It may be managed either by the
cloud consumer organization and hosted on the organization's premises (that is, on-site private
clouds, depicted in Figure 5), or by a third party, outsourced to a hosting company (that is,
outsourced private clouds, depicted in Figure 6).

Figure 5: On-site Private Cloud

Figure 6: Outsourced Private Cloud

• Hybrid Cloud: Hybrid cloud (Figure 7) is a composition of two or more clouds (on-site private,
on-site community, off-site private, off-site community or public) that remain as distinct entities
but are bound together by standardized or proprietary technology that enables data and
application portability.

Figure 7: Hybrid Cloud Model Scenario 1

Figure 8: Hybrid Cloud Model Scenario 2

• Community Cloud: Community cloud serves a group of cloud consumers which have shared concerns such as
mission objectives, security, privacy and compliance policy, rather than serving a single organization as does a private cloud
(Figure 9). Similar to private clouds, a community cloud may be managed by the organizations themselves and implemented on
customer premises (that is, an on-site community cloud), or by a third party, outsourced to a hosting company (that is, an
outsourced community cloud).
Figure 9: Community Cloud Model

An on-site community cloud is comprised of a number of participant organizations (Figure 10 and
Figure 11). A cloud consumer can access the local cloud resources, and also the resources of other
participating organizations, through the connections between the associated organizations.

Figure 10: On-site Community Cloud Scenario

Figure 11: Outsourced Community Cloud


2.4 Cloud Business Models


Cloud business models are all built on top of cloud computing, a concept that took hold around 2006
when then-Google CEO Eric Schmidt mentioned it. It is often not clear just what "the cloud" actually is,
how it helps existing businesses, or how entrepreneurs can use it to start or augment a new
business. Most cloud-based business models can be classified as cloud services delivery. The
models are primarily monetized via subscriptions, but pay-as-you-go revenue
models and hybrid models (subscriptions + pay-as-you-go) are also used.

I. NIST Cloud Computing Reference Model


NIST's long-term goal is to provide leadership and guidance around the cloud computing paradigm to catalyze its use
within industry and government. NIST aims to shorten the adoption cycle, which will enable near-term cost savings and
increased ability to quickly create and deploy safe and secure enterprise solutions. NIST aims to foster cloud computing
practices that support interoperability, portability, and security requirements that are appropriate and achievable for
important usage scenarios.
Cloud computing is a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable
computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released
with minimal management effort or service provider interaction. This cloud model promotes availability and is composed of five
essential characteristics, three service models, and four deployment models.
Five Essential Characteristics
• On-demand Self-service: A consumer can unilaterally provision computing capabilities, such as server time and network
storage, as needed automatically without requiring human interaction with each service’s provider.
• Broad Network Access: Capabilities are available over the network and accessed through standard mechanisms that promote
use by heterogeneous thin or thick client platforms (e.g., mobile phones, laptops, and personal digital assistants [PDAs]).
• Resource Pooling: The provider’s computing resources are pooled to serve multiple consumers using a multi-tenant model,
with different physical and virtual resources dynamically assigned and reassigned according to consumer demand. There is
a sense of location independence in that the customer generally has no control or knowledge over the exact location of the
provided resources but may be able to specify location at a higher level of abstraction (e.g., country, state, or datacenter).
Examples of resources include storage, processing, memory, network bandwidth, and virtual machines.
• Rapid Elasticity: Capabilities can be rapidly and elastically provisioned, in some cases automatically, to quickly scale out
and rapidly released to quickly scale in. To the consumer, the capabilities available for provisioning often appear to be
unlimited and can be purchased in any quantity at any time.
• Measured Service: Cloud systems automatically control and optimize resource use by leveraging a metering capability at
some level of abstraction appropriate to the type of service (e.g., storage, processing, bandwidth, and active user accounts).
Resource usage can be monitored, controlled, and reported, providing transparency for both the provider and consumer of
the utilized service.

A. Cloud Service Models


 Software as a Service (SaaS): The capability provided to the consumer is to use the provider's applications running on a cloud
infrastructure. The applications are accessible from various client devices through a thin client interface such as a Web
browser (e.g., Web-based email). The consumer does not manage or control the underlying cloud infrastructure including
network, servers, operating systems, storage, or even individual application capabilities, with the possible exception of
limited user-specific application configuration settings.
 Platform as a Service (PaaS): The capability provided to the consumer is to deploy onto the
cloud infrastructure consumer-created or acquired applications created using programming languages and tools supported
by the provider. The consumer does not manage or control the underlying cloud infrastructure including network, servers,
operating systems, or storage, but has control over the deployed applications and possibly application hosting environment
configurations.
 Infrastructure as a Service (IaaS): The capability provided to the consumer is to provision
processing, storage, networks, and other fundamental computing resources where the
consumer is able to deploy and run arbitrary software, which can include operating systems
and applications. The consumer does not manage or control the underlying cloud infrastructure
but has control over operating systems, storage, deployed applications, and possibly limited
control of select networking components (e.g., host firewalls).
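As a concrete sketch of IaaS-style provisioning, the snippet below requests a small virtual machine from Amazon EC2 through the boto3 SDK; the AMI ID, region, and instance type are placeholders, and configured AWS credentials are assumed.

# Sketch of provisioning a virtual machine on an IaaS offering (AWS EC2 via boto3).
# The AMI ID is a placeholder; valid AWS credentials are assumed to be configured.
import boto3

ec2 = boto3.resource("ec2", region_name="us-east-1")

instances = ec2.create_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder machine image
    InstanceType="t2.micro",           # small, pay-per-use instance size
    MinCount=1,
    MaxCount=1,
)
print("Launched instance:", instances[0].id)
# The consumer now controls the operating system and software on this VM,
# while the provider manages the underlying physical infrastructure.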

B. Cloud Deployment Models

• Private Cloud:The cloud infrastructure is operated solely for an organization. It may be


managed by the organization or a third party and may exist on premise or off premise.
• Community Cloud: The cloud infrastructure is shared by several organizations and supports a
specific community that has shared concerns (example: mission, security requirements, policy,
and compliance considerations). It may be managed by the organizations or a third-party and
may exist on premise or off premise.
• Public Cloud: The cloud infrastructure is made available to the general public or a large
industry group and is owned by an organization selling cloud services.
• Hybrid Cloud: The cloud infrastructure is a composition of two or more clouds (private,
community, or public) that remain unique entities but are bound together by standardized or
proprietary technology that enables data and application portability (e.g., cloud bursting for
load balancing between clouds).

C. Actors in Cloud Computing Reference Model


There are certain actors that form an important part of the cloud computing reference model (Figure
13 and Figure 14). Figure 15 depicts the interactions among the actors of cloud computing.

Figure 13: Actors in Cloud Computing Reference Model


Figure 14: NIST Cloud Computing Reference Model

Figure 15: Interactions Among Actors in Cloud Computing

Cloud consumer may request service from a cloud broker instead of contacting a cloud provider
directly. Cloud broker may create a new service by combining multiple services or by enhancing
an existing service. The actual cloud providers are invisible to the cloud consumer and the cloud
consumer interacts directly with the cloud broker. Cloud carriers provide the connectivity and
transport of cloud services from cloud providers to cloud consumers. As illustrated in Figure 16, a
cloud provider participates in and arranges for two unique service level agreements (SLAs), one
with a cloud carrier (e.g., SLA2) and one with a cloud consumer (e.g., SLA1).

Figure 16: SLA Management Between Cloud Consumer and Cloud Carrier


A cloud provider arranges service level agreements (SLAs) with a cloud carrier and may request
dedicated and encrypted connections to ensure the cloud services are consumed at a consistent
level according to the contractual obligations with the cloud consumers. In this case, the provider
may specify its requirements on capability, flexibility and functionality in SLA2 in order to provide
essential requirements in SLA1. For a cloud service, a cloud auditor conducts independent
assessments of the operation and security of the cloud service implementation. The audit may
involve interactions with both the cloud consumer and the cloud provider.
Cloud consumer is a principal stakeholder for cloud computing service. It can be a person or
organization that maintains a business relationship with, and uses the service from a cloud
provider. Cloud consumer browses the service catalogue from a cloud provider, requests the
appropriate service, sets up service contracts with the cloud provider, and uses the service. Cloud
consumer may be billed for the service provisioned, and needs to arrange payments accordingly.
Cloud provider can be a person or an organization. It is an entity responsible for making a service
available to interested parties. A cloud provider can acquire and manage the computing
infrastructure required for providing the services, run the cloud software that provides the
services, and make arrangements to deliver the cloud services to the Cloud Consumers through
network access. A cloud provider's activities can be described in five major areas:

• service deployment,
• service orchestration,
• cloud service management,
• security, and
• privacy.
Service orchestration refers to the composition of system components to support the cloud
provider's activities in the arrangement, coordination and management of computing resources in order
to provide cloud services to cloud consumers. Cloud service management includes all of the service-
related functions that are necessary for the management and operation of those services required by
or proposed to cloud consumers.
Cloud auditor is a party that can perform an independent examination of cloud service controls
with the intent to express an opinion thereon. Audits are performed to verify conformance to
standards through review of objective evidence. Cloud auditor can evaluate the services provided
by a cloud provider in terms of security controls, privacy impact, performance, etc. An auditor may
ensure that fixed content has not been modified and that the legal and business data archival
requirements have been satisfied. As cloud computing evolves, the integration of cloud services
can be too complex for cloud consumers to manage. Cloud consumer may request cloud services
from a cloud broker, instead of contacting a cloud provider directly.
Cloud broker is an entity that manages the use, performance and delivery of cloud services and
negotiates relationships between cloud providers and cloud consumers. A cloud broker can provide
services in three categories:

• Service Intermediation: A cloud broker enhances a given service by improving some specific
capability and providing value-added services to cloud consumers. The improvement can be
managing access to cloud services, identity management, performance reporting, enhanced
security, etc.
• Service Aggregation: A cloud broker combines and integrates multiple services into one or
more new services. The broker provides data integration and ensures the secure data
movement between the cloud consumer and multiple cloud providers.
• Service Arbitrage: Service arbitrage is similar to service aggregation except that the services
being aggregated are not fixed. Service arbitrage means a broker has the flexibility to choose
services from multiple agencies. The cloud broker, for example, can use a credit-scoring
service to measure and select an agency with the best score.
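A tiny sketch of the arbitrage idea (provider names, scores, and prices are made-up values): the broker filters the interchangeable providers by score and routes the request to the cheapest eligible one.

# Illustrative service arbitrage; provider names, scores, and prices are made-up values.
providers = [
    {"name": "provider-a", "credit_score": 82, "price_per_call": 0.012},
    {"name": "provider-b", "credit_score": 91, "price_per_call": 0.015},
    {"name": "provider-c", "credit_score": 77, "price_per_call": 0.009},
]

def pick_provider(candidates, min_score=80):
    # Keep only providers above the score threshold, then choose the cheapest of them.
    eligible = [p for p in candidates if p["credit_score"] >= min_score]
    return min(eligible, key=lambda p: p["price_per_call"])

best = pick_provider(providers)
print("Broker routes the request to:", best["name"])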
Cloud carrier acts as an intermediary that provides connectivity and transport of cloud services
between cloud consumers and cloud providers. Cloud carriers provide access to consumers
through network, telecommunication and other access devices. For example, cloud consumers can
obtain cloud services through network access devices, such as computers, laptops, mobile phones,
mobile Internet devices (MIDs), etc. The distribution of cloud services is normally provided by
network and telecommunication carriers or a transport agent, where a transport agent refers to a
business organization that provides physical transport of storage media such as high-capacity
hard drives.

Summary
• Cloud computing signifies a major change in the way we run various applications and store our
information. Everything is hosted in the “cloud”, a vague assemblage of computers and servers
accessed via the Internet, instead of the method of running programs and data on a single
desktop computer.
• With cloud computing, the software programs you use are stored on servers accessed via the
Internet and are not run from your personal computer. Hence, even if your computer stops
working, the software is still available for use.
• The “cloud” itself is the key to the definition of cloud computing. The cloud is usually defined
as a large group of interconnected computers. These computers include network servers or
personal computers.
• Cloud computing has its ancestors both as client/server computing and peer-to-peer
distributed computing. It is all about how centralized storage of data and content facilitates
collaborations, associations and partnerships.
• With cloud storage, data is stored on multiple third-party servers, rather than on the dedicated
servers used in traditional networked data storage.
• Cloud storage system stores multiple copies of data on multiple servers and in multiple
locations. If one system fails, then it only requires changing the pointer to stored object's
location.
• Jericho Forum has designed the Cloud Cube Model to help select cloud formations for secure
collaboration.

Keywords
Cloud: The cloud is usually defined as a large group of interconnected computers. These computers
include network servers or personal computers.


Distributed Computing: Distributed computing is any computing that involves multiple computers
remote from each other, each of which has a role in a computation problem or information processing.
Group Collaboration Software: It provides tools for groups of people or organizations to share
information and coordinate activities.
Infrastructure as a Service (IaaS): The capability provided to the consumer is to provision processing,
storage, networks, and other fundamental computing resources where the consumer is able to
deploy and run arbitrary software, which can include operating systems and applications.
Private Cloud: The cloud infrastructure is operated solely for an organization. It may be managed by
the organization or a third party and may exist on premise or off premise.

Cloud Services

Introduction
Cloud services refer to any IT services that are provisioned and accessed from a cloud computing
provider. Cloud computing is a broad term that incorporates all delivery and service models of
cloud computing and related solutions. Cloud services are delivered over the internet and
accessible globally from the internet. Cloud computing service and deployment models describe
how the service delivery is carried out in cloud computing. These indicate the topological layouts
for cloud computing. The entities basically correspond to the operational components in cloud
computing.
Cloud services provide many IT services traditionally hosted in-house, including provisioning an
application/database server from the cloud, replacing in-house storage/backup with cloud storage
and accessing software and applications directly from a web browser without prior
installation. Cloud services provide great flexibility in provisioning, duplicating and scaling
resources to balance the requirements of users, hosted applications and solutions. Cloud services
are built, operated and managed by a cloud service provider, which works to ensure end-to-end
availability, reliability and security of the cloud.

3.1 Classification of Cloud Services


Cloud services are provisioned through the cloud service models. Currently, cloud service models
can be abstracted into two high-level classifications, namely, archetypal and hybrid cloud
service models.
a) Classic or Archetypal Category
The classic or archetypal category pertains to the three original cloud service models (Figure 1):

 Software-as-a-Service (SaaS): SaaS is a software delivery model that helps users to access applications through a simple
interface over the Internet. The providers of SaaS possess total control on the applications and they enable the users to access
them. The users have an illusion as if their applications are locally hosted without being bothered of the application
background details. The typical SaaS examples are social media platforms, email boxes, Facebook, Google Apps etc.
 Platform-as-a-Service (PaaS): PaaS model is more of an urbane model that offers building, testing,
deployment, and hosting environments for applications created by users or otherwise acquired from them. The prominent
platforms of Microsoft Azure, Google App Engine are perfect PaaS cloud models.
 Infrastructure-as-a-Service (IaaS): IaaS is an undemanding model for delivering the cloud service. It provides an actual
physical infrastructure support that includes computing, storing, networking, and other primary resources to the users. The
users benefit by renting resources from the IaaS providers and using them on demand instead of incurring their own
infrastructure. Examples- Amazon EC2, Nimbus etc.

Figure 1: Cloud Classic Service Models

Figure 2: Cloud Service Models

Infrastructure-as-a-Service (IaaS)

• Provides virtual machines, virtual storage, virtual infrastructure, and other hardware assets as
resources that clients can provision. The service provider manages all of the infrastructure, while the client
is responsible for all other aspects of the deployment.
• Can include the operating system, applications, and user interactions with the system. Figure 3
shows the different concepts associated with IaaS.

Figure 3: Different Concepts Associated with IaaS

Enterprise Infrastructure: Such infrastructure is offered by internal business networks, such as
private clouds and virtual local area networks, which utilize pooled server and networking
resources and in which a business can store their data and run the applications they need to operate
day-to-day. Expanding businesses can scale their infrastructure in accordance with their
growth, whilst private clouds (accessible only by the business itself) can protect the storage and
transfer of the sensitive data that some businesses are required to handle.
Cloud Hosting: The hosting of websites on virtual servers which are founded upon pooled
resources from underlying physical servers. A website hosted in the cloud, for example, can benefit
from the redundancy provided by a vast network of physical servers and on demand scalability to
deal with unexpected demands placed on the website.
Virtual Data Centers (VDC): A virtualized network of interconnected virtual servers which can be
used to offer enhanced cloud hosting capabilities, enterprise IT infrastructure or to integrate all of
these operations within either a private or public cloud implementation.
Advantages of IaaS: Figure 4 lists the different advantages of IaaS.


Figure 4: Advantages of IaaS

• Scalability: Resource is available as and when the client needs it and, therefore, there are no
delays in expanding capacity or the wastage of unused capacity
• No Investment in Hardware: The underlying physical hardware that supports an IaaS service is
set up and maintained by the cloud provider, saving the time and cost of doing so on the client
side
• Utility Style Costing: The service can be accessed on demand and the client only pays for the
resource that they actually use.
• Location Independence: The service can usually be accessed from any location as long as there
is an internet connection and the security protocol of the cloud allows it.

• Physical Security of Data Centre Locations: The services available through a public cloud, or
private clouds hosted externally with the cloud provider, benefit from the physical security
afforded to the servers which are hosted within a data center.
• No Single Point of Failure: If one server or network switch, for example, were to fail, the
broader service would be unaffected due to the remaining multitude of hardware resources
and redundancy configurations. For many services, even if one entire data center were to go offline,
never mind one server, the IaaS service could still run successfully.

Uses of IaaS: IaaS is useful in the following situations:

• Where Demand is Very Volatile: any time there are significant spikes and troughs in demand
on the infrastructure, e.g., amazon.in, Snapdeal, and Flipkart during the festival season.
• For a new enterprise without capital to invest in hardware, e.g., entrepreneurs starting on a
shoestring budget.
• Where the enterprise is growing rapidly and scaling hardware would be problematic, e.g., a
company that experiences huge success immediately, such as Animoto or Pinterest.
• For a specific line of business, trial, or temporary infrastructural needs.


Examples of IaaS

o Amazon Web Services: A public cloud that offers subscribers access to virtual servers for product deployment, Cloud
storage, tools for development, testing, and analytics. The application provides a ready-to-use environment to
develop and test the product and offers the full cloud infrastructure for its deployment and maintenance.
o Microsoft Azure: Combination of IaaS and platform as a service, the software offers 100+ services for software
development, administration, and deployment, provides tools for working with innovative technologies (big data,
machine learning, Internet of Things), etc.
o IBM Infrastructure: IBM uses its in-house services to store the data of infrastructure users, enabling remote data
access via Cloud computing. IBM servers support AI, blockchain, and the Internet of Things. The infrastructure also
provides Cloud storage and virtual development environments, enabled on the subscription basis.
o Google Cloud Infrastructure: The large network of international servers that provides users access to remote Cloud
data centers. Companies can store their information in Asia, Europe, Latin America, which minimizes the risk of a
security breach.
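As an illustration of consuming IaaS-style cloud storage, the sketch below uploads a local backup file to an Amazon S3 bucket using the boto3 SDK; the bucket name and file path are placeholders, and configured AWS credentials are assumed.

# Sketch of replacing an in-house backup with cloud object storage (Amazon S3 via boto3).
# Bucket name and file path are placeholders; AWS credentials are assumed to be configured.
import boto3

s3 = boto3.client("s3")

def backup_to_cloud(local_path, bucket, key):
    # Upload a local file to object storage; storage is billed only for what is actually used.
    s3.upload_file(local_path, bucket, key)
    print(f"Uploaded {local_path} to s3://{bucket}/{key}")

backup_to_cloud("db-backup.sql", "example-backup-bucket", "backups/db-backup.sql")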

Platform-as-a-Service (PaaS)
PaaS is a category of cloud computing service that provides a platform and environment to allow developers to build
applications and services over the internet (Figure 5). PaaS services are hosted in the cloud and accessed by users simply
via their web browser. PaaS allows the users to create software applications using tools supplied by the provider. PaaS
services can consist of preconfigured features that customers can subscribe to; they can choose to include the features that
meet their requirements while discarding those that do not.

Figure 5: PaaS Offerings

Cloud consumers do not manage or control the underlying cloud infrastructure including network,
servers, operating systems, or storage, but have control over the deployed applications and possibly
configuration settings for the application-hosting environment. PaaS is expected to grow more
than 3,000% by 2026, from $1.78b to $68.38b, more than double the expected growth of SaaS
during the same period. Other PaaS characteristics are:
PaaS provides virtual machines, operating systems, applications, services, development
frameworks, transactions, and control structures. The clients can deploy their applications on the
cloud infrastructure or use applications that were programmed using languages and tools that are
supported by the PaaS service provider. The service provider manages the cloud infrastructure,
the operating systems, and the enabling software; the client is responsible for installing and
managing the application that it is deploying. PaaS also provides a programming IDE so that developers
can build their services on the PaaS itself; the IDE integrates the full functionality supported by the
underlying runtime environment and offers development tools such as a profiler, a debugger, and a testing
environment. Examples of PaaS service providers: Microsoft Windows Azure, Google App
Engine, Hadoop, etc.
PaaS providers can assist developers from the conception of their original ideas to the creation of
applications, and through to testing and deployment. This is all achieved in a managed
mechanism. As with most cloud offerings, PaaS services are generally paid for on a subscription
basis, with clients ultimately paying just for what they use. The following are typical PaaS offerings:

o Operating System
o Coding & Server-side Scripting Environment



o Database Management System
o Server Software
o Support and Hosting
o Storage & Network Access
o Tools for Design and Development
How PaaS Works
PaaS allows users to create software applications using tools supplied by the provider.PaaS services
can consist of preconfigured features that customers can subscribe to; they can choose to include
the features that meet their requirements while discarding those that do not.The infrastructure and
applications are managed for customers and support is available. The services are constantly
updated, with existing features upgraded and additional features added.
Advantages of PaaS: PaaS helps to create an abstracted environment that supports an efficient,
cost-effective, and repeatable process for the creation and deployment of high-quality
applications. The focus is on development, not operations; for example:

o Programmers’ development environment


o Presentation layer: HTML, CSS, JavaScript
o Control layer: Web Server code
o Data layer: Data Model

o Optionally, analytics

Uses of PaaS: PaaS has the following uses:

• Users don’t Need to Invest in Physical Infrastructure: Being able to ‘rent’ virtual infrastructure has both cost benefits
and practical benefits. They don’t need to purchase hardware themselves or employ the expertise to manage it. This
leaves them free to focus on the development of applications.
• Makes development possible for ‘non-experts’: with some PaaS offerings anyone can develop an application. They
can simply do this through their web browser utilizing one-click functionality, example, WordPress.
• Flexibility: customers can have control over the tools that are installed within their platforms and can create a platform
that suits their specific requirements. They can ‘pick and choose’ the features they feel are necessary.
• Adaptability: Features can be changed if circumstances dictate that they should.
• Teams in various locations can work together: As an internet connection and web browser are all that is required,
developers spread across several locations can work together on the same application build.
• Security: Security is provided, including data security and backup and recovery.

Examples of PaaS (Figure 6)

Figure 6: Examples of PaaS


o AWS Elastic Beanstalk: A web platform for software deployment and management, powered by the AWS Cloud.
Users upload their applications to the service, and it automatically monitors the performance, load capacity, and checks
for deployment errors.
o Apache Stratos: Cloud computing platform for arranging PHP and MySQL. The PaaS provides users with ready-
to-use tools for database development and testing, performance monitoring, integration, and billing.
o Magento Commerce Cloud: Magento cloud offers tools for e-commerce development, testing, deployment, and
maintenance. The Cloud environment allows accessing the store settings anytime and anywhere as well as automates
the key processes.


Software-as-a-Service (SaaS)
SaaS facilitates a complete operating environment with applications, management, and the user interface. The
applications are accessible from various client devices through a thin client interface such as a web browser (e.g., web-
based email). The consumer does not manage or control the underlying cloud infrastructure including network,
servers, OSs, storage, or even individual application capabilities. Examples: Google Apps, SalesForce.com, EyeOS, etc.
SaaS describes any cloud service where consumers are able to access software applications over the internet. The
applications are hosted in "the cloud" and can be used for a wide range of tasks for both individuals and organizations.
Google, Twitter, Facebook and Flickr are all examples of SaaS, with users able to access the services via any internet-
enabled device (Figure 7). Enterprise users are able to use applications for a range of needs, including accounting
and invoicing, tracking sales, planning, performance monitoring and communications (including webmail and instant
messaging).

Figure 7: SaaS Offerings

SaaS is often referred to as software-on-demand and utilizing it is akin to renting software rather
than buying it. The SaaS users, however, subscribe to the software rather than purchase it, usually
on a monthly basis. The applications are purchased and used online with files saved in the cloud
rather than on individual computers.
Advantages of SaaS

• No Additional Hardware Costs: Processing power required to run the applications is supplied
by the cloud provider.
• No Initial Setup Costs: Applications are ready to use once the user subscribes.
• Pay for What You Use: If a piece of software is only needed for a limited period then it is only
paid for over that period and subscriptions can usually be halted at any time.
• Usage is Scalable: If a user decides they need more storage or additional services, for example,
then they can access these on demand without needing to install new software or hardware.
• Updates are Automated: Whenever there is an update it is available online to existing
customers, often free of charge. No new software will be required as it often is with other types
of applications and the updates will usually be deployed automatically by the cloud provider.
• Cross-Device Compatibility: SaaS applications can be accessed via any internet enabled device,
which makes it ideal for those who use a number of different devices, such as internet enabled
phones and tablets, and those who don’t always use the same computer.
• Accessible from Any Location: Rather than being restricted to installations on individual
computers, an application can be accessed from anywhere with an internet enabled device.
• Applications can be Customized and White-labelled: With some software, customization is
available meaning it can be altered to suit the needs and branding of a particular customer.

Examples of SaaS (Figure 8)



Figure 8: Examples of SaaS

o Google’s G Suite: Top cloud service that provides businesses with access to management,
communication, and organization tools and uses the cloud for data computing. Gmail, Google Drive,
Google Docs, Google Planner, Hangouts—these are all SaaS tools that can be accessed anytime and anywhere.
o Microsoft Office 365: The series of web services that provide business owners and individuals with access to
Microsoft Office main tools directly from their browsers. Users can access Microsoft editing tools, business email,
communication instruments, and documentation software.
o Salesforce: The most popular CRM on the market that unites marketing, communication, e- commerce. Salesforce
uses cloud computing benefits to provide access to its services and internal data. Business owners can keep track of
their sales, client relations, communications, and relevant tasks from any device. Salesforce can be integrated into the
website — the information about incoming leads will be sent to the platform automatically.

Difference Between IaaS, PaaS, and SaaS


Table 1 lists the key differences between IaaS, PaaS and SaaS. Figure 9 depicts a summary of the three cloud
service models.
Table 1: Key Differences between IaaS, PaaS and SaaS

IaaS | PaaS | SaaS
Provides a virtual data center to store information and create platforms for app development, testing, and deployment | Provides virtual platforms and tools to create, test, and deploy apps | Provides web software and apps to complete business tasks
Provides access to resources such as virtual machines, virtual storage, etc. | Provides runtime environments and deployment tools for applications | Provides software as a service to the end-users
Used by network architects | Used by developers | Used by the end users
Provides only Infrastructure | Provides Infrastructure + Platform | Provides Infrastructure + Platform + Software

Figure 9: Summarization of Three Cloud Service Models

3.2 Cloud Service Providers


The cloud service providers are also called as service vendors. A vendor that provides IT solutions
and/or services to end users and organizations. This broad term incorporates all IT businesses
that provide products and solutions through services that are on-demand, pay per use or a hybrid
delivery model. Service vendors incorporate all the IT businesses that provide products and
solutions through services that are on-demand, pay-per-use or a hybrid delivery model.A service
provider's delivery model generally differs from conventional IT product manufacturers or

5
Notes

Cloud Computing
developers. Typically, a service provider does not require purchase of an IT product by a user or
organization.A service provider builds, operates and manages these IT products, which are
bundled and delivered as a service/solution. In turn, a customer accesses this type of solution
from a service provider via several different sourcing models, such as a monthly or annual
subscription fee.

Different Service Providers (Figure 10)


Figure 10: Different Types of Service Providers

• Hosting Service Provider- A type of Internet hosting service that allows individuals and
organizations to make their website accessible via the World Wide Web. Web host companies
provide space on a server owned or leased for use by clients, as well as providing Internet
connectivity, typically in a data center.
• Cloud Service Provider- Offers cloud-based services.
• Storage Service Provider- A Storage service provider (SSP) is any company that provides
computer storage space and related management services. SSPs also offer periodic backup and
archiving.
• Software-as-a-Service (SaaS) Provider- SaaS providers allow the users to connect to and use cloud-
based apps over the Internet. Common examples are email, calendaring and office tools (such as
Microsoft Office 365).
Cloud service provider can be a third-party company offering a cloud-based platform,
infrastructure, application, or storage services. Much like a homeowner would pay for a utility such
as electricity or gas, companies typically have to pay only for the amount of cloud services they use,
as business demands require. The cloud services can reduce business process costs when compared
to on-premise IT. Such services are managed by Cloud Service Providers (CSPs). A CSP provides all
the resources needed for the application, and hence the company need not worry about resource
allocation. Cloud services can dynamically scale up based on users’ needs. CSP companies establish public clouds, manage
private clouds, or offer on-demand cloud computing components (also known as cloud computing services) like IaaS, PaaS,
and SaaS.
Cloud Service Providers are a helpful way to access computing services that you would otherwise have to provide on your
own, such as:

• Infrastructure: The foundation of every computing environment. This infrastructure could include networks, database
services, data management, data storage (known in this context as cloud storage), servers (cloud is the basis for serverless
computing), and virtualization.
• Platforms: Tools needed to create and deploy applications. These platforms could include operating systems, middleware,
and runtime environments.
• Software: Ready-to-use applications. This software could be custom or standard applications provided by independent
service providers.

3.3 CSP Platforms and Technologies


The development of a cloud computing application happens by leveraging platforms and
frameworks of the CSPs (Figure 11). CSPs provide different types of services, from the bare-metal
infrastructure to customizable applications serving specific purposes.

Google AppEngine

NetApp

Amazon Web Services (AWS)

Microsoft Azure

Force.com and Salesforce.com

IBM

Hadoop

Manjrasoft Aneka

Figure 11: Different Types of Cloud Services Offered by Service Providers

Google App Engine: Often referred to as GAE or simply App Engine, GAE is a cloud-based PaaS
for developing and hosting web applications in Google-managed data centers. The applications are
sandboxed and run across multiple servers. GAE offers automatic scaling for web applications: as
the number of requests increases for an application, App Engine automatically allocates more
resources for the web application to handle the additional demand. It primarily supports Go, PHP,
Java, Python, Node.js, .NET, and Ruby applications, although it can also support other languages
via "custom runtimes". The service is free up to a certain level of consumed resources, but only in the
standard environment, not in the flexible environment. Fees are charged for additional storage,
bandwidth, or instance hours required by the application.


GAE was first released as a preview version in April 2008 and came out of preview in September
2011. It offers:

• Write code once and deploy


• Absorb spikes in traffic
• Easily integrate with other Google services
• Scale your applications from zero to planet scale without having to manage infrastructure.
• Free up your developers with zero server management and zero configuration deployments.
• Stay agile with support for popular development languages and a range of developer tools.
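
To make the App Engine model above concrete, a minimal standard-environment deployment usually consists of an app.yaml descriptor plus a small web application. The following is a hedged sketch in Python using Flask (which the Python runtime supports); the handler, file names and runtime version are illustrative assumptions rather than the only possible configuration.

# app.yaml -- declares the runtime App Engine should use (Python 3.9 standard environment assumed)
runtime: python39

# main.py -- minimal WSGI app; by default App Engine serves the module-level "app" object
from flask import Flask

app = Flask(__name__)

@app.route("/")
def home():
    # App Engine automatically scales instances of this app as request volume grows
    return "Hello from App Engine"

# A requirements.txt listing Flask is also needed; deployment from the project directory is then:
#   gcloud app deploy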

Google Web Toolkit (GWT): An open-source set of tools that allows web developers to create
and maintain JavaScript front-end applications in Java. Other than a few native libraries,
everything is Java source that can be built on any supported platform with the included GWT Ant
build files. It is licensed under the Apache License 2.0 (Figure 12).

Figure 12: Characteristics of Google Web Toolkit

NetApp: NetApp, Inc. is an American hybrid cloud data services and data management company
headquartered in Sunnyvale, California. It has ranked in the Fortune 500 since 2012. Founded in
1992 with an IPO in 1995, NetApp offers cloud data services for management of
applications and data both online and physically. It is an organization that creates storage and data
management solutions for its customers. NetApp was one of the first companies in the cloud,
offering data center consolidation and storage services, as well as virtualization. The products
include a platform OS, storage services, storage security, software management, and protection
software. NetApp competes in the computer data storage hardware industry. In 2009, NetApp
ranked second in market capitalization in its industry behind EMC Corporation, now Dell EMC,
and ahead of Seagate Technology, Western Digital, Brocade, Imation, and Quantum. In total
revenue for 2009, NetApp ranked behind EMC, Seagate, Western Digital, Brocade, Xyratex, and
Hutchinson Technology. According to a 2014 IDC report, NetApp ranked second in the network
storage industry "Big 5's list", behind EMC (DELL), and ahead of IBM, HP and Hitachi. According to
Gartner's 2018 Magic Quadrant for Solid-State Arrays, NetApp was named a leader, behind Pure
Storage Systems. In 2019, Gartner named NetApp as #1 in Primary Storage.
NetApp's goal is to deliver cost efficiency and accelerate business breakthroughs. NetApp products
could be integrated with a variety of software products, mostly for ONTAP systems. Other
provisions from NetApp include:

• Automation- NetApp provides a variety of automation services directly to its products with
HTTP protocol or through middle-ware software.
• Docker- NetApp Trident software provides a persistent volume plugin for Docker containers
with both orchestrators Kubernetes and Swarm and supports ONTAP, Azure NetApp Files
(ANF), Cloud Volumes and NetApp Kubernetes Service in cloud.
• Backup and Recovery- Cloud Backup integrates with nearly all Backup & Recovery products for
archiving capabilities since it is represented as ordinary NAS share for B&R software. The
backup and recovery software from competitor vendors like IBM Spectrum Protect, EMC
NetWorker, HP Data Protector, Dell vRanger, and others also have some level of integrations
with NetApp storage systems.


Amazon Web Services (AWS): Amazon Web Services (AWS) is a subsidiary of Amazon
providing on-demand cloud computing platforms and APIs to individuals, companies, and governments, on a metered pay-
as-you-go basis. AWS provides a variety of basic abstract technical infrastructure and distributed computing building
blocks and tools. It offers services on many
different fronts, from storage to platform to databases. As of 2021, AWS comprises over 200
products and services including computing, storage, networking, database, analytics, application services, deployment,
management, machine learning, mobile, developer tools, and tools for the Internet of Things (IoT).

Case Study: The most popular cloud services from AWS include:

• Amazon Elastic Compute Cloud (Amazon EC2):Amazon EC2 allows the users to rent virtual
computers on which to run their own computer applications. EC2 encourages scalable
deployment of applications by providing a web service through which a user can boot an Amazon Machine Image (AMI) to
configure a virtual machine, which Amazon calls an "instance", containing any software desired. A user can create, launch,
and terminate server- instances as needed, paying by the second for active servers– hence the term "elastic". EC2 provides
users with control over the geographical location of instances that allows for latency optimization and high levels of
redundancy. In November 2010, Amazon switched its own retail website platform to EC2 and AWS.
• Amazon SimpleDB (Simple Database Service): Amazon SimpleDB is a distributed database written in Erlang by
Amazon.com. It is used as a web service in concert with Amazon Elastic Compute Cloud (EC2) and Amazon S3 and is part of
Amazon Web Services. It was announced on December 13, 2007.
• Amazon Simple Storage Service (Amazon S3): Amazon S3 is a service offered by Amazon Web Services (AWS) that provides object
storage through a web service interface. Amazon S3 uses the same scalable storage infrastructure that Amazon.com uses to
run its global e-commerce network. Amazon S3 can be employed to store any type of object, which allows for uses like storage
for Internet applications, backup and recovery, disaster recovery, data archives, data lakes for analytics, and hybrid cloud
storage (a brief usage sketch follows this list).
• Amazon CloudFront: Amazon CloudFront is a content delivery network (CDN) operated by Amazon Web Services. Content
delivery networks provide a globally-distributed network of proxy servers that cache content, such as web videos or other
bulky media, more locally to consumers, thus improving access speed for downloading the content.
• Amazon Simple Queue Service (Amazon SQS): Amazon SQS is a distributed message queuing service introduced by
Amazon.com in late 2004. It supports programmatic sending of messages via web service applications as a way to
communicate over the Internet. SQS is intended to provide a highly scalable hosted message queue that resolves issues
arising from the common producer-consumer problem or connectivity between producer and consumer.
• Amazon Elastic Block Store (Amazon EBS): Amazon Elastic Block Store (EBS) provides raw block-level storage that can be
attached to Amazon EC2 instances and is used by Amazon Relational Database Service (RDS). Amazon EBS provides a range
of options for storage performance and cost. These options are divided into two major categories: SSD-backed storage for
transactional workloads, such as databases and boot volumes (performance depends primarily on IOPS), and disk-backed
storage for throughput intensive workloads, such as MapReduce and log processing (performance depends primarily on
MB/s).
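
As one concrete illustration of the metered, API-driven model in the list above, the following sketch uses the boto3 SDK for Python to write and read an object in Amazon S3. The bucket name and object key are hypothetical, and AWS credentials are assumed to be configured in the environment.

import boto3

# Create an S3 client; region and credentials are assumed to be configured
# (for example via environment variables or ~/.aws/credentials).
s3 = boto3.client("s3", region_name="us-east-1")

bucket = "example-backup-bucket"    # hypothetical bucket name
key = "reports/2021/summary.txt"    # hypothetical object key

# Write (upload) an object; S3 stores it on its scalable storage infrastructure.
s3.put_object(Bucket=bucket, Key=key, Body=b"quarterly summary data")

# Read (download) the same object back on demand.
response = s3.get_object(Bucket=bucket, Key=key)
print(response["Body"].read().decode("utf-8"))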

Microsoft: Microsoft offers a number of cloud services for organizations of any size (Figure 13):

• Azure Services Platform- Windows Azure/Microsoft Azure, commonly referred to as Azure, is a cloud computing service
created by Microsoft for building, testing, deploying, and managing
applications and services through Microsoft-managed data centers. It provides software as a
service (SaaS), platform as a service (PaaS) and infrastructure as a service (IaaS), and supports
many different programming languages, tools, and frameworks, including both Microsoft-specific
and third-party software and systems.
• SQL Services- Microsoft SQL Server is a relational database management system developed by
Microsoft. As a database server, it is a software product with the primary function of storing
and retrieving data as requested by other software applications—which may run either on the
same computer or on another computer across a network. Microsoft markets at least a dozen
different editions of Microsoft SQL Server, aimed at different audiences and for workloads
ranging from small single-machine applications to large Internet-facing applications with many
concurrent users.

Azure Services Platform
SQL Services
.NET Services
Exchange Online
SharePoint Services
Microsoft Dynamics CRM

Figure 13: Cloud Services Offered by Microsoft

• .NET Services- The .NET Framework is a software framework developed by Microsoft that runs
primarily on Microsoft Windows. It includes a large class library called the Framework Class
Library and provides language interoperability across several programming languages.
Programs written for the .NET Framework execute in a software environment named the Common
Language Runtime (CLR). The CLR is an application virtual machine that provides services
such as security, memory management, and exception handling. As such, computer code
written using the .NET Framework is called "managed code".
• Exchange Online- Work smarter, anywhere, with hosted email for business.
• SharePoint Services- SharePoint is a web-based collaborative platform that integrates with
Microsoft Office. Launched in 2001, SharePoint is primarily sold as a document management
and storage system.
• Microsoft Dynamics CRM- Microsoft Dynamics is a line of enterprise resource planning and
customer relationship management software applications. Microsoft Dynamics forms part of
"Microsoft Business Solutions". Dynamics can be used with other Microsoft programs and
services, such as SharePoint, Yammer, Office 365, Azure and Outlook. The Microsoft Dynamics
focus-industries are retail, services, manufacturing, financial services, and the public sector.
Microsoft Dynamics offers services for small, medium, and large businesses.

Salesforce.com: Salesforce works in three primary areas: Sales Cloud, Service Cloud and Your
Cloud. It has three primary offerings: Force.com, Salesforce.com CRM, and AppExchange.

• Sales Cloud- The popular cloud computing sales application.


• Service Cloud- The platform for customer service that lets companies tap into the power of customer conversations no
matter where they take place.
• Your Cloud- Powerful capabilities to develop custom applications on its cloud computing
platform.

IBM: IBM offers cloud computing services to help businesses of all sizes take advantage of this increasingly attractive
computing model. IBM is applying its industry-specific consulting expertise and established technology record to offer
secure services to companies in public, private, and hybrid cloud models. Some of their services include:

• Industry-specific business consulting services for cloud computing


• Technology consulting, design, and implementation services
• Cloud security

Hadoop: Apache Hadoop is a collection of open-source software utilities that facilitates using a network of many computers
to solve problems involving massive amounts of data and computation. It provides a software framework for distributed
storage and processing of big data using the MapReduce programming model. All the modules in Hadoop are designed with
a fundamental assumption that hardware failures are common occurrences and should be automatically handled by the
framework. Apache Hadoop software allows for the distributed storage and processing of large datasets across clusters of
computers using simple programming models. Hadoop is designed to scale up from a single computer to thousands of
clustered computers, with each machine offering local computation and storage. In this way, Hadoop can efficiently store
and process large datasets ranging in size from gigabytes to petabytes of data.
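
A common way to illustrate the MapReduce model that Hadoop implements is a word count. The sketch below shows a mapper and a reducer intended for Hadoop Streaming; the file names, jar path and HDFS paths are assumptions that vary by installation.

# --- mapper.py: run for each input split, emits "word<TAB>1" pairs ---
import sys

for line in sys.stdin:
    for word in line.strip().split():
        print(word + "\t1")

# --- reducer.py: Hadoop Streaming delivers mapper output sorted by key,
#     so occurrences of the same word arrive consecutively ---
import sys

current_word, count = None, 0
for line in sys.stdin:
    word, value = line.rstrip("\n").split("\t", 1)
    if word != current_word and current_word is not None:
        print(current_word + "\t" + str(count))
        count = 0
    current_word = word
    count += int(value)
if current_word is not None:
    print(current_word + "\t" + str(count))

# Illustrative invocation (paths vary by installation):
#   hadoop jar hadoop-streaming.jar -files mapper.py,reducer.py \
#       -mapper mapper.py -reducer reducer.py -input /data/in -output /data/out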
Aneka: ANEKA cloud platform is a software platform and a framework for developing distributed applications on the cloud. It
harnesses the computing resources of a heterogeneous network of workstations and servers or data centers on demand. Aneka
provides developers with a rich set of APIs for transparently exploiting these resources. The system administrators can
leverage on a collection of tools to monitor and control the deployed infrastructure. This can be a public cloud available to anyone
through the Internet, or a private cloud constituted by a set of nodes with restricted access. All in all, an Aneka-based computing
cloud is a collection of physical and virtualized resources connected through a network, which is either the Internet or a
private intranet.

3.4 Criteria for Choosing the Best Cloud Service Provider


The first step in switching to cloud computing is determining what kind of cloud services you are interested in.
Choosing a cloud computing service is a long-term investment: your application will heavily rely on third-party
capacities, and you need to make sure that the provider is legitimate and fits your needs. Also, check for the latest cloud
service provider trends (Figure 14).

Figure 14: Global Cloud Market Trends


Here’s a short criteria checklist on choosing a trustworthy cloud provider:

• Financial Stability: Your cloud provider should be well-financed and receive steady profits
from the infrastructure. If the company shuts down due to monetary issues, your solutions will
be in jeopardy, too. In the worst-case scenario, you will have to cease the support of your
solutions, or, in a better case, migrate to a new provider, which is an expensive and time-
consuming process.
• Industries that Prefer the Solution: Before committing to a cloud services company, take a look
at its existing clients and examine their markets. Ideally, the provider should be popular among
companies in your niche, or at least in the neighbouring ones. Another road to take is asking
competitors and partners about their favourite choices.
• Datacenter Locations: To avoid safety risks, make sure that cloud providers can enable
geographical distribution for your data. Ideally, you want to locate your data on servers in Asia,
Europe, America, without betting on a single region. Also, pay attention to countries— some,
like Japan or Germany, are known to be more secure, whereas Russia, for instance, is not the
safest option.
• Security Programs: Take a look at the security programs of your favourite cloud providers. The
majority of companies have dedicated papers and e-books that discuss this matter in detail—
take your time to go through them. Start with taking a look at security documentation of the
top cloud providers- AWS, G Suite, Microsoft Azure, Salesforce. You can use these pages as
references during your safety research.
• Encryption Standards: Make sure the cloud provider specifies the use of encryption. The
provider should encrypt the data both when it’s being transferred to the cloud and during the
storage itself. No matter what is the stage of data storage, the information should be secured
end-to-end, so there is no way even for developers of the service to access the file contents (a
client-side encryption sketch follows this checklist).
• Check Accreditation and Auditing: The most common online auditing standard is SSAE— the
procedure that verifies that the online service has had the safety of its data-storing practices checked.
An ISO 27001 certificate verifies that a cloud provider complies with international safety
standards for data storage.
• Look for solutions that offer Free Cloud Backup: OneDrive, Google Drive, Dropbox, and Box
offer free space to create cloud backup copies, both manually and automatically.
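
Complementing the Encryption Standards point above, data can also be encrypted on the client side before it ever reaches a provider. The sketch below uses the Fernet recipe from the Python cryptography package; the plaintext and workflow are illustrative, and key management is deliberately left out.

from cryptography.fernet import Fernet

# Generate and keep a secret key; whoever holds this key can decrypt the data.
key = Fernet.generate_key()
cipher = Fernet(key)

plaintext = b"confidential backup data"

# Encrypt locally before uploading to any cloud storage provider.
ciphertext = cipher.encrypt(plaintext)

# ... upload `ciphertext` via the provider's API, download it again later ...

# Decrypt only after the data is back under local control.
assert cipher.decrypt(ciphertext) == plaintext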

Cloud Service Providers Prospective Trends


Cloud services have been among the most popular platforms lately, with companies like Microsoft,
Amazon, Google leading the way for technology growth. Instead of relying on their servers,
companies prefer outsourcing their storage to trusted providers, passing over the responsibility
for supporting the infrastructure and assuring security.
Let’s take a brief look at the statistics of the cloud computing market to see its latest trends:

o In 2020, the market is expected to demonstrate the growth rate of 17%;


o In 2019, Cloud infrastructure accounted for about 3% of overall IT infrastructure;
o IP traffic in Cloud services is expected to grow up to 19.5 zettabytes in 2021.
o The number of businesses that trust cloud services is rapidly increasing.

3.5 Other Cloud Services


Cloud computing offers large number of services (Figure 15). The cloud service offerings evolve
almost daily, as the technology migrates from the traditional on-premise model to the new cloud
model:


Figure 15: Other Cloud Services

Storage-as-a-Service (STaaS)
• Cloud service model in which a company leases or rents its storage infrastructure to another company or individuals to store
either files or objects.
• Economy of scale in the service provider’s infrastructure theoretically allows them to provide storage much more cost-
effectively than most individuals or corporations can provide their own storage when the total cost of ownership is
considered.
• STaaS is generally seen as a good alternative for a small or mid-sized business that lacks the
capital budget and/or technical personnel to implement and maintain their own storage infrastructure.
• Small companies and individuals often find this to be a convenient methodology for managing backups, and providing
cost savings in personnel, hardware and physical space.
• STaaS operates through a web-based API that is remotely implemented through its interaction with the client application’s
in-house cloud storage infrastructure for input/output (I/O) and read/write (R/W) operations (a
sketch of such an API interaction follows this list).
• If the company ever loses its local data, the network administrator could contact the STaaS provider and request a copy
of the data.
• For an end-user-level cloud storage, Dropbox, Google Drive, Apple’s iCloud and Microsoft OneDrive are among the
leading end-user-level cloud storage providers.
• For enterprise-level cloud storage, Amazon S3, Zadara, IBM’s SoftLayer and Google Cloud Storage are some of the more
popular providers.
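
As a rough sketch of the web-based API interaction mentioned in the list above, many STaaS offerings expose HTTP endpoints for read/write operations. The endpoint, token and file names below are entirely hypothetical and stand in for a provider-specific SDK.

import requests

BASE_URL = "https://storage.example-staas.com/v1"   # hypothetical STaaS endpoint
HEADERS = {"Authorization": "Bearer <api-token>"}    # placeholder credential

# Write: upload a local backup file to the remote store.
with open("backup.tar.gz", "rb") as f:
    requests.put(f"{BASE_URL}/objects/backup.tar.gz", data=f, headers=HEADERS)

# Read: fetch the object back, e.g. after local data loss.
resp = requests.get(f"{BASE_URL}/objects/backup.tar.gz", headers=HEADERS)
with open("restored.tar.gz", "wb") as f:
    f.write(resp.content)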

Data-as-a-Service (DaaS)
• In the DaaS computing model (a more advanced, fine-grained form of STaaS), data (as opposed to files) is readily
accessible through a Cloud-based platform.
• Data (either from databases or object containers) is supplied “on-demand” via cloud platforms (as opposed to the
traditional, on-premise models in which the data remains in the customer’s hands) and the vendor provides the tools
that make it easier to access and explore.
• Based on Web Services standards and Service-Oriented Architecture (SOA), DaaS provides a
dynamic infrastructure for delivering information on demand to users, regardless of their geographical
location or organizational separation – and, in the process, presents solution providers with a number of
significant opportunities.
• DaaS eliminates redundancy and reduces associated expenditures by accommodating vital data in a single location,
allowing data use and/or modification by multiple users via a single update point.


• Typical business applications include customer relationship management (CRM), enterprise
resource planning (ERP), e-commerce and supply chain systems and, more recently, Big Data
analytics.
• Some of the best-known enterprise-level DaaS providers are Oracle’s Data Cloud, Amazon
DynamoDB, Microsoft SQL Database (formerly known as SQL Azure) and Google
Cloud’s Datastore (a DynamoDB access sketch follows this list).
• For open-source projects, Apache Cassandra, CockroachDB or CouchDB will almost certainly
catch your eye.
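
To make the idea of data supplied on demand concrete, the sketch below writes and reads individual items in Amazon DynamoDB (one of the providers listed above) through the boto3 SDK; the table name, key and attributes are hypothetical.

import boto3

dynamodb = boto3.resource("dynamodb", region_name="us-east-1")
table = dynamodb.Table("Customers")   # hypothetical table with partition key "customer_id"

# Write a record through the service API; there is no database server to manage.
table.put_item(Item={"customer_id": "C-1001", "name": "Acme Ltd", "segment": "retail"})

# Read it back on demand from anywhere with network access and credentials.
item = table.get_item(Key={"customer_id": "C-1001"}).get("Item")
print(item)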

Communication-as-a-Service (CaaS)
Communications as a Service (CaaS) is an outsourced enterprise communications solution that can
be leased from a single vendor (Figure 16). CaaS vendor is responsible for
all hardware and software management and offers guaranteed Quality of Service (QoS). CaaS
allows businesses to selectively deploy communications devices and modes on a pay-as-you-go,
as-needed basis. Such communications can include: Voice over IP (VoIP or Internet telephony);
Instant Messaging (IM); and collaboration and video conference applications using fixed and
mobile devices.

Figure 16: Communication-as-a-Service Scenario

Advantages of CaaS

o Hosted and Managed Solutions


o Fully Integrated, Enterprise - Class Unified Communications
o No Capital Expenses Needed
o Flexible Capacity and Feature Set
o No Risk of Obsolescence
o No Facilities and Engineering Costs Incurred
o Guaranteed Business Continuity

Monitoring-as-a-Service (MaaS)
A concept that combines the benefits of cloud computing technology and traditional on-premise IT
infrastructure monitoring solutions (Figure 17). MaaS is a new delivery model that is suited for
organizations looking to adopt a monitoring framework quickly with minimal investments. MaaS is
a framework that facilitates the deployment of monitoring functionalities for various other
services and applications within the cloud. The most common application for MaaS is online state
monitoring, which continuously tracks certain states of applications, networks, systems, instances
or any element that may be deployable within the cloud. MaaS makes it easier for users to deploy
state monitoring at different levels of cloud services.


Figure 17: Monitoring-as-a-Service Scenario

Advantages of MaaS

o Ready to Use Monitoring Tool Login: The vendor takes care of setting up the hardware
infrastructure, monitoring tool, configuration and alert settings on behalf of the customer. The
customer gets a ready to use login to the monitoring dashboard that is accessible using an
internet browser. A mobile client is also available for the MaaS dashboard for IT
administrators.
o Inherently Available 24x7x365: Since MaaS is deployed in the cloud, the monitoring dashboard
itself is available 24x7x365 that can be accessed anytime from anywhere. There are no
downtimes associated with the monitoring tool.
o Easy Integration with Business Processes: MaaS can generate alert based on specific business
conditions. MaaS also supports multiple levels of escalation so that different user groups can
get different levels of alerts.
o Cloud Aware and Cloud Ready: Since MaaS is already in the cloud, MaaS works well with
other cloud-based products such as PaaS and SaaS. MaaS can monitor Amazon and Rackspace
cloud infrastructure. MaaS can monitor any private cloud deployments that a customer might
have.
o Zero Maintenance Overheads: As a MaaS customer, you don’t need to invest in a network
operations centre. Neither do you need to invest in an in-house team of qualified IT engineers to
run the monitoring desk, since the MaaS vendor is doing that on behalf of the customer.
Assets Monitored by MaaS

o Servers and Systems Monitoring: Server Monitoring provides insights into the reliability of the
server hardware such as Uptime, CPU, Memory and Storage. Server monitoring is an essential
tool in determining functional and performance failures in the infrastructure assets.
o Database Monitoring: Database monitoring on a proactive basis is necessary to ensure that
databases are available for supporting business processes and functions. Database monitoring
also provides performance analysis and trends which in turn can be used for fine tuning the
database architecture and queries, thereby optimizing the database for your business
requirements.
o Network Monitoring: Network availability and network performance are two critical
parameters that determine the successful utilization of any network – be it a LAN, MAN or
WAN network. Disruptions in the network affect business productivity adversely and can bring
regular operations to a standstill. Network monitoring provides pro-active information about
network performance bottlenecks and source of network disruption.


o Storage Monitoring: A reliable storage solution in your network ensures anytime availability of
business-critical data. Storage monitoring for SAN, NAS and RAID storage devices ensures that
your storage solutions are performing at the highest levels. Storage monitoring reduces
downtime of storage devices and hence improves availability of business data.
o Applications Monitoring: Applications Monitoring provides insight into resource usage,
application availability and critical process usage for different Windows, Linux and other open-
source operating systems-based applications. Applications Monitoring is essential for mission
critical applications that cannot afford to have even a few minutes of downtime. With
Application Monitoring, you can prevent application failures before they occur and ensure
smooth operations.
o Cloud Monitoring: Cloud Monitoring for any cloud infrastructure such as Amazon or
Rackspace gives information about resource utilization and performance in the cloud. While
cloud infrastructure is expected to have higher reliability than on-premise infrastructure, quite
often resource utilization and performance metrics are not well understood in the cloud. Cloud
monitoring provides insight into exact resource usage and performance metrics that can be used
for optimizing the cloud infrastructure.
o Virtual Infrastructure Monitoring: Virtual Infrastructure based on common hypervisors such as
ESX, Xen or Hyper-V provides flexibility to the infrastructure deployment and provides
increased reliability against hardware failures. Monitoring virtual machines and related
infrastructure gives information around resource usage such as memory, processor and storage.
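
A minimal flavour of the online state monitoring described above can be shown in a few lines: the script below periodically polls a service endpoint and raises an alert when it stops responding. The URL and alert action are placeholders; a real MaaS platform adds dashboards, escalation and historical metrics on top of checks like this.

import time
import urllib.request

TARGET = "https://app.example.com/health"   # hypothetical endpoint to monitor

def check_once(url, timeout=5):
    """Return True if the endpoint answers with HTTP 200 within the timeout."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except Exception:
        return False

while True:
    if not check_once(TARGET):
        # Placeholder alert action; a MaaS platform would notify and escalate.
        print("ALERT: target is unreachable or unhealthy")
    time.sleep(60)   # poll once per minute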

Database-as-a-Service (DBaaS)
Database as a Service (DBaaS) is an architectural and operational approach enabling DBAs to
deliver database functionality as a service to internal and/or external customers. DBaaS
architectures support following required capabilities: customer side provisioning and
management of database instances using on-demand, self-service mechanisms; automation of
monitoring with provider-defined service definitions, attributes and quality SLAs; and fine-grained
metering of database usage enabling show-back reporting or charge-back for both internal and
external functionality for each individual consumer.

Notes: Why DBaaS?


DBaaS standardizes and optimizes the platform requirements which eliminates the need to deploy,
manage and support dedicated database hardware and software for each project’s multiple
development, testing, production, and failover environments. DBaaS architectures are inherently
designed for elasticity and resource pooling. DBaaS providers deliver production and non-
production database services that support average daily workload requirements & are not
impacted by: resource limitations, time-sensitive projects; and hardware limitations and budgets.

Setting up DBaaS
In order to set up DBaaS, a cloud administrator will need to do the following (a provisioning sketch follows this list):

o Define roles and users in the self-service portal.


o Install the agent to manage all “unmanaged hosts” so that any DBaaS environments that are
created can be self-discovered.
o Set quotas and privileges.
o Set up the software library to allow automation.
o Configure provisioning to set who will be granted access and how much is allocated to each customer,
administrator and/or business unit.
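
Self-service provisioning is usually exposed through an API as well as a portal. As a hedged illustration, the sketch below asks Amazon RDS, via the boto3 SDK, to create a small managed database instance; the identifier, size and password are placeholders, and other DBaaS platforms expose similar calls.

import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Request a small managed PostgreSQL instance; the service handles the
# underlying hardware, OS, patching and backups.
rds.create_db_instance(
    DBInstanceIdentifier="demo-db",           # placeholder identifier
    DBInstanceClass="db.t3.micro",
    Engine="postgres",
    MasterUsername="dbadmin",
    MasterUserPassword="change-me-please",    # placeholder secret
    AllocatedStorage=20,                      # GiB
)

# Later, the instance status (and eventually its endpoint) can be looked up.
info = rds.describe_db_instances(DBInstanceIdentifier="demo-db")
print(info["DBInstances"][0]["DBInstanceStatus"])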


Network-as-a-Service (NaaS)
In NaaS, the users who do not want to use their own networks take help from service providers to
host the network infrastructure. The connectivity and bandwidth are provided by the service
provider for the contracted period. NaaS represents the network as transport connectivity. The
network virtualization is done in this service.
NaaS is “an emerging procurement model to consume network infrastructure via a flexible
operating expense (OpEx) subscription inclusive of hardware, software, management tools,
licenses, and lifecycle services.”
What's Driving the Trend Toward NaaS?
The traditional network model requires capital expenses (CapEx) for physical networks with switches,
routers, and licensing. The do-it-yourself IT model requires time for planning and deployment as
well as expertise to install and configure infrastructure and to ensure security access policies are
in place. This model involves the following:

o Diligent monitoring for updates and security patches is essential due to rapid changes in
technology and security threats.
o Provisioning a new service is a manual process that requires a technician to deploy and
configure equipment at various locations.
o Service provisioning and issue resolution have historically been lengthy processes.
o As networks have grown in complexity—with more mobile users connecting from everywhere
and with the expansion to cloud—IT teams have been challenged to keep pace.

NaaS Service Models

o Connectivity Cloud: A model in which a private fiber fabric or wireline "Middle Mile" network is
used to bypass often less-optimal public (internet) routing and congestion to provide connectivity
for critical Enterprise resource and services access. It is controlled via a distributed software
platform, the model supports "cloud-aligned" elastic consumption including on-demand
provisioning, any-to-any connectivity, and flexible bandwidth deployment through both portal
and programmable API operation and introspection. By integrating the platform API with
provisioning and application deployment playbooks, the resulting WAN can realize an
infrastructure-as-code paradigm for Wide Area Networks - "network-as-code". The resulting
services include custom WAN interconnectivity, hybrid cloud and multi-cloud connectivity.
o Virtual Private Network (VPN): A tunnel overlay that extends a private network and the
resources contained in the network across networks like the public Internet. It enables a host
computer to send and receive data across shared or public networks as if it were a private
network with the functionality and policies of the private network.
o Virtual Network Operation: Model common in mobile networks in which a telecommunications
manufacturer or independent network operator builds and operates a network (wireless, or
transport connectivity) and sells its communication access capabilities to third parties (commonly
mobile phone operators) charging by capacity utilization. A mobile virtual network operator
(MVNO), is a mobile communications services provider that does not own the radio spectrum or
wireless network infrastructure over which it provides services. Commonly a MVNO offers its
communication services using the network infrastructure of an established mobile network
operator.
Benefits of NaaS
NaaS is a cloud model that enables users to easily operate the network and achieve the outcomes
they expect from it without owning, building, or maintaining their own infrastructure. NaaS can
replace hardware-centric VPNs, load balancers, firewall appliances, and Multiprotocol Label
Switching (MPLS) connections. The users can scale up and down as demand changes, rapidly
deploy services, and eliminate hardware costs. NaaS offers ROI (return on investment), enabling
customers to trade CapEx for OpEx and refocus person hours on other priorities. Figure 18 depicts
the different benefits of NaaS.

IT Simplicity and Automation
Improved Application Experience
Access from Anywhere
Visibility and Insights
Enhanced Security
Flexibility
Scalability

Figure 18: Benefits of NaaS

o IT Simplicity and Automation- Businesses benefit when they align their costs with actual usage.
They don't need to pay for surplus capacity that goes unused, and they can dynamically add
capacity as demands increase. Businesses that own their own infrastructure must implement
upgrades, bug fixes, and security patches in a timely manner. Often, IT staff may have to travel to
various locations to implement changes. NaaS enables the continuous delivery of new fixes,
features and capabilities. It automates multiple processes such as onboarding new users and
provides orchestration and optimization for maximum performance. This can help to eliminate
the time and money spent on these processes.
o Access from Anywhere- Today's workers may require access to the network from anywhere—
home or office—on any devices and without relying on VPNs. NaaS can provide enterprises with
global coverage, low-latency connectivity enabled by a worldwide POP backbone, and negligible
packet loss when connecting to SaaS applications, platform-as-a-service (PaaS)/infrastructure-as-a-service
(IaaS) platforms, or branch offices.
o Visibility and Insights- NaaS provides proactive network monitoring, security policy
enforcement, advanced firewall and packet inspection capabilities, and modeling of the
performance of applications and the underlying infrastructure over time. Customers may also
have an option to co-manage the NaaS.
o Enhanced Security- NaaS results in tighter integration between the network and the network
security. Some vendors may "piece together" network security. By contrast, NaaS solutions need
to provide on-premise and cloud-based security to meet today’s business needs.
o Flexibility- NaaS services are delivered through a cloud model to offer greater flexibility and
customization than conventional infrastructure. Changes are implemented through software, not
hardware. This is typically provided through a self-service model. IT teams can, for example,
reconfigure their corporate networks on demand and add new branch locations in a fraction of
the time. NaaS often provides term-based subscription with usage billing and multiple payment
options to support various consumption requirements.
o Scalability- NaaS is inherently more scalable than traditional, hardware-based networks. NaaS
customers simply purchase more capacity instead of purchasing, deploying, configuring, and
securing additional hardware. This means they can scale up or down quickly as needs change.


o Improved Application Experience- NaaS provides AI-driven capabilities to help ensure SLAs and
SLOs for capacity are met or exceeded. NaaS provides the ability to route application traffic to
help ensure outstanding user experience and to proactively address issues that occur.

Healthcare-as-a-Service (HaaS)
Gone are the days when healthcare organizations used to store patient data in piles of papers and files.
Not only was that inconvenient and time-consuming, but also expensive in terms of both money
and resources. With the exponential growth in technology, more and more healthcare businesses
are moving to the cloud. Cloud computing has impacted the essential divisions of society, especially
the healthcare industry.
Technology-enabled Healthcare includes telehealth, telecare, telemedicine, tele-coaching, mHealth
and self-care services that can put people in control of their own health, wellbeing and support,
keeping them safe, well and independent and offering them and their family’s peace of mind.
Cloud computing in healthcare market can be segmented as:

A. Global Cloud Computing in Healthcare Market, By Type


o Clinical Information Systems

o Non-Clinical Information Systems

B. Global Cloud Computing in Healthcare Market, By Pricing Model


o Pay-As-You-Go
o Spot Pricing Model
C. Global Cloud Computing in Healthcare Market, By Service Model
o Software-as-a-Service
o Infrastructure-as-a-Service
o Platform-as-a-Service
Benefits of Cloud-based HaaS: In the healthcare industry, efficiency is essential. The doctors,
drug manufacturers, and illness prevention departments should always stay on schedule or go
ahead in time if possible. Why? Because many lives and the public’s health are at stake, every second
counts and can potentially change things. While local computing works, it’s not as efficient as the
newer cloud standard.
o Faster Services: With cloud, the healthcare companies can process data and transfer
information instantly. The workers can complete their tasks remotely, removing the need to
wait for slow servers or travel to a specific office to access files or information. This is all
possible thanks to the cloud’s powerful processing, faster servers, and ease of access.
o Improved Collaboration: Collaboration is also a vital part of every healthcare company. In
hospitals, doctors, nurses, and other staff need to accurately transfer and receive information
like patients’ data and other related things. However, local servers and programs are slow to
process data, get easily congested with multiple user access, or don’t have effective remote
collaboration functions.
o Supports Both Consumer and Specialized Programs: Cloud computing supports both consumer
and specialized programs. A healthcare company can choose to deploy its applications to a
cloud provider or subscribe to a commercial service. Accordingly, healthcare workers can now
send, upload, and access files in varying formats and sizes remotely, converse via chat, call, or
video feed, and use more collaboration methods.
o Cost-Efficient Operations: The organizations can save money when choosing cloud computing
over local processing. The need to hire people who can maintain and solve issues of local
servers is less, as cloud providers ensure their platforms are always up and running. The cost of
repairs and maintenance most of the time outweighs the expense of paying for cloud
subscriptions monthly. Now, it is more practical to get a cloud platform package than to build an
entire server system in an organization. Most favour faster cloud deployment over waiting for
weeks or months to complete and potentially troubleshoot a local system. Cloud computing is excellent at
scaling: a healthcare company will only need to buy the space or computing power it requires.
Additionally, the cloud offers easy downgrade plans with cost reductions.
o Enhanced Patient Care Efficiency: Cloud computing can enhance patient care in many
ways. When accepting new patients, doctors and staff can quickly check the online database for
a person’s potentially existing medical records. As a result, they can spend more time on
actual consultations and not on the paperwork. Next, a hospital can efficiently distribute patient
information like condition, status, schedules, and medication to nurses and doctors. The
workers can also avoid information inaccuracy when data mixes with other groups or new
entries overlap existing ones.
o Better Data Management: If a healthcare organization uses local computing, chances are it only
has limited methods of storing and accessing data. The offline databases are more constrained;
they can restrict users from accessing data quickly or doing things they could typically perform on commercial
programs. Because cloud providers employ powerful technologies, it is now possible to store more
complex data and file types without worrying about slowdowns or errors. Additionally,
organizing massive data collections won’t be as hard compared to doing it on a local server.
Above all, healthcare workers can instantly upload/access information and files remotely
without messing up the system or waiting for queues and slow loading times.
o Improved Privacy: Unlike offline processing, cloud computing provides more privacy and
security for the healthcare sector. Because cloud platforms offer high-level encryption, multiple stacks
of protection, and superior threat detection methods, it is harder for criminals to infiltrate a
healthcare company’s system.

Education as a Service (EaaS)


EaaS involves providing learning and development to learners/teachers through various
electronic media such as the internet, audio, video, etc. EaaS learning is education delivered through the
Internet and is often called "e-learning". The EaaS model provides learners with an alternative or enhancement
to more expensive four-year degree programs and delivers customized learning opportunities for
learners. It facilitates academic organizations and businesses to utilize courses that align with
their offering without having to get excessive learning materials that are not being executed in the
program. Its offerings range from intangible benefits (increment in knowledge, aptitude,
professional expertise, skill) produced with the help of a set of tangible (infrastructure) and
intangible (faculty expertise and learning) aids.
Need for Education-as-a-Service (EaaS)
o Affordable computers, faster broadband connectivity, and rich educational content have made it
possible for ICT to revolutionise education.
o These advances allow training providers to improve the delivery of their content, and there’s a
fundamental need for this change.
o The current educational paradigm sees students paying large sums of money for courses that
take several years to complete.
o Graduates are left with considerable amounts of debt and the rising costs make it even more
difficult for mature students to return to education.
o Returns on Investment (ROI) with regard to tuition fees is a key consideration for many learners entering
any type of training programme, and the current model is not always providing value for money.
o Education-as-a-Service (EaaS) affords students the opportunity to gain accreditation by choosing modules
that are relevant to their career goals and only paying for what they need.

Benefits of EaaS
o Learners pay for the Education They Want/ Need: Mostly, the degrees and training courses can
be costly and often require the students to follow a dictated set of modules. EaaS gives students
the option to pick and choose the modules they want to purchase according to their needs. The
training structure is tailored to the student, by the student, and their time and money are not
wasted on irrelevant learning.
o Advocates Flexible Learning: The flexibility of EaaS allows students to learn at a time, place,
and pace that they choose. This method of learning often goes hand-in-hand with blended
learning, as both models give the student control and responsibility of their own learning.


o Learner-centric: The traditional education sees the teacher making all decisions regarding
curriculum. They impose the place, time and pace of content delivery. This old-fashioned
approach forces students to take a passive role in their education. In contrast, learner-centric
courses encourage students to be active in designing and executing their own educational
journey. This manner of learning is supported by constructivist theory– the idea that humans
generate knowledge and meaning from interactions between their experiences and ideas. This
theory is key to corporate learning practices and is commonly used to inform adult educational
programmes. By getting students to use previous experiences and existing knowledge in their
learning, a deeper understanding of the content can be achieved.
o Encourages Agile Content Development by Course Designers: Almost continuous access to the
Internet and the popularity of social media has taught many of us to expect dynamic content –
content that is constantly being changed or updated, based on new information as it is made
available. As such, developing content for an online or blended learning platform requires new
ideas and the continual update of learning materials.

Function-as-a-Service (FaaS)
FaaS is a concept of serverless computing via serverless architectures where developers can
leverage this to deploy an individual “function”, action, or piece of business logic.
Principles of FaaS:

o Complete abstraction of servers away from the developer.


o Billing based on consumption and executions, not server instance sizes.
o Services that are event-driven and instantaneously scalable.
o FaaS services provide a platform allowing customers to develop, run, and manage application
functionalities without the complexity of building and maintaining the infrastructure
typically associated with developing and launching a complete application.

Example:
o AWS Lambda: The service allows accessing software code without server setting and
management. Developers need only to upload the code, and the solution will automatically
connect the app to servers, language runtimes, OS, and highlight the functional code
fragments. From that point, developers only choose features for editing (a minimal handler
sketch follows these examples).
o Azure Functions: The platform uses trigger mechanisms to highlight functions. Developers
can set events that will lead to changes in code — for instance, a particular user input
(interaction with an app or provided data) can turn on a function (like showing a pop-up or
opening a page). The developers set up these triggers and responses without building the
software infrastructure.
o IBM Open Whisk: Similar to Lambda and Azure, IBM Open Whisk reacts to trigger effects and
produces a series of organized outputs. Developers only have to set up action sequences and
describe possible trigger events. The action itself will be enabled by IBM’s infrastructure— the
users don’t have to control these aspects.
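
For illustration, a function deployed to a FaaS platform is typically just a handler that the platform invokes for each event. The sketch below follows the AWS Lambda Python handler convention; the event field used is hypothetical, and billing applies only while the handler actually runs.

import json

def lambda_handler(event, context):
    """Entry point invoked by the platform for every event (HTTP call, queue message, etc.)."""
    name = event.get("name", "world")    # hypothetical event field
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }

# Local test invocation; on the real platform there are no servers to manage.
if __name__ == "__main__":
    print(lambda_handler({"name": "cloud"}, None))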

Summary
• Cloud computing signifies a major change in the way we run various applications and store our
information. Everything is hosted in the “cloud”, a vague assemblage of computers and servers
accessed via the Internet, instead of the method of running programs and data on a single
desktop computer.
• Technology-enabled Healthcare includes telehealth, telecare, telemedicine, tele-coaching,
mHealth and self-care services that can put people in control of their own health, wellbeing and
support, keeping them safe, well and independent and offering them and their family’s peace of
mind.


• The first step in switching to cloud computing is determining what kind of cloud services you
could be interested in. Then, choosing a cloud computing service is a long-term investment. Your
application will heavily rely on third-party capacities, and you need to make sure that the
provider is legitimate and fits your needs.
• IBM offers cloud computing services to help businesses of all sizes take advantage of this
increasingly attractive computing model. IBM is applying its industry-specific consulting
expertise and established technology record to offer secure services to companies in public,
private, and hybrid cloud models.
• NaaS is “an emerging procurement model to consume network infrastructure via a flexible
operating expense (OpEx) subscription inclusive of hardware, software, management tools,
licenses, and lifecycle services”.
• SaaS describes any cloud service where consumers are able to access software applications over
the internet. The applications are hosted in “the cloud” and can be used for a wide range of tasks
for both individuals and organizations.

Keywords
Amazon Simple Storage Service (Amazon S3): Amazon S3 is a service offered by Amazon Web Services
(AWS) that provides object storage through a web service interface. Amazon S3 uses the same
scalable storage infrastructure that Amazon.com uses to run its global e-commerce network.
Function-as-a-Service: FaaS is a concept of serverless computing via serverless architectures where
developers can leverage this to deploy an individual “function”, action, or piece of business logic.
Database as a Service: Database as a Service (DBaaS) is an architectural and operational approach
enabling DBAs to deliver database functionality as a service to internal and/or external customers.
Cloud Hosting: The hosting of websites on virtual servers which are founded upon pooled
resources from underlying physical servers.
Virtual Data Centers (VDC): A virtualized network of interconnected virtual servers which can be
used to offer enhanced cloud hosting capabilities, enterprise IT infrastructure or to integrate all of
these operations within either a private or public cloud implementation.
Communications as a Service: Communications as a Service (CaaS) is an outsourced enterprise
communications solution that can be leased from a single vendor. CaaS vendor is responsible for
all hardware and software management and offers guaranteed Quality of Service (QoS).

Virtualization
Introduction
In computing, virtualization or virtualisation is the act of creating a virtual (rather than actual)
version of something, including virtual computer hardware platforms, storage devices, and
computer network resources.Virtualization began in the 1960s, as a method of logically dividing
the system resources provided by mainframe computers between different applications. Since
then, the meaning of the term has broadened.Virtualization technology has transformed hardware
into software. It allows to run multiple Operating Systems (OSs) as virtual machines (Figure
1).Each copy of an operating system is installed in to a virtual machine.

Figure 1: Virtualization Scenario

The scenario shows a VMware hypervisor, also called a Virtual Machine Manager (VMM). A VMware layer is
installed on a physical device and, on that layer, six OSs run, each executing its own applications; these can be
the same kind of OS or different kinds.

Why Virtualize
1. Share same hardware among independent users- Degrees of Hardware parallelism increases.
2. Reduced Hardware footprint through consolidation- Eases management and energy usage.
3. Sandbox/migrate applications- Flexible allocation and utilization.
4. Decouple applications from underlying Hardware- Allows Hardware upgrades without impacting an OS image.

Virtualization makes it much easier to share resources and increases the degree of hardware-level parallelism: the same
physical hardware is shared among independent units, so multiple OSs, and therefore different users, can run on one
machine, giving far more usable processing capability. Consolidating workloads into VMs also reduces the hardware
footprint, that is, the overall hardware consumption, and cuts the amount of hardware that would otherwise be wasted.
This in turn eases management and reduces the energy that would have been consumed had a large number of separate
hardware machines been used. Virtualization also supports sandboxing and migration of applications, which enables
flexible allocation and utilization of resources. Additionally, decoupling applications from the underlying hardware becomes
much easier, allowing hardware upgrades without impacting any particular OS image.
Virtualization raises abstraction, where abstraction refers to hiding inner details from the user. It is very similar to how
virtual memory operates: to give access to a larger address space, the physical memory mapping is hidden by the OS with
the help of paging. It is also similar to hardware emulators, where code written for one architecture is allowed to run on a
different physical device by presenting virtual devices such as the CPU, memory or network interface cards. The user does
not need to bother about the hardware details of a particular machine; hiding those details is what raises the level of
abstraction through virtualization.
Basically, virtualization has certain requirements. The first is the efficiency property: all innocuous instructions are
executed directly by the hardware. The resource control property means that it is impossible for programs to directly
affect system resources. Furthermore, the equivalence property indicates that a program running under a virtual machine
manager (hypervisor) performs in a manner indistinguishable from the same program running directly on the machine.

Before and After Virtualization


Before virtualization, the single physical infrastructure was used to run a single OS and its applications, which results in
underutilization of resources (Figure 2). The non-shared nature of the hardware forces organizations to buy new
hardware to meet their additional computing needs. For example, if an organization wants to experiment with or simulate
a new idea, it has to use separate dedicated systems for different experiments. So, to complete their research
work successfully, they tend to buy new hardware, which increases the CapEx and OpEx. Sometimes, if the
organization does not have money to invest more on the additional resources, they may not be able to carry out some
valuable experiments because of lack of resources. So, people started thinking about sharing a single infrastructure for
multiple purposes in the form of virtualization.

Figure 2: Before Virtualization

Figure 3: Post Virtualization Scenario

After virtualization was introduced, different OSs and applications were able to share a single
physical infrastructure (Figure 3). Virtualization reduces the huge amount invested in buying
additional resources and has become a key driver in the IT industry, especially in cloud
computing. Generally, the terms cloud computing and virtualization are not the same; there are
significant differences between these two technologies.
Virtual Machine (VM): A VM involves an isolated guest OS installation within a normal host
OS. From the user's perspective, a VM is a software platform, like a physical computer, that runs OSs and
apps. VMs possess hardware virtually.

Factors Driving the Need of Virtualization


Increased Performance and Computing Capacity: PCs today have immense computing power.

Nowadays, the average end-user desktop PC is powerful enough to meet almost all the needs of
everyday computing, with extra capacity that is rarely used. Almost all of these PCs have enough
resources to host a VMM and execute a VM with acceptable performance. The same
consideration applies to the high-end side of the PC market, where supercomputers can provide
immense compute power that can accommodate the execution of hundreds or thousands of VMs.

The factors driving the need for virtualization include: increased performance and computing capacity;
underutilized hardware and software resources; lack of space; greening initiatives; and the rise of
administrative costs.

Underutilized Hardware and Software Resources- Hardware and software underutilization is occurring due to:
increased performance and computing capacity, and the effect of limited or sporadic use of resources. The computers today
are so powerful that in most cases only a fraction of their capacity is used by an application or the system. Moreover, if we
consider the IT infrastructure of an enterprise, many computers are only partially utilized whereas they could be used
without interruption on a 24/7/365 basis.For example, desktop PCs mostly devoted to office automation tasks and used by
administrative staff are only used during work hours, remaining completely
unused overnight. Using these resources for other purposes after hours could improve the
efficiency of the IT infrastructure. To transparently provide such a service, it would be necessary to deploy a completely
separate environment, which can be achieved through virtualization.
Lack of Space: The continuous need for additional capacity, whether storage or compute power, makes data centers grow
quickly. Companies such as Google and Microsoft expand their infrastructures by building data centers as large as football
fields that are able to host thousands of nodes. Although this is viable for IT giants, in most cases enterprises cannot afford
to build another data center to accommodate additional resource capacity. This condition, along with hardware
under-utilization, has led to the diffusion of a technique called server consolidation, for which virtualization technologies are
fundamental.
Greening Initiatives:Recently, companies are increasingly looking for ways to reduce the amount of energy they consume
and to reduce their carbon footprint. Data centers are one of the major power consumers; they contribute consistently to the
impact that a company has on the environment. Maintaining a data center operation not only involves keeping servers on, but
a great deal of energy is also consumed in keeping them cool. Infrastructures for cooling have a significant impact on the carbon
footprint of a data center. Hence, reducing the number of servers through server consolidation will definitely reduce the impact
of cooling and power consumption of a data center. Virtualization technologies can provide an efficient way of consolidating
servers.
Rise of Administrative Costs: The power consumption and cooling costs have now become higher than the cost of IT
equipment. Moreover, the increased demand for additional capacity, which translates into more servers in a data center, is
also responsible for a significant increment in administrative costs. Computers—in particular, servers—do not operate all
on their own, but they require care and feeding from system administrators. Common system administration tasks include
hardware monitoring, defective hardware replacement, server setup and updates, server resources monitoring, and
backups. These are labor-intensive operations, and the higher the number of servers that have to be managed, the higher
the administrative costs. Virtualization can help reduce the number of required servers for a given workload, thus reducing
the cost of the administrative personnel.

 Share the same hardware among independent users; the degree of hardware parallelism increases.
 Reduced hardware footprint through consolidation; eases management and energy usage.
 Sandbox/migrate applications; flexible allocation and utilization.
 Decouple applications from the underlying hardware; allows hardware upgrades without impacting an OS image.

Features of Virtualization
 Virtualization Raises Abstraction
o Similar to Virtual Memory: to give access to a larger address space, the physical memory mapping is hidden by the OS using paging.
o Similar to Hardware Emulators: allows code built for one architecture to run on a different physical device by presenting virtual devices such as the CPU, memory, and NIC.
o The user need not worry about the physical hardware details.
 Virtualization Requirements
o Efficiency Property: all innocuous instructions are executed directly by the hardware.
o Resource Control Property: it must be impossible for guest programs to directly affect system resources.
o Equivalence Property: a program running under a VMM performs in a manner indistinguishable from one running directly on the hardware, except for timing and resource availability.

Virtualized Environments
Virtualization is a broad concept that refers to the creation of a virtual version of something,
whether hardware, a software environment, storage, or a network.In a virtualized environment,
there are three major components (Figure 4):

o Guest: Represents the system component that interacts with the virtualization layer rather
than with the host, as would normally happen.
o Host: Represents the original environment where the guest is supposed to be managed.
o Virtualization Layer: Responsible for recreating the same or a different environment where
the guest will operate.

Figure 4:Virtualized Environment

The components of a virtualized environment can be illustrated as follows. In the case of hardware virtualization, the guest is represented by a system image comprising an OS and installed applications. These are installed on top of virtual hardware that is controlled and managed by the virtualization layer, also called the VMM. The host is instead represented by the physical hardware, and in some cases the OS, that defines the environment where the VMM is running. In the case of virtual networking, the guest (applications and users) interacts with a virtual network, such as a virtual private network (VPN), which is managed by specific software (a VPN client) using the physical network available on the node. VPNs are useful for creating the illusion of being within a different physical network and thus accessing resources that would otherwise not be available. The virtual environment is created by means of a software program. The ability to use software to emulate a wide variety of environments creates many opportunities that were previously considered less attractive because of the excessive overhead introduced by the virtualization layer.

10.1 How Does Virtualization Work


To virtualize an infrastructure, a virtualization layer is installed on the systems. This layer can follow either of two prominent virtualization architectures: bare-metal or hosted hypervisor.
In a hosted architecture, a host OS is installed first, and then a piece of software called a hypervisor, VM monitor, or Virtual Machine Manager (VMM) is installed on top of the host OS (Figure 5). The VMM allows users to run different kinds of guest OSs, each within its own application window of the hypervisor. Examples of such hypervisors are Oracle VirtualBox, Microsoft Virtual PC, and VMware Workstation.

Figure 5: Hosted vs Bare-Metal Virtualization
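The following is a minimal, illustrative sketch (not part of the original notes) of how a hosted hypervisor such as Oracle VirtualBox can be queried from a script. It assumes VirtualBox is installed and its VBoxManage command-line tool is on the PATH; Python's standard subprocess module is used.

import subprocess

def list_registered_vms():
    # "VBoxManage list vms" prints each registered VM as: "name" {uuid}
    out = subprocess.run(["VBoxManage", "list", "vms"],
                         capture_output=True, text=True, check=True)
    return out.stdout.splitlines()

def list_running_vms():
    # "VBoxManage list runningvms" shows only the VMs that are powered on
    out = subprocess.run(["VBoxManage", "list", "runningvms"],
                         capture_output=True, text=True, check=True)
    return out.stdout.splitlines()

print("Registered VMs:", list_registered_vms())
print("Running VMs:", list_running_vms())

Because a hosted hypervisor is just an application on the host OS, it can be queried and scripted like any other program, which is part of what makes the hosted approach easy to install and use.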

Did you Know?


VMware server is a free application that is supported by Windows as well as by Linux OSs.

In a bare metal architecture, one hypervisor or VMM is actually installed on the bare metal
hardware. There is no intermediate OS existing over here. The VMM communicates directly with
the system hardware and there is no need for relying on any host OS. VMware ESXi and Microsoft
Hyper-V are different hypervisors that are used for bare-metal virtualization.

A. Hosted Virtualization Architecture


A hosted virtualization architecture requires an OS (Windows or Linux) installed on the computer. The virtualization layer is installed as an application on the OS.

Figure 6 illustrates the hosted virtualization architecture. At the lowest layer is the shared hardware, with a host OS running on it. On top of the host OS runs a VMM, which creates a virtual layer that enables different kinds of OSs to run concurrently. In this scenario, the hardware is topped by an operating system, a hypervisor is added above it, and different virtual machines then run on that virtual layer, each running the same or a different OS.

Figure 6: Hosted Virtualization Architecture

Advantages of Hosted Architecture


• Ease of installation and configuration
• Unmodified Host OS & Guest OS
• Run on a wide variety of PCs

Disadvantages of Hosted Architecture


• Performance degradation
• Lack of support for real-time OSs

B. Bare-Metal Virtualization Architecture


In a bare-metal architecture, there is underlying hardware but no underlying OS. A VMM is installed directly on the hardware, and multiple VMs run on top of it. As illustrated in Figure 7, the shared hardware runs a VMM on which multiple VMs execute, with simultaneous execution of multiple OSs.
Advantages of Bare-Metal Architecture
• Improved I/O performance
• Supports Real-time OS
Disadvantages of Bare-Metal
Architecture
• Difficult to install & configure
• Depends upon hardware platform

Figure 7: Bare-Metal Virtualization Scenario

10.2 Types of Virtualization


Virtualization covers a wide range of emulation techniques that are applied to different areas of computing. A classification of these techniques helps us better understand their characteristics and use. Before discussing virtualization techniques, it is important to know about protection rings in OSs. The protection rings are used to isolate the OS from untrusted user applications. The OS can be protected with different privilege levels (Figure 8).

Figure 8: Protection Rings in OSs

Protection Rings in OSs


In protection ring architecture, the rings are arranged in hierarchical order from ring 0 to ring 3.
The Ring 0 contains the programs that are most privileged, and ring 3 contains the programs that
are least privileged. Normally, the highly trusted OS instructions will run in ring 0, and it has
unrestricted access to physical resources. Ring 3 contains the untrusted user applications, and it
has restricted access to physical resources. The other two rings (ring 1 & 2) are allotted for device
drivers. The protection ring architecture restricts the misuse of resources and malicious behavior of untrusted user-level programs. For example, a user application in ring 3 cannot directly access any physical resources, as it is the least privileged level, but the kernel of the OS at ring 0 can, as it is the most privileged level. Depending on the type of virtualization, the hypervisor and guest OS will run at different privilege levels. Normally, the hypervisor will run at the most privileged level, and the guest OS will run at a less privileged level than the hypervisor. There are four virtualization techniques, namely:
• Full Virtualization (Hardware Assisted Virtualization/ Binary Translation).
• Para Virtualization or OS assisted Virtualization.
• Hybrid Virtualization
• OS level Virtualization

Full Virtualization: The VM simulates hardware to allow an unmodified guest OS to be run in


isolation. There are two types of full virtualization in the enterprise market: software-assisted and hardware-assisted full virtualization. In both cases, the guest OS's source code is not modified.
The software-assisted full virtualization is also called Binary Translation (BT). It relies completely on binary translation to trap and virtualize the execution of sensitive, non-virtualizable instruction sets, and it emulates the hardware using software instruction sets. It is often criticized for performance issues due to the binary translation. The software products that fall under software-assisted full virtualization (BT) include:
• VMware workstation (32Bit guests)
• Virtual PC
• VirtualBox (32-bit guests)
• VMware Server

The hardware-assisted full virtualization eliminates the binary translation and interacts directly with the hardware using virtualization technology that has been integrated into x86 processors since 2005 (Intel VT-x and AMD-V). With it, the guest OS's privileged instructions can be executed directly on the processor in a virtual context, even though the OS is virtualized (a small detection sketch follows the product list below). Several enterprise software products support hardware-assisted full virtualization and fall under hypervisor type 1 (bare metal), such as:
• VMware ESXi /ESX
• KVM
• Hyper-V
• Xen
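As a quick, hedged illustration of the hardware support mentioned above, the Linux kernel exposes the CPU feature flags in /proc/cpuinfo; the presence of the vmx flag indicates Intel VT-x, while svm indicates AMD-V. This small sketch (Linux only) simply looks for those flags:

def hardware_virtualization_support():
    # Read the CPU feature flags advertised by the processor (Linux specific)
    with open("/proc/cpuinfo") as f:
        flags = f.read().split()
    if "vmx" in flags:
        return "Intel VT-x (vmx) present"
    if "svm" in flags:
        return "AMD-V (svm) present"
    return "No hardware-assisted virtualization flag found"

print(hardware_virtualization_support())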

Para Virtualization:The para-virtualization works differently from the full virtualization. It


doesn't need to simulate the hardware for the VMs. The hypervisor is installed on a physical server (the host), and a guest OS is installed into the environment. Unlike in full virtualization (where the guest doesn't know that it has been virtualized), the virtual guests are aware that they have been virtualized, which lets them take advantage of hypervisor functions. The guest kernel source code can be modified so that sensitive operations communicate with the host, and the guest OSs require extensions to make API calls to the hypervisor.
Comparatively, in full virtualization guests issue hardware calls, but in para virtualization guests communicate directly with the host (hypervisor) using drivers. The products which support para virtualization include:
• Xen (Figure 9)
• IBM LPAR
• Oracle VM for SPARC (LDOM)
• Oracle VM for X86 (OVM)

However, because the guest kernel must be modified, Windows OSs cannot be para-virtualized under Xen; Xen achieves para-virtualization for Linux guests by modifying the kernel. VMware ESXi does not modify the kernel for either Linux or Windows guests.

Figure 9: Xen Supports both Full-Virtualization and Para-Virtualization

Hybrid Virtualization (Hardware Virtualized with PV Drivers): In the hardware-assisted full


virtualization, the guest OSs are unmodified, so many VM traps occur, causing high CPU overhead that limits scalability. Para virtualization is a complex method in which the guest kernel needs to be modified to inject the API. Therefore, due to the issues in both full and para virtualization, engineers came up with hybrid paravirtualization, that is, a combination of the two. The VM uses paravirtualization for specific hardware drivers (where full virtualization is a bottleneck, especially for I/O- and memory-intensive workloads), and the host uses full virtualization for other features. The following products support hybrid virtualization:
• Oracle VM for x86
• Xen
• VMware ESXi

OS Level Virtualization: It is widely used and is also known as "containerization". The host OS kernel allows multiple isolated user-space instances, also called containers. Unlike other virtualization technologies, there is very little or no overhead, since the containers use the host OS kernel for execution. Oracle Solaris Zones is one of the well-known container technologies in the enterprise market (a brief container sketch follows the list below). Other containers include:
• Linux LXC
• Docker
• AIX WPAR
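A hedged, minimal sketch of OS-level virtualization in practice: it uses the Docker SDK for Python (installed with pip install docker) and assumes a local Docker daemon is running. The container prints its kernel release, which matches the host's, because containers share the host kernel rather than booting a guest OS.

import docker

client = docker.from_env()                      # connect to the local Docker daemon
output = client.containers.run("alpine",        # a small Linux image
                               "uname -r",      # print the kernel release
                               remove=True)     # remove the container afterwards
print(output.decode().strip())                  # matches the host kernel: no guest OS involved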

Processor Virtualization: It allows the VMs to share the virtual processors that are abstracted from
the physical processors available at the underlying infrastructure (Figure 10). The virtualization
layer abstracts the physical processor to the pool of virtual processors that is shared by the VMs.
The virtualization layer is normally a hypervisor, but processor virtualization can also be achieved across distributed servers (a small scheduling sketch follows Figure 10).

Figure 10: Processor Virtualization
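The following purely pedagogical sketch (not an actual hypervisor algorithm) illustrates the idea behind processor virtualization: a small number of physical cores is abstracted into a larger pool of virtual CPUs that several VMs share, i.e., vCPUs are overcommitted and multiplexed onto the real cores.

from itertools import cycle

physical_cores = ["core0", "core1", "core2", "core3"]    # what the host actually has
vcpu_requests = {"vm1": 2, "vm2": 4, "vm3": 2}           # vCPUs requested by each VM

core_cycle = cycle(physical_cores)
mapping = {vm: [next(core_cycle) for _ in range(n)] for vm, n in vcpu_requests.items()}

for vm, cores in mapping.items():
    print(vm, "->", cores)    # 8 vCPUs multiplexed onto 4 physical cores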

Memory Virtualization: Another important resource virtualization technique is memory virtualization (Figure 11). The process of providing virtual main memory to the VMs is known as memory virtualization or main memory virtualization. In main memory virtualization, the physical main memory is mapped to virtual main memory, as in the virtual memory concept found in most OSs.
The main idea of main memory virtualization is to map virtual page numbers to physical page numbers. All modern x86 processors support main memory virtualization, and it can also be achieved using hypervisor software. Normally, in virtualized data centers, the unused main memory of different servers is consolidated into a virtual main memory pool that can be given to the VMs (a small page-mapping sketch follows Figure 11).

Figure 11: Memory Virtualization
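A pedagogical sketch of the page-mapping idea described above; the page table contents are hypothetical, and real hypervisors maintain such mappings in hardware-assisted structures, not in Python.

PAGE_SIZE = 4096
page_table = {0: 7, 1: 3, 2: 12}              # virtual page number -> physical frame number

def translate(virtual_address):
    vpn, offset = divmod(virtual_address, PAGE_SIZE)
    frame = page_table[vpn]                    # a missing entry would mean a page fault
    return frame * PAGE_SIZE + offset

print(hex(translate(0x1ABC)))                  # virtual page 1 maps to physical frame 3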

Storage Virtualization: A form of resource virtualization in which multiple physical storage disks are abstracted as a pool of virtual storage disks presented to the VMs (Figure 12). Normally, the virtualized storage is called logical storage.

Figure 12: Storage Virtualization

Storage virtualization is mainly used for maintaining a backup or replica of the data that are stored
on the VMs. It can be further extended to support the high availability of the data. It efficiently
utilizes the underlying physical storage. Other advanced storage virtualization techniques are
storage area networks (SAN) and network-attached storage (NAS).
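A purely illustrative sketch of the storage virtualization idea: two small "physical disks" (byte buffers) are presented as one contiguous logical disk, so a read can transparently span both underlying devices. Real logical volume managers work at the block-device level, not in Python.

class LogicalDisk:
    def __init__(self, physical_disks):
        self.disks = physical_disks             # the pool of backing devices

    def read(self, logical_offset, length):
        data, offset = b"", logical_offset
        for disk in self.disks:
            if offset < len(disk):
                chunk = bytes(disk[offset:offset + length])
                data += chunk
                length -= len(chunk)
                offset = 0
            else:
                offset -= len(disk)             # skip past this physical disk
            if length == 0:
                break
        return data

logical = LogicalDisk([bytearray(b"A" * 8), bytearray(b"B" * 8)])
print(logical.read(6, 4))                       # b'AABB': the read spans both disks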
Network Virtualization: It is a type of resource virtualization in which the physical network is abstracted to create a virtual network (Figure 13). Normally, the physical network components like routers, switches, and Network Interface Cards (NICs) are controlled by the virtualization software to provide virtual network components. A virtual network is a single software-based entity that combines network hardware and software resources. Network virtualization can be achieved within an internal network or by combining many external networks. It enables communication between the VMs that share the physical network. There are different types of network access given to the VMs, such as bridged network, network address translation (NAT), and host-only (a small configuration sketch follows Figure 13).

Figure 13: Network Virtualization
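As an illustrative sketch only, the network modes mentioned above (NAT, bridged, host-only) can be configured per virtual NIC; with VirtualBox this is done through the VBoxManage tool. The VM name "ExampleVM" and interface "eth0" below are placeholders.

import subprocess

def set_nat(vm_name):
    # attach the VM's first virtual NIC to the hypervisor's NAT engine
    subprocess.run(["VBoxManage", "modifyvm", vm_name, "--nic1", "nat"], check=True)

def set_bridged(vm_name, host_interface):
    # bridge the first virtual NIC onto a physical host interface
    subprocess.run(["VBoxManage", "modifyvm", vm_name,
                    "--nic1", "bridged", "--bridgeadapter1", host_interface], check=True)

set_nat("ExampleVM")
set_bridged("ExampleVM", "eth0")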

Data Virtualization: Data virtualization offers the ability to retrieve the data without knowing its
type and the physical location where it is stored (Figure 14). It aggregates the heterogeneous data from
the different sources to a single logical/virtual volume of data. This logical data can be accessed
from any applications such as web services, E-commerce applications, web portals, Software-as-a-
Service (SaaS) applications, and mobile applications. It hides the type and the location of the data from the applications that access it, and it ensures a single point of access to data by aggregating data from different sources. It is mainly used in data integration, business intelligence, and cloud computing (a small aggregation sketch follows Figure 14).

Figure 14: Data Virtualization
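The following pedagogical sketch (with invented data) shows the single-point-of-access idea behind data virtualization: one function hides whether a record comes from a JSON service or a CSV export and returns one logical view to the caller.

import csv, io, json

crm_json = '{"C001": {"name": "Asha", "city": "Pune"}}'            # source 1: JSON service
orders_csv = "customer,amount\nC001,250\nC002,410\n"                # source 2: CSV export

def customer_view(customer_id):
    profile = json.loads(crm_json).get(customer_id, {})             # aggregate source 1
    orders = [row for row in csv.DictReader(io.StringIO(orders_csv))
              if row["customer"] == customer_id]                    # aggregate source 2
    return {"profile": profile, "orders": orders}                   # one logical record

print(customer_view("C001"))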

Application Virtualization: Application virtualization is the enabling technology for SaaS in cloud computing; it offers users the ability to use an application without needing to install any software or tools on their machine (Figure 15). The complexity of installing client tools or other supporting software is reduced. Normally, the applications are developed and hosted on a central server. The hosted application is then virtualized, and each user is given a separate, isolated virtual copy to access.

Figure 15: Application Virtualization

10.3 Pros of Virtualization


Increased Security
The ability to control the execution of a guest in a completely transparent manner opens new
possibilities for delivering a secure, controlled execution environment. VM represents an emulated
environment in which the guest is executed. All the operations of the guest are generally performed against the VM, which then translates and applies them to the host. By default, the file system exposed by the virtual computer is completely separated from that of the host machine. This makes it an ideal environment for running applications without affecting other users in the environment.
Managed Execution
Virtualization of the execution environment not only allows increased security, but a wider range
of features also can be implemented such as:

o Sharing: Virtualization allows the creation of separate computing environments within the same host, thereby making it possible to fully exploit the capabilities of a powerful host that would otherwise be underutilized.
o Aggregation: A group of separate hosts can be tied together and represented to guests as a
single virtual host. This function is naturally implemented in middleware for distributed
computing, with a classical example represented by cluster management software, which
harnesses the physical resources of a homogeneous group of machines and represents them
as a single resource.
o Emulation: Guest programs are executed within an environment that is controlled by the
virtualization layer, which ultimately is a program. This allows for controlling and tuning the
environment that is exposed to the guests.
o Isolation: Virtualization allows providing guests—whether they are OSs, applications, or other
entities—with a completely separate environment, in which they are executed. The guest
program performs its activity by interacting with an abstraction layer, which provides access
to the underlying resources.
Portability
Concept of portability applies in different ways according to the specific type of virtualization
considered. In the case of a hardware virtualization solution, the guest is packaged into a virtual
image that, in most cases, can be safely moved and executed on top of different virtual machines.
o In the case of programming-level virtualization, as implemented by the JVM or the .NET
runtime, the binary code representing application components (jars or assemblies) can be
run without any recompilation on any implementation of the corresponding virtual machine.
o This makes the application development cycle more flexible and application deployment very straightforward: one version of the application, in most cases, is able to run on different platforms with no changes.
o Portability allows having your own system always with you and ready to use as long as the
required VMM is available. This requirement is, in general, less stringent than having all the
applications and services you need available to you anywhere you go.

More Efficient Use of Resources


Multiple systems can securely coexist and share the resources of the underlying host, without
interfering with each other. This is a prerequisite for server consolidation, which allows adjusting
the number of active physical resources dynamically according to the current load of the system,
thus creating the opportunity to save in terms of energy consumption and to be less impacting on
the environment.

10.4 Cons of Virtualization


Virtualization also has downsides. The most evident is represented by a performance decrease of
guest systems as a result of the intermediation performed by the virtualization layer. In addition,
sub-optimal use of the host because of the abstraction layer introduced by virtualization
management software can lead to a very inefficient utilization of the host or a degraded user
experience.Less evident, but perhaps more dangerous, are the implications for security, which are
mostly due to the ability to emulate a different execution environment.
Performance Degradation- Performance is definitely one of the major concerns in using
virtualization technology. Since virtualization interposes an abstraction layer between the guest
and the host, the guest can experience increased latencies. For instance, in case of hardware
virtualization, where the intermediate emulates a bare machine on top of which an entire system
can be installed, the causes of performance degradation can be traced back to the overhead
introduced by following activities:

o Maintaining the status of virtual processors


o Support of privileged instructions (trap and simulate privileged instructions)
o Support of paging within VM
o Console functions
Inefficiency and Degraded User Experience- Virtualization can sometimes lead to an inefficient use of the host. In particular, some of the specific features of the host cannot be exposed by the abstraction layer and thus become inaccessible. In the case of hardware virtualization, this could happen with device drivers: the VM may simply provide a default graphics card that maps only a subset of the features available in the host.
features of the underlying OSs may become inaccessible unless specific libraries are used. For
example, in the first version of Java the support for graphic programming was very limited and the
look and feel of applications was very poor compared to native applications. These issues have
been resolved by providing a new framework called Swing for designing the user interface, and
further improvements have been done by integrating support for the OpenGL libraries in the
software development kit.
Security Holes and New Threats-Virtualization opens the door to a new and unexpected form of
phishing. The capability of emulating a host in a completely transparent manner led the way to
malicious programs that are designed to extract sensitive information from the guest. In the case of
hardware virtualization, malicious programs can preload themselves before the operating system
and act as a thin virtual machine manager toward it. The operating system is then controlled and
can be manipulated to extract sensitive information of interest to third parties.

Software Licensing Considerations- This is becoming less of a problem as more software
vendors adapt to the increased adoption of virtualization, but it is important to check with your
vendors to clearly understand how they view software use in a virtualized environment.
Possible Learning Curve- Implementing and managing a virtualized environment will require IT
staff with expertise in virtualization. On the user side a typical virtual environment will operate
similarly to the non-virtual environment. There are some applications that do not adapt well to the
virtualized environment – this is something that your IT staff will need to be aware of and address
prior to converting.

Summary
 Virtualization opens the door to a new and unexpected form of phishing. The capability of
emulating a host in a completely transparent manner led the way to malicious programs that
are designed to extract sensitive information from the guest.
 Virtualization raises abstraction. Abstraction pertains to hiding of the inner details from a
particular user. Virtualization helps in enhancing or increasing the capability of abstraction.
 Virtualization enables sharing of resources much easily, it helps in increasing the degree of
hardware level parallelism, basically, there is sharing of the same hardware unit among
different kinds of independent units.
 In protection ring architecture, the rings are arranged in hierarchical order from ring 0 to ring 3.
The Ring 0 contains the programs that are most privileged, and ring 3 contains the programs
that are least privileged.
 In a bare metal architecture, one hypervisor or VMM is actually installed on the bare metal
hardware. There is no intermediate OS existing over here. The VMM communicates directly
with the system hardware and there is no need for relying on any host OS.
 The para-virtualization works differently from the full virtualization. It doesn’t need to
simulate the hardware for the VMs. The hypervisor is installed on a physical server (host) and
a guest OS is installed into the environment.
 The software-assisted full virtualization is also called as Binary Translation (BT) and it
completely relies on binary translation to trap and virtualize the execution of sensitive, non-
virtualizable instructions sets.
 Memory virtualization is an important resource virtualization technique. In the main memory
virtualization, the physical main memory is mapped to the virtual main memory as in the
virtual memory concepts in most of the OSs.

Keywords
 Virtualization: Virtualization is a broad concept that refers to the creation of a virtual
version of something, whether hardware, a software environment, storage, or a network.
 Hardware-assisted full virtualization: Hardware-assisted full virtualization eliminates the binary translation and interacts directly with the hardware using the virtualization technology that has been integrated into x86 processors since 2005.
 Data Virtualization: Data virtualization offers the ability to retrieve the data without
knowing its type and the physical location where it is stored.
 Application Virtualization: Application virtualization is the enabling technology for SaaS
of cloud computing that offers the ability to the user to use the application without the need
to install any software or tools in the machine.
 Memory Virtualization: The process of providing virtual main memory to the VMs is known as memory virtualization or main memory virtualization.
 Network Virtualization: It is a type of resource virtualization in which the physical
network can be abstracted to create a virtual network.

Virtual Machine

Introduction
A virtual machine is software that creates a virtualized environment between the computer platform and the end user, in which the end user can operate software. It provides an interface identical to the underlying bare hardware. The Operating System (OS) creates the illusion of multiple processes, each executing on its own processor with its own (virtual) memory. Virtual machines are "an efficient, isolated duplicate of a real machine" (Popek and Goldberg). Popek and Goldberg introduced the conditions for a computer architecture to efficiently support system virtualization.
Virtual machine is a software that creates a virtualized environment between the computer platform
and the end user in which the end user can operate software. The concept of virtualization applied to
the entire machine involves:

 mapping of virtual resources or state to real resources.


 use of real machine instructions to carry out actions specified by the virtual machine instructions.
 Implemented by adding a layer of software to a real machine to support the desired VMs
architecture.
VMs are a number of discrete, identical execution environments on a single computer, each of which runs an OS (Figure 1). They allow applications written for one OS to be executed on a machine running a different OS, and they provide a greater level of isolation between processes than is achieved when running multiple processes on the same instance of an OS.


Figure 1: Virtual Machine Scenario

Virtual Machines: Virtual Computers Within Computers


VM is no different than any other physical computer like a laptop, smart phone or server. It has a CPU,
memory, disks to store your files and can connect to the internet if needed. While the parts that make
up your computer (called hardware) are physical and tangible, VMs are often thought of as virtual
computers or software-defined computers within physical servers, existing only as code.

11.1 Virtual Machine Attributes


Virtual machine is a software that creates a virtualized environment between the computer platform
and the end user in which the end user can operate software.The section below discusses the different
characteristics for the same.

Properties of a Virtual Machine


Virtual Hardware

o Each VM has its own set of virtual hardware (e.g., RAM, CPU, NIC, etc.) upon which an operating
system and applications are loaded.


o OS sees a consistent, normalized set of hardware regardless of the actual physical hardware
components.
Partitioning

o Multiple applications and OSs can be supported within a single physical system.
o There is no overlap amongst memory, as each virtual machine has its own memory space.
Isolation

o VMs are completely isolated from host machine and other VMs. If a VM crashes, all others are
unaffected.
o Data does not leak across VMs.
Identical Environment

o VMs can have a number of discrete identical execution environments on a single computer, each of
which runs an OS.
Other VM Features

o Each VM has its own set of virtual hardware (e.g., RAM, CPU, NIC, etc.) upon which an operating
system and applications are loaded.
o OS sees a consistent, normalized set of hardware regardless of the actual physical hardware
components.
o Host system resources are shared among the various VMs. For example, if a host system has 8GB
memory where VMs are running, this amount will be shared by all the VMs, depending upon the
size of the allocation.
o One of the best features of using Virtual machines is we can run multiple OSs/VMs in parallel on
one host system.
o VMs are isolated from one another, thus secure from malware or threat from any other
compromised VM running on the same host.
o Direct exchange of data and mutual influencing are prevented.
o Transfer of VMs to another system can be implemented by simply copying the VM data since the
complete status of the system is saved in a few files.
o VMs can be operated on all physical host systems that support the virtualization environment
used.

Virtual Machine Architecture


o Runtime software is the virtualization software that implements the Process VM. It is
implemented at the API level of the computer architecture above the combined layer of OS and
Hardware. This emulates the user-level instructions as well as OS or library calls.
o For the system VM, the virtualization software is called Virtual Machine Monitor(VMM).
o This software is present between the host hardware machine and the guest software.
o VMM emulates the hardware ISA allowing the guest software to execute a different ISA.

Virtual Machine Taxonomy


Figure 3 depicts the taxonomy for the virtual machines. Let us discuss each one of them below:


Figure 3: Taxonomy of Virtual Machines

Process Virtual Machines: These are also known as Application VMs (Figure 4). Virtualization below the API or ABI, providing virtual resources to a single process executed on a machine, is called process virtualization. The VM is created for that process alone and destroyed when the process finishes.

Figure 4: Process VM

Multiprogrammed Systems: Each application is given effectively separate access to resources,


managed by the OS.
Emulators and Translators:

o Executes program binaries compiled for different instruction sets.


o Slower, requiring hardware interpretation.
o Optimization through storing blocks of converted code for repeated execution.
Optimizers, same ISA: Perform code optimization during translation and execution.
High-Level-Language VM (a brief CPython example follows this list):

o Cross-platform compatibility.
o Programs written for an abstract machine, which is mapped to real hardware through a VM.
 Sun Microsystems Java VM
 Microsoft Common Language Infrastructure, .NET framework.
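As a brief illustration, CPython itself is a high-level-language process VM: source code is compiled to bytecode for an abstract machine, which the interpreter then executes on whatever physical hardware is underneath. The standard dis module makes that bytecode visible.

import dis

def add(a, b):
    return a + b

dis.dis(add)    # prints the abstract-machine (bytecode) instructions for the function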

System Virtual Machines: These correspond to virtualization of the hardware below the ISA. A single host can run multiple isolated OSs (Figure 5), that is, servers running different OSs in isolation from the other concurrent systems. The hardware is managed by the Virtual Machine Manager (VMM). Classically, the VMM runs on bare hardware, directly interacting with resources. It intercepts and interprets guest OS actions.

Figure 5: System Virtual Machines

Uses of Virtual Machines


 Building and deploying apps to the cloud.
 Trying out a new operating system (OS), including beta releases.
 Spinning up a new environment to make it simpler and quicker for developers to run dev-test
scenarios.
 Backing up your existing OS.
 Accessing virus-infected data or running an old application by installing an older OS.
 Running software or apps on operating systems that they were not originally intended for.

Benefits of Virtual Machines


While VMs run like individual computers with individual operating systems and applications, they
have the advantage of remaining completely independent of one another and the physical host
machine. A piece of software called a hypervisor or virtual machine manager, lets you run different
operating systems on different virtual machines at the same time. This makes it possible to run Linux
VMs, for example, on a Windows OS or to run an earlier version of Windows on more current
Windows OS. As the VMs are independent of each other, they are also extremely portable. You can
move a VM on a hypervisor to another hypervisor on a completely different machine almost
instantaneously.Because of their flexibility and portability, virtual machines provide many benefits,
such as:
Cost Savings- Running multiple virtual environments from one piece of infrastructure means that
you can drastically reduce your physical infrastructure footprint. This boosts your bottom line—
decreasing the need to maintain nearly as many servers and saving on maintenance costs and
electricity.
Agility and Speed- Spinning up a VM is relatively easy and quick and is much simpler than
provisioning an entire new environment for your developers. Virtualisation makes the process of
running dev-test scenarios a lot quicker.
Lowered Downtime- VMs are so portable and easy to move from one hypervisor to another on a
different machine—this means that they are a great solution for backup, in the event the host goes
down unexpectedly.
Scalability- VMs allow you to more easily scale your apps by adding more physical or virtual servers
to distribute the workload across multiple VMs. As a result you can increase the availability and
performance of your apps.
Security Benefits- Because VMs run in multiple OSs, using a guest operating system on a VM allows
you to run apps of questionable security and protects your host OS. VMs also allow for better security
forensics and are often used to safely study computer viruses, isolating the viruses to avoid risking
their host computer.


Isolated environment provided by VMs- If you are a tester or security analyst, VMs are a good way to run multiple applications and services in isolation, because they do not affect each other.
Easy to Backup and Clone- All the VMs are stored as files on the physical hard drive of the host machine. Thus, they can easily be backed up, moved, or cloned in real time, which is one of the popular benefits of running virtual machines.
Faster Server Provisioning- VMs are easy to install, eliminating the cumbersome and time-
consuming installation of applications on servers. For example, if you want a new server to run some
application, then it is very easy and fast to deploy pre-configured VM templates instead of installing a
new server OS on a physical machine. The same goes for cloning existing applications to try something
new.
Beneficial in Disaster Recovery- VMs do not depend upon the underlying hardware; they are independent of the hardware or CPU model on which they run. Hence, we can easily replicate VMs to the cloud or offsite, so in a disaster situation it is easy to recover and get back online in a short span of time, as we do not need to worry about a particular server manufacturer or server model.
Use Older Applications for a Longer Time- Many companies still use old applications that are crucial to them but do not support modern hardware or operating systems. In such situations, even if the company wants to modernize, the IT staff would prefer not to touch them. However, we can pack such an application in a VM with a compatible old operating system and old virtual hardware. In this way, it is possible to switch to modern hardware while keeping the old software stack intact.
Virtual Machine is Easily Portable- A single server running with some particular operating system
software is not easy to move from one place to another, whereas if we have virtualized the same, then it
becomes very easy to move data and OS from one physical server to another, situated somewhere else
with the minimal workforce and without heavy transportation requirements.
Better Usage of Hardware Resources- Modern computer and server hardware is quite powerful; a single operating system and a couple of applications cannot extract its full potential. Thus, using VMs not only makes efficient use of the CPU's power but also allows companies to save substantially on hardware spending.
Made Cloud Computing Possible- Without VMs there would be no cloud computing, because the whole idea behind it is to provide instant provisioning of machines running either Windows or Linux OSs; this is only possible with the help of pre-built templates ready to deploy as VMs on remote data center hardware, for example at DigitalOcean, AWS, and Google Cloud. So, the next time you hear "cloud hosting" or "Virtual Private Server" hosting, remember it is a VM running on data center hardware.

11.2 Hypervisors
VMs are widely used instead of physical machines in the IT industry today. VMs support green IT solutions, and their usage increases resource utilization and makes management tasks easier. Since VMs are so widely used, the technology that enables the virtual environment also receives attention in industry and academia. The virtual environment can be created with the help of software tools called hypervisors.
Hypervisors are the software tools that sit between the VMs and the physical infrastructure and provide the required virtual infrastructure for the VMs. Hypervisors are also called Virtual Machine Managers (VMMs) (Figure 6). They are the key drivers enabling virtualization in cloud data centers. Different hypervisors are used in the IT industry; some examples are VMware, Xen, Hyper-V, KVM, and OpenVZ.
The virtual infrastructure means virtual CPUs (vCPUs), virtual memory, virtual NICs (vNICs), virtual
storage, and virtual I/O devices. The fundamental element of hardware virtualization is the hypervisor,
or VMM that helps to recreate a hardware environment in which Guest Operating Systems (OSs) are
installed.

Figure 6: Internal Organization of a Virtual Machine Manager

Three main modules, the dispatcher, the allocator, and the interpreter, coordinate their activity in order to emulate the underlying hardware. The dispatcher constitutes the entry point of the monitor and reroutes the instructions issued by the virtual machine instance to one of the two other modules. The allocator is responsible for deciding the system resources to be provided to the VM: whenever a virtual machine tries to execute an instruction that results in changing the machine resources associated with that VM, the allocator is invoked by the dispatcher. The interpreter module consists of interpreter routines; these are executed whenever a VM executes a privileged instruction: a trap is triggered and the corresponding routine is executed.
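The following is only a pedagogical sketch of the coordination just described; the instruction names and the resource model are invented for illustration and do not reflect any specific VMM implementation.

class Allocator:
    def assign(self, vm, resource):
        print(f"allocator: granting {resource} to {vm}")

class Interpreter:
    def trap(self, vm, instruction):
        print(f"interpreter: emulating privileged instruction {instruction} for {vm}")

class Dispatcher:
    def __init__(self):
        self.allocator, self.interpreter = Allocator(), Interpreter()

    def dispatch(self, vm, instruction):
        # entry point of the monitor: reroute the instruction to the proper module
        if instruction.startswith("ALLOC"):
            self.allocator.assign(vm, instruction.split()[1])
        else:
            self.interpreter.trap(vm, instruction)

vmm = Dispatcher()
vmm.dispatch("vm1", "ALLOC memory")   # resource change -> handled by the allocator
vmm.dispatch("vm2", "HLT")            # privileged instruction -> trapped and interpreted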
The design and architecture of a VMM, together with the underlying hardware design of the host
machine, determine the full realization of hardware virtualization, where a guest OS can be
transparently executed on top of a VMM as though it were run on the underlying hardware.
The criteria that need to be met by a VMM to efficiently support virtualization were established by Popek and Goldberg in 1974. Three properties have to be satisfied:


o Equivalence: A guest running under the control of a virtual machine manager should exhibit the
same behavior as when it is executed directly on the physical host.
o Resource control: The VMM should be in complete control of the virtualized resources.
o Efficiency: A statistically dominant fraction of the machine instructions should be executed
without intervention from the VMM.
Before hypervisors were introduced, there was a one-to-one relationship between hardware and OSs. This type of computing results in underutilized resources.
After hypervisors were introduced, it became a one-to-many relationship. With the help of hypervisors, many OSs can run on and share a single piece of hardware.

Types of Hypervisors
Hypervisors are generally classified into two categories:

 Type 1 or bare metal hypervisors


 Type 2 or hosted hypervisors

Figure 7: Hosted (left) and Native (right) VMs

Type I Hypervisors run directly on top of the hardware. Therefore, they take the place of the OSs and
interact directly with the ISA interface exposed by the underlying hardware, and they emulate this
interface in order to allow the management of guest OSs. These are also called native VMs since they run natively on the hardware. The other characteristics of Type I hypervisors include the following (a short guest-management sketch follows the list):

o Can run and access physical resources directly without the help of any host OS.
o Additional overhead of communicating with the host OS is reduced and offers better efficiency
when compared to type 2 hypervisors.
o Used for servers that handle heavy load and require more security.
o Examples- Microsoft Hyper-V, Citrix XenServer, VMWare ESXi, and Oracle VM Server for
SPARC.
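A hedged sketch of managing guests on a type 1 setup: the libvirt Python bindings (pip install libvirt-python) can talk to hypervisors such as KVM. It assumes a local libvirt daemon reachable at qemu:///system.

import libvirt

conn = libvirt.open("qemu:///system")           # connect to the hypervisor
try:
    for dom in conn.listAllDomains():           # every defined guest (VM)
        state = "running" if dom.isActive() else "shut off"
        print(dom.name(), "-", state)
finally:
    conn.close()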


Figure 8: Bare-Metal Virtualization

Type II Hypervisors require the support of an operating system to provide virtualization services
(Figure 9). This means that they are programs managed by the OS, which interact with it through the
ABI and emulate the ISA of virtual hardware for guest OSs.This type of hypervisor is also called a
hosted or embedded VM since it is hosted within an OS (Figure 10). Hosted virtualization requires the
host OS and does not have direct access to the physical hardware. The host OS is also known as
physical host, which has the direct access to the underlying hardware. However, the major
disadvantage of this approach is that if the host OS fails or crashes, the VMs crash as well. So, it is recommended to use type 2 hypervisors only on client systems where efficiency is less critical. Examples- VMware Workstation and Oracle VirtualBox.

Figure 9: Type II Hypervisor


Figure 10: Hosted Virtualization

Summarized Implementation of Hypervisors


Vary greatly, with options including:

o Type 0 Hypervisors- Hardware-based solutions that provide support for virtual machine creation
and management via firmware. Example: IBM LPARs and Oracle LDOMs.
o Type 1 Hypervisors- Operating-system-like software built to provide virtualization. Example: VMware ESX, Joyent SmartOS, and Citrix XenServer.
o Type 1 Hypervisors– Also includes general-purpose operating systems that provide standard
functions as well as VMM functions. Example: Microsoft Windows Server with HyperV and
RedHat Linux with KVM.
o Type 2 Hypervisors- Applications that run on standard OSs but provide VMM features to guest
OSs. Example: VMware Workstation and Fusion, Parallels Desktop, and Oracle VirtualBox.
Other Variations Include: Much variation exists due to the breadth, depth, and importance of virtualization in modern computing.
Para Virtualization- Technique in which the guest operating system is modified to work in
cooperation with the VMM to optimize performance.
Programming-environment Virtualization- VMMs do not virtualize real hardware but instead create an
optimized virtual system. It is used by Oracle Java and Microsoft .NET.
Emulators– Allow applications written for one hardware environment to run on a very different
hardware environment, such as a different type of CPU.
Application Containment- Not virtualization at all but rather provides virtualization-like features by
segregating applications from the operating system, making them more secure, manageable. It is
included in Oracle Solaris Zones, BSD Jails, and IBM AIX WPARs.

Summary
 Virtualization raises abstraction. Abstraction pertains to hiding of the inner details from a
particular user. Virtualization helps in enhancing or increasing the capability of abstraction.
 Virtualization enables sharing of resources much easily, it helps in increasing the degree of
hardware level parallelism, basically, there is sharing of the same hardware unit among different
kinds of independent units.
 In a bare metal architecture, one hypervisor or VMM is actually installed on the bare metal
hardware. There is no intermediate OS existing over here. The VMM communicates directly with
the system hardware and there is no need for relying on any host OS.
 Type I Hypervisors run directly on top of the hardware. Therefore, they take the place of the OSs
and interact directly with the ISA interface exposed by the underlying hardware, and they emulate
this interface in order to allow the management of guest OSs.
 Type II Hypervisors require the support of an operating system to provide virtualization services.

This means that they are programs managed by the OS, which interact with it through the ABI and
emulate the ISA of virtual hardware for guest OSs.
 Xen is an open-source initiative implementing a virtualization platform based on
paravirtualization. Xen is a VMM for IA-32 (x86, x86-64), IA-64 and PowerPC 970 architectures.
 KVM is part of existing Linux code, it immediately benefits from every new Linux feature, fix, and
advancement without additional engineering. KVM converts Linux into a type-1 (bare-metal)
hypervisor.
 VMware Workstation is the most dependable, high-performing, feature-rich virtualization platform
for your Windows or Linux PC.

Keywords
 Virtualization: Virtualization is a broad concept that refers to the creation of a virtual version of
something, whether hardware, a software environment, storage, or a network.
 Type 0 Hypervisors- Hardware-based solutions that provide support for virtual machine creation
and management via firmware. Example: IBM LPARs and Oracle LDOMs are examples.
 Type 1 Hypervisors- Operating-system-like software built to provide virtualization. Example:
Including VMware ESX, JoyentSmartOS, and Citrix XenServer. It also includes general-purpose
operating systems that provide standard functions as well as VMM functions. Example: Microsoft
Windows Server with HyperV and RedHat Linux with KVM.
 Type 2 Hypervisors- Applications that run on standard OSs but provide VMM features to guest
OSs. Example: VMware Workstation and Fusion, Parallels Desktop, and Oracle VirtualBox.
 Interpretation: Interpretation involves relatively inefficient instruction-at-a-time execution.
 Binary Translation: Binary translation involves block-at-a-time translation, with optimization for repeated executions.

 Para Virtualization- Technique in which the guest operating system is modified to work in
cooperation with the VMM to optimize performance.
 Programming-environment Virtualization- VMMs do not virtualize real hardware but instead
create an optimized virtual system. It is used by Oracle Java and Microsoft .NET.
 Emulators–Emulators allow the applications written for one hardware environment to run on a
very different hardware environment, such as a different type of CPU.
