Cloud Computing Notes
Introduction
Cloud refers to a network or the Internet; the cloud is something that is present at a remote location. The cloud can provide services over public or private networks, that is, over Wide Area Networks (WANs), Local Area Networks (LANs), or Virtual Private Networks (VPNs). Applications such as e-mail, web conferencing, and customer relationship management (CRM) all run in the cloud.
Cloud Computing
Figure 2: Cloud Scenario
Cloud computing refers to manipulating, configuring, and accessing applications online. It offers online data storage, infrastructure, and applications, and involves a combination of software- and hardware-based computing resources delivered as a network service.
Example: Suppose we want to install MS-Word on our organization's computers. We have to buy its CD/DVD and install it, or set up a software distribution server to install the application on every machine automatically. Every time Microsoft issues a new version, we have to perform the same task again. If some other company hosts the application instead, it handles the cost of the servers and manages the software updates. The customers are charged as per their utilization, that is, as per usage (Figure 3): the client does not need to pay for hardware, while the service provider pays for the hardware and its maintenance. This reduces the cost of using the software along with the cost of installing heavy servers. Additionally, the cloud helps reduce electricity bills.
Figure 3: Cloud Client-Server Perspective
In client-server computing, all the software applications, all the data, and all the controls resided on the server side.
Thereafter, distributed computing came into the picture, in which all the computers are networked together and share their resources when needed. On the basis of these computing models, the concept of cloud computing emerged.
Cloud computing traces back to the early 1960s and J.C.R. Licklider (Joseph Carl Robnett Licklider), an American psychologist and computer scientist. During his network research work on ARPANet (Advanced Research Projects Agency Network), trying to connect people and data all around the world, he introduced the ideas behind the cloud computing technique we all know today.
Born on March 11, 1915 in St. Louis, Missouri, US, J.C.R. Licklider pursued his initial studies at Washington University, receiving a BA degree in 1937 with three specializations: physics, mathematics, and psychology. In 1938, Licklider completed his MA in psychology, and he received his Ph.D. from the University of Rochester in 1942. His interest in information technology, along with his years of service and achievements in different areas, led to his appointment as Head of the IPTO at ARPA (the US Department of Defense Advanced Research Projects Agency) in 1962. His work led to ARPANet, a forerunner of today's Internet.
Around 1961, John McCarthy suggested in a speech at MIT that computing could be sold like a utility, just like water or electricity. In 1999, Salesforce.com started delivering applications to users through a simple website. The applications were delivered to enterprises over the Internet, and in this way the dream of computing sold as a utility came true.
Cloud computing continued to develop throughout the early 21st century. In 2002, Amazon started Amazon Web Services, providing services such as storage, computation, and even human intelligence. However, only with the launch of the Elastic Compute Cloud (EC2) in 2006 did a truly commercial service open to everybody exist.
In 2008, Google introduced the beta version of Google App Engine. In 2009, Google Apps also started to provide cloud computing enterprise applications. Microsoft, which had announced the service earlier in 2008, released its cloud computing platform, Microsoft Azure, for testing, deploying, and managing applications and services.
In 2012, Google Compute Engine was released in preview and was later rolled out to the general public. By the end of December 2013, Oracle had introduced Oracle Cloud with three primary services for business (IaaS, PaaS, and SaaS). Currently, Linux and Microsoft Azure operate largely side by side, with a large share of Azure workloads running on Linux.
The development of grid computing enabled sharing of computing power and resources spread across multiple geographic domains.
The most recent stage involves the rise of cloud computing, where service-oriented, market-based computing applications are predominant: virtualization meets the Internet.
1. Abstraction: Cloud computing abstracts the details of system implementation from users
and developers. Applications run on physical systems that aren't specified, data is stored in
locations that are unknown, administration of systems is outsourced to others, and access
by users is ubiquitous (Present or found everywhere).
2. Virtualization: Cloud computing virtualizes systems by pooling and sharing resources. Systems and storage can be provisioned as needed from a centralized infrastructure, costs are assessed on a metered basis, multi-tenancy is enabled, and resources are scalable with agility.
There are three components of a cloud computing solution (Figure 6):
a) Clients,
b) The data center, and
c) Distributed servers.
A. Clients: Devices that end users interact with to manage their information on the cloud. There can be different types of clients, such as:
Mobile Clients: Includes PDAs or smartphones, such as a BlackBerry, Windows Mobile smartphone, or an iPhone.
Thin Clients: Computers that do not have internal hard drives; they let the server do all the work and then display the information.
Thick Clients: Regular computers that use a web browser like Firefox or Internet Explorer to connect to the cloud.
A thin client is a computing device that's connected to a network. Unlike a typical PC or "fat client", which has the memory, storage, and computing power to run applications and perform computing tasks on its own, a thin client functions as a virtual desktop, using the computing power residing on the networked servers.
Advantages of Using Thin Clients
Thin clients are becoming an increasingly popular solution, because of their price and effect on the environment.
Lower hardware costs: Thin clients are cheaper than thick clients because they do not contain as much hardware. They also last longer before they need to be upgraded or become obsolete.
Lower IT costs: Thin clients are managed at the server, and there are fewer points of failure.
Security: Since the processing takes place on the server and there is no hard drive, there’s less chance of
malware invading the device. Also, since thin clients don’t work without a server, there’s less chance of them being
physically stolen.
Data security: Since data is stored on the server, there’s less chance for data to be lost if the client computer
crashes or is stolen.
Less power consumption: Thin clients consume less power than thick clients. This means you'll pay less to power them, and you'll also pay less to air-condition the office.
Ease of repair or replacement: If a thin client dies, it’s easy to replace. The box is simply swapped out and the
user’s desktop returns exactly as it was before failure.
Less noise: Without a spinning hard drive, less heat is generated and quieter fans can be used on the thin
client.
B. Datacenter: The datacenter is the collection of servers where the application to which you subscribe is housed. It could be a large room in the basement of your building or a room full of servers on the other side of the world that you access via the Internet. There is a growing trend in the IT world of virtualizing servers: software can be installed that allows multiple instances of virtual servers to be used, so there can be half a dozen or more virtual servers running on one physical server.
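To make the idea of several virtual servers sharing one physical server concrete, here is a minimal first-fit placement sketch in Python. The host size and VM shapes are hypothetical, and real virtualization platforms use far more elaborate schedulers; the point is only that one physical machine's CPU and RAM budget can host many virtual servers.

# First-fit sketch: pack virtual servers onto a physical host while capacity remains.
HOST = {"vcpus": 32, "ram_gb": 128}          # hypothetical physical server

def place(vms):
    placed, free_cpu, free_ram = [], HOST["vcpus"], HOST["ram_gb"]
    for vm in vms:
        if vm["vcpus"] <= free_cpu and vm["ram_gb"] <= free_ram:
            free_cpu -= vm["vcpus"]
            free_ram -= vm["ram_gb"]
            placed.append(vm["name"])
    return placed

print(place([{"name": f"vm{i}", "vcpus": 4, "ram_gb": 16} for i in range(8)]))
# -> ['vm0', ..., 'vm7']: eight virtual servers end up sharing one physical server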
C. Distributed Servers: The distributed servers are in geographically disparate locations. They give the service provider more flexibility in options and security. For instance, Amazon has its cloud solution in servers all over the world. If something were to happen at one site, causing a failure, the service could still be accessed through another site.
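The following sketch illustrates that failover idea, assuming a service replicated behind hypothetical regional endpoints (the example.com URLs are placeholders, not real provider addresses): if one site fails, the client simply falls through to the next.

import requests

# Hypothetical endpoints for the same service replicated in different regions.
ENDPOINTS = [
    "https://us-east.example.com/api/status",
    "https://eu-west.example.com/api/status",
    "https://ap-south.example.com/api/status",
]

def fetch_with_failover(urls, timeout=2):
    """Return the first successful response; fall through to the next site on failure."""
    last_error = None
    for url in urls:
        try:
            resp = requests.get(url, timeout=timeout)
            resp.raise_for_status()
            return resp
        except requests.RequestException as exc:
            last_error = exc          # this site failed, try the next region
    raise RuntimeError(f"all sites unavailable: {last_error}")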
Resource pooling– The provider's computing resources are pooled to serve multiple consumers using a multi-tenant model, with different physical and virtual resources dynamically assigned and reassigned according to consumer demand. Examples of resources include storage, processing, memory, and network bandwidth.
Rapid elasticity– Capabilities can be elastically provisioned and released, in some cases
automatically, to scale rapidly outward and inward as needed. For the user, the capabilities
available for provisioning often appear to be unlimited and can be appropriated in any quantity at any time.
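A small sketch of how a scaler might decide the instance count is shown below; it is a generic target-tracking rule written in Python, not any particular provider's autoscaling API, and the 60% target utilisation is an assumed value.

import math

def desired_instances(current, avg_cpu, target=0.60, min_n=1, max_n=20):
    """Scale outward/inward so average CPU utilisation moves toward the target."""
    needed = math.ceil(current * avg_cpu / target)
    return max(min_n, min(max_n, needed))

print(desired_instances(current=4, avg_cpu=0.90))   # -> 6 (scale out)
print(desired_instances(current=4, avg_cpu=0.30))   # -> 2 (scale in)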
Measured service– Cloud systems automatically control and optimize resource use by leveraging a metering capability appropriate to the type of service (e.g., storage, processing, bandwidth, and active user accounts). Resource usage can be monitored, controlled, and reported, providing transparency for both the provider and the user of the service. This follows a "pay as you grow" model and also lets internal IT departments provide IT chargeback capabilities. The usage of cloud resources is measured, and the user is charged based on metrics such as the number of CPU cycles used, the amount of storage space used, and the number of network I/O requests.
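As a worked illustration of metered, pay-as-you-grow billing, the sketch below multiplies each metered quantity by a unit rate; the metric names and rates are invented for the example and do not reflect any real provider's price sheet.

# Illustrative rates only; real providers publish their own price sheets.
RATES = {
    "cpu_seconds": 0.000012,     # currency units per CPU-second
    "gb_hours_stored": 0.00005,  # per GB-hour of storage
    "io_requests": 0.0000004,    # per network I/O request
}

def monthly_charge(usage):
    """Pay-as-you-grow: charge = sum of (metered usage x unit rate)."""
    return round(sum(usage.get(m, 0) * rate for m, rate in RATES.items()), 2)

print(monthly_charge({"cpu_seconds": 2_500_000,
                      "gb_hours_stored": 720 * 50,
                      "io_requests": 4_000_000}))   # -> 33.4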
Performance– Dynamic allocation of resources according to application workloads makes it easy to scale up or down and to maintain performance.
Reduced costs– Applications gain cost benefits because only as much computing and storage capacity as is actually required is provisioned, dynamically, and upfront investment in computing assets sized for worst-case requirements is avoided.
Outsourced Management– Cloud computing allows users to outsource their IT infrastructure requirements to external cloud providers and save upfront capital investment. This makes it easier to set up IT infrastructure and to pay only the operational expenses for the cloud resources used.
Multitenancy: Multitenancy allows multiple users to make use of the same shared resources. Modern applications such as banking, financial, social networking, e-commerce, and B2B applications are deployed in cloud environments that support multi-tenanted applications.
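A minimal way to picture multitenancy is tenant-scoped data access: tenants share one database and one table, but every query is filtered by a tenant identifier. The sketch below uses SQLite purely for illustration, and the tenant names are hypothetical.

import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE invoices (tenant_id TEXT, amount REAL)")
db.executemany("INSERT INTO invoices VALUES (?, ?)",
               [("acme", 120.0), ("acme", 40.0), ("globex", 75.5)])

def invoices_for(tenant_id):
    # Every query is filtered by tenant_id, so tenants share one table
    # (and one database instance) but never see each other's rows.
    return db.execute("SELECT amount FROM invoices WHERE tenant_id = ?",
                      (tenant_id,)).fetchall()

print(invoices_for("acme"))    # [(120.0,), (40.0,)]
print(invoices_for("globex"))  # [(75.5,)]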
Service Oriented Architecture (SOA): SOA is essentially a collection of services which communicate with each other. SOA provides a loosely-integrated suite of services that can be used within multiple business domains (Figure 7). The approach is usually implemented with the Web service model.
Cost Management and Containment: Cloud computing can be expensive if you don't know how to manage your computing resources and take maximum advantage of them. Many times, organizations settle into a pay-as-you-go mindset and spend more on the cloud than they would have on on-premise infrastructure. One should always optimize cost through financial analytics and usage reporting for better cost monitoring. For the most part, cloud computing can save businesses money: in the cloud, an organization can easily ramp up its processing capabilities without making large investments in new hardware, accessing extra processing instead through pay-as-you-go models from public cloud providers. However, the on-demand and scalable nature of cloud computing services sometimes makes it difficult to define and predict quantities and costs.
Lack of Resources/Expertise: Cloud adoption challenges companies and enterprises. As the usage of cloud technologies increases and the tools to manage them become more sophisticated, finding experts in cloud computing is becoming a bottleneck for many organizations. Organizations are placing more and more workloads in the cloud while cloud technologies continue to advance rapidly. Due to these factors, organizations are having a tough time keeping up with the tools, and the need for expertise continues to grow. Such challenges can be minimized through additional training of IT and development staff. Many companies are adopting automated cloud management technologies, but it is always better to also train individuals to meet the needs of the moment. Presently, DevOps tools like Chef and Puppet are heavily used in the IT industry.
Governance/Control: In cloud computing, infrastructure resources are under the CSP's control, and end users or companies have to abide by the governance policies of the CSP. Traditional IT teams have no control over how and where their data is stored and processed. IT governance should assure how infrastructure assets from the CSP are being used. To overcome these downfalls and challenges when onboarding to the cloud, IT must adapt its orthodox way of governance and process control to accommodate the cloud. IT now plays an important role in benchmarking cloud service requirements and policies. Thus, proper IT governance should ensure that IT assets are implemented and used according to agreed-upon policies and procedures, ensure that these assets are properly controlled and maintained, and ensure that these assets support your organization's strategy and business goals.
Compliance: When organizations move their native data to the cloud, they need to comply with particular general body policies if the data comes from public sources. However, finding a cloud provider who will comply with these policies is difficult, or one needs to negotiate on that front. Many CSPs are coming up with flexible compliance policies for data acquisition and cloud infrastructure. This is an issue for anyone using backup services or cloud storage: every time a company moves data from internal storage to the cloud, it must remain compliant with industry regulations and laws. Depending on the industry and its requirements, every organization must ensure that these standards are respected and carried out. This is one of the many challenges facing cloud computing, and although the procedure can take a certain amount of time, the data must be properly stored.
Managing Multiple Clouds: The challenges facing cloud computing are not confined to a single cloud. The use of multi-cloud has grown exponentially in recent years, but managing multi-cloud infrastructure, as opposed to a single cloud, is very challenging given all of the data-driven challenges above. Companies are shifting to, or combining, public and private clouds and, as mentioned earlier, tech giants like Alibaba and Amazon are leading the way. Approximately 81% of companies have multi-cloud strategies and a hybrid cloud structure (public and private clouds). Companies opt for a multi-cloud scenario because some services are more cost-effective in the public cloud, and for managing cost this cloud model has been very successful in recent years. However, managing such a highly networked architecture is a difficult task.
Performance: When a business moves to the cloud, it becomes dependent on the service provider, and the next prominent challenge of moving to cloud computing expands on this partnership. The performance of the organization's BI and other cloud-based systems is tied to the performance of the cloud provider when it falters: when your provider is down, you are also down. Cloud computing is an on-demand compute service and supports multitenancy, so performance should not suffer as new users are acquired. The CSP should maintain enough resources to serve all its users and any ad-hoc requests.
Building a Private Cloud: Creating an internal or private cloud brings a significant benefit: having all the data in-house. But IT managers and departments will need to build and glue it all together by themselves, which can make this one of the most difficult challenges of moving to cloud computing. Many tasks, such as grabbing an IP address for the cloud software layer, setting up a virtual local area network (VLAN), load balancing, setting firewall rules for the IP address, patching server software, and arranging the nightly backup queue, are quite complex for a private cloud. Although building a private cloud isn't a top priority for many organizations, for those who are likely to implement such a solution it quickly becomes one of the main challenges facing cloud computing, and private solutions should be carefully addressed. Many companies plan to do so because the cloud will be on-premises and they will retain full authority over the data on the shared cloud resources.
Segmented Usage and Adoption: Most organizations did not have a robust cloud adoption strategy in place when
they started to move to the cloud. Instead, ad-hoc strategies sprouted, fueled by several components. One of them was
the speed of cloud adoption. Another one was the staggered expiration of data center contracts/equipment, which led to
intermittent cloud migration. Finally, there also were individual development teams using the public cloud for specific
applications or projects.
Migration: One of the main cloud computing industry challenges in recent years concerns migration, the process of moving an application to the cloud. And although deploying a new application in the cloud is a straightforward process, moving an existing application to a cloud environment raises many cloud challenges.
1. Online Data Storage: Cloud computing allows storing data such as files, images, audio, and video in cloud storage. The organization need not set up physical storage systems to hold the huge volume of business data, which is very costly nowadays. As organizations grow technologically, data generation also grows over time, and storing it becomes a problem. In this situation, cloud storage provides the service of storing and accessing data at any time, as required. Examples: Google Drive, Dropbox, iCloud, etc.
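As an illustration of programmatic cloud storage (here assuming AWS S3 accessed through the boto3 SDK; Google Drive, Dropbox, and iCloud expose their own APIs instead), a file can be uploaded once and later retrieved from anywhere. The bucket name and object key below are placeholders.

import boto3   # AWS SDK for Python; any object store with a similar SDK would do

s3 = boto3.client("s3")

# Both the bucket name and the object key below are placeholders.
s3.upload_file("quarterly-report.pdf", "example-company-bucket",
               "backups/2024/quarterly-report.pdf")

# Later, from any machine with credentials, the same object can be retrieved:
s3.download_file("example-company-bucket",
                 "backups/2024/quarterly-report.pdf", "quarterly-report.pdf")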
2. Backup and Recovery: Cloud vendors provide security on their side by keeping the stored data safe and by providing a backup facility for the data. They offer various recovery applications for retrieving lost data. In the traditional approach, backing up data is a very complex problem, and it is very difficult, sometimes impossible, to recover lost data. Cloud computing has made backup and recovery applications very easy; there is no fear of running out of backup media or of losing data.
3. Big Data Analysis: The volume of big data is so high that storing it in a traditional data management system is impossible for an organization. Cloud computing has resolved that problem by allowing organizations to store their large volumes of data in cloud storage without worrying about physical storage. The next step, analyzing the raw data and extracting insights or useful information from it, is a big challenge, as it requires high-quality data analytics tools. Cloud computing therefore provides organizations with a major facility for both storing and analyzing big data.
4. Anti-Virus Applications: Previously, organizations installed antivirus software within their own systems, just as we personally keep antivirus software on our systems for protection from outside cyber threats. Nowadays, cloud computing provides cloud antivirus software, meaning the software is hosted in the cloud and monitors your system or your organization's systems remotely. This antivirus software identifies security risks and fixes them; sometimes it also offers the option to download the software.
5. E-commerce Applications: Cloud-based e-commerce allows businesses to respond quickly to emerging opportunities. Users can respond quickly to market opportunities, and traditional e-commerce can respond quickly to its challenges. Cloud-based e-commerce offers a new approach to doing business with the minimum possible cost and time. Customer data, product data, and other operational systems are managed in cloud environments.
6. Cloud Computing in Education: Cloud computing brings an unbelievable change to learning in the education sector by providing e-learning, online distance learning platforms, and student information portals to students. It is a new trend in education that provides an attractive environment for learning, teaching, and experimenting for students, faculty members, and researchers. Everyone associated with the field can connect to their organization's cloud and access data and information from there.
7. Technology-Enhanced Learning or Education as a Service (EaaS): The cloud offers the following education applications, for example:
• Google Apps for Education: Google Apps for Education is the most widely used platform
for free web-based email, calendar, documents, and collaborative study.
• Chromebooks for Education: Chromebook for Education is one of Google's most important projects. It is designed to enhance innovation in education.
• Tablets with Google Play for Education: It allows educators to quickly implement the latest technology solutions
into the classroom and make it available to their students.
8. Testing and Development: Setting up a platform for development and then performing different types of testing to check the readiness of the product before delivery requires various types of IT resources and infrastructure. Cloud computing provides the easiest approach to development, testing, and even deployment using the provider's IT resources with minimal expense. Organizations find it helpful because they get scalable and flexible cloud services for product development, testing, and deployment.
9. E-Governance Applications: Cloud computing can provide its services to multiple activities conducted by the government. It can support the government in moving from traditional ways of management and service delivery to an advanced approach by expanding the availability of the environment and making it more scalable and customized. It can help the government reduce the unnecessary cost of managing, installing, and upgrading applications, doing all this with the help of cloud computing and putting the money saved into public services.
10. Cloud Computing in Medical Fields: In the medical field, cloud computing is nowadays used for storing and accessing data, as it allows data to be stored and accessed through the Internet without worrying about any physical setup. It facilitates easier access to and distribution of information among the various medical professionals and individual patients. Similarly, with the help of cloud computing, information from offsite buildings and treatment facilities such as labs, doctors making emergency house calls, and ambulances can be accessed and updated remotely instead of having to wait for access to a hospital computer.
11. Entertainment Applications: Many people get their entertainment from the Internet, and the cloud is the perfect place for reaching a varied consumer base. Different types of entertainment industries therefore reach their target audiences by adopting a multi-cloud strategy. Cloud-based entertainment provides various applications such as online music and video, online games, video conferencing, and streaming services, and it can reach any device, be it a TV, a mobile phone, a set-top box, or any other form. This is a new form of entertainment called On-Demand Entertainment (ODE).
• Online games: Today, cloud gaming has become one of the most important entertainment media. It offers various online games that run remotely from the cloud. Well-known cloud gaming services include Shadow, GeForce Now, Vortex, Project xCloud, and PlayStation Now.
• Video conferencing apps: Video conferencing apps provide a simple and instantly connected experience. They allow us to communicate with our business partners, friends, and relatives using cloud-based video conferencing. The benefits of video conferencing are that it reduces cost, increases efficiency, and removes interoperability problems.
12. Art Applications: Cloud computing offers various art applications for quickly and easily designing attractive cards, booklets, and images. Some of the most commonly used cloud art applications are given below:
• Moo: One of the best cloud art applications. It is used for designing & printing business
cards, postcards, & mini cards.
• Vistaprint: Vistaprint allows us to easily design various printed marketing products such as
business cards, Postcards, Booklets, and wedding invitations cards.
• Adobe Creative Cloud: Adobe Creative Cloud is made for designers, artists, filmmakers, and other creative professionals. It is a suite of apps that includes the Photoshop image editing program, Illustrator, InDesign, Typekit, and Dreamweaver.
13. Management Applications: Cloud computing offers various cloud management tools which
help admins to manage all types of cloud activities, such as resource deployment, data
integration, and disaster recovery. These management tools also provide administrative
control over the platforms, applications, and infrastructure. Some important management applications are:
• Toggl: Toggl helps users track the time allocated to a particular project.
• Evernote: Evernote allows you to sync and save your recorded notes, typed notes, and other notes in one convenient place. It is available in both a free and a paid version. It runs on platforms such as Windows, macOS, Android, iOS, browsers, and Unix.
• Outright: Outright is used by management users for accounting purposes. It helps to track income, expenses, profits, and losses in a real-time environment.
• GoToMeeting: GoToMeeting provides video conferencing and online meeting apps, which allow you to start a meeting with your business partners anytime, anywhere using mobile phones or tablets. Using the GoToMeeting app, you can perform management-related tasks such as joining meetings in seconds, viewing presentations on the shared screen, and getting alerts for upcoming meetings.
14. Social Applications: Social cloud applications allow a large number of users to connect with each other using social networking applications such as Facebook, Twitter, LinkedIn, etc. The following are cloud-based social applications:
• Facebook: Facebook is a social networking website which allows active users to share files, photos, videos, status updates, and more with their friends, relatives, and business partners using the cloud storage system. On Facebook, we always get notifications when our friends like and comment on our posts.
• Twitter: Twitter is a social networking site. It is a microblogging system. It allows users to
follow high profile celebrities, friends, relatives, and receive news. It sends and receives
short posts called tweets.
• LinkedIn: LinkedIn is a social network for students, freshers, and professionals.
As a result, the cloud market is growing rapidly and is providing more services day by day, so in the future cloud computing is going to touch many more sectors by providing more applications and services.
Summary
• Cloud computing offers various cloud management tools which help admins to manage all
types of cloud activities, such as resource deployment, data integration, and disaster recovery.
• Cloud computing refers to manipulating, configuring, and accessing the applications online.
• Cloud computing virtualizes systems by pooling and sharing resources. Systems and storage
can be provisioned as needed from a centralized infrastructure, costs are assessed on a metered
basis, multi-tenancy is enabled, and resources are scalable with agility.
• Cloud computing eliminates the need for IT infrastructure updates and maintenance since the
service provider ensures timely, guaranteed, and seamless delivery of our services and also
takes care of all the maintenance and management of our IT services according to the service-
level agreement (SLA).
• Cloud computing can be expensive if you don’t know how to manage your computing
resources and take maximum advantage of them.
• Cloud computing lets us deploy the service quickly in fewer clicks. This quick deployment lets
us get the resources required for our system within minutes.
Keywords
Service Oriented Architecture (SOA): SOA is essentially a collection of services which communicate with each other. SOA provides a loosely-integrated suite of services that can be used within multiple business domains.
Abstraction: Cloud computing abstracts the details of system implementation from users
and developers. Applications run on physical systems that aren't specified, data is stored in
locations that are unknown, administration of systems is outsourced to others, and access by
users is ubiquitous.
Cloud: Cloud refers to a network or the Internet. A cloud is usually defined as a large group of interconnected computers. These computers include network servers or personal computers.
Cloud computing: Cloud computing is a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction.
Cloud computing collaboration: The users from multiple locations within a corporation, and
from multiple organizations, desired to collaborate on projects that crossed company and
geographic boundaries. Projects had to be housed in the “cloud” of the Internet, and accessed
from any Internet-enabled location. Cloud collaboration is also termed Internet-based group collaboration.
Multitenancy: In cloud computing, multitenancy means that multiple customers of a cloud
vendor are using the same computing resources. Despite the fact that they share resources,
cloud customers aren't aware of each other, and their data is kept totally separate.
Thick clients: Thick clients are regular computers, using a web browser like Firefox or Internet Explorer to connect to the cloud.
service. This level of agility can give businesses using cloud computing a real advantage over
competitors– it’s not surprising that businesses identify operational agility as a key reason for
cloud adoption.
Increasing Business Competitiveness: Many businesses are forced to be quick and efficient in adapting to marketplace changes because competition keeps rising. They can benefit a lot from flexible and customizable cloud technology solutions, which may help increase the agility of their operations.
Geographical Dispersion: Cloud computing allows you to work anytime and from anywhere, provided you have an Internet connection. Since most established cloud services also offer mobile apps, you're not even restricted by which device you've got to hand. Businesses can offer more flexible working perks to employees, so they can enjoy the work-life balance that suits them without productivity taking a hit. The home office is an attractive option for many employees and now, thanks to cloud services, an increasingly accessible one too.
For Developers: Cloud computing provides increased amounts of storage and processing power
to run the applications they develop. Cloud computing also enables new ways to access
information, process and analyze data, and connect people and resources from any location
anywhere in the world. In essence, it takes the lid off the box; with cloud computing, developers
are no longer boxed in by physical constraints.
For IT Departments: For IT departments, cloud computing offers more flexibility in computing
power, often at lower costs. With cloud computing, IT departments don't have to engineer for peak-
load capacity, because the peak load can be spread out among the external assets in the cloud. And,
because additional cloud resources are always at the ready, companies no longer have to purchase
assets (servers, workstations, and the like) for infrequent intensive computing tasks. If you need
more processing power, it's always there in the cloud—and accessible on a cost-efficient basis.
For End-Users: For end users, cloud computing offers all these benefits and more. An individual using a web-based application isn't physically bound to a single PC, location, or network; their applications and documents can be accessed wherever they are, whenever they want. Gone is the fear of losing data if a computer crashes: documents hosted in the cloud always exist, no matter what happens to the user's machine.
cloud security and management systems. Cybersecurity experts are more aware of cloud security than any other IT professionals. According to a Crowd Research Partners survey, 9 out of 10 cybersecurity experts are concerned about cloud security; in particular, they worry about violations of confidentiality, data privacy, and data leakage and loss.
Dealing with Multi-Cloud Environments: These days, most companies are not working on just a single cloud. According to the RightScale report, nearly 84% of companies follow a multi-cloud strategy and 58% already have a hybrid cloud tactic that combines public and private clouds. Long-term predictions about the future of cloud computing technology point to even more difficulty for IT infrastructure teams. However, professionals have also suggested top practices such as re-thinking procedures, training staff, tooling, active vendor relationship management, and doing the homework.
Cloud Migration: Although releasing a new app in the cloud is a very simple procedure, transferring an existing application to a cloud computing environment is tougher. According to one report, 62% of respondents said that their cloud migration projects were tougher than they anticipated. Alongside this, 64% of migration projects took longer than predicted and 55% went over budget. In particular, organizations migrating their apps to the cloud reported downtime during migration (37%), issues syncing data before cutover (40%), trouble getting migration tools to work well (40%), slow data migration (44%), security configuration issues (46%), and time-consuming troubleshooting (47%). To address these issues, nearly 42% of the IT experts said they wished they had increased their budgets, around 45% wished they had employed an in-house professional, 50% wished they had set a longer project duration, and 56% wished they had performed more pre-migration testing.
Cloud Integration: Finally, several companies, especially those with hybrid cloud environments, report issues getting their on-premise apps and tools to work together with the public cloud. According to one survey, 62% of respondents cited integration of legacy systems as their biggest challenge in multi-cloud. Likewise, in a SoftwareOne report on cloud cost, 39% of those assessed said integrating legacy systems was one of their biggest worries while utilizing the cloud. Combining new cloud-based apps with legacy systems needs resources, expertise, and time.
Unauthorized Service Providers: Cloud computing is a new concept for most business organizations. An ordinary businessperson is not able to verify the genuineness of a service provider agency, and it is very difficult for them to check whether the vendor meets security standards or not. There is a need for an ICT consultant to evaluate vendors against worldwide criteria. It is necessary to verify that the vendor has been operating this business for a sufficient time without any negative record in the past, continues the business without any data-loss complaints, has a number of satisfied clients, and has an unblemished market reputation.
Hacking of Brand: Cloud computing involves some major risk factors, such as hacking. Some professional hackers are able to hack applications by breaking through efficient firewalls and stealing organizations' sensitive information. A cloud provider hosts numerous clients, and each can be affected by actions taken against any one of them: when a threat reaches the main server, it affects all the other clients as well, as in distributed denial-of-service attacks, where server requests inundate a provider from widely distributed computers.
Cloud Management: Managing a cloud is not an easy task; it involves a lot of technical challenges. Many dramatic predictions have been made about the impact of cloud computing. People think the traditional IT department will become outdated, but research supports the conclusion that cloud impacts are likely to be more gradual and less linear. Cloud services can easily be changed and updated by business users without the direct involvement of the IT department. It is the service provider's responsibility to manage the information and spread it across the organization, so managing all the complex functionality of cloud computing is difficult.
Sustainability: Sustainability refers to minimizing the effect of cloud computing on the environment. Indeed, citing the environmental effects of servers, data centers are being drawn to areas where the climate favors natural cooling and renewable electricity is readily available; countries with favorable conditions, such as Finland, Sweden, and Switzerland, are trying to attract cloud computing data centers. But beyond nature's favors, would these countries have enough technical infrastructure to sustain high-end clouds?
A. Cloud Technologies
Certain technologies working behind cloud computing platforms make cloud computing flexible, reliable, and usable. These technologies are listed below:
Virtualization
Service-Oriented Architecture (SOA)
Grid Computing
Utility Computing
Virtualization is a technique which allows a single physical instance of an application or resource to be shared among multiple organizations or tenants (customers). It does so by assigning a logical name to a physical resource and providing a pointer to that physical resource when it is demanded (Figure 1). The multitenant architecture offers virtual isolation among the multiple tenants, so the organizations can use and customize the application as though they each have their own instance running.
Figure 1: Virtualization Scenario
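The description above, assigning a logical name to a physical resource and returning a pointer only on demand, can be sketched in a few lines of Python. The hosts, sizes, and tenant names are hypothetical; real hypervisor managers perform the same mapping with much richer bookkeeping.

# Hypothetical resource pool; names and sizes are illustrative only.
physical_pool = {"host-01": {"free_vcpus": 16}, "host-02": {"free_vcpus": 8}}
logical_map = {}    # logical name handed to the tenant -> physical host behind it

def assign(logical_name, vcpus):
    """Give the tenant a logical name while the platform picks the physical host."""
    for host, info in physical_pool.items():
        if info["free_vcpus"] >= vcpus:
            info["free_vcpus"] -= vcpus
            logical_map[logical_name] = host
            return logical_name
    raise RuntimeError("no physical capacity left")

def resolve(logical_name):
    """Return the 'pointer' to the physical resource only when it is demanded."""
    return logical_map[logical_name]

assign("tenant-a-db", vcpus=8)
print(resolve("tenant-a-db"))   # e.g. host-01; the tenant never needs to know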
Service-Oriented Architecture (SOA) helps applications to be used as services by other applications regardless of the type of vendor, product, or technology (Figure 2). It therefore becomes possible to exchange data between applications of different vendors without additional programming or making changes to the services.
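A small sketch of the SOA idea: one application consumes another vendor's service purely through an agreed interface, here assumed to be an HTTP/JSON endpoint with a placeholder URL.

import requests

# Hypothetical order service exposed by another vendor. Only the agreed JSON
# contract matters; the vendor's product or technology behind it does not.
resp = requests.get("https://vendor-b.example.com/api/orders/42", timeout=5)
resp.raise_for_status()
order = resp.json()
print(order.get("status"), order.get("total"))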
Grid Computing is like distributed computing, in which a group of computers from multiple locations are connected with each other to achieve a common objective. These computer resources are heterogeneous and geographically dispersed. Grid computing breaks a complex task into smaller pieces, which are distributed to CPUs that reside within the grid.
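The divide-and-distribute pattern of grid computing can be sketched on a single machine with Python's multiprocessing pool standing in for the grid's CPUs; a real grid would dispatch the pieces to geographically dispersed nodes.

from multiprocessing import Pool

def partial_sum(chunk):
    # Each worker (standing in for a CPU in the grid) handles one small piece.
    return sum(x * x for x in chunk)

if __name__ == "__main__":
    data = list(range(1_000_000))
    chunks = [data[i::4] for i in range(4)]       # break the task into 4 pieces
    with Pool(processes=4) as pool:
        total = sum(pool.map(partial_sum, chunks))
    print(total)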
Utility computing is based on a pay-per-use model. It offers computational resources on demand as a metered service. Cloud computing, grid computing, and managed IT services are based on the concept of utility computing.
The cloud architecture can be divided into two parts:
Front End
Back End
Each of these ends is connected through a network, usually via the Internet.
Layer 2 (Network Layer)
Even in this case, the cloud completely depends on the network that is used. Usually, when accessing the public or private cloud, the users require minimum bandwidth, which is sometimes defined by the cloud providers. This layer does not come under the purview of Service Level Agreements (SLAs); that is, SLAs do not take into account the Internet connection between the user and the cloud for quality of service (QoS).
Layer 3 (Cloud Management Layer)
Layer 3 consists of the software used in managing the cloud. The software can be a cloud OS, software that acts as an interface between the data center (the actual resources) and the user, or management software that allows resources to be managed. This software usually supports resource management (scheduling, provisioning, etc.), optimization (server consolidation, storage workload consolidation), and internal cloud governance.
This layer comes under the purview of SLAs, that is, the operations taking place in this layer
would affect the SLAs that are being decided upon between the users and the service providers.
Any delay in processing or any discrepancy in service provisioning may lead to an SLA violation.
As per rules, any SLA violation would result in a penalty to be given by the service provider.
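As a worked example of such a penalty clause (the "5% of the fee credited per percentage point below the agreed uptime" rule is an assumed figure, not a standard one), the calculation might look like this:

def sla_penalty(promised_uptime, observed_uptime, monthly_fee, credit_rate=0.05):
    """Hypothetical scheme: credit 5% of the fee per percentage point below the SLA."""
    shortfall_points = max(0.0, promised_uptime - observed_uptime) * 100
    return round(monthly_fee * credit_rate * shortfall_points, 2)

print(sla_penalty(promised_uptime=0.999, observed_uptime=0.995, monthly_fee=1000))
# -> 20.0 credited to the consumer for the 0.4-point shortfall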
Layer 4 (Hardware Resource Layer)
Layer 4 consists of the actual hardware resources. Usually, in the case of a public cloud, a data center is used in the back end.
Similarly, in a private cloud, it can be a data center, which is a huge collection of hardware resources interconnected to each
other that is present in a specific location or a high configuration system.
This layer comes under the purview of SLAs; in fact, it is the most important layer governing the SLAs, and it affects the SLAs most in the case of data centers.
Whenever a user accesses the cloud, it should be available to the users as quickly as possible and should be within the time
that is defined by the SLAs.
If there is any discrepancy in provisioning the resources or application, the service provider has to pay the penalty. Hence,
the datacenter consists of a high-speed network connection and a highly efficient algorithm to transfer the data from the
datacenter to the manager.
There can be a number of datacenters for a cloud, and similarly, a number of clouds can share a datacenter.
A cloud infrastructure may be deployed as a:
public cloud,
private cloud,
community cloud, or
hybrid cloud.
The differences are based on how exclusive the computing resources are made to a Cloud
Consumer.
• Public Cloud: Public cloud is one in which the cloud infrastructure and computing resources are made available to the general public over a public network (Figure 4). A public cloud is owned by an organization selling cloud services and serves a diverse pool of clients.
• Private Cloud: Private cloud gives a single Cloud Consumer's organization exclusive access to and usage of the infrastructure and computational resources. It may be managed either by the cloud consumer organization and hosted on the organization's premises (on-site private clouds, depicted in Figure 5), or by a third party, outsourced to a hosting company (outsourced private clouds, depicted in Figure 6).
• Hybrid Cloud: Hybrid cloud (Figure 7) is a composition of two or more clouds (on-site private, on-site community, off-site private, off-site community, or public) that remain distinct entities but are bound together by standardized or proprietary technology that enables data and application portability.
• Community Cloud: Community cloud serves a group of cloud consumers which have shared concerns such as mission objectives, security, privacy, and compliance policy, rather than serving a single organization as a private cloud does (Figure 9). Similar to private clouds, a community cloud may be managed by the organizations themselves and implemented on customer premises (an on-site community cloud), or by a third party, outsourced to a hosting company (an outsourced community cloud).
Figure 9: Community Cloud Model
Platform as a Service (PaaS): The capability provided to the consumer is to deploy onto the cloud infrastructure consumer-created or acquired applications created using programming languages, libraries, services, and tools supported by the provider. The consumer does not manage or control the underlying cloud infrastructure, including network, servers, operating systems, or storage, but has control over the deployed applications and possibly application hosting environment configurations.
Infrastructure as a Service (IaaS): The capability provided to the consumer is to provision
processing, storage, networks, and other fundamental computing resources where the
consumer is able to deploy and run arbitrary software, which can include operating systems
and applications. The consumer does not manage or control the underlying cloud infrastructure
but has control over operating systems, storage, deployed applications, and possibly limited
control of select networking components (e.g., host firewalls).
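A minimal sketch of IaaS provisioning, assuming AWS EC2 accessed through the boto3 SDK as one concrete example of such an interface; the AMI ID below is a placeholder, and other IaaS providers expose analogous APIs.

import boto3   # AWS SDK for Python; shown only as one possible IaaS API

ec2 = boto3.client("ec2", region_name="us-east-1")

# The AMI ID below is a placeholder; a real ID identifies the operating-system
# image the consumer chooses to deploy on the provider's infrastructure.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
)
instance_id = response["Instances"][0]["InstanceId"]
print("provisioned compute resource:", instance_id)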
A cloud consumer may request services from a cloud broker instead of contacting a cloud provider directly. A cloud broker may create a new service by combining multiple services or by enhancing an existing service. In that case, the actual cloud providers are invisible to the cloud consumer, and the cloud consumer interacts directly with the cloud broker. Cloud carriers provide the connectivity and transport of cloud services from cloud providers to cloud consumers. As illustrated in Figure 16, a cloud provider participates in and arranges for two unique service level agreements (SLAs), one with a cloud carrier (e.g., SLA2) and one with a cloud consumer (e.g., SLA1).
Figure 16: SLA Management Between Cloud Consumer and Cloud Carrier
A cloud provider arranges service level agreements (SLAs) with a cloud carrier and may request
dedicated and encrypted connections to ensure the cloud services are consumed at a consistent
level according to the contractual obligations with the cloud consumers. In this case, the provider
may specify its requirements on capability, flexibility and functionality in SLA2 in order to provide
essential requirements in SLA1. For a cloud service, a cloud auditor conducts independent
assessments of the operation and security of the cloud service implementation. The audit may
involve interactions with both the cloud consumer and the cloud provider.
The cloud consumer is a principal stakeholder for the cloud computing service. It can be a person or an organization that maintains a business relationship with, and uses the service from, a cloud provider. The cloud consumer browses the service catalogue from a cloud provider, requests the appropriate service, sets up service contracts with the cloud provider, and uses the service. The cloud consumer may be billed for the service provisioned and needs to arrange payments accordingly.
A cloud provider can be a person or an organization; it is the entity responsible for making a service available to interested parties. A cloud provider acquires and manages the computing infrastructure required for providing the services, runs the cloud software that provides the services, and makes arrangements to deliver the cloud services to the cloud consumers through network access. A cloud provider's activities can be described in five major areas:
• service deployment,
• service orchestration,
• cloud service management,
• security, and
• privacy.
Service orchestration refers to the composition of system components to support the cloud provider's activities in the arrangement, coordination, and management of computing resources in order to provide cloud services to cloud consumers. Cloud service management includes all of the service-related functions that are necessary for the management and operation of those services required by or proposed to cloud consumers.
A cloud auditor is a party that can perform an independent examination of cloud service controls with the intent of expressing an opinion thereon. Audits are performed to verify conformance to standards through review of objective evidence. A cloud auditor can evaluate the services provided by a cloud provider in terms of security controls, privacy impact, performance, and so on. An auditor may ensure that fixed content has not been modified and that legal and business data archival requirements have been satisfied. As cloud computing evolves, the integration of cloud services can be too complex for cloud consumers to manage, so a cloud consumer may request cloud services from a cloud broker instead of contacting a cloud provider directly.
A cloud broker is an entity that manages the use, performance, and delivery of cloud services and negotiates relationships between cloud providers and cloud consumers. A cloud broker can provide services in three categories:
• Service Intermediation: A cloud broker enhances a given service by improving some specific
capability and providing value-added services to cloud consumers. The improvement can be
managing access to cloud services, identity management, performance reporting, enhanced
security, etc.
• Service Aggregation: A cloud broker combines and integrates multiple services into one or
more new services. The broker provides data integration and ensures the secure data
movement between the cloud consumer and multiple cloud providers.
• Service Arbitrage: Service arbitrage is similar to service aggregation except that the services
being aggregated are not fixed. Service arbitrage means a broker has the flexibility to choose
services from multiple agencies. The cloud broker, for example, can use a credit-scoring
service to measure and select an agency with the best score.
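A toy sketch of service arbitrage: the broker filters hypothetical offers by a quality score (as might come from a credit-scoring service) and then picks the cheapest eligible provider. All names, scores, and prices here are invented.

# Hypothetical offers; the scores could come from a credit-scoring service.
offers = [
    {"provider": "provider-a", "score": 0.91, "price_per_hour": 0.12},
    {"provider": "provider-b", "score": 0.87, "price_per_hour": 0.09},
    {"provider": "provider-c", "score": 0.94, "price_per_hour": 0.15},
]

def pick_provider(offers, min_score=0.90):
    """Arbitrage: the set of candidate services is not fixed, so the broker
    re-evaluates the offers and picks the cheapest one above the score bar."""
    eligible = [o for o in offers if o["score"] >= min_score]
    return min(eligible, key=lambda o: o["price_per_hour"])

print(pick_provider(offers)["provider"])   # provider-a (cheapest eligible offer)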
Cloud carrier acts as an intermediary that provides connectivity and transport of cloud services
between cloud consumers and cloud providers. Cloud carriers provide access to consumers
through network, telecommunication, and other access devices. For example, cloud consumers can obtain cloud services through network access devices such as computers, laptops, mobile phones, and mobile Internet devices (MIDs). The distribution of cloud services is normally provided by network and telecommunication carriers or by a transport agent, where a transport agent refers to a business organization that provides physical transport of storage media such as high-capacity hard drives.
Summary
• Cloud computing signifies a major change in the way we run various applications and store our
information. Everything is hosted in the “cloud”, a vague assemblage of computers and servers
accessed via the Internet, instead of the method of running programs and data on a single
desktop computer.
• With cloud computing, the software programs you use are stored on servers accessed via the
Internet and are not run from your personal computer. Hence, even if your computer stops
working, the software is still available for use.
• The “cloud” itself is the key to the definition of cloud computing. The cloud is usually defined
as a large group of interconnected computers. These computers include network servers or
personal computers.
• Cloud computing has its ancestors in both client/server computing and peer-to-peer distributed computing. It is all about how centralized storage of data and content facilitates collaborations, associations, and partnerships.
• With cloud storage, data is stored on multiple third-party servers, rather than on the dedicated
servers used in traditional networked data storage.
• A cloud storage system stores multiple copies of data on multiple servers and in multiple locations. If one system fails, recovery only requires changing the pointer to the stored object's location.
• Jericho Forum has designed the Cloud Cube Model to help select cloud formations for secure
collaboration.
Keywords
Cloud: The cloud is usually defined as a large group of interconnected computers. These computers include network servers or personal computers.
Cloud Services
Introduction
Cloud services refer to any IT services that are provisioned and accessed from a cloud computing
provider. Cloud computing is a broad term that incorporates all delivery and service models of
cloud computing and related solutions. Cloud services are delivered over the internet and
accessible globally from the internet. Cloud computing services and deployment models describe
how service delivery is carried out in cloud computing. These indicate the topological layouts
for cloud computing. The entities basically correspond to the operational components in cloud
computing.
Cloud services provide many IT services traditionally hosted in-house, including provisioning an
application/database server from the cloud, replacing in-house storage/backup with cloud storage
and accessing software and applications directly from a web browser without prior
installation. Cloud services provide great flexibility in provisioning, duplicating and scaling
resources to balance the requirements of users, hosted applications and solutions. Cloud services
are built, operated and managed by a cloud service provider, which works to ensure end-to-end
availability, reliability and security of the cloud.
Software-as-a-Service (SaaS): SaaS is a software delivery model that lets users access applications through a simple
interface over the Internet. The providers of SaaS retain full control over the applications and enable users to access
them. To the users, the applications appear to be locally hosted, without their having to worry about the application's
background details. Typical SaaS examples are social media platforms, email boxes, Facebook, Google Apps, etc.
Platform-as-a-Service (PaaS): The PaaS model is a more sophisticated model that offers building, testing,
deployment, and hosting environments for applications created by users or otherwise acquired by them. Prominent
platforms such as Microsoft Azure and Google App Engine are typical PaaS cloud models.
Infrastructure-as-a-Service (IaaS): IaaS is the most basic model for delivering cloud services. It provides the actual
physical infrastructure support, including computing, storage, networking, and other primary resources, to the users. The
users benefit by renting resources from IaaS providers and using them on demand instead of procuring their own
infrastructure. Examples: Amazon EC2, Nimbus, etc.
Figure 1: Cloud Classic Service Models
Infrastructure-as-a-Service (IaaS)
• Provides virtual machines, virtual storage, virtual infrastructure, and other hardware assets as
resources that clients can provision. The service provider manages all the infrastructure, while the client
is responsible for all other aspects of the deployment.
• Can include the operating system, applications, and user interactions with the system. Figure 3
shows the different concepts associated with IaaS.
Benefits of IaaS
• Scalability: Resource is available as and when the client needs it and, therefore, there are no
delays in expanding capacity or the wastage of unused capacity
• No Investment in Hardware: The underlying physical hardware that supports an IaaS service is
set up and maintained by the cloud provider, saving the time and cost of doing so on the client
side
• Utility Style Costing: The service can be accessed on demand and the client only pays for the
resource that they actually use.
• Location Independence: The service can usually be accessed from any location as long as there
is an internet connection and the security protocol of the cloud allows it.
• Physical Security of Data Centre Locations: The services available through a public cloud, or
private clouds hosted externally with the cloud provider, benefit from the physical security
afforded to the servers which are hosted within a data center.
• No Single Point of Failure: If one server or network switch, for example, were to fail, the
broader service would be unaffected due to the remaining multitude of hardware resources
and redundancy configurations. For many services, even if one entire data center went offline,
never mind one server, the IaaS service could still run successfully.
• Where Demand is Very Volatile: any time there are significant spikes and troughs in demand on
the infrastructure. Example: amazon.in, Snapdeal and Flipkart during the festival season.
• For new enterprises without capital to invest in hardware. Example: entrepreneurs starting on a
shoestring budget.
• Where the enterprise is growing rapidly and scaling hardware would be problematic. Example:
a company that experiences huge success immediately, such as Animoto or Pinterest.
• For specific line of business, trial or temporary infrastructural needs.
Examples of IaaS
o Amazon Web Services: A public cloud that offers subscribers access to virtual servers for product deployment, Cloud
storage, tools for development, testing, and analytics. The application provides a ready-to-use environment to
develop and test the product and offers the full cloud infrastructure for its deployment and maintenance.
o Microsoft Azure: Combination of IaaS and platform as a service, the software offers 100+ services for software
development, administration, and deployment, provides tools for working with innovative technologies (big data,
machine learning, Internet of Things), etc.
o IBM Infrastructure: IBM uses its in-house services to store the data of infrastructure users, enabling remote data
access via Cloud computing. IBM servers support AI, blockchain, and the Internet of Things. The infrastructure also
provides Cloud storage and virtual development environments, enabled on a subscription basis.
o Google Cloud Infrastructure: The large network of international servers that provides users access to remote Cloud
data centers. Companies can store their information in Asia, Europe, Latin America, which minimizes the risk of a
security breach.
Platform-as-a-Service (PaaS)
PaaS is a category of cloud computing service that provides a platform and environment to allow developers to build
applications and services over the internet (Figure 5). PaaS services are hosted in the cloud and accessed by users simply
via their web browser. PaaS allows the users to create software applications using tools supplied by the provider. PaaS
services can consist of preconfigured features that customers can subscribe to; they can choose to include the features that
meet their requirements while discarding those that do not.
Cloud consumers do not manage or control the underlying cloud infrastructure, including the network,
servers, operating systems, or storage, but they have control over the deployed applications and possibly
over configuration settings for the application-hosting environment. PaaS is expected to grow more
than 3,000% by 2026, from $1.78 billion to $68.38 billion, more than double the growth expected for SaaS
during the same period. Other PaaS characteristics are:
PaaS provides virtual machines, operating systems, applications, services, development
frameworks, transactions, and control structures. Clients can deploy their applications on the
cloud infrastructure or use applications that were programmed using languages and tools
supported by the PaaS service provider. The service provider manages the cloud infrastructure,
the operating systems, and the enabling software; the client is responsible for installing and
managing the application that it is deploying. PaaS also provides a programming IDE for
developing services on the platform, integrating the full functionality supported by the
underlying runtime environment and offering development tools such as a profiler, debugger and
testing environment. Examples of PaaS service providers: Microsoft Windows Azure, Google App
Engine, Hadoop, etc.
PaaS providers can assist developers from the conception of their original ideas to the creation of
applications, and through to testing and deployment. This is all achieved in a managed
mechanism. As with most cloud offerings, PaaS services are generally paid for on a subscription
basis with clients ultimately paying just for what they use. Following are the PaaS offerings:
o Operating System
o Coding & Server-side Scripting Environment
o Optionally, analytics
• Users don’t Need to Invest in Physical Infrastructure: Being able to ‘rent’ virtual infrastructure has both cost benefits
and practical benefits. They don’t need to purchase hardware themselves or employ the expertise to manage it. This
leaves them free to focus on the development of applications.
• Makes development possible for ‘non-experts’: with some PaaS offerings anyone can develop an application. They
can simply do this through their web browser utilizing one-click functionality, for example, WordPress.
• Flexibility: customers can have control over the tools that are installed within their platforms and can create a platform
that suits their specific requirements. They can ‘pick and choose’ the features they feel are necessary.
• Adaptability: Features can be changed if circumstances dictate that they should.
• Teams in various locations can work together: As an internet connection and web browser are all that is required,
developers spread across several locations can work together on the same application build.
• Security: Security is provided, including data security and backup and recovery.
Software-as-a-Service (SaaS)
SaaS facilitates a complete operating environment with applications, management, and the user interface. The
applications are accessible from various client devices through a thin client interface such as a web browser (e.g., web-
based email). The consumer does not manage or control the underlying cloud infrastructure including network,
servers, OSs, storage, or even individual application capabilities. Examples: Google Apps, SalesForce.com, EyeOS etc.
SaaS describes any cloud service where consumers are able to access software applications over the internet. The
applications are hosted in “the cloud” and can be used for a wide range of tasks for both individuals and organizations.
Google, Twitter, Facebook and Flickr are all examples of SaaS, with users able to access the services via any internet-
enabled device (Figure 7). The enterprise users are able to use applications for a range of needs, including accounting
and invoicing, tracking sales, planning, performance monitoring and communications (including webmail and instant
messaging).
SaaS is often referred to as software-on-demand and utilizing it is akin to renting software rather
than buying it. SaaS users subscribe to the software rather than purchase it, usually
on a monthly basis. The applications are purchased and used online with files saved in the cloud
rather than on individual computers.
Advantages of SaaS
• No Additional Hardware Costs: Processing power required to run the applications is supplied
by the cloud provider.
• No Initial Setup Costs: Applications are ready to use once the user subscribes.
• Pay for What You Use: If a piece of software is only needed for a limited period then it is only
paid for over that period and subscriptions can usually be halted at any time.
• Usage is Scalable: If a user decides they need more storage or additional services, for example,
then they can access these on demand without needing to install new software or hardware.
• Updates are Automated: Whenever there is an update it is available online to existing
customers, often free of charge. No new software will be required as it often is with other types
of applications and the updates will usually be deployed automatically by the cloud provider.
• Cross-Device Compatibility: SaaS applications can be accessed via any internet enabled device,
which makes it ideal for those who use a number of different devices, such as internet enabled
phones and tablets, and those who don’t always use the same computer.
• Accessible from Any Location: Rather than being restricted to installations on individual
computers, an application can be accessed from anywhere with an internet enabled device.
• Applications can be Customized and White-labelled: With some software, customization is
available meaning it can be altered to suit the needs and branding of a particular customer.
o Google's G Suite: A top cloud service that provides businesses with access to management,
communication, and organization tools and uses the cloud for data computing. Gmail, Google Drive,
Google Docs, Google Planner, Hangouts—these are all SaaS tools that can be accessed anytime and anywhere.
o Microsoft Office 365: The series of web services that provide business owners and individuals with access to
Microsoft Office main tools directly from their browsers. Users can access Microsoft editing tools, business email,
communication instruments, and documentation software.
o Salesforce: The most popular CRM on the market that unites marketing, communication, and e-commerce. Salesforce
uses cloud computing benefits to provide access to its services and internal data. Business owners can keep track of
their sales, client relations, communications, and relevant tasks from any device. Salesforce can be integrated into the
website — the information about incoming leads will be sent to the platform automatically.
IaaS: Provides a virtual data center to store information and create platforms for app development, testing, and deployment. Used by network architects.
PaaS: Provides virtual platforms and tools to create, test, and deploy apps. Used by developers.
SaaS: Provides web software and apps to complete business tasks. Used by the end users.
developers. Typically, a service provider does not require purchase of an IT product by a user or
organization. A service provider builds, operates and manages these IT products, which are
bundled and delivered as a service/solution. In turn, a customer accesses this type of solution
from a service provider via several different sourcing models, such as a monthly or annual
subscription fee.
• Hosting Service Provider- A type of Internet hosting service that allows individuals and
organizations to make their website accessible via the World Wide Web. Web host companies
provide space on a server owned or leased for use by clients, as well as providing Internet
connectivity, typically in a data center.
• Cloud Service Provider- Offer cloud-based services
• Storage Service Provider- A Storage service provider (SSP) is any company that provides
computer storage space and related management services. SSPs also offer periodic backup and
archiving.
• Software-as-a-Service (SaaS) Provider- SaaS providers allow the users to connect to and use cloud-
based apps over the Internet. Common examples are email, calendaring and office tools (such as
Microsoft Office 365).
Cloud service provider can be a third-party company offering a cloud-based platform,
infrastructure, application, or storage services. Much like a homeowner would pay for a utility such
as electricity or gas, companies typically have to pay only for the amount of cloud services they use,
as business demands require. The cloud services can reduce business process costs when compared
to on-premise IT. Such services are managed by Cloud Service Providers (CSPs). A CSP provides all
the resources needed for the application, and hence the company need not worry about resource
allocation. Cloud services can dynamically scale up based on users’ needs. CSP companies establish public clouds, manage
private clouds, or offer on-demand cloud computing components (also known as cloud computing services) like IaaS, PaaS,
and SaaS.
Cloud service providers offer a helpful way to access computing services that you would otherwise have to provide on your
own, such as:
• Infrastructure: The foundation of every computing environment. This infrastructure could include networks, database
services, data management, data storage (known in this context as cloud storage), servers (cloud is the basis for serverless
computing), and virtualization.
• Platforms: Tools needed to create and deploy applications. These platforms could include operating systems, middleware,
and runtime environments.
• Software: Ready-to-use applications. This software could be custom or standard applications provided by independent
service providers.
Google AppEngine
NetApp
Microsoft Azure
IBM
Hadoop
Manjrasoft Aneka
Google App Engine: Often referred to as GAE or simply App Engine. GAE is a cloud-based PaaS
for developing and hosting web applications in Google-managed data centers. The applications are
sandboxed and run across multiple servers. GAE offers automatic scaling for web applications—as
the number of requests increases for an application, App Engine automatically allocates more
resources for the web application to handle the additional demand. It primarily supports Go, PHP,
Java, Python, Node.js, .NET, and Ruby applications, although it can also support other languages
via "custom runtimes". The service is free up to a certain level of consumed resources, but only in the
standard environment, not in the flexible environment. Fees are charged for additional storage,
bandwidth, or instance hours required by the application.
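As a rough, hedged sketch of what hosting on GAE can look like, the snippet below is a minimal Flask application for the Python standard environment; the app.yaml contents, runtime version and deployment command shown in the comments are illustrative assumptions.

# main.py - a minimal web app that App Engine can serve and scale automatically.
# The runtime version in app.yaml and the gcloud project setup are illustrative assumptions.
from flask import Flask

app = Flask(__name__)

@app.route("/")
def hello():
    # App Engine spins up more instances of this app as request volume grows.
    return "Hello from Google App Engine!"

# app.yaml (separate file), roughly:
#     runtime: python39
# Deploy from the project directory with, for example:
#     gcloud app deploy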
GAE was first released as a preview version in April 2008 and came out of preview in September
2011. It offers:
Google Web Toolkit (GWT): An open-source set of tools that allows web developers to create
and maintain JavaScript front-end applications in Java. Other than a few native libraries,
everything is Java source that can be built on any supported platform with the included GWT Ant
build files. It is licensed under the Apache License 2.0 (Figure 12).
NetApp: NetApp, Inc. is an American hybrid cloud data services and data management company
headquartered in Sunnyvale, California. It has ranked in the Fortune 500 since 2012. Founded in
1992, with an IPO in 1995, NetApp offers cloud data services for the management of applications
and data, both online and physically. It is an organization that creates storage and data
management solutions for its customers. NetApp was one of the first companies in the cloud,
offering data center consolidation and storage services, as well as virtualization. The products
include a platform OS, storage services, storage security, software management, and protection
software. NetApp competes in the computer data storage hardware industry. In 2009, NetApp
ranked second in market capitalization in its industry behind EMC Corporation, now Dell EMC,
and ahead of Seagate Technology, Western Digital, Brocade, Imation, and Quantum. In total
revenue for 2009, NetApp ranked behind EMC, Seagate, Western Digital, Brocade, Xyratex, and
Hutchinson Technology. According to a 2014 IDC report, NetApp ranked second in the network
storage industry "Big 5's list", behind EMC (DELL), and ahead of IBM, HP and Hitachi. According to
Gartner's 2018 Magic Quadrant for Solid-State Arrays, NetApp was named a leader, behind Pure
Storage Systems. In 2019, Gartner named NetApp as #1 in Primary Storage.
NetApp's goal is to deliver cost efficiency and accelerate business breakthroughs. NetApp products
could be integrated with a variety of software products, mostly for ONTAP systems. Other
provisions from NetApp include:
• Automation- NetApp provides a variety of automation services directly to its products via the
HTTP protocol or through middleware software.
• Docker- NetApp Trident software provides a persistent volume plugin for Docker containers
with both orchestrators Kubernetes and Swarm and supports ONTAP, Azure NetApp Files
(ANF), Cloud Volumes and NetApp Kubernetes Service in cloud.
• Backup and Recovery- Cloud Backup integrates with nearly all Backup & Recovery products for
archiving capabilities since it is represented as ordinary NAS share for B&R software. The
backup and recovery software from competitor vendors like IBM Spectrum Protect, EMC
NetWorker, HP Data Protector, Dell vRanger, and others also have some level of integrations
with NetApp storage systems.
Amazon Web Services (AWS): Amazon Web Services (AWS) is a subsidiary of Amazon providing on-demand cloud computing
platforms and APIs to individuals, companies, and governments, on a metered pay-as-you-go basis. AWS provides a variety of
basic abstract technical infrastructure and distributed computing building blocks and tools. It offers services on many
different fronts, from storage to platform to databases. As of 2021, AWS comprises over 200
products and services including computing, storage, networking, database, analytics, application services, deployment,
management, machine learning, mobile, developer tools, and tools for the Internet of Things (IoT).
Case Study: The most popular cloud services from AWS include:
• Amazon Elastic Compute Cloud (Amazon EC2):Amazon EC2 allows the users to rent virtual
computers on which to run their own computer applications. EC2 encourages scalable
deployment of applications by providing a web service through which a user can boot an Amazon Machine Image (AMI) to
configure a virtual machine, which Amazon calls an "instance", containing any software desired. A user can create, launch,
and terminate server instances as needed, paying by the second for active servers – hence the term "elastic" (a minimal boto3
sketch of this launch/terminate workflow appears after this list). EC2 provides users with control over the geographical
location of instances, which allows for latency optimization and high levels of redundancy. In November 2010, Amazon
switched its own retail website platform to EC2 and AWS.
• Amazon SimpleDB (Simple Database Service): Amazon SimpleDB is a distributed database written in Erlang by
Amazon.com. It is used as a web service in concert with Amazon Elastic Compute Cloud (EC2) and Amazon S3 and is part of
Amazon Web Services. It was announced on December 13, 2007.
• Amazon Simple Storage Service (Amazon S3): Amazon S3 is a service offered by Amazon Web Services (AWS) that provides object
storage through a web service interface. Amazon S3 uses the same scalable storage infrastructure that Amazon.com uses to
run its global e-commerce network. Amazon S3 can be employed to store any type of object, which allows for uses like storage
for Internet applications, backup and recovery, disaster recovery, data archives, data lakes for analytics, and hybrid cloud
storage.
• Amazon CloudFront: Amazon CloudFront is a content delivery network (CDN) operated by Amazon Web Services. Content
delivery networks provide a globally-distributed network of proxy servers that cache content, such as web videos or other
bulky media, more locally to consumers, thus improving access speed for downloading the content.
• Amazon Simple Queue Service (Amazon SQS): Amazon SQS is a distributed message queuing service introduced by
Amazon.com in late 2004. It supports programmatic sending of messages via web service applications as a way to
communicate over the Internet. SQS is intended to provide a highly scalable hosted message queue that resolves issues
arising from the common producer-consumer problem or connectivity between producer and consumer.
• Amazon Elastic Block Store (Amazon EBS): Amazon Elastic Block Store (EBS) provides raw block-level storage that can be
attached to Amazon EC2 instances and is used by Amazon Relational Database Service (RDS). Amazon EBS provides a range
of options for storage performance and cost. These options are divided into two major categories: SSD-backed storage for
transactional workloads, such as databases and boot volumes (performance depends primarily on IOPS), and disk-backed
storage for throughput intensive workloads, such as MapReduce and log processing (performance depends primarily on
MB/s).
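For illustration, a minimal boto3 sketch of the elastic launch-and-terminate EC2 workflow mentioned in the first item might look like the following; the AMI ID, region and instance type are placeholders, and valid AWS credentials (and billing) are assumed.

# Sketch: launch and later terminate an EC2 instance with boto3.
# The AMI ID, region and instance type are placeholders; AWS credentials are assumed.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Boot an instance from an Amazon Machine Image (AMI); billing runs while it is active.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder AMI
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
)
instance_id = response["Instances"][0]["InstanceId"]
print("Launched instance:", instance_id)

# Terminate the instance when it is no longer needed to stop incurring charges.
ec2.terminate_instances(InstanceIds=[instance_id])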
Microsoft: Microsoft offers a number of cloud services for organizations of any size (Figure 13):
• Azure Services Platform- Windows Azure/Microsoft Azure, commonly referred to as Azure, is a cloud computing service
created by Microsoft for building, testing, deploying, and managing applications and services through Microsoft-managed data centers.
• .NET Services- The .NET Framework is a software framework developed by Microsoft that runs
primarily on Microsoft Windows. It includes a large class library called the Framework Class
Library and provides language interoperability across several programming languages.
Programs written for the .NET Framework execute in a software environment named the Common
Language Runtime (CLR). The CLR is an application virtual machine that provides services
such as security, memory management, and exception handling. As such, computer code
written using the .NET Framework is called "managed code".
• Exchange Online- Work smarter, anywhere, with hosted email for business.
• SharePoint Services- SharePoint is a web-based collaborative platform that integrates with
Microsoft Office. Launched in 2001, SharePoint is primarily sold as a document management
and storage system.
• Microsoft Dynamics CRM- Microsoft Dynamics is a line of enterprise resource planning and
customer relationship management software applications. Microsoft Dynamics forms part of
"Microsoft Business Solutions". Dynamics can be used with other Microsoft programs and
services, such as SharePoint, Yammer, Office 365, Azure and Outlook. The Microsoft Dynamics
focus-industries are retail, services, manufacturing, financial services, and the public sector.
Microsoft Dynamics offers services for small, medium, and large businesses.
Salesforce.com: Salesforce works in three primary areas: Sales Cloud, Service Cloud and Your
Cloud. It has three primary offerings: Force.com, Salesforce.com CRM, and AppExchange.
• Service Cloud- The platform for customer service that lets companies tap into the power of customer conversations no
matter where they take place.
• Your Cloud- Powerful capabilities to develop custom applications on its cloud computing
platform.
IBM: IBM offers cloud computing services to help businesses of all sizes take advantage of this increasingly attractive
computing model. IBM is applying its industry-specific consulting expertise and established technology record to offer
secure services to companies in public, private, and hybrid cloud models. Some of their services include:
Hadoop: Apache Hadoop is a collection of open-source software utilities that facilitates using a network of many computers
to solve problems involving massive amounts of data and computation. It provides a software framework for distributed
storage and processing of big data using the MapReduce programming model. All the modules in Hadoop are designed with
a fundamental assumption that hardware failures are common occurrences and should be automatically handled by the
framework. Apache Hadoop software allows for the distributed storage and processing of large datasets across clusters of
computers using simple programming models. Hadoop is designed to scale up from a single computer to thousands of
clustered computers, with each machine offering local computation and storage. In this way, Hadoop can efficiently store
and process large datasets ranging in size from gigabytes to petabytes of data.
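To make the MapReduce model concrete, here is a minimal word-count sketch written in the Hadoop Streaming style (plain scripts reading stdin and writing stdout); the hadoop jar invocation shown in the comment is illustrative and depends on the cluster setup.

# wordcount_streaming.py - run with the argument "map" or "reduce" (Hadoop Streaming style).
# Illustrative invocation (paths depend on the cluster):
#   hadoop jar hadoop-streaming.jar -input in/ -output out/ \
#     -mapper "python wordcount_streaming.py map" -reducer "python wordcount_streaming.py reduce"
import sys
from collections import defaultdict

def mapper():
    # Emit "word<TAB>1" for every word read from standard input.
    for line in sys.stdin:
        for word in line.strip().split():
            print(f"{word}\t1")

def reducer():
    # Hadoop sorts mapper output by key before the reduce phase; for brevity this
    # sketch simply aggregates all counts in a dictionary.
    counts = defaultdict(int)
    for line in sys.stdin:
        word, value = line.rsplit("\t", 1)
        counts[word] += int(value)
    for word, total in counts.items():
        print(f"{word}\t{total}")

if __name__ == "__main__":
    mapper() if sys.argv[1:] == ["map"] else reducer()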
Aneka: The Aneka cloud platform is a software platform and a framework for developing distributed applications on the cloud. It
harnesses the computing resources of a heterogeneous network of workstations and servers or data centers on demand. Aneka
provides developers with a rich set of APIs for transparently exploiting these resources, and system administrators can leverage
a collection of tools to monitor and control the deployed infrastructure. This can be a public cloud available to anyone
through the Internet, or a private cloud constituted by a set of nodes with restricted access. All in all, an Aneka-based computing
cloud is a collection of physical and virtualized resources connected through a network, which can be either the Internet or a
private intranet.
• Financial Stability: Your cloud provider should be well-financed and receive steady profits
from the infrastructure. If the company shuts down due to monetary issues, your solutions will
be in jeopardy, too. In the worst-case scenario, you will have to cease support for your
solutions or, in a better case, migrate to a new provider, which is an expensive and time-
consuming process.
• Industries that Prefer the Solution: Before committing to a cloud services company, take a look
at its existing clients and examine their markets. Ideally, the provider should be popular among
companies in your niche, or at least in the neighbouring ones. Another road to take is asking
competitors and partners about their favourite choices.
• Datacenter Locations: To avoid safety risks, make sure that cloud providers can enable
geographical distribution for your data. Ideally, you want to locate your data on servers in Asia,
Europe, America, without betting on a single region. Also, pay attention to countries— some,
like Japan or Germany, are known to be more secure, whereas Russia, for instance, is not the
safest option.
• Security Programs: Take a look at the security programs of your favourite cloud providers. The
majority of companies have dedicated papers and e-books that discuss this matter in detail—
take your time to go through them. Start with taking a look at security documentation of the
top cloud providers- AWS, G Suite, Microsoft Azure, Salesforce. You can use these pages as
references during your safety research.
• Encryption Standards: Make sure the cloud provider specifies the use of encryption. The
provider should encrypt the data both when it’s being transferred to the cloud and during the
storage itself. No matter the stage of data storage, the information should be secured
end-to-end, so there is no way even for developers of the service to access the file contents.
• Check Accreditation and Auditing: The most common online auditing standard is SSAE— the
procedure that verifies that the online service checked the safety of its data-storing practices.
An ISO 27001 certificate verifies that a cloud provider complies with international safety
standards for data storage.
• Look for solutions that offer Free Cloud Backup: OneDrive, Google Drive, Dropbox, and Box
offer free space to create cloud backup copies, both manually and automatically.
Storage-as-a-Service (STaaS)
• Cloud service model in which a company leases or rents its storage infrastructure to another company or individuals to store
either files or objects.
• Economy of scale in the service provider’s infrastructure theoretically allows them to provide storage much more cost-
effectively than most individuals or corporations can provide their own storage when the total cost of ownership is
considered.
• STaaS is generally seen as a good alternative for a small or mid-sized business that lacks the
capital budget and/or technical personnel to implement and maintain their own storage infrastructure.
• Small companies and individuals often find this to be a convenient methodology for managing backups, and providing
cost savings in personnel, hardware and physical space.
• STaaS operates through a web-based API, implemented remotely, that the client application's in-house systems
interact with for input/output (I/O) and read/write (R/W) operations (see the sketch after this list).
• If the company ever loses its local data, the network administrator could contact the STaaS provider and request a copy
of the data.
• At the end-user level, Dropbox, Google Drive, Apple's iCloud and Microsoft OneDrive are among the
leading cloud storage providers.
• For enterprise-level cloud storage, Amazon S3, Zadara, IBM’s SoftLayer and Google Cloud Storage are some of the more
popular providers.
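As a sketch of the API-driven write/read workflow noted above, the snippet below stores and retrieves an object using boto3 against Amazon S3; the bucket name and key are placeholders, and valid credentials plus an existing bucket are assumed.

# Sketch: write and read back an object through a cloud storage API (Amazon S3 via boto3).
# The bucket name and key are placeholders; credentials and an existing bucket are assumed.
import boto3

s3 = boto3.client("s3")
bucket, key = "example-staas-bucket", "backups/report.txt"

# Write (upload) an object to the provider's storage.
s3.put_object(Bucket=bucket, Key=key, Body=b"quarterly report contents")

# Read (download) it back on demand from any internet-connected client.
obj = s3.get_object(Bucket=bucket, Key=key)
print(obj["Body"].read().decode())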
Data-as-a-Service (DaaS)
• In the DaaS computing model (a more advanced, fine-grained form of STaaS), data (as opposed to files) is readily
accessible through a Cloud-based platform.
• Data (either from databases or object containers) is supplied “on-demand” via cloud platforms (as opposed to the
traditional, on-premise models in which the data remains in the customer’s hands) and the vendor provides the tools
that make it easier to access and explore.
• Based on Web Services standards and Service-Oriented Architecture (SOA), DaaS provides a
dynamic infrastructure for delivering information on demand to users, regardless of their geographical location or
organizational separation, and, in the process, presents solution providers with a number of significant
opportunities (a minimal access sketch follows this list).
• DaaS eliminates redundancy and reduces associated expenditures by accommodating vital data in a single location,
allowing data use and/or modification by multiple users via a single update point.
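A minimal access sketch for DaaS, assuming a purely hypothetical REST endpoint that returns records as JSON; the URL, token and query parameters are invented for illustration.

# Sketch: pull data on demand from a hypothetical DaaS endpoint.
# The URL, token and query parameters are invented for illustration only.
import requests

response = requests.get(
    "https://daas.example.com/v1/customers",          # hypothetical endpoint
    params={"country": "IN", "updated_since": "2021-01-01"},
    headers={"Authorization": "Bearer <api-token>"},  # placeholder credential
    timeout=30,
)
response.raise_for_status()
for record in response.json():
    print(record)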
Communication-as-a-Service (CaaS)
Communications as a Service (CaaS) is an outsourced enterprise communications solution that can
be leased from a single vendor (Figure 16). CaaS vendor is responsible for
all hardware and software management and offers guaranteed Quality of Service (QoS). CaaS
allows businesses to selectively deploy communications devices and modes on a pay-as-you-go,
as-needed basis. Such communications can include: Voice over IP (VoIP or Internet telephony);
instant messaging (IM); and collaboration and video conferencing applications using fixed and
mobile devices.
Advantages of CaaS
Monitoring-as-a-Service (MaaS)
A concept that combines the benefits of cloud computing technology and traditional on-premise IT
infrastructure monitoring solutions (Figure 17). MaaS is a new delivery model that is suited for
organizations looking to adopt a monitoring framework quickly with minimal investments. MaaS is
a framework that facilitates the deployment of monitoring functionalities for various other
services and applications within the cloud. The most common application for MaaS is online state
monitoring, which continuously tracks certain states of applications, networks, systems, instances
or any element that may be deployable within the cloud. MaaS makes it easier for users to deploy
state monitoring at different levels of cloud services.
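A toy version of such online state monitoring is sketched below using the psutil library to sample CPU and memory and print an alert; the thresholds and the alert action are arbitrary choices, not any vendor's MaaS API.

# Toy state monitor: samples CPU and memory and flags threshold breaches.
# Thresholds and the alert action are arbitrary; a real MaaS pushes to a hosted dashboard.
import time
import psutil

CPU_LIMIT, MEM_LIMIT = 85.0, 90.0  # percent

def check_once():
    cpu = psutil.cpu_percent(interval=1)
    mem = psutil.virtual_memory().percent
    status = "ALERT" if (cpu > CPU_LIMIT or mem > MEM_LIMIT) else "ok"
    print(f"{status}: cpu={cpu:.1f}% mem={mem:.1f}%")

if __name__ == "__main__":
    for _ in range(5):   # poll a few times; a real agent would run continuously
        check_once()
        time.sleep(10)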
Advantages of MaaS
o Ready to Use Monitoring Tool Login: The vendor takes care of setting up the hardware
infrastructure, monitoring tool, configuration and alert settings on behalf of the customer. The
customer gets a ready-to-use login to the monitoring dashboard that is accessible using an
internet browser. A mobile client is also available for the MaaS dashboard for IT
administrators.
o Inherently Available 24x7x365: Since MaaS is deployed in the cloud, the monitoring dashboard
itself is available 24x7x365 and can be accessed anytime from anywhere. There are no
downtimes associated with the monitoring tool.
o Easy Integration with Business Processes: MaaS can generate alerts based on specific business
conditions. MaaS also supports multiple levels of escalation so that different user groups can
get different levels of alerts.
o Cloud Aware and Cloud Ready: Since MaaS is already in the cloud, MaaS works well with
other cloud-based products such as PaaS and SaaS. MaaS can monitor Amazon and Rackspace
cloud infrastructure. MaaS can monitor any private cloud deployments that a customer might
have.
o Zero Maintenance Overheads: As a MaaS customer, you don't need to invest in a network
operations centre. Neither do you need to invest in an in-house team of qualified IT engineers to
run the monitoring desk, since the MaaS vendor does that on behalf of the customer.
Assets Monitored by MaaS
o Servers and Systems Monitoring: Server Monitoring provides insights into the reliability of the
server hardware such as Uptime, CPU, Memory and Storage. Server monitoring is an essential
tool in determining functional and performance failures in the infrastructure assets.
o Database Monitoring: Database monitoring on a proactive basis is necessary to ensure that
databases are available for supporting business processes and functions. Database monitoring
also provides performance analysis and trends which in turn can be used for fine tuning the
database architecture and queries, thereby optimizing the database for your business
requirements.
o Network Monitoring: Network availability and network performance are two critical
parameters that determine the successful utilization of any network – be it a LAN, MAN or
WAN. Disruptions in the network affect business productivity adversely and can bring
regular operations to a standstill. Network monitoring provides proactive information about
network performance bottlenecks and sources of network disruption.
o Storage Monitoring: A reliable storage solution in your network ensures anytime availability of
business-critical data. Storage monitoring for SAN, NAS and RAID storage devices ensures that
your storage solutions are performing at the highest levels. Storage monitoring reduces
downtime of storage devices and hence improves availability of business data.
o Applications Monitoring: Applications Monitoring provides insight into resource usage,
application availability and critical process usage for different Windows, Linux and other open-
source operating systems-based applications. Applications Monitoring is essential for mission
critical applications that cannot afford to have even a few minutes of downtime. With
Application Monitoring, you can prevent application failures before they occur and ensure
smooth operations.
o Cloud Monitoring: Cloud Monitoring for any cloud infrastructure such as Amazon or
Rackspace gives information about resource utilization and performance in the cloud. While
cloud infrastructure is expected to have higher reliability than on-premise infrastructure, quite
often resource utilization and performance metrics are not well understood in the cloud. Cloud
monitoring provides insight into exact resource usage and performance metrics that can be used
for optimizing the cloud infrastructure.
o Virtual Infrastructure Monitoring: Virtual Infrastructure based on common hypervisors such as
ESX, Xen or Hyper-V provides flexibility to the infrastructure deployment and provides
increased reliability against hardware failures. Monitoring virtual machines and related
infrastructure gives information around resource usage such as memory, processor and storage.
Database-as-a-Service (DBaaS)
Database as a Service (DBaaS) is an architectural and operational approach enabling DBAs to
deliver database functionality as a service to internal and/or external customers. DBaaS
architectures support the following required capabilities: customer-side provisioning and
management of database instances using on-demand, self-service mechanisms; automation of
monitoring with provider-defined service definitions, attributes and quality SLAs; and fine-grained
metering of database usage enabling show-back reporting or charge-back for both internal and
external functionality for each individual consumer.
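The self-service provisioning described above is typically exposed through an API. The sketch below posts a request to a hypothetical DBaaS endpoint; the URL, payload fields and plan names are invented for illustration and do not correspond to any particular provider.

# Sketch: self-service provisioning of a database instance via a hypothetical DBaaS API.
# The endpoint, payload fields and plan names are invented for illustration only.
import requests

payload = {
    "engine": "postgres",
    "version": "14",
    "plan": "small",      # hypothetical size/SLA tier
    "name": "orders-db",
}
response = requests.post(
    "https://dbaas.example.com/v1/instances",
    json=payload,
    headers={"Authorization": "Bearer <api-token>"},  # placeholder credential
    timeout=30,
)
response.raise_for_status()
print("Provisioned instance:", response.json().get("id"))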
Setting up DBaaS
In order to set-up DBaaS, a cloud administrator will need to:
Network-as-a-Service (NaaS)
In NaaS, the users who do not want to use their own networks take help from service providers to
host the network infrastructure. The connectivity and bandwidth are provided by the service
provider for the contracted period. NaaS represents the network as transport connectivity. The
network virtualization is performed in this service.
NaaS is “an emerging procurement model to consume network infrastructure via a flexible
operating expense (OpEx) subscription inclusive of hardware, software, management tools,
licenses, and lifecycle services.”
What's Driving the Trend Toward NaaS?
The traditional network model requires capital expenses (CapEx) for physical networks with switches,
routers, and licensing. The do-it-yourself IT model requires time for planning and deployment as
well as expertise to install and configure infrastructure and to ensure security access policies are
in place. This model involves the following:
o Diligent monitoring for updates and security patches is essential due to rapid changes in
technology and security threats.
o Provisioning a new service is a manual process that requires a technician to deploy and
configure equipment at various locations.
o Service provisioning and issue resolution have historically been lengthy processes.
o As networks have grown in complexity—with more mobile users connecting from everywhere
and with the expansion to cloud—IT teams have been challenged to keep pace.
o Connectivity Cloud: A model in which a private fiber fabric or wireline "Middle Mile" network is
used to bypass often less-optimal public (internet) routing and congestion to provide connectivity
for critical enterprise resource and services access. Controlled via a distributed software
platform, the model supports "cloud-aligned" elastic consumption, including on-demand
provisioning, any-to-any connectivity, and flexible bandwidth deployment through both portal
and programmable API operation and introspection. By integrating the platform API with
provisioning and application deployment playbooks, the resulting WAN can realize an
infrastructure-as-code paradigm for Wide Area Networks - "network-as-code". The resulting
services include custom WAN interconnectivity, hybrid cloud and multi-cloud connectivity.
o Virtual Private Network (VPN): A tunnel overlay that extends a private network and the
resources contained in the network across networks like the public Internet. It enables a host
computer to send and receive data across shared or public networks as if it were a private
network with the functionality and policies of the private network.
o Virtual Network Operation: Model common in mobile networks in which a telecommunications
manufacturer or independent network operator builds and operates a network (wireless, or
transport connectivity) and sells its communication access capabilities to third parties (commonly
mobile phone operators) charging by capacity utilization. A mobile virtual network operator
(MVNO) is a mobile communications services provider that does not own the radio spectrum or
wireless network infrastructure over which it provides services. Commonly, an MVNO offers its
communication services using the network infrastructure of an established mobile network
operator.
Benefits of NaaS
NaaS is a cloud model that enables users to easily operate the network and achieve the outcomes
they expect from it without owning, building, or maintaining their own infrastructure. NaaS can
replace hardware-centric VPNs, load balancers, firewall appliances, and Multiprotocol Label
Switching (MPLS) connections. The users can scale up and down as demand changes, rapidly
deploy services, and eliminate hardware costs. NaaS offers ROI (return on investment), enabling
customers to trade CapEx for OpEx and refocus person hours on other priorities.Figure 18 depicts
the different benefits of NaaS.
Figure 18: Benefits of NaaS (flexibility, scalability, visibility and insights, enhanced security)
o IT Simplicity and Automation- Businesses benefit when they align their costs with actual usage.
They don't need to pay for surplus capacity that goes unused, and they can dynamically add
capacity as demands increase. Businesses that own their own infrastructure must implement
upgrades, bug fixes, and security patches in a timely manner. Often, IT staff may have to travel to
various locations to implement changes. NaaS enables the continuous delivery of new fixes,
features and capabilities. It automates multiple processes such as onboarding new users and
provides orchestration and optimization for maximum performance. This can help to eliminate
the time and money spent on these processes.
o Access from Anywhere- Today's workers may require access to the network from anywhere—
home or office—on any device and without relying on VPNs. NaaS can provide enterprises with
global coverage, low-latency connectivity enabled by a worldwide POP backbone, and negligible
packet loss when connecting to SaaS applications, platform-as-a-service (PaaS)/infrastructure-as-
a-service (IaaS) platforms, or branch offices.
o Visibility and Insights- NaaS provides proactive network monitoring, security policy
enforcement, advanced firewall and packet inspection capabilities, and modeling of the
performance of applications and the underlying infrastructure over time. Customers may also
have an option to co-manage the NaaS.
o Enhanced Security- NaaS results in tighter integration between the network and the network
security. Some vendors may "piece together" network security. By contrast, NaaS solutions need
to provide on-premise and cloud-based security to meet today’s business needs.
o Flexibility- NaaS services are delivered through a cloud model to offer greater flexibility and
customization than conventional infrastructure. Changes are implemented through software, not
hardware. This is typically provided through a self-service model. IT teams can, for example,
reconfigure their corporate networks on demand and add new branch locations in a fraction of
the time. NaaS often provides term-based subscription with usage billing and multiple payment
options to support various consumption requirements.
o Scalability- NaaS is inherently more scalable than traditional, hardware-based networks. NaaS
customers simply purchase more capacity instead of purchasing, deploying, configuring, and
securing additional hardware. This means they can scale up or down quickly as needs change.
o Improved Application Experience- NaaS provides AI-driven capabilities to help ensure SLAs and
SLOs for capacity are met or exceeded. NaaS provides the ability to route application traffic to
help ensure outstanding user experience and to proactively address issues that occur.
Healthcare-as-a-Service (HaaS)
Gone are the days when healthcare organizations used to store patient data in piles of papers and files.
Not only was that inconvenient and time-consuming, but it was also expensive in terms of both money
and resources. With the exponential growth in technology, more and more healthcare businesses
are moving to the cloud. Cloud computing has impacted the essential divisions of society, especially
the healthcare industry.
Technology-enabled Healthcare includes telehealth, telecare, telemedicine, tele-coaching, mHealth
and self-care services that can put people in control of their own health, wellbeing and support,
keeping them safe, well and independent and offering them and their family’s peace of mind.
Cloud computing in healthcare market can be segmented as:
scaling, a healthcare company will only need to buy the space or computing power it requires.
Additionally, Cloud offers easy downgrade plans with cost reductions.
o Enhanced Patient Care Efficiency: Cloud computing can enhance patient care in many
ways. When accepting new patients, doctors and staff can quickly check the online database for
a person's potential existing medical records. As a result, they can spend more time on
actual consultations and not on the paperwork. Next, a hospital can efficiently distribute patient
information like condition, status, schedules, and medication to nurses and doctors. The
workers can also avoid information inaccuracy when data mixes with other groups or new
entries overlap existing ones.
o Better Data Management: If a healthcare organization uses local computing, chances are it only
has limited methods of storing and accessing data. Offline databases are more constrained:
they can prevent users from accessing data quickly or doing things they could typically do with
commercial programs. Because cloud providers employ powerful technologies, it is now possible to store more
complex data and file types without worrying about slowdowns or errors. Additionally,
organizing massive data collections is not as hard as doing it on a local server.
Above all, healthcare workers can instantly upload and access information and files remotely
without messing up the system or waiting for queues and slow loading times.
o Improved Privacy: Unlike offline processing, cloud computing provides more privacy and
security for the healthcare sector. Because cloud platforms offer high-level encryption, multiple stacks
of protection, and superior threat-detection methods, it is harder for criminals to infiltrate a
healthcare company's system.
Benefits of Education-as-a-Service (EaaS)
o Learners Pay for the Education They Want/Need: Degrees and training courses can
be costly and often require students to follow a dictated set of modules. EaaS gives students
the option to pick and choose the modules they want to purchase according to their needs. The
training structure is tailored to the student, by the student, and their time and money are not
wasted on irrelevant learning.
o Advocates Flexible Learning: The flexibility of EaaS allows students to learn at a time, place,
and pace that they choose. This method of learning often goes hand-in-hand with blended
learning, as both models give the student control and responsibility of their own learning.
o Learner-centric: Traditional education sees the teacher making all decisions regarding the
curriculum, imposing the place, time and pace of content delivery. This old-fashioned
approach forces students to take a passive role in their education. In contrast, learner-centric
courses encourage students to be active in designing and executing their own educational
journey. This manner of learning is supported by constructivist theory– the idea that humans
generate knowledge and meaning from interactions between their experiences and ideas. This
theory is key to corporate learning practices and is commonly used to inform adult educational
programmes. By getting students to use previous experiences and existing knowledge in their
learning, a deeper understanding of the content can be achieved.
o Encourages Agile Content Development by Course Designers: Almost continuous access to the
Internet and the popularity of social media has taught many of us to expect dynamic content –
content that is constantly being changed or updated, based on new information as it is made
available. As such, developing content for an online or blended learning platform requires new
ideas and the continual update of learning materials.
Function-as-a-Service (FaaS)
FaaS is a concept of serverless computing, via serverless architectures, that developers can
leverage to deploy an individual "function", action, or piece of business logic.
Principles of FaaS:
Example:
o AWS Lambda: The service allows running software code without server setup and
management. Developers need only upload the code, and the service automatically
connects the app to servers, language runtimes and the OS, and highlights the functional code
fragments. From that point, developers only choose features for editing (a minimal handler
sketch follows this list).
o Azure Functions: The platform uses trigger mechanisms to highlight functions. Developers
can set events that will lead to changes in code — for instance, a particular user input
(interaction with an app or provided data) can turn on a function (like showing a pop-up or
opening a page). The developers set up these triggers and responses without building the
software infrastructure.
o IBM OpenWhisk: Similar to Lambda and Azure Functions, IBM OpenWhisk reacts to trigger effects and
produces a series of organized outputs. Developers only have to set up action sequences and
describe possible trigger events. The action itself will be enabled by IBM’s infrastructure— the
users don’t have to control these aspects.
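To ground the idea, a minimal AWS Lambda handler in Python looks roughly like the sketch below; the event fields and the function's purpose are illustrative, and trigger wiring and deployment are configured separately in AWS.

# Sketch: a minimal AWS Lambda function. AWS invokes lambda_handler once per event;
# there is no server for the developer to manage. The event fields used are illustrative.
import json

def lambda_handler(event, context):
    name = event.get("name", "world")
    # The returned dict becomes the function's response (e.g., behind API Gateway).
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }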
Summary
• Cloud computing signifies a major change in the way we run various applications and store our
information. Everything is hosted in the "cloud", a vague assemblage of computers and servers
accessed via the Internet, instead of the method of running programs and data on a single
desktop computer.
• Technology-enabled Healthcare includes telehealth, telecare, telemedicine, tele-coaching,
mHealth and self-care services that can put people in control of their own health, wellbeing and
support, keeping them safe, well and independent and offering them and their family’s peace of
mind.
• The first step in switching to cloud computing is determining what kind of cloud services you
could be interested in. Then, choosing a cloud computing service is a long-term investment. Your
application will heavily rely on third-party capacities, and you need to make sure that the
provider is legitimate and fits your needs.
• IBM offers cloud computing services to help businesses of all sizes take advantage of this
increasingly attractive computing model. IBM is applying its industry-specific consulting
expertise and established technology record to offer secure services to companies in public,
private, and hybrid cloud models.
• NaaS is “an emerging procurement model to consume network infrastructure via a flexible
operating expense (OpEx) subscription inclusive of hardware, software, management tools,
licenses, and lifecycle services”.
• SaaS describes any cloud service where consumers are able to access software applications over
the internet. The applications are hosted in “the cloud” and can be used for a wide range of tasks
for both individuals and organizations.
Keywords
Amazon Simple Storage Service (Amazon S3): Amazon S3 is a service offered by Amazon Web Services
(AWS) that provides object storage through a web service interface. Amazon S3 uses the same
scalable storage infrastructure that Amazon.com uses to run its global e-commerce network.
Function-as-a-Service: FaaS is a concept of serverless computing via serverless architectures where
developers can leverage this to deploy an individual “function”, action, or piece of business logic.
Database as a Service: Database as a Service (DBaaS) is an architectural and operational approach
enabling DBAs to deliver database functionality as a service to internal and/or external customers.
Cloud Hosting: The hosting of websites on virtual servers which are founded upon pooled
resources from underlying physical servers.
Virtual Data Centers (VDC): A virtualized network of interconnected virtual servers which can be
used to offer enhanced cloud hosting capabilities, enterprise IT infrastructure or to integrate all of
these operations within either a private or public cloud implementation.
Communications as a Service: Communications as a Service (CaaS) is an outsourced enterprise
communications solution that can be leased from a single vendor. CaaS vendor is responsible for
all hardware and software management and offers guaranteed Quality of Service (QoS).
Virtualization
Introduction
In computing, virtualization or virtualisation is the act of creating a virtual (rather than actual)
version of something, including virtual computer hardware platforms, storage devices, and
computer network resources. Virtualization began in the 1960s as a method of logically dividing
the system resources provided by mainframe computers between different applications. Since
then, the meaning of the term has broadened. Virtualization technology has, in effect, turned hardware
into software: it allows multiple Operating Systems (OSs) to run as virtual machines (Figure
1). Each copy of an operating system is installed into a virtual machine.
Consider a scenario in which we have a VMware hypervisor, also called a Virtual Machine Manager
(VMM). A VMware layer is installed on a physical device and, on that layer, six OSs run
multiple applications; these can be the same kind of OS or different kinds of OSs.
Why Virtualize
1. Share same hardware among independent users- Degrees of Hardware parallelism increases.
2. Reduced Hardware footprint through consolidation- Eases management and energy usage.
3. Sandbox/migrate applications- Flexible allocation and utilization.
4. Decouple applications from underlying Hardware- Allows Hardware upgrades without impacting an OS image.
Virtualization makes it much easier to share resources and helps increase the degree of hardware-level parallelism:
the same physical hardware unit is shared among different independent users. If a single piece of physical hardware
runs multiple OSs, different users can run on different OSs, and far more of the machine's processing capability is put
to use. Consolidation also reduces the overall hardware footprint: less hardware is wasted, management becomes
easier, and the energy that would otherwise be consumed by a large number of separate hardware machines is saved.
Virtualization also provides sandboxing and the ability to migrate applications, which in turn enables flexible
allocation and utilization of resources. Additionally, decoupling applications from the underlying hardware becomes
much easier, which allows hardware upgrades without impacting any particular OS image.
Virtualization raises abstraction. Abstraction pertains to hiding the inner details from a particular user,
and virtualization enhances this capability. It is similar to how virtual memory operates: to give access to
a larger address space, the physical memory mapping is hidden by the OS with the help of paging. It is also
similar to hardware emulators, where code written for one architecture is allowed to run on a different
physical device by means of virtual devices such as a virtual CPU, memory, or network interface cards. No
attention to the hardware details of a particular machine is required; confining access to those hardware
details is what raises the level of abstraction through virtualization.
There are certain requirements for virtualization. First is the efficiency property: all innocuous
instructions are executed directly by the hardware. The resource control property means that it must be
impossible for programs to directly affect system resources. Finally, the equivalence property indicates
that a program running under a virtual machine manager (hypervisor) performs in a manner indistinguishable
from the same program running directly on the machine.
After virtualization was introduced, different OSs and applications were able to share a single
physical infrastructure (Figure 3). The virtualization reduces the huge amount invested in buying
additional resources. The virtualization becomes a key driver in the IT industry, especially in cloud
computing. Generally, the terms cloud computing and virtualization are not the same; there are
significant differences between these two technologies.
Virtual Machine (VM): A VM involves an isolated guest OS installation within a normal host OS.
From the user's perspective, a VM is a software platform that, like a physical computer, runs OSs
and apps. VMs possess their hardware virtually.
Nowadays, the average end-user desktop PC is powerful enough to meet almost all the needs of
everyday computing, with extra capacity that is rarely used. Almost all of these PCs have enough
resources to host a VMM and execute a VM with acceptable performance. The same consideration
applies to the high-end side of the PC market, where supercomputers can provide immense compute
power that can accommodate the execution of hundreds or thousands of VMs.
Underutilized Hardware and Software Resources- Hardware and software underutilization occurs due to
increased performance and computing capacity, and the effect of limited or sporadic use of resources. The
computers today are so powerful that in most cases only a fraction of their capacity is used by an application
or the system. Moreover, if we consider the IT infrastructure of an enterprise, many computers are only
partially utilized whereas they could be used without interruption on a 24/7/365 basis. For example, desktop
PCs mostly devoted to office automation tasks and used by administrative staff are only used during work
hours, remaining completely unused overnight. Using these resources for other purposes after hours could
improve the efficiency of the IT infrastructure. To transparently provide such a service, it would be
necessary to deploy a completely separate environment, which can be achieved through virtualization.
Lack of Space: The continuous need for additional capacity, whether storage or compute power, makes data centers grow
quickly. Companies such as Google and Microsoft expand their infrastructures by building data centers as large as football
fields that are able to host thousands of nodes. Although this is viable for IT giants, in most cases enterprises cannot afford
to build another data center to accommodate additional resource capacity. This condition, along with hardware
underutilization, has led to the diffusion of a technique called server consolidation, for which
virtualization technologies are fundamental.
Greening Initiatives: Recently, companies have been increasingly looking for ways to reduce the amount of
energy they consume and to reduce their carbon footprint. Data centers are among the major power consumers;
they contribute consistently to the
impact that a company has on the environment. Maintaining a data center operation not only involves keeping servers on, but
a great deal of energy is also consumed in keeping them cool. Infrastructures for cooling have a significant impact on the carbon
footprint of a data center. Hence, reducing the number of servers through server consolidation will definitely reduce the impact
of cooling and power consumption of a data center. Virtualization technologies can provide an efficient way of consolidating
servers.
Rise of Administrative Costs: The power consumption and cooling costs have now become higher than the cost of IT
equipment. Moreover, the increased demand for additional capacity, which translates into more servers in a data center, is
also responsible for a significant increment in administrative costs. Computers—in particular, servers—do not operate all
on their own, but they require care and feeding from system administrators. Common system administration tasks include
hardware monitoring, defective hardware replacement, server setup and updates, server resources monitoring, and
backups. These are labor-intensive operations, and the higher the number of servers that have to be managed, the higher
the administrative costs. Virtualization can help reduce the number of required servers for a given workload, thus reducing
the cost of the administrative personnel.
Features of Virtualization
Virtualization Raises Abstraction
o Similar to Virtual Memory: To access larger address space, physical memory mapping is
hidden by OS using paging.
o Similar to Hardware Emulators: Allows code on one architecture to run on a different physical
device, such as, virtual devices, CPU, memory, NIC etc.
o No botheration about the physical hardware details.
Virtualization Requirements
o Efficiency Property: All innocuous instructions are executed by the hardware.
o Resource Control Property: It must be impossible for programs to directly affect system
resources.
o Equivalence Property: A program running with a VMM performs in a manner indistinguishable from
one running directly on the hardware. Exceptions: timing and resource availability.
Virtualized Environments
Virtualization is a broad concept that refers to the creation of a virtual version of something,
whether hardware, a software environment, storage, or a network. In a virtualized environment,
there are three major components (Figure 4):
o Guest: Represents the system component that interacts with the virtualization layer rather
than with the host, as would normally happen.
o Host: Represents the original environment where the guest is supposed to be managed.
o Virtualization Layer: Responsible for recreating the same or a different environment where
the guest will operate.
In the case of hardware virtualization, the guest is represented by a system image comprising an OS
and installed applications. These are installed on top of virtual hardware that is controlled and
managed by the virtualization layer, also called the VMM. The host is instead represented by the
physical hardware, and in some cases the OS, that defines the environment where the VMM is running.
In the case of virtual networking, the guest (applications and users) interacts with a virtual
network, such as a virtual private network (VPN), which is managed by specific software (a VPN
client) using the physical network available on the node. VPNs are useful for creating the illusion
of being within a different physical network and thus accessing resources in it that would
otherwise not be available. The virtual environment is created by means of a software program. The
ability to use software to emulate a wide variety of environments creates many opportunities that
were previously less attractive because of the excessive overhead introduced by the virtualization
layer.
In a bare metal architecture, one hypervisor or VMM is actually installed on the bare metal
hardware. There is no intermediate OS existing over here. The VMM communicates directly with
the system hardware and there is no need for relying on any host OS. VMware ESXi and Microsoft
Hyper-V are different hypervisors that are used for bare-metal virtualization.
Figure 6 illustrates the hosted virtualization architecture. At the lowest layer is the shared
hardware, with a host OS running on it. On top of the host OS runs a VMM, which creates a virtual
layer that enables different OSs to run concurrently. In this scenario, the hardware comes first,
then an operating system, then the hypervisor; different virtual machines run on that virtual
layer, and each virtual machine can run the same or a different kind of OS.
Before discussing virtualization techniques, it is important to know about protection rings in
OSs. The protection rings are used to isolate the OS from untrusted user applications, so the OS
can be protected with different privilege levels (Figure 8).
Hardware-assisted full virtualization eliminates binary translation and interacts directly with the
hardware using the virtualization technology that has been integrated into x86 processors since
2005 (Intel VT-x and AMD-V). The guest OS can execute privileged instructions directly on the
processor within a virtual context, even though it is virtualized. Several enterprise software
products support hardware-assisted full virtualization, which falls under hypervisor Type 1 (bare
metal), such as:
• VMware ESXi /ESX
• KVM
• Hyper-V
• Xen
However, due to architectural differences between Windows-based and Linux-based guests, the
Windows OS cannot be para-virtualized on the Xen hypervisor; Xen para-virtualizes Linux guests by
modifying the kernel. VMware ESXi does not modify the kernel for either Linux or Windows guests.
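As a side illustration (not part of the original notes), the Intel VT-x and AMD-V support mentioned above is advertised on Linux hosts through the vmx and svm CPU flags; a minimal Python sketch that checks for them:

# Minimal sketch: detect Intel VT-x (vmx) or AMD-V (svm) support on a Linux host
# by scanning /proc/cpuinfo. Illustrative only; the path is Linux-specific.
def hardware_virtualization_flags(cpuinfo_path="/proc/cpuinfo"):
    with open(cpuinfo_path) as f:
        text = f.read()
    flags = set()
    for line in text.splitlines():
        if line.startswith("flags"):
            flags.update(line.split(":", 1)[1].split())
    return {f for f in ("vmx", "svm") if f in flags}

if __name__ == "__main__":
    found = hardware_virtualization_flags()
    if found:
        print("Hardware-assisted virtualization available:", ", ".join(sorted(found)))
    else:
        print("No vmx/svm flags found; hardware-assisted virtualization may be unavailable.")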
OS-Level Virtualization: It is widely used and is also known as "containerization". The host OS
kernel allows multiple isolated user spaces, also known as instances. Unlike other virtualization
technologies, there is very little or no overhead since it uses the host OS kernel for execution.
Oracle Solaris Zones is one of the well-known containers in the enterprise market. Other containers include:
• Linux LXC
• Docker
• AIX WPAR
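To make the idea of containerization concrete, the sketch below starts an isolated user-space instance that shares the host kernel, using the Docker SDK for Python. It assumes a local Docker daemon and the docker package are available; the image and command are just examples.

# Minimal sketch: run a command in an isolated container (OS-level virtualization).
# Assumes a local Docker daemon and the "docker" Python SDK (pip install docker).
import docker

client = docker.from_env()           # connect to the host's Docker daemon
output = client.containers.run(      # start a container from the lightweight Alpine image
    "alpine:latest",
    ["echo", "hello from a container sharing the host kernel"],
    remove=True,                     # clean up the container after it exits
)
print(output.decode().strip())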
Processor Virtualization: It allows the VMs to share the virtual processors that are abstracted from
the physical processors available at the underlying infrastructure (Figure 10). The virtualization
layer abstracts the physical processors into a pool of virtual processors that is shared by the VMs.
The virtualization layer is normally a hypervisor, but processor virtualization can also be
achieved from distributed servers.
Figure 11: Memory Virtualization
Storage Virtualization: A form of resource virtualization where multiple physical storage disks are
abstracted as a pool of virtual storage disks presented to the VMs (Figure 12). Normally, the
virtualized storage is called logical storage.
Storage virtualization is mainly used for maintaining a backup or replica of the data that are stored
on the VMs. It can be further extended to support the high availability of the data. It efficiently
utilizes the underlying physical storage. Other advanced storage virtualization techniques are
storage area networks (SAN) and network-attached storage (NAS).
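As an illustrative aside, the logical storage presented to a VM is often just a file-backed virtual disk. The sketch below, assuming the qemu-img tool is installed, creates a thin-provisioned qcow2 image that a hypervisor could attach to a VM; the file name and size are arbitrary examples.

# Minimal sketch: create a 10 GiB thin-provisioned qcow2 virtual disk that a
# hypervisor can present to a VM as logical storage. Assumes qemu-img is installed.
import subprocess

disk_path = "vm-data.qcow2"          # hypothetical file name
subprocess.run(
    ["qemu-img", "create", "-f", "qcow2", disk_path, "10G"],
    check=True,
)
# Inspect the virtual vs. actual (on-disk) size of the image.
subprocess.run(["qemu-img", "info", disk_path], check=True)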
Network Virtualization: It is a type of resource virtualization in which the physical network is
abstracted to create a virtual network (Figure 13). Normally, the physical network components such
as routers, switches, and Network Interface Cards (NICs) are controlled by the virtualization
software to provide virtual network components. A virtual network is a single software-based entity
that contains the network hardware and software resources. Network virtualization can be achieved
from an internal network or by combining many external networks, and it enables communication
between the VMs that share the physical network. Different types of network access can be given to
the VMs, such as bridged networking, network address translation (NAT), and host-only networking.
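The NAT option mentioned above can be sketched with libvirt, where a software-defined network (a virtual bridge plus a DHCP range) is described in XML and created through the libvirt Python bindings. This assumes the libvirt-python package and a local QEMU/KVM host; the network name and addresses are hypothetical.

# Minimal sketch: define a NAT-mode virtual network with libvirt.
# Assumes libvirt-python and a local QEMU/KVM hypervisor; names/addresses are examples.
import libvirt

NETWORK_XML = """
<network>
  <name>example-nat</name>
  <forward mode='nat'/>
  <bridge name='virbr100'/>
  <ip address='192.168.150.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.150.10' end='192.168.150.100'/>
    </dhcp>
  </ip>
</network>
"""

conn = libvirt.open("qemu:///system")      # connect to the local hypervisor
net = conn.networkCreateXML(NETWORK_XML)   # create and start a transient virtual network
print("Created virtual network:", net.name())
conn.close()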
Data Virtualization: Data virtualization offers the ability to retrieve data without knowing its
type or the physical location where it is stored (Figure 14). It aggregates heterogeneous data from
different sources into a single logical/virtual volume of data. This logical data can be accessed
from any application, such as web services, e-commerce applications, web portals, Software-as-a-
Service (SaaS) applications, and mobile applications. It hides the type and the location of the
data from the applications that access it and ensures a single point of access to data by
aggregating data from the different sources. It is mainly used in data integration, business
intelligence, and cloud computing.
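As a toy sketch only (not any real data-virtualization product), the single-point-of-access idea can be pictured as a thin layer that answers queries while hiding the type and location of each underlying source; the class and sources below are hypothetical.

# Toy sketch of data virtualization: one logical interface over heterogeneous sources.
# The sources here (an in-memory list and a CSV file) are hypothetical stand-ins for
# real databases, web services, or SaaS applications.
import csv

class VirtualDataLayer:
    def __init__(self, api_records, csv_path):
        self.api_records = api_records        # e.g., rows fetched from a web service
        self.csv_path = csv_path              # e.g., an exported flat file

    def query(self, customer_id):
        """Return matching records without exposing source type or location."""
        results = [r for r in self.api_records if r["customer_id"] == customer_id]
        with open(self.csv_path, newline="") as f:
            results += [r for r in csv.DictReader(f) if r["customer_id"] == customer_id]
        return results

# Usage: the caller sees one logical volume of data, regardless of where it lives.
# layer = VirtualDataLayer(api_records=[{"customer_id": "42", "name": "Asha"}],
#                          csv_path="orders.csv")
# print(layer.query("42"))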
Figure 15: Application Virtualization
Software Licensing Considerations- This is becoming less of a problem as more software
vendors adapt to the increased adoption of virtualization, but it is important to check with your
vendors to clearly understand how they view software use in a virtualized environment.
Possible Learning Curve- Implementing and managing a virtualized environment will require IT
staff with expertise in virtualization. On the user side, a typical virtual environment will operate
similarly to the non-virtual environment. Some applications do not adapt well to a virtualized
environment; this is something that your IT staff will need to be aware of and address prior to
converting.
Summary
Virtualization opens the door to a new and unexpected form of phishing. The capability of
emulating a host in a completely transparent manner led the way to malicious programs that
are designed to extract sensitive information from the guest.
Virtualization raises abstraction. Abstraction pertains to hiding of the inner details from a
particular user. Virtualization helps in enhancing or increasing the capability of abstraction.
Virtualization enables sharing of resources much more easily and helps in increasing the degree of
hardware-level parallelism; basically, the same hardware unit is shared among different kinds of
independent units.
In protection ring architecture, the rings are arranged in hierarchical order from ring 0 to ring 3.
The Ring 0 contains the programs that are most privileged, and ring 3 contains the programs
that are least privileged.
In a bare metal architecture, one hypervisor or VMM is actually installed on the bare metal
hardware. There is no intermediate OS existing over here. The VMM communicates directly
with the system hardware and there is no need for relying on any host OS.
The para-virtualization works differently from the full virtualization. It doesn’t need to
simulate the hardware for the VMs. The hypervisor is installed on a physical server (host) and
a guest OS is installed into the environment.
Software-assisted full virtualization is also called Binary Translation (BT); it relies completely
on binary translation to trap and virtualize the execution of sensitive, non-virtualizable
instruction sets.
Memory virtualization is an important resource virtualization technique. In the main memory
virtualization, the physical main memory is mapped to the virtual main memory as in the
virtual memory concepts in most of the OSs.
Keywords
Virtualization: Virtualization is a broad concept that refers to the creation of a virtual
version of something, whether hardware, a software environment, storage, or a network.
Hardware-assisted full virtualization: Hardware-assisted full virtualization eliminates
binary translation and interacts directly with the hardware using the virtualization technology
which has been integrated on x86 processors since 2005.
Data Virtualization: Data virtualization offers the ability to retrieve the data without
knowing its type and the physical location where it is stored.
Application Virtualization: Application virtualization is the enabling technology for SaaS
in cloud computing; it offers users the ability to use an application without needing to install
any software or tools on their machine.
Memory Virtualization: The process of providing a virtual main memory to the VMs is known as
memory virtualization or main memory virtualization.
Network Virtualization: It is a type of resource virtualization in which the physical
network can be abstracted to create a virtual network.
Virtual Machine
Introduction
A virtual machine is software that creates a virtualized environment between the computer platform and
the end user in which the end user can operate software. It provides an interface identical to the
underlying bare hardware. The Operating System (OS) creates the illusion of multiple processes, each
executing on its own processor with its own (virtual) memory. Virtual machines are "an efficient,
isolated duplicate of a real machine" (Popek and Goldberg). Popek and Goldberg introduced conditions for
a computer architecture to efficiently support system virtualization.
The concept of virtualization applied to the entire machine involves:
o Each VM has its own set of virtual hardware (e.g., RAM, CPU, NIC, etc.) upon which an operating
system and applications are loaded.
o OS sees a consistent, normalized set of hardware regardless of the actual physical hardware
components.
Partitioning
o Multiple applications and OSs can be supported within a single physical system.
o There is no overlap amongst memory as each VM has its own memory space.
Isolation
o VMs are completely isolated from host machine and other VMs. If a VM crashes, all others are
unaffected.
o Data does not leak across VMs.
Identical Environment
o VMs can have a number of discrete identical execution environments on a single computer, each of
which runs an OS.
Other VM Features
o Each VM has its own set of virtual hardware (e.g., RAM, CPU, NIC, etc.) upon which an operating
system and applications are loaded.
o OS sees a consistent, normalized set of hardware regardless of the actual physical hardware
components.
o Host system resources are shared among the various VMs. For example, if a host system has 8GB
memory where VMs are running, this amount will be shared by all the VMs, depending upon the
size of the allocation.
o One of the best features of using Virtual machines is we can run multiple OSs/VMs in parallel on
one host system.
o VMs are isolated from one another, thus secure from malware or threat from any other
compromised VM running on the same host.
o Direct exchange of data and mutual influencing are prevented.
o Transfer of VMs to another system can be implemented by simply copying the VM data since the
complete status of the system is saved in a few files.
o VMs can be operated on all physical host systems that support the virtualization environment
used.
Process Virtual Machines: These are also known as Application VMs (Figure 4). Virtualization below the
API or ABI, providing virtual resources to a single process executing on a machine, is called process
virtualization. The VM is created for that process alone and destroyed when the process finishes.
Figure 4: Process VM
o Cross-platform compatibility.
o Programs written for an abstract machine, which is mapped to real hardware through a VM.
Examples: Sun Microsystems Java VM; Microsoft Common Language Infrastructure (.NET Framework).
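CPython itself is a familiar example of a process VM: source code is compiled to bytecode for an abstract machine, which the interpreter then maps onto the real hardware. The short sketch below uses the standard dis module to display those abstract-machine instructions.

# Illustration of a process VM: Python source is compiled to bytecode for an
# abstract machine, which the CPython interpreter executes on the real CPU.
import dis

def add(a, b):
    return a + b

# Show the abstract-machine instructions the process VM will execute.
dis.dis(add)
print("Result on the real hardware:", add(2, 3))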
System Virtual Machines: These correspond to virtualized hardware below the ISA. A single host can run
multiple isolated OSs (Figure 5), for example servers running different OSs in isolation from concurrent
systems. The hardware is managed by the Virtual Machine Manager (VMM). Classically, the VMM runs on bare
hardware, directly interacting with resources; it intercepts and interprets guest OS actions.
Isolated Environment Provided by VMs- For a tester or security analyst, VMs are a good way to run
multiple applications and services in isolation, because they do not affect each other.
Easy to Back Up and Clone- All VMs are stored as files on the physical hard drive of the host machine.
Thus, they can easily be backed up, moved, or cloned in real time, which is one of the popular benefits
of running a virtual machine.
Faster Server Provisioning- VMs are easy to install, eliminating the cumbersome and time-consuming
installation of applications on servers. For example, if you want a new server to run some application,
it is much easier and faster to deploy a pre-configured VM template than to install a new server OS on a
physical machine. The same goes for cloning existing applications to try something new.
Beneficial in Disaster Recovery- Since VMs do not depend on the underlying hardware, they are
independent of the hardware or CPU model on which they run. Hence, we can easily replicate VMs to the
cloud or offsite, so in a disaster situation it is easy to recover and get back online quickly, without
having to care about a particular server manufacturer or model.
Use Older Applications for a Longer Time- Many companies still use old applications that are crucial to
them but do not support modern hardware or operating systems. In such situations, even if the company
wants to, IT would prefer not to touch them. However, we can pack such applications in a VM with a
compatible old operating system and old virtual hardware. In this way, it is possible to switch to
modern hardware while keeping the old software stack intact.
Virtual Machines are Easily Portable- A single server running a particular operating system and software
is not easy to move from one place to another, whereas if the same server is virtualized, it becomes very
easy to move the data and OS to another physical server located somewhere else, with minimal workforce
and without heavy transportation requirements.
Better Usage of Hardware Resources- Modern computer and server hardware is quite powerful; a single
operating system running a couple of applications cannot extract its full capacity. Using VMs not only
makes efficient use of the CPU but also allows companies to save considerably on hardware spending.
Made Cloud Computing Possible- Without VMs there would be no cloud computing, because the whole idea
behind it is to instantly provision machines running either Windows or Linux; this is only possible with
pre-built templates ready to deploy as VMs on remote data center hardware, for example at DigitalOcean,
AWS, and Google Cloud. So, the next time you hear of "cloud hosting" or "Virtual Private Server" hosting,
remember it is a VM running on data center hardware.
11.2 Hypervisors
VMs are widely used instead of physical machines in the IT industry today. VMs support green IT
solutions, and their usage increases resource utilization and makes management tasks easier. Since VMs
are so widely used, the technology that enables the virtual environment also receives attention in
industry and academia. The virtual environment can be created with the help of a software tool called a
hypervisor.
Hypervisors are software tools that sit between the VMs and the physical infrastructure and provide the
required virtual infrastructure for the VMs. Hypervisors are also called Virtual Machine Managers (VMMs)
(Figure 6). They are the key drivers in enabling virtualization in cloud data centers. Different
hypervisors are used in the IT industry; some examples are VMware, Xen, Hyper-V, KVM, and OpenVZ.
The virtual infrastructure consists of virtual CPUs (vCPUs), virtual memory, virtual NICs (vNICs),
virtual storage, and virtual I/O devices. The fundamental element of hardware virtualization is the
hypervisor, or VMM, which helps recreate a hardware environment in which Guest Operating Systems (OSs)
are installed.
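As a hedged illustration of how management tools see this virtual infrastructure, the libvirt Python bindings can connect to a VMM (for example QEMU/KVM) and report each guest's vCPU and memory allocation. This sketch assumes the libvirt-python package and a local hypervisor with defined domains.

# Minimal sketch: query a hypervisor (VMM) for its virtual machines and their
# virtual resources. Assumes libvirt-python and a local QEMU/KVM hypervisor.
import libvirt

conn = libvirt.open("qemu:///system")          # connect to the VMM
for dom in conn.listAllDomains():              # every defined VM (guest)
    state, max_mem_kib, _mem_kib, vcpus, _cpu_time = dom.info()
    print(f"{dom.name()}: {vcpus} vCPU(s), {max_mem_kib // 1024} MiB max memory, state={state}")
conn.close()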
Figure 6: Internal Organization of a Virtual Machine Manager
Three main modules (dispatcher, allocator, and interpreter) coordinate their activity in order to emulate
the underlying hardware. The dispatcher constitutes the entry point of the monitor and reroutes the
instructions issued by the virtual machine instance to one of the two other modules. The allocator is
responsible for deciding the system resources to be provided to the VM: whenever a virtual machine tries
to execute an instruction that results in changing the machine resources associated with that VM, the
allocator is invoked by the dispatcher. The interpreter module consists of interpreter routines; these
are executed whenever a VM executes a privileged instruction: a trap is triggered and the corresponding
routine is executed.
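The coordination of these modules can be pictured with a deliberately simplified, purely illustrative sketch (not the internals of any real VMM): the dispatcher routes each trapped instruction either to the allocator, when it would change resource assignments, or to an interpreter routine, when it is a privileged instruction. All names and instruction mnemonics below are hypothetical.

# Purely illustrative sketch of the dispatcher/allocator/interpreter structure of a VMM.
# Instruction names and behavior are hypothetical, not a real instruction set.

class Allocator:
    def handle(self, vm_id, instruction):
        print(f"[allocator] adjusting resources of VM {vm_id} for '{instruction}'")

class Interpreter:
    # One interpreter routine per privileged instruction.
    routines = {"HLT": "halt guest", "OUT": "emulate I/O port write"}

    def handle(self, vm_id, instruction):
        action = self.routines.get(instruction, "emulate generic privileged op")
        print(f"[interpreter] VM {vm_id}: {instruction} -> {action}")

class Dispatcher:
    def __init__(self):
        self.allocator = Allocator()
        self.interpreter = Interpreter()

    def trap(self, vm_id, instruction):
        # Entry point of the monitor: reroute the trapped instruction.
        if instruction.startswith("SET_"):        # would change machine resources
            self.allocator.handle(vm_id, instruction)
        else:                                      # privileged instruction
            self.interpreter.handle(vm_id, instruction)

vmm = Dispatcher()
vmm.trap(vm_id=1, instruction="SET_MEMORY")
vmm.trap(vm_id=1, instruction="OUT")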
The design and architecture of a VMM, together with the underlying hardware design of the host
machine, determine the full realization of hardware virtualization, where a guest OS can be
transparently executed on top of a VMM as though it were run on the underlying hardware.
The criteria that need to be met by a VMM to efficiently support virtualization were established by
Goldberg and Popek in 1974. Three properties have to be satisfied:
o Equivalence: A guest running under the control of a virtual machine manager should exhibit the
same behavior as when it is executed directly on the physical host.
o Resource control: The VMM should be in complete control of virtualized resources.
o Efficiency: A statistically dominant fraction of the machine instructions should be executed
without intervention from the VMM.
Before hypervisors were introduced, there was a one-to-one relationship between hardware and OSs. This
type of computing results in underutilized resources.
After hypervisors were introduced, it became a one-to-many relationship: with the help of hypervisors,
many OSs can run on and share a single piece of hardware.
Types of Hypervisors
Hypervisors are generally classified into two categories:
Type I Hypervisors run directly on top of the hardware. Therefore, they take the place of the OSs and
interact directly with the ISA interface exposed by the underlying hardware, and they emulate this
interface in order to allow the management of guest OSs. These are also called native VMs since they run
natively on the hardware. Other characteristics of Type I hypervisors include:
o Can run and access physical resources directly without the help of any host OS.
o Additional overhead of communicating with the host OS is reduced and offers better efficiency
when compared to type 2 hypervisors.
o Used for servers that handle heavy load and require more security.
o Examples- Microsoft Hyper-V, Citrix XenServer, VMWare ESXi, and Oracle VM Server for
SPARC.
Type II Hypervisors require the support of an operating system to provide virtualization services
(Figure 9). This means that they are programs managed by the OS, which interact with it through the
ABI and emulate the ISA of virtual hardware for guest OSs.This type of hypervisor is also called a
hosted or embedded VM since it is hosted within an OS (Figure 10). Hosted virtualization requires the
host OS and does not have direct access to the physical hardware. The host OS is also known as
physical host, which has the direct access to the underlying hardware. However, the major
disadvantage of this approach is that if the host OS fails or crashes, the VMs crash as well. So, it is
recommended to use Type 2 hypervisors only on client systems where efficiency is less critical.
Examples- VMware Workstation and Oracle VirtualBox.
o Type 0 Hypervisors- Hardware-based solutions that provide support for virtual machine creation
and management via firmware. Examples: IBM LPARs and Oracle LDOMs.
o Type 1 Hypervisors- Operating-system-like software built to provide virtualization. Examples:
VMware ESX, Joyent SmartOS, and Citrix XenServer.
o Type 1 Hypervisors- Also include general-purpose operating systems that provide standard
functions as well as VMM functions. Examples: Microsoft Windows Server with Hyper-V and
Red Hat Linux with KVM.
o Type 2 Hypervisors- Applications that run on standard OSs but provide VMM features to guest
OSs. Example: VMware Workstation and Fusion, Parallels Desktop, and Oracle VirtualBox.
Other Variations: Much variation exists due to the breadth, depth, and importance of virtualization in
modern computing.
Para Virtualization- Technique in which the guest operating system is modified to work in
cooperation with the VMM to optimize performance.
Programming-environment Virtualization- VMMs do not virtualize real hardware but instead create an
optimized virtual system. It is used by Oracle Java and Microsoft .NET.
Emulators– Allow applications written for one hardware environment to run on a very different
hardware environment, such as a different type of CPU.
Application Containment- Not virtualization at all, but rather provides virtualization-like features by
segregating applications from the operating system, making them more secure and manageable. Examples
include Oracle Solaris Zones, BSD Jails, and IBM AIX WPARs.
Summary
Virtualization raises abstraction. Abstraction pertains to hiding of the inner details from a
particular user. Virtualization helps in enhancing or increasing the capability of abstraction.
Virtualization enables sharing of resources much more easily and helps in increasing the degree of
hardware-level parallelism; basically, the same hardware unit is shared among different kinds of
independent units.
In a bare metal architecture, one hypervisor or VMM is actually installed on the bare metal
hardware. There is no intermediate OS existing over here. The VMM communicates directly with
the system hardware and there is no need for relying on any host OS.
Type I Hypervisors run directly on top of the hardware. Therefore, they take the place of the OSs
and interact directly with the ISA interface exposed by the underlying hardware, and they emulate
this interface in order to allow the management of guest OSs.
Type II Hypervisors require the support of an operating system to provide virtualization services.
This means that they are programs managed by the OS, which interact with it through the ABI and
emulate the ISA of virtual hardware for guest OSs.
Xen is an open-source initiative implementing a virtualization platform based on
paravirtualization. Xen is a VMM for IA-32 (x86, x86-64), IA-64 and PowerPC 970 architectures.
Because KVM is part of the existing Linux code, it immediately benefits from every new Linux feature,
fix, and advancement without additional engineering. KVM converts Linux into a Type 1 (bare-metal)
hypervisor.
VMware Workstation is a dependable, high-performing, feature-rich virtualization platform for Windows
or Linux PCs.
Keywords
Virtualization: Virtualization is a broad concept that refers to the creation of a virtual version of
something, whether hardware, a software environment, storage, or a network.
Type 0 Hypervisors- Hardware-based solutions that provide support for virtual machine creation
and management via firmware. Examples: IBM LPARs and Oracle LDOMs.
Type 1 Hypervisors- Operating-system-like software built to provide virtualization. Examples:
VMware ESX, Joyent SmartOS, and Citrix XenServer. Type 1 also includes general-purpose operating
systems that provide standard functions as well as VMM functions. Examples: Microsoft Windows
Server with Hyper-V and Red Hat Linux with KVM.
Type 2 Hypervisors- Applications that run on standard OSs but provide VMM features to guest
OSs. Example: VMware Workstation and Fusion, Parallels Desktop, and Oracle VirtualBox.
Interpretation: Interpretation involves relatively inefficient instruction-at-a-time execution.
Binary Translation: Binary translation involves block-at-a-time translation with optimization for
repeated instructions.
Para Virtualization- Technique in which the guest operating system is modified to work in
cooperation with the VMM to optimize performance.
Programming-environment Virtualization- VMMs do not virtualize real hardware but instead
create an optimized virtual system. It is used by Oracle Java and Microsoft .NET.
Emulators–Emulators allow the applications written for one hardware environment to run on a
very different hardware environment, such as a different type of CPU.