Cloud Computing
Introduction:
Cloud computing is the delivery of computing services such as servers, storage,
databases, networking, software, etc., over the Internet. Companies offering these
computing services are called cloud providers and typically charge for cloud
computing services based on usage, similar to how you are billed for water or
electricity at home.
Cloud Components
There are three cloud components:
1. Clients
2. Data centers
3. Distributed servers
Each component has an explicit role in delivering a cloud-based application.
Clients
Clients are the end-user devices through which users interact with the cloud to
manage their information. They are usually desktop computers, but also laptops,
notebook computers, tablets, mobile phones, PDAs, etc. Clients are usually
grouped into three categories:
Mobile clients.
Thin clients.
Thick clients.
Mobile clients: These are mobile devices such as PDAs or smartphones like the
iPhone.
Thin clients: These are computers that do not have internal hard drives. They let
the server do all the work and then display the results on the screen.
Thick clients: These are regular computers that use a web browser such as
Chrome, Firefox or Internet Explorer to connect to the Internet.
Nowadays thin clients are becoming more popular for the following
reasons:
Lower hardware costs
Data security
Less power consumption
Easy to repair or replace
Less noise
Data Centers
A data center is a collection of numerous servers. It could be a large
room full of servers located anywhere in the world. Clients can
access these servers through the cloud.
Distributed Servers
Servers are kept in geographically separate locations around the world, but to the
cloud subscriber these servers act as if they are very near. This gives the
service provider more flexibility in options and security.
For example, Amazon has servers all over the world. If there is a failure at
one site, the service can still be accessed through another site.
Essential characteristics:
Nowadays the term cloud is widely used but is still confusing to a non-technical
audience. Cloud computing has five essential characteristics:
1) On-Demand self service
2) Broad network access
3) Location independent resource pooling
4) Rapid elasticity
5) Measured service
On-Demand self service
This essential characteristic of the cloud allows a user to obtain services such as
computing resources, server time and network storage automatically,
without direct interaction with the service provider.
Applications and resources can be assigned and removed within
minutes using cloud catalogs. Some of the popular on-demand self-service
providers are AWS (Amazon Web Services), Google, Microsoft, IBM and
Salesforce.com.
Broad network access
Cloud capabilities are available over the network and are accessed
using standard mechanisms by thick or thin client platforms.
Location independent resource pooling
The service provider's resources are pooled in order to serve multiple
consumers. There is a sense of location independence, as the customer has no
control over the location where the resources are provided. Consumers need not
worry about how the cloud allocates the provided resources.
Rapid elasticity
Elasticity is the ability to scale resources up and down as required. To the
client, the storage on the cloud appears to be unlimited; the consumer can use
as much as needed at any time.
Measured service
Another essential attribute is that the resources can be measured,
controlled and reported. This provides transparency for both the provider and the
consumer of the service. Metering capability is used to control and
optimize resource use.
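The pay-per-use idea behind measured service can be shown with a small worked example. The following Python sketch computes a monthly bill from metered usage; the resource names, rates and usage figures are hypothetical and are chosen only to mirror the utility-style billing described above.

# A minimal sketch of metered ("pay per use") billing, with assumed rates.
RATES = {
    "compute_hours": 0.05,      # price per instance-hour (assumed)
    "storage_gb_month": 0.02,   # price per GB stored per month (assumed)
    "data_transfer_gb": 0.09,   # price per GB transferred out (assumed)
}

def monthly_bill(usage: dict) -> float:
    """Compute a bill from metered usage, mirroring utility-style billing."""
    return sum(units * RATES[resource] for resource, units in usage.items())

if __name__ == "__main__":
    usage = {"compute_hours": 720, "storage_gb_month": 50, "data_transfer_gb": 10}
    print(f"Amount due: ${monthly_bill(usage):.2f}")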
Architectural influences
High Performance Computing
1. High-performance computing (HPC) is the use of super computers and
parallel processing techniques for solving complex computational
problems.
2. HPC technology focuses on developing parallel processing algorithms
and systems by incorporating both administration and parallel
computational techniques.
High-performance computing is typically used for solving advanced
problems and performing research activities through computer modelling,
simulation and analysis. HPC systems have the ability to deliver
sustained performance through the concurrent use of computing
resources.
3. The terms high-performance computing and supercomputing are
sometimes used interchangeably. High-performance computing (HPC)
evolved to meet increasing demands for processing speed.
4. HPC brings together several technologies such as computer architecture,
algorithms, programs and electronics, and system software under a single
canopy to solve advanced problems effectively and quickly. A highly
efficient HPC system requires a high-bandwidth, low-latency network
to connect multiple nodes and clusters.
Enterprise Grid Computing
These types of grids are used to meet a specific set of business goals.
The services that run on an enterprise grid may range from traditional
commercial enterprise applications such as ERP to new concepts such as
distributed applications. Enterprise grid computing enables companies to do the
following:
To dynamically provide resources.
To simplify tasks.
To consolidate computing components.
To set standards across enterprise.
To scale the resources and workload.
Some of the advantages of enterprise-grid computing are:
It reduces hardware and software costs.
It also reduces employee costs.
It improves the quality of service through quicker response times.
Autonomic computing
Autonomic computing refers to the self managing characteristics of
distributed computing resources, adapting to unpredictable changes.
It controls the functioning of computer applications and systems without
input from the user. In this computing model, systems run themselves and are
capable of performing high-level functions.
The complexity of the system is invisible to the users. The concept of
autonomic computing was first introduced by IBM in 2001. This model aims to
develop computer systems capable of self management. This overcomes the
rapidly growing complexity of computing systems management.
These systems take decisions on their own by using high-level policies.
The system constantly checks and optimizes its status, and thus automatically
adapts itself to changing conditions.
An autonomic computing framework is composed of autonomic
components (AC) interacting with each other.
The following are the main components of autonomic computing:
Two main control loops ( local and global )
Sensors ( for self monitoring )
Effectors ( for self adjustments )
Knowledge
Planner ( for analyzing policies ).
Service Consolidation
In computing, consolidation refers to when data storage or server
resources are shared among multiple users and accessed by multiple
applications.
Consolidation aims to make more efficient use of computer resources and
prevent servers and storage equipment from being under-utilized and taking too
much space.
The two main types of consolidation are server consolidation and storage
consolidation.
Horizontal Scaling
Horizontal scaling is the capability of an operation to be scaled up to
meet demand by distributing requests across servers, as in Fig 1.3; as demand
increases, more servers are added. Scalability is the capacity of a model to be
enlarged to handle an expanding volume of work in an effective way.
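The idea of spreading requests across a growing pool of servers can be sketched in a few lines of Python. The server names, the round-robin policy and the moment at which the pool is scaled out are illustrative assumptions, not a description of any particular cloud product.

# A minimal sketch of horizontal scaling: requests are distributed round-robin
# across a pool of servers, and the pool grows as demand increases.
from itertools import cycle

class ServerPool:
    def __init__(self, servers):
        self.servers = list(servers)
        self._rr = cycle(self.servers)

    def dispatch(self, request):
        server = next(self._rr)          # pick the next server in rotation
        print(f"{request} -> {server}")

    def scale_out(self, new_server):
        """Add one more server and rebuild the round-robin iterator."""
        self.servers.append(new_server)
        self._rr = cycle(self.servers)

pool = ServerPool(["server-1", "server-2"])
for i in range(4):
    pool.dispatch(f"request-{i}")
pool.scale_out("server-3")               # demand increased, so a server is added
for i in range(4, 10):
    pool.dispatch(f"request-{i}")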
Web services
Web services are a set of services offered over the web or, in technical
terms, the cloud. A web service is "always on", as in the concept
of utility computing. Web services are a standard way of integrating web
applications and are considered the next evolution of the web. Web services
convert your applications into web applications which can be published, found
and used over the Internet.
Web services communicate using open protocols and can be used by
other applications.
These services are hardware independent, operating system independent
and programming language independent.
The basic platforms for these services are XML and HTTP.
Fig 1.4 Web Services Management
The components of web services are web service server code, web service
consumer code, SOAP, XML, WSDL and UDDI (Fig 1.4). SOAP is a protocol for
accessing web services. It stands for Simple Object Access Protocol. SOAP is a
communication protocol for sending messages.
XML is a markup language. It stands for eXtensible Markup
Language. The contents of messages are encoded in XML. WSDL is an XML-based
language. It stands for Web Services Description Language. It is used to
describe and locate the services.
UDDI is a directory service. It stands for Universal Description,
Discovery and Integration. UDDI is used for storing the information about the
web services.
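As an illustration of how these components fit together, the following Python sketch sends a hand-built SOAP envelope over HTTP using the requests library. The endpoint URL, namespace and operation name are hypothetical; in a real system they would be described by the service's WSDL (possibly located through UDDI).

# A minimal sketch of a SOAP call over HTTP, assuming a hypothetical
# temperature-conversion service.
import requests

SOAP_ENVELOPE = """<?xml version="1.0" encoding="utf-8"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <CelsiusToFahrenheit xmlns="http://example.com/tempservice">
      <Celsius>37</Celsius>
    </CelsiusToFahrenheit>
  </soap:Body>
</soap:Envelope>"""

response = requests.post(
    "http://example.com/tempservice.asmx",            # hypothetical endpoint
    data=SOAP_ENVELOPE.encode("utf-8"),
    headers={"Content-Type": "text/xml; charset=utf-8"},
)
print(response.status_code)
print(response.text)   # the SOAP response is itself an XML document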
Scalability:
The ability of a model to be extended to manage growth in the amount of work
in an effective manner is called scalability. Cloud-computing resources
can be rapidly scaled at the subscriber's convenience. If there is a sudden
necessity for more computing resources, instead of buying new equipment we
can buy additional resources from cloud providers. After the endeavor is over,
we can stop using those services.
Simplicity:
In many cases cloud-computing services are free to use. Cloud computing is so
simple that users can easily understand it, which is its biggest advantage. It is
possible to get an application started almost instantly.
Vendors:
The service providers are called vendors. Some of the well-known
vendors are Google, Amazon, Microsoft and IBM. These providers offer reliable
services to their customers.
Security:
There are also some risks when using a cloud vendor, but the reputed
firms work hard to keep their consumers' data safe and secure. They use
complex cryptographic algorithms to authenticate users. To make it even more
secure we can encrypt our information before storing it in cloud.
Limitations in cloud
No product is without a few flaws, and cloud computing is no exception.
There are some cases where cloud computing may not be the best solution
for a computational requirement. Such cases are called limitations; there are
two main limitations:
Sensitive Information :
Storing sensitive information on the cloud is always dangerous. Any
important information about a person or an organization is called sensitive
information. Once the data leaves our hands, direct control over it is lost, but
that does not mean we cannot manage the data on the cloud; it still needs to be
kept safe. Some of the well-known limitations concerning sensitive
information are:
The government can obtain the information from service providers easily.
In a few cases, the service providers themselves share our data with marketing
companies.
The best way is to encrypt the data before storing it in the cloud or sending it
to a third party. Programs like PGP ( Pretty Good Privacy ) or the open source
TrueCrypt can encrypt the file so that only the one who owns the password can view
the details stored in the uploaded file.
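A minimal sketch of this client-side encryption idea is shown below in Python, using the third-party cryptography package rather than PGP or TrueCrypt; the file names are hypothetical and key handling is reduced to the bare minimum for illustration.

# Encrypt a file on the client before it is uploaded to any cloud or third party.
from cryptography.fernet import Fernet

key = Fernet.generate_key()        # keep this key private (never upload it)
cipher = Fernet(key)

with open("report.xlsx", "rb") as f:          # hypothetical sensitive file
    ciphertext = cipher.encrypt(f.read())

with open("report.xlsx.enc", "wb") as f:      # only this encrypted copy is uploaded
    f.write(ciphertext)

# Later, after downloading the encrypted copy, only the key owner can recover it:
with open("report.xlsx.enc", "rb") as f:
    plaintext = cipher.decrypt(f.read())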
Application development :
This is the other important limitation of cloud computing. In some cases
the applications we need may not be available on the cloud or may not work as
expected. Sometimes an application cannot communicate securely over the
Internet, and in that case our data will be at risk. There are then only two ways to
get the desired product: one is to develop your own application, and the other is
to approach an application developer to build the desired product for you.
Security concerns
In the cloud computing world, security is a two-sided coin. Security is
very important, particularly when moving critical applications and sensitive data
to public and shared environments.
Security Benefits
Providers do endeavor to ensure security. Clouds provide a number of
security measures to ensure that customers' data are safe:
Centralised Data
There are some good security traits that come with centralizing your data,
making your system more inherently secure.
Reduced Data Leakage:
If the data is centralized and the various devices used, such as laptops and
notebook computers, simply access it, there is no need to back up the data on
each device.
There is always a threat of theft of handheld devices. If a device is lost, then even
when security measures such as encryption have been applied, the data may be
compromised and end up in the hands of the thief.
Moreover, by maintaining data on the cloud, employing strong access
control, and limiting employee downloads to only what they need to perform a
task, an organization can limit the amount of information that could potentially be
lost.
Monitoring benefits:
Central storage is easier to control and monitor. The flipside is the
nightmare scenario of comprehensive data theft. If your data is maintained on a
cloud, it is easier to monitor security than have to worry about the security of
numerous servers and clients. It is easier for a security professional to figure out
smart ways to protect and monitor access to data stored in one place (with the
benefit of situational advantage) than to try to figure out all the places where the
company data resides. You can get the benefits of thin clients today, but
Cloud Storage provides a way to centralize the data faster and potentially
cheaper. The logistical challenge today is getting Terabytes of data to the
Cloud in the first place.
Instant Swapover - if a server in the cloud gets compromised (i.e.
broken into), you can clone that server at the click of a mouse and make the
cloned disks instantly available to a cloud forensics server. When the
swapover is performed it is seamless to the users; there is no need to spend time
replicating the data or repairing the breach before users can continue working.
Abstracting the hardware allows this to be done instantly.
Logging
In the cloud, logging is improved. Logging is often an afterthought, and
insufficient disk space is allocated for it. Cloud Storage changes all this -
no more 'guessing' how much storage you need for standard logs. With your
logs in the Cloud you can leverage Cloud Compute to index those logs in real-
time and get the benefit of instant search results. Compute instances can be
brought up and scaled as needed based on the logging load - meaning a
true real-time view.
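One simple way to make logs easy to index and search once they reach the cloud is to emit them as structured (JSON) records. The sketch below is only illustrative: the field names are assumptions, and the handler writes to the console, standing in for a file or stream that would be shipped to cloud storage.

# Emit structured JSON log lines suitable for later indexing and search.
import json
import logging
import time

class JsonFormatter(logging.Formatter):
    def format(self, record):
        return json.dumps({
            "ts": time.time(),
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
        })

handler = logging.StreamHandler()     # could be a file handler shipped to the cloud
handler.setFormatter(JsonFormatter())
log = logging.getLogger("app")
log.addHandler(handler)
log.setLevel(logging.INFO)

log.info("user login succeeded")
log.warning("disk usage above 80 percent")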
Most modern operating systems offer extended logging in the form of a
C2 audit trail. This is rarely enabled for fear of performance degradation and log
size. Now you can ‘opt-in’ easily - if you are willing to pay for the enhanced
logging, you can do so. Granular logging makes compliance and investigations
easier.
Secure builds
When you develop your own network, you have to buy third-party
security software to get the level of protection you want. With a cloud
solution, those tools can be bundled in and made available to you, and you can
build your system with whatever level of security you desire.
Easier to test impact of security changes: this is a big one. Spin up a
copy of your production environment, implement a security change and test the
impact at low cost, with minimal startup time. This is a big deal and removes a
major barrier to ‘doing’ security in production environments.
Security Testing
Reduced cost of testing security: a SaaS provider passes on only a
portion of its security testing costs, because the cost is shared among the cloud
users. The end result is that, because you are in a pool with other users (whom
you never see), you realize a lower cost for testing.
Even with Platform as a Service (PaaS), where your developers get to
write the code, the cloud provider's code-scanning tools check it for security weaknesses.
Government Policies:
2. Components of Meghraj
***
SUMMARY
Cloud computing is the process of delivering or providing computational
resources like software and/or hardware as a service over the cloud
(internet).
Cloud Components - There are three cloud components: clients, data centers
and distributed servers.
Essential characteristics: 1) On-Demand self service, 2) Broad network
access, 3) Location independent resource pooling, 4) Rapid elasticity, 5)
Measured service.
High-performance computing (HPC) is the use of super computers and
parallel processing techniques for solving complex computational
problems.
Server consolidation involves reducing the number of servers and server
locations within an organization.
Benefits of cloud computing: Scalability, Simplicity, Vendors, Security.
Limitations in the cloud are Sensitive Information and Application
Development.
Questions
Part-A
1. Define Cloud Computing.
2. Define Utility Computing.
3. What is grid computing?
4. What is autonomic computing?
5. Mention the benefits of Cloud Computing.
6. List out any two limitations of cloud computing.
7. What is meant by on-demand self service?
8. What is Data center?
9. What is horizontal scaling?
10. Mention two essential characteristics of cloud computing.
Part-B
1. What are the components of cloud computing?
2. Write the essential characteristics of cloud computing.
3. What are the types of clients?
4. State any 3 security benefits.
5. What is High Scalability Architecture?
Part-C
1. What is cloud computing? Discuss the origin of cloud computing.
2. Explain about components in detail.
3. Write short notes on high performance computing.
4. Write short notes on :
(a) Web Services.
(b) High scalability architecture.
5. Discuss the limitations of cloud computing.
6. List out the benefits of cloud computing and explain.
7. Explain the architectural influences of cloud computing.
8. Write about security concerns in cloud.
9. Explain about utility computing and enterprise grid computing.
10. Explain about regulatory issues and government policies in cloud
computing.
***
UNIT – II
CLOUD COMPUTING ARCHITECTURE AND SERVICES
Objectives
To learn the cloud delivery model
To understand the difference between SPI vs traditional IT model
To learn the services of the SPI framework
To learn the cloud deployment model and its types.
Introduction:
The Cloud computing architecture refers to the components and
subcomponents required for cloud computing. These components typically
consist of a front end platform (fat client, thin client, mobile device), back end
platforms (servers, storage), a cloud based delivery, and a network (Internet,
Intranet, Intercloud). Combined, these components make up cloud computing
architecture. The cloud services are categorized into SaaS, PaaS and IaaS.
The evolution of SPI compared with the traditional IT model, and how the cloud
is deployed based on the requirements of the user or organization, are discussed
in detail.
Cloud Architecture
Cloud architecture is the design of a software application that uses internet
access and on-demand services. Cloud architecture is used when resources need
to be retrieved on demand to perform a specific job; the resources are released
once the job is finished.
These services can be accessed from anywhere in the world.
Fig 2.1 depicts the various components of the cloud computing architecture.
The major components are defined and explained below.
Cloud consumer
Cloud provider
Cloud auditor
Cloud broker
Cloud carrier
SPI Framework
The acronym SPI stands for the three major services provided through the cloud.
They are as follows:
1. Software as a Service ( SaaS )
2. Platform as a Service ( PaaS )
3. Infrastructure as a Service ( IaaS )
Fig 2.3 gives an overall view of the various resources that are available to
the end user.
Fig 2.3 SPI Framework
SPI evolution
To understand how the SPI framework evolved, perhaps it’s helpful to
place it in context with the development of Internet Service Providers (ISPs).
The various versions and services of ISPs are:
Version 1.0 — As ISPs originally began to provide Internet services, dial-
up modem service for homes and organizations grew, making the Internet a
commercial commodity.
Version 5.0 — The ASP model eventually evolved into cloud computing,
which brought new delivery models, such as the SPI framework, with its SaaS,
PaaS, and IaaS service models, and various deployment models, such as private,
community, public, and hybrid cloud models.
[Figure: the move from the legacy model (hardware, data center and server oriented) to the cloud SPI model, with virtualisation of servers, applications and desktops bringing higher flexibility at lower cost]
SPI Framework
This enables companies to pay for only the resources they need and use,
and to avoid paying for resources they do not need, thus helping them avoid a
large capital expenditure for infrastructure. The cloud service provider that
delivers some or all of the SPI elements to the organization can also share
infrastructure between multiple clients.
Web Services
A web service is a service offered by an electronic device to another
electronic device, communicating with each other via the World Wide Web. In
a Web service, Web technology such as HTTP, originally designed for human-
to-machine communication, is utilized for machine-to-machine communication,
more specifically for transferring machine readable file formats such
as XML and JSON. The overall concept is depicted in Fig 2.6.
Web 2.0
Web 2.0 describes World Wide Web websites that emphasize user-
generated content, usability (ease of use, even by non-experts),
and interoperability (this means that a website can work well with other
products, systems and devices) for end users.
A Web 2.0 website may allow users to interact and collaborate with each
other in a social media dialogue as creators of user-generated content in a virtual
community, in contrast to the first generation of Web 1.0-era websites where
people were limited to the passive viewing of content. Examples of Web 2.0
include social networking sites and social media sites
(e.g., Facebook), blogs, wikis, folksonomies ("tagging" keywords on websites
and links), video sharing sites (e.g., YouTube), hosted services, Web
applications, collaborative consumption platforms, and mashup applications.
Web Operating System:
Web operating systems are interfaces to distributed computing systems,
particularly cloud or utility computing systems. With this approach, users can
work with their applications from multiple computers. In addition, organizations
can more easily control corporate data and reduce malware infections.
Google, for example, recently announced new Sheets and Slides APIs at Google
I/O. With these new APIs, you can seamlessly connect your favorite Salesforce
apps to Sheets and Slides for increased ease of use and real-time collaboration.
And now, with new integrations between Salesforce and the APIs, data
and reports will flow seamlessly between these solutions so you always have
access to Salesforce data directly within the Google apps you use every day.
Benefits of SaaS
1. Low cost : There is no software installation and no need to maintain the
resources, which helps to decrease resource costs.
2. Quick deployment : It can be set up and ready to work in a few minutes.
3. Easy to use : To work with these apps, no training is needed. Users can
easily understand without having any technical knowledge.
4. Increased collaboration : Web platform allows the solution to be used
across the industries world wide.
5. Scalable : This model provides unlimited scalability and very quick
processing time.
6. Geo specific hosting : It ensures that the data is kept in the specific
location.
7. On demand : The solution is self served and is made available to use
anytime we need.
8. Secure access : Data is stored with 256-bit AES ( Advanced Encryption
Standard ) encryption, and customer account information is encrypted before
being stored in the database.
9. Ongoing benefits : It is not just a one-time solution. The organization can
enjoy ongoing operational benefits.
Operational benefits of SaaS :
SaaS can improve the consumer organization's effectiveness based on the
following benefits:
1. Managing business driven IT project:
A SaaS model provides the necessary infrastructure and thus leads to
technology projects that address true business needs.
2. Increasing consumer demand :
The SaaS model reliably delivers near-perfect (99.99%) system
availability, so any number of users can access the system at any time from
anywhere.
3. Addressing growth :
This model provides scalability, easily supporting an increasing
number of consumers as they pursue their own objectives.
4. Serving new markets quickly and easily :
SaaS allows the organization to quickly and easily add programs so as to
adapt to changes in demand at a faster rate.
5. On demand : The solution is self serve and available for use as needed.
6. Scalable : It allows for the infinite scalability and quick processing time.
Right Scale
Right Scale is one of the PaaS service providers. Right Scale is a web
based cloud computing management solution for managing cloud infrastructure
from multiple providers. It enables firms to easily deploy and manage business
critical applications across public, private and hybrid clouds. This also enables
customers to manage hybrid cloud infrastructure by migrating workloads
between their private clouds and public clouds operated by Amazon Web
Services ( AWS ), Rack Space and Tata.
Salesforce.com
Salesforce.com brings trust and speed to building and deploying
applications on the cloud. The process is faster than with any other PaaS
model. It is the trusted leader in cloud computing and customer relationship
management, and it is a simple, scalable and, most importantly, reliable
service.
Rack Space
Rack Space is a world-leading specialist in the hosting and cloud computing
industry. Rack Space Hosting started its service in 1998. It provides three
different types of services: managed hosting, cloud hosting and hybrid
hosting.
Force.com
Force.com is a standalone platform which delivers PaaS services to
customers; it offers a new way to build and deploy apps that lets developers and
companies concentrate on their applications rather than on the software and
infrastructure. There is no need to buy software or servers. Force.com gives app
programmers or creators the quickest way to transform innovative ideas into
business applications. It makes business apps that are very simple and at the same
time sophisticated, and it allows consumers to run multiple applications within the
same instance.
Services of PaaS
PaaS includes various services for application design, application
development, testing and deployment.
The main services of PaaS are :
Team collaboration
Web service
Integration
Managing databases
Security
Scalability
Storage
Persistence
State management
Application version
Application instrumentation
Developer community facilitation
Benefits of PaaS
Some of the main benefits of PaaS are given below.
Each platform component is provided as a service.
It provides the services required to build and deploy services and web
applications over the Internet.
It offers services for deploying, testing and maintaining applications in the same IDE.
It follows a pay-per-use model.
PaaS reduces the total cost of ownership, since there is no need to buy all
the system software, platforms and tools needed to build the application. Users
can rent them for only as long as the resources are needed.
It has the elasticity and scalability to afford the same efficiency and experience
irrespective of load and usage.
PaaS helps to build applications rapidly. System features can be
changed and upgraded frequently.
Amazon EC2
Amazon EC2 is an acronym for Amazon Elastic Compute Cloud. It is a
central part of the cloud computing platform of Amazon.com, an American
multinational e-commerce company which is a major provider of cloud
computing services.
EC2 allows users to rent virtual computers on which to run their own
applications. EC2 provides scalable deployment of applications as a web
service through which a subscriber can create a virtual machine. A user can
create, launch and terminate server instances as needed, paying only for what is
used; that is why it is described as "elastic". EC2 provides users with control over
the geographical location of instances, which allows high levels of redundancy.
EC2 Functionality
EC2 presents a true virtual computing environment. It allows users
to use a web service interface to launch instances with a variety of operating
systems, load them with a customized application environment, manage
access permissions and run the image on any number of systems.
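As a concrete illustration of this web service interface, the following Python sketch uses the boto3 SDK to launch and then terminate a single instance. The AMI ID, instance type and region are placeholders, and AWS credentials are assumed to be already configured in the environment.

# Launch and terminate one EC2 instance through the boto3 SDK.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")   # assumed region

# Launch one small instance from a chosen machine image (hypothetical AMI ID).
resp = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",
    InstanceType="t2.micro",
    MinCount=1,
    MaxCount=1,
)
instance_id = resp["Instances"][0]["InstanceId"]
print("launched", instance_id)

# Terminate it when it is no longer needed; billing stops because EC2 is metered.
ec2.terminate_instances(InstanceIds=[instance_id])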
Features of EC2
Some of the features of EC2 are :
Amazon Elastic Block Store
EBS optimized instances
Multiple locations
Elastic load balancing
Amazon CloudWatch.
GoGRID
GoGRID is a cloud infrastructure service. It hosts Windows and Linux
virtual machines managed by a multi-server control panel and a RESTful
API. Representational State Transfer ( REST ) has emerged as an important web
API design model. GoGRID is a privately held service. It is used to provision and
scale virtual and physical servers, storage, networking, load balancing and
firewalls in real time across multiple data centers using GoGRID's API.
Its infrastructure is used when we need instant access to highly available
multi-server environments. It can be accessed and operated through standard
network protocols and IP addresses.
Microsoft Azure
Microsoft calls its cloud operating system the Windows Azure Platform.
You can think of Azure as a combination of virtualized infrastructure to which
the .NET Framework has been added as a set of .NET Services.
Going forward, Microsoft sees its future as providing the best Web
experience for any type of device, which means that it structures its
development environment so the application alters its behavior depending
upon the device. For a mobile device, that would mean adjusting the user
interface to accommodate the small screen, while for a PC the Web application
would take advantage of the PC hardware to accelerate the application and add
richer graphics and other features.
That means Microsoft is pushing cloud development in terms of
applications serving as both a service and an application. This duality—like light,
both a particle and a wave—manifests itself in the way Microsoft is currently
structuring its Windows Live Web products. Eventually, the company intends to
create a Microsoft app store to sell cloud applications to users.
Microsoft Live is only one part of the Microsoft cloud strategy. The
second part of the strategy is the extension of the .NET Framework and related
development tools to the cloud. To enable .NET developers to extend their
applications into the cloud, or to build .NET style applications that run
completely in the cloud, Microsoft has created a set of .NET services, which it
now refers to as the Windows Azure Platform. .NET Services itself had as its
origin the work Microsoft did to create its BizTalk products.
Azure and its related services were built to allow developers to extend
their applications into the cloud. Azure is a virtualized infrastructure to which a
set of additional enterprise services has been layered on top, including:
Recent developments
A recent development in Infrastructure as a Service ( IaaS ) is the Dynamic
Load Balancer from GoGRID. It is a cloud-based load balancing solution for
managing high-availability infrastructure. Using this, customers can deploy
and scale network services dynamically to support essential infrastructure. This
highly available solution helps businesses control spending and simplifies
maintenance. It provides the elasticity and on-demand control needed to
efficiently manage the cloud.
A recent development from IBM is advanced, enterprise-ready IaaS (
Infrastructure as a Service ). This delivers a range of security-rich enterprise
cloud services. Based on open standards, IBM IaaS features advanced cloud
management.
A recent development from Microsoft is Windows Azure. This enables us to
extend our data centers into the cloud while using our current Windows
resources. We can build applications using any language, tool or framework.
Benefits
Some of the advantages of Infrastructure as a Service ( IaaS ) are,
Rapid scaling
Completely managed cloud solutions
Security
Easy to use
Metered services
Flexibility
Rapid Scaling
It can be scaled both horizontally and vertically in a very short period of
time. Horizontal scaling provides additional separate environments, similar to a
private cloud, while vertical scaling provides additional hardware resources. IaaS services can be
easily scaled depending upon the user’s need.
Completely managed cloud solution
It does not need costly infrastructure investment. This model provides
more advantages for companies with limited budgets for computing
resources.
Security
It provides high level security for every environment.
Easy to use
The web portal and API are very easy to use. IaaS provides plug-and-play
integration with existing infrastructure and networks.
Metered services
Service usage is measured and charged based on the number of units
consumed ( used ). It follows the principle of paying for what you use, when you use it.
Flexibility
It can be accessed from anywhere, anytime and on any device.
Cloud Deployment Models
The National Institute of Standards and Technology ( NIST ) has defined four
different cloud computing deployment models. They are:
1. Private cloud
2. Community cloud
3. Public cloud
4. Hybrid cloud
Private cloud
Private cloud is a cloud infrastructure that is operated particularly for a
single organization. It can be managed internally or by a third party. It can be
hosted internally or externally.
It delivers services to a single organization. This model shares many
characteristics of traditional client-server architecture. Like any other cloud
model, services are delivered on demand.
In this model the resources can be managed inside the organization or by a third
party. It provides more security and privacy.
Hosting in public, community and hybrid clouds
Having seen the private cloud, we now move on to the other types of cloud hosting.
Community cloud
In community cloud hosting, the infrastructure is shared among the
number of organizations with similar interests and requirements. The cost of
the services is spread over a small group of users.
The number of subscribers for a community cloud is less than for a public cloud
but more than for a private cloud. It can be managed by a third party or by the
organizations themselves. A community cloud can be hosted internally or externally.
Hybrid cloud:
A hybrid cloud combines two or more of the above clouds (private, community
or public), which remain distinct entities but are bound together so that data and
applications can move between them.
Vendors: The service providers are called vendors. Some of the well
known vendors are Google, Amazon, Microsoft, IBM. These providers offer
reliable services to their customers.
Security: There are also some risks when using a cloud vendor. But the
reputed firms work hard to keep their consumers data safe and secure. They use
complex cryptographic algorithms to authenticate users. To make it even more
secure we can encrypt our information before storing it in cloud.
***
Summary
Cloud architecture is the design of software application that uses internet
access and on-demand service.
Cloud delivery models include IaaS, PaaS and SaaS.
Web operating systems are interfaces to distributed computing systems,
particularly cloud or utility computing systems.
An SLA serves as both the blueprint and warranty for cloud computing.
Cloud Deployment Model – the different types of clouds are Public
cloud, private cloud, Community cloud and Hybrid Cloud.
Advantages of Cloud Computing: Scalability, Simplicity, Vendors,
Security, Flexibility and Resiliency, Reduced Costs, Reduced Time to
Deployment.
Review Questions
Part-A (2 marks)
1. What is Cloud Architecture?
2. What is SPI?
3. What is IaaS?
4. Expand PaaS and SaaS.
5. What are the benefits of SaaS?
6. State any 2 service providers of SaaS.
7. State any 2 service providers of PaaS.
8. State any 2 service providers of IaaS.
9. What is Public Cloud?
10. What is Private Cloud?
11. What is SPI?
12. What is salesforce.com?
13. What is cloud delivery model?
14. What is other name for IaaS?
15. Define Hybrid cloud.
16. Give examples of PaaS.
Part-B (3 marks)
***
Unit – III
Virtualization
OBJECTIVES:
Define Virtualization
Explain Virtualization and Cloud Computing
State the need and limitations of Virtualization
State the types of Hardware Virtualization
Explain Full, partial and para Virtualization
Explain Desktop, Software, Memory, Storage, Data and Network Virtualization
State Microsoft Implementation
Explain Microsoft Hyper V
Explain VMWare features and infrastructure
Explain Virtual Box and Thin Client
INTRODUCTION:
Virtualization
Virtualization is the key component of cloud computing for providing computing and
storage services. Virtualization is the ability to run multiple operating systems on a single
physical system and share the underlying hardware resources. It is the process by which
one computer hosts the appearance of many computers.
Virtualization and cloud computing were both developed to maximize the use of
computing resources while streamlining processes and increasing efficiencies to reduce the
total cost. But virtualization and cloud computing are truly very different approaches.
Virtualization software allows one physical server to run several individual computing
environments. This technology is fundamental to cloud computing. Cloud providers have
large data centers full of servers to power the cloud offerings but they are not able to devote
a single server to each customer. Thus they virtually partition the data on the server,
enabling each client to work with the separate virtual instance of some software.
A virtualized server makes better use of the server’s available capacity than a non-
virtualized server. Each virtual machine can run its own operating system as well as any
business applications as needed. It can also be applied to storage hardware.
With cloud computing we can implement enterprise-grade applications. To obtain the
service, we can choose from a variety of cloud computing providers and cloud-based
services. It is well suited to big applications.
Thus both virtualization and cloud computing operate on a one-to-many model.
Virtualization can make one computer perform like many separate computers, and cloud
computing allows many different companies to access one application. Virtualization is
employed locally, while cloud computing is accessed as a service.
Costs
With virtualization, administration becomes a lot easier, faster and cost effective.
Virtualization lowers the existing cost.
Administration
Administering a virtualized environment has to be done efficiently: since all the resources
are centralized, security issues have to be handled more carefully, and users' access to
resources like data storage, hardware or software has to be allocated properly. Since more
users will utilize the resources, the sharing of the needed resources is complicated.
Fast Deployment
Deployment involves consolidating virtual servers and migrating physical servers.
Virtualization deployment involves several phases and careful planning. Both server and
client systems can support several operating systems simultaneously, and virtualization
providers offer a reliable and easily manageable platform to large companies.
A virtualized environment can be built with independent, isolated units which work together
without being tied to physical equipment. Virtualization provides a much faster and more
efficient way of deploying services through third-party software like VMware, Oracle etc.
Thus it provides the fastest service to the users.
Virtual servers and virtual desktops allow hosting multiple operating systems and
multiple applications locally and in remote locations. It lowers the expense by efficient use of
the hardware resources.
It increases server utilization rates and achieves cost savings efficiently by sharing the
physical resources virtually.
Limitations
Some of the limitations of virtualization are
1. If the CPU does not allow for hardware virtualization, we can run some operating
systems under software virtualization, but this is generally slower. Some operating systems
will not run under software virtualization at all and require a CPU with hardware
virtualization, so it would cost more if a CPU with hardware virtualization is not available.
2. If we want our own server and intend to resell virtual servers, then the cost is high. This
means purchasing 64-bit hardware with multiple CPUs and multiple hard drives.
3. Some limitations arise in analysis and planning; these problems can be divided
into three types:
a. Technical limitation
b. Marketing strategies
c. Political strategies
4. It carries a high risk from physical faults.
5. It is more complicated to set up and manage a virtual environment with highly critical
servers in a production environment. It is not as easy as managing physical servers.
6. It does not support all applications.
Hardware Virtualization
In hardware virtualization, the host machine is the actual machine on which the
virtualization takes place and the guest machine is the virtual machine. The words host and
guest are used to distinguish the software that runs on the physical machine from the
software that runs on the virtual machine. The software that creates a virtual machine on the
host hardware is called a hypervisor or virtual machine manager. The types of hardware
virtualization are:
1. Full virtualization
2. Partial virtualization
3. Para virtualization
Full virtualization
It is a virtualization technique used to provide a virtual machine environment. Full
virtualization requires that every feature of the hardware be reflected into each of several
virtual machines, including the full instruction set, I/O operations, interrupts,
memory access and all the other elements used by the software that runs on the base machine
and that will run in the virtual machine. In such a case, any software capable of executing on
that hardware can be run in the virtual machine.
Full virtualization is possible only with the right combination of hardware and
software.
The obvious test of virtualization is whether an operating system intended for stand-
alone use can successfully run in a virtual machine.
The effects of every operation performed within a given virtual machine must be
kept within that virtual machine. Virtual operations cannot be allowed to alter the state of any
other virtual machine, the control program or the hardware.
Virtualization features built into the latest generations of CPUs from Intel and AMD,
known as Intel VT and AMD-V respectively, provide the extensions necessary to
run unmodified guest virtual machines without the overhead inherent in full CPU
emulation; the hypervisor can operate while essentially leaving ring 0 available for
unmodified guest operating systems. Hypervisor-based virtualization solutions include Xen,
VMWare ESX Server and Microsoft's Hyper-V technology.
Partial virtualization
Partial virtualization is a virtualization technique used to implement a certain kind of
virtual environment: one that provides a partial simulation of the underlying hardware
environment, particularly address spaces. Under partial virtualization an entire operating
system cannot run in the virtual machine (which would be the sign of full virtualization), but
many applications can run. A key form of partial virtualization is address space
virtualization, in which each virtual machine has an independent address space. This
capability requires address relocation hardware, which is present in most implementations of
partial virtualization. Partial virtualization was a milestone on the way to full virtualization.
Para virtualization
Under para virtualization, the kernel of the guest operating system is modified to run
on the hypervisor, and the hypervisor performs privileged tasks on behalf of the guest kernel.
This avoids the need to build the more sophisticated binary translation support necessary
for full virtualization, which is very difficult.
The hypervisor also provides hypercall interfaces for other critical kernel operations
such as memory management, interrupt handling and time keeping.
Para virtualization is different from full virtualization, where the unmodified OS does not
know it is virtualized and sensitive OS calls are trapped using binary translation.
Desktop virtualization
It is a software technology that separates the desktop environment and associated
application software from the physical client device that is used to access it.
Software virtualization
It is the virtualization of applications or computer programs. One of the most widely
used software virtualization products is Software Virtualization Solution (SVS), which was
developed by Altiris.
Once users have finished using an application, they can switch it off. When an application is
switched off, any changes that the application made to the host OS are completely
reversed. This means that the registry entries and installation directories will have no trace of
the application being installed or executed at all.
The benefits of software virtualization include:
The ability to run applications without making permanent registry or library changes.
The ability to run multiple versions of the same application.
The ability to install applications that would otherwise conflict with each other.
The ability to test new applications in an isolated environment.
It is easy to implement.
Memory virtualization
Virtual memory is a feature of an operating system that enables a process to use a
memory (RAM) address space that is independent of other processes running in the same
system. It can use a space that is larger than the actual amount of RAM. Virtual memory
enables each process to act as if it has the whole memory space to itself, since the addresses
that it uses to reference memory are translated by the virtual memory mechanism into
different addresses in physical memory. This allows different processes to use the same
memory addresses. The purpose of virtual memory is to enlarge the address space.
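The address translation idea can be illustrated with a toy page table in Python. The page size and the mappings below are made-up values; real translation is performed by the hardware memory management unit, not by application code.

# Two processes use the same virtual address yet reach different physical memory.
PAGE_SIZE = 4096  # bytes per page (a typical value, assumed here)

def translate(virtual_addr: int, page_table: dict) -> int:
    page, offset = divmod(virtual_addr, PAGE_SIZE)
    frame = page_table[page]              # a missing entry would be a "page fault"
    return frame * PAGE_SIZE + offset

page_table_a = {0: 7, 1: 3}               # hypothetical mappings for process A
page_table_b = {0: 2, 1: 9}               # hypothetical mappings for process B

print(translate(0x10, page_table_a))      # process A -> somewhere in frame 7
print(translate(0x10, page_table_b))      # process B -> somewhere in frame 2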
Benefits:
Improves memory utilization via the sharing of scarce resources.
Increases efficiency and decreases run time for data and I/O bound application.
Allows applications on multiple servers to share data without replication; decreasing
total memory needs.
Lowers the latency and provides faster access than other solutions.
Storage Virtualization
Storage virtualization is the pooling of physical storage from multiple network storage
devices into what appears to be a single storage device that is managed from a central
console. Storage virtualization is commonly used in storage area networks. Storage Area
Network is a high speed sub network of shared storage devices and makes tasks such as
archiving, back-up and recovery easier and faster. Storage virtualization is usually
implemented via software applications.
Storage systems typically use special hardware and software along with disk drives
in order to provide very fast and reliable storage for computing and data processing.
Storage systems can provide either block-accessed storage or file-accessed storage.
In storage systems there are two primary types of virtualization: block virtualization
and file virtualization. Block virtualization is used to separate the logical storage from
physical storage, so that storage may be accessed without regard to the physical storage. This
separation allows the administrators of the storage system greater flexibility in how they
manage the storage of end users.
File virtualization eliminates the dependencies between the data accessed at the file
level and the location where the files are physically stored. This provides opportunities to
optimize storage use and server consolidation.
Benefits
Easier data migration or data transfer
Improved utilization
Central management
Data virtualization
Data virtualization describes the process of abstracting disparate systems like databases,
applications, repositories, websites, data services, etc., through a single data access layer,
which may be any of several data access mechanisms.
This abstraction enables data access clients to target a single data access layer (one
serialization, formal structure, etc.) rather than making each client tool handle many of any
of these. Data virtualization is often used in data integration, business intelligence,
service-oriented architecture, data services, etc.
The main capabilities of data virtualization include:
Abstraction
Virtualized Data Access
Transformation
Data federation
Flexible data delivery
Network virtualization
Network virtualization is the process of combining hardware and software network
resources and network functionality into a single, software based administrative entity which
is said to be virtual network. Network virtualization involves platform virtualization.
Microsoft Hyper – V
Hyper-V is code-named Viridian and was formerly known as Windows Server Virtualization.
It is a native hypervisor that enables platform virtualization. Hyper-V has been released in a
free standalone version and exists in two variants: as a role in Windows Server and as the
standalone products Microsoft Hyper-V Server 2012 and Microsoft Hyper-V Server 2008.
Like any virtualization platform, Hyper-V makes for a more efficient data center,
maximizing resources and reducing costs. Hyper-V provides end to end functionality for an
enterprise grade virtualization product. It provides the basic functionality to create a
virtualization layer over the physical layer of the host server machine and enables guest
operating systems to be installed and managed through an integrated management console.
Hyper-V isolates parts of a physical machine into child partitions and allocates them to
different guest operating systems, with Windows Server 2008 acting as the primary
host/parent.
Hyper-V also assigns appropriate hardware and software resources to each of the
guest operating systems it is hosting, because they don't have direct access to the computer's
hardware resources.
Benefits
It is cost effective
It improves scalability and performance
It has a better performance
VMWare features
VMWare is an American software company that provides cloud and virtualization
software and services. It was founded in 1998 and based in Palo Alto, California, USA. The
company was acquired by EMC Corporation in 2004 and now operates as a separate
software subsidiary.
VMWare’s desktop software runs on Microsoft Windows, Linux and Mac OS X and
VMWare ESX are embedded hypervisors that run directly on server hardware without
requiring and additional operating system.
VMWare infrastructure
VMWare infrastructure includes the following components:
Virtual Center Management Server
The central point for configuring, provisioning and managing virtualized IT infrastructure.
Virtual Infrastructure Client (VI Client)
An interface that allows users to connect remotely to the Virtual Center Management Server
or to individual ESX Server installations.
Virtual Infrastructure Web Access
A web interface for virtual machine management and remote console access.
VMWare VMotion
Enables the live migration of running virtual machines from one physical server to
another with zero downtime, continuous service availability and complete transaction
integrity.
VMWare High Availability (HA)
It provides easy to use, cost-effective high availability for applications running in virtual
machines. In the event of server failure, affected virtual machines are automatically
restarted on another server.
VMWare Consolidated Backup
It provides an easy-to-use, centralized facility for agent-free backup of virtual machines. It
simplifies backup administration and reduces the load on ESX Server installations.
VMWare Infrastructure SDK
It provides a standard interface for VMWare and third-party solutions to access VMWare
Infrastructure.
Virtual Box
VirtualBox was originally developed by Innotek GmbH and released in 2007 as an open
source software package. It was later purchased by Sun Microsystems.
A guest OS shares the RAM and CPU power of the host OS. If Linux is running and
VirtualBox is installed, then it is possible to install Windows inside it and run it.
VirtualBox became one of the most popular virtualization software applications. Operating
systems which are supported are Windows XP, Vista, Windows 7, Mac OS X, Linux, Solaris
and OpenSolaris.
Oracle Corporation now develops the software package under the title Oracle VM
VirtualBox.
Thin Client
A thin client is a computer or a computer program which depends heavily on some other
computer to fulfill its computational roles. A thin client is designed to be especially small so
that the bulk of the processing occurs on the server.
The term thin client is increasingly used for computers, such as network computers, which
are designed to serve as the client in a client/server architecture. A thin client is a network
computer without a hard disk drive, whereas a fat client includes a disk drive.
Uses
Thin clients are used where a lot of people need to use computers. This includes
public places like libraries, airports and schools. A thin client setup is also popular in places
where people need to be able to save and access information from a central location, such as
an office or a call center.
Summary:
Virtualization is the key component of cloud computing.
Virtualization and cloud computing were both developed to maximize the use of
computing resources.
Virtualization is needed for cost, administration, fast deployment, reduce
infrastructure cost.
Virtualization has some limitations also.
There are three types of hardware virtualization: full, partial and para virtualization.
Desktop virtualization is a software technology that separates the desktop
environment and associated application software.
Software virtualization is the virtualization of applications or computer programs.
Memory virtualization is Virtual memory and it is a feature of an operating system.
Storage virtualization is the pooling of physical storage from multiple network storage
devices into a single storage device.
Data virtualization describes the process of abstracting disparate systems.
Network virtualization is the process of combining hardware and software network
resources.
Microsoft Hyper-V is code named viridian and formerly known as windows server
virtualization.
VMWare is an American software company that provides cloud and virtualization
software and services.
VMWare infrastructure includes the many components: VMWare ESX Server,
VMWare virtual Machine File System (VMFS), VMWare virtual symmetric Multi-
processing (SMP), Virtual Center Management Server, Virtual Infrastructure Client
(VI Client), Virtual Infrastructure Web Access, VMWare VMotion, VMWare High
Availability (HA), VMWare Distributed Resource Scheduler (DRS), VMWare
Consolidated Backup, VMWare Infrastructures (SDK)
A virtual box is a software virtualization package that installs on an operating system
as an application.
Thin Client is a computer or a computer program which depends heavily on some
other computer to fulfill its computational roles.
---
Review Questions
Part – A
1. Define virtualization
2. Write any two needs for virtualization
3. List any two limitations of virtualization
4. List out the types of hardware virtualization
5. Define Desktop virtualization
6. Write any two benefits of software virtualization
7. Define memory virtualization
8. Define storage virtualization
9. What is Data virtualization ?
10. What is Microsoft Hyper-V ?
11. Define virtual box
12. What is Thin client ?
Part – B
1. Write about the limitations of virtualization
2. Write short notes on Partial virtualization
3. Write short notes on Para virtualization
4. Write short notes on Network virtualization
5. Write short notes on Virtual Box
6. Write short notes on Thin Client and its types
Part – C
1. Differentiate virtualization and cloud computing
2. Explain the types of hardware virtualization with its benefits
3. Explain software and memory virtualization with its benefits
4. List out the VMWare features
5. Explain VMWare infrastructure with a neat diagram
---
Unit – IV
Storage Management
OBJECTIVES:
INTRODUCTION:
Storage Network
Storage networking is the practice of linking together storage devices and connecting
them to other IT networks. Storage networks provide a centralized repository for digital data
that can be accessed by many users, and they use high-speed connections to provide fast
performance.
Architecture
Cloud storage is based on highly virtualized infrastructure and is like broader cloud
computing in terms of accessible interfaces, near-instant elasticity and scalability, multi-
tenancy, and metered resources. Cloud storage services can be utilized from an off-
premises service (Amazon S3) or deployed on-premises (ViON Capacity Services).
Cloud storage typically refers to a hosted object storage service, but the term has
broadened to include other types of data storage that are now available as a service, like
block storage.
Object storage services like Amazon S3 and Microsoft Azure Storage, object storage
software like Openstack Swift, object storage systems like EMC Atmos, EMC ECS and
Hitachi Content Platform, and distributed storage research projects like OceanStore and
VISION Cloud are all examples of storage that can be hosted and deployed with cloud
storage characteristics.
Cloud storage typically has the following characteristics:
It is made up of many distributed resources, but still acts as one, either in a federated or
a cooperative storage cloud architecture
Highly fault tolerant through redundancy and distribution of data
Highly durable through the creation of versioned copies
Typically eventually consistent with regard to data replicas
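As an illustration of a hosted object storage service such as the Amazon S3 service mentioned above, the following Python sketch stores and retrieves one object using the boto3 SDK. The bucket and key names are placeholders, and AWS credentials are assumed to be configured in the environment.

# Store and retrieve one object in S3-style object storage.
import boto3

s3 = boto3.client("s3")

# Upload ("put") an object: the value is just bytes identified by a key.
s3.put_object(Bucket="example-bucket", Key="reports/2024/summary.txt",
              Body=b"quarterly summary goes here")

# Download ("get") the same object later, from anywhere with access rights.
obj = s3.get_object(Bucket="example-bucket", Key="reports/2024/summary.txt")
print(obj["Body"].read().decode("utf-8"))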
Phases of Planning
The following are the phases required to migrate an entire business to the cloud.
1. Strategy phase
2. Planning phase
3. Deployment phase
1. Strategy phase
This phase analyzes the strategy problems that the customer might face. There are two
steps to perform this analysis:
First, examine the factors affecting the customer while applying the cloud
computing mode, and target the key problems they want to solve. The key factors are:
o IT management simplification
o Operation and maintenance cost reduction
o Business mode innovation
o Low cost outsourcing hosting
o High service quality outsourcing hosting
All of the above analysis helps in decision making for future development.
In the second step, the strategy is established based on the results of the analysis above: a
strategy document is prepared according to the conditions a customer might face when
applying the cloud computing mode.
2. Planning Phase
This phase analyzes the problems and risks of the cloud application to ensure that
cloud computing successfully meets the customer's business goals. It involves the following
planning steps:
In this step, we recognize the risks that might be caused by a cloud computing
application from a business perspective.
IT Architecture development
In this step, we identify the applications that support the business processes and the
technologies required to support enterprise applications and data systems.
In this step, we formulate all the plans that are required to transform the current
business to the cloud computing mode.
3. Deployment Phase
This phase builds on the results of the above two phases. It involves the following two
steps:
The first step is selecting a cloud provider on the basis of a Service Level Agreement
(SLA), which defines the level of service the provider will meet.
In the second step, maintenance and technical services are provided by the cloud
provider, who must ensure the quality of the services.
The best storage area network (SAN) design for a customer takes into consideration a
number of critical factors: uptime and availability, capacity and scalability, security, and
replication and disaster recovery. Each of these factors influences the design choices
described below.
Uptime and availability
Because several servers will rely on a SAN for all of their data, it's important to make
the system very reliable and eliminate any single points of failure. Most SAN hardware
vendors offer redundancy within each unit like dual power supplies, internal controllers
and emergency batteries.
In a typical storage area network design, each storage device connects to a switch
that then connects to the servers that need to access the data. To make sure this path
isn't a point of failure, the client should buy two switches for the SAN network. Each
storage unit should connect to both switches, as should each server. If either path fails,
software can failover to the other. Some programs will handle that failover automatically,
but cheaper software may require you to enable the failover manually.
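The failover idea can be sketched as trying one path and falling back to the other; the path
and class names below are invented, and real multipath software performs this automatically.

    # A minimal sketch of path failover between two SAN switches, assuming
    # each path exposes a read_block() call; names are illustrative only.
    class SanPath:
        def __init__(self, name, healthy=True):
            self.name = name
            self.healthy = healthy

        def read_block(self, lba):
            if not self.healthy:
                raise IOError(f"path {self.name} is down")
            return f"data@{lba} via {self.name}"

    def read_with_failover(paths, lba):
        """Try each redundant path in order; fail over when one is unavailable."""
        for path in paths:
            try:
                return path.read_block(lba)
            except IOError:
                continue                      # this switch/path failed, try the next
        raise RuntimeError("all SAN paths failed")

    # Two paths, one through each switch; the first has failed.
    paths = [SanPath("switch-A", healthy=False), SanPath("switch-B")]
    print(read_with_failover(paths, lba=42))   # served via switch-B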
Capacity and scalability
A good storage area network design should not only accommodate the client's current
storage needs, but should also be scalable so that the client can upgrade the SAN as
needed throughout the expected lifespan of the system.
Because a SAN's switch connects storage devices on one side and servers on the
other, its number of ports affects both storage capacity and speed. By allowing enough
ports to support multiple, simultaneous connections to each server, switches can multiply
the bandwidth available to servers. On the storage device side, there should be enough
ports for redundant connections to existing storage units, as well as for units that will be
added later.
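For illustration, assuming 8 Gbps Fiber Channel links, a server connected to the switch
through four ports has an aggregate bandwidth of roughly 4 x 8 Gbps = 32 Gbps, while a
single port would limit it to 8 Gbps; the same arithmetic guides how many ports to reserve
on the storage side for units added later.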
Security
With several servers able to share the same physical hardware, security plays an
important role in a storage area network design.
Most of this security work is done at the SAN's switch level. Zoning allows only
specific servers to access certain LUNs, much as a firewall allows communication on
specific ports for a given IP address. If an outward-facing application, such as a website,
needs to access the SAN, the switch should be configured so that only that server's IP
address can access it.
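Zoning can be pictured as a lookup from server identity to the LUNs that server may see.
The sketch below is illustrative only: the WWNs and LUN names are invented, and this is
not a real switch configuration.

    # Illustrative sketch of zoning/LUN masking: each server (identified here
    # by a hypothetical WWN) is allowed to see only specific LUNs.
    ZONES = {
        "wwn:10:00:00:00:c9:aa:bb:01": {"lun0", "lun1"},   # database server
        "wwn:10:00:00:00:c9:aa:bb:02": {"lun2"},           # web server
    }

    def can_access(server_wwn, lun):
        """Return True only if the zoning table grants this server the LUN."""
        return lun in ZONES.get(server_wwn, set())

    print(can_access("wwn:10:00:00:00:c9:aa:bb:02", "lun2"))  # True
    print(can_access("wwn:10:00:00:00:c9:aa:bb:02", "lun0"))  # False - masked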
Replication and disaster recovery
With so much data stored on a SAN, the client will want to build disaster recovery into
the system. SANs can be set up to automatically mirror data to another site, which could
be a failsafe SAN a few meters away or a disaster recovery (DR) site hundreds or
thousands of miles away.
If the client wants to build mirroring into the storage area network design, one of the first
considerations is whether to replicate synchronously or asynchronously. Synchronous
mirroring means that as data is written to the primary SAN, each change is sent to the
secondary and must be acknowledged before the next write can happen.
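The difference can be sketched in a few lines: with synchronous mirroring the write path
itself copies each block to the secondary before acknowledging, while with asynchronous
mirroring the write is acknowledged immediately and a background step ships queued
blocks later. The primary and secondary below are simple stand-ins, not a real replication
engine.

    # Sketch contrasting synchronous and asynchronous mirroring between a
    # primary and a secondary (remote) SAN, both modelled as plain lists.
    primary, secondary, pending = [], [], []

    def write_sync(block):
        """Each write is copied to the secondary and acknowledged before returning."""
        primary.append(block)
        secondary.append(block)          # remote copy happens inside the write path
        return "ack"

    def write_async(block):
        """The write returns immediately; replication happens later."""
        primary.append(block)
        pending.append(block)            # queued for the background replicator
        return "ack"

    def drain_replication_queue():
        """Background step that ships queued writes to the remote site."""
        while pending:
            secondary.append(pending.pop(0))

    write_sync("block-1")                # secondary already holds block-1 here
    write_async("block-2")               # secondary lags until the queue is drained
    drain_replication_queue()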
NAS
Network-attached storage (NAS) is a file-level computer data storage server connected to
a computer network, providing data access to a heterogeneous group of clients. NAS
devices are flexible and scale out: if we need additional storage, we can add to what we
have. Network-attached storage is like having a private cloud in the office: it is faster, less
expensive and provides many of the benefits of a public cloud on site, while giving
complete control. These characteristics make NAS devices well suited to small businesses.
FC SANs
A storage-area network (SAN) is a dedicated high-speed network (or sub-network)
that interconnects and presents shared pools of storage devices to multiple servers.
Fiber Channel (FC) is a high speed serial interface for connecting computers and
storage systems. A fiber channel storage area network (FC SAN) is a system that enables
multiple servers to access network storage devices. A storage area network enables high-
performance data transmission between multiple storage devices and servers.
iSCSI
Internet Small Computer Systems Interface (iSCSI) is an Internet Protocol (IP)-based storage
networking standard for linking data storage facilities. It provides block-level access to
storage devices by carrying SCSI commands over a TCP/IP network. iSCSI is used to
facilitate data transfers over intranets and to manage storage over long distances. It can be
used to transmit data over local area networks (LANs), wide area networks (WANs), or the
Internet and can enable location-independent data storage and retrieval.
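Once an iSCSI initiator has logged in to a target, the operating system presents the remote
storage as an ordinary block device. The sketch below reads one 512-byte block from such
a device; the device path is hypothetical and read permission on it is assumed.

    # Reading one block from a block device that an iSCSI initiator has
    # attached (device path is hypothetical; requires read permission).
    import os

    DEVICE = "/dev/sdb"        # hypothetical iSCSI-attached disk
    BLOCK_SIZE = 512

    def read_block(lba):
        """Return the 512-byte block at logical block address 'lba'."""
        fd = os.open(DEVICE, os.O_RDONLY)
        try:
            return os.pread(fd, BLOCK_SIZE, lba * BLOCK_SIZE)
        finally:
            os.close(fd)

    # print(read_block(0))     # example call, once the device exists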
FCIP
Fiber Channel over IP (FCIP) is a storage networking technology created by the Internet
Engineering Task Force (IETF). An FCIP entity functions to encapsulate Fiber Channel frames and
forward them over an IP network. FCIP entities are peers that communicate using TCP/IP.
FCIP technology overcomes the distance limitations of native Fiber Channel, enabling
geographically distributed storage area networks to be connected using existing IP
infrastructure, while keeping fabric services intact. The Fiber Channel Fabric and its devices
remain unaware of the presence of the IP Network.
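Conceptually, an FCIP entity just wraps a Fiber Channel frame in a payload that can be
carried over TCP. The sketch below shows that wrapping with a simplified 4-byte length
header; it is not the real FCIP encapsulation format.

    # Illustrative encapsulation of a Fiber Channel frame in a TCP payload.
    # The 4-byte length header used here is a simplification only.
    import struct

    def encapsulate(fc_frame: bytes) -> bytes:
        header = struct.pack("!I", len(fc_frame))   # length, network byte order
        return header + fc_frame                    # bytes handed to a TCP socket

    def decapsulate(payload: bytes) -> bytes:
        (length,) = struct.unpack("!I", payload[:4])
        return payload[4:4 + length]                # original FC frame, unchanged

    frame = b"\x22" * 36                            # stand-in for an FC frame
    assert decapsulate(encapsulate(frame)) == frame # the fabric sees the frame intact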
FCoE
Traditionally, Fiber Channel runs on a separate network from the Ethernet network.
With Fiber Channel over Ethernet, Converged Network Adapters are used in place of
standard Ethernet adapters and allow a single channel to carry both Ethernet and Fiber
Channel encapsulated packets across a standard IP network, extending reach over an
entire enterprise, regardless of geography, via Ethernet routers and bridges. For replication
between storage systems over a wide area network, this provides a mechanism to
interconnect islands of FC SANs or FCoE SANs over the IP infrastructure
(LANs/MANs/WANs) to form a single, unified FC SAN fabric.
Design for storage virtualization in cloud computing
One of the most popular storage virtualization techniques is the pooling of physical
storage from multiple network storage devices into what appears to be a single logical
storage device that can be managed from a central point of control (console). Storage
virtualization techniques are commonly used in a storage area network (SAN), but are also
applicable to large-scale NAS environments where there are multiple NAS filers.
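The pooling idea can be pictured as a mapping from one logical address space onto several
physical devices. The sketch below is purely illustrative; the device names and sizes are
made up.

    # Minimal sketch of storage pooling: one logical address space spread
    # across several physical devices (names and sizes are made up).
    DEVICES = [("array-1", 100), ("array-2", 200), ("array-3", 50)]  # sizes in GB

    def locate(logical_gb):
        """Map a logical offset (in GB) onto (device, offset-within-device)."""
        remaining = logical_gb
        for name, size in DEVICES:
            if remaining < size:
                return name, remaining
            remaining -= size
        raise ValueError("offset beyond pooled capacity")

    print(locate(120))   # ('array-2', 20): the pool hides which array holds the data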
Object storage can be implemented at multiple levels, including the device level
(object storage device), the system level, and the interface level. In each case, object
storage seeks to enable capabilities not addressed by other storage architectures, like
interfaces that can be directly programmable by the application, a namespace that can span
multiple instances of physical hardware, and data management functions like data
replication and data distribution at object-level granularity.
The majority of cloud storage available in the market uses the object storage
architecture. Two notable examples are Amazon Web Services S3, which debuted in 2006,
and Rackspace Cloud Files. Other major cloud storage services include IBM Bluemix,
Microsoft Azure, Google Cloud Storage, Alibaba Cloud OSS, Oracle Elastic Storage Service
and DreamHost (based on Ceph).
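As a hedged illustration of working with an object store through its API, the snippet below
stores and retrieves one object with the AWS SDK for Python (boto3). The bucket name,
key and metadata are placeholders, and valid AWS credentials plus an existing bucket are
assumed.

    # Storing and retrieving one object in Amazon S3 with boto3.
    # Bucket and key names are placeholders; credentials and the bucket
    # itself are assumed to exist already.
    import boto3

    s3 = boto3.client("s3")

    s3.put_object(Bucket="example-bucket",
                  Key="reports/2024/summary.txt",
                  Body=b"quarterly summary",
                  Metadata={"department": "finance"})   # customizable metadata

    obj = s3.get_object(Bucket="example-bucket", Key="reports/2024/summary.txt")
    print(obj["Body"].read())                           # whole object; no random access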
Advantages
Scalable capacity
Scalable performance
Durable
Low cost
Simplified management
Single Access Point
No volumes to manage/resize/etc.
Disadvantages
No random access to files
The Application Programming Interface (API), along with command line shells and
utility interfaces (POSIX utilities) do not work directly with object-storage
Integration may require modification of application and workflow logic
Typically, lower performance on a per-object basis than block storage
---
Summary:
Storage networking is the practice of linking together storage devices and
connecting them to other IT networks.
Cloud storage is a model of data storage in which the digital data is stored in
logical pools, the physical storage spans multiple servers (and often locations),
and the physical environment is typically owned and managed by a hosting
company.
Cloud storage is based on highly virtualized infrastructure and is like broader
cloud computing in terms of accessible interfaces, near-instant elasticity and
scalability, multi-tenancy, and metered resources.
In cloud computing, the business requirements are mandatory to consider before
deploying the applications to cloud.
The phases to migrate the entire business to cloud are as follows: Strategy
phase, Planning phase, Deployment phase.
The Strategy phase analyzes the strategy problems that the customer might face.
There are two steps to perform this analysis: Cloud Computing Value Proposition and
Cloud Computing Strategy Planning.
The Planning phase performs an analysis of problems and risks in the cloud application
to ensure that cloud computing successfully meets the customer's business goals.
The Deployment phase builds on the results of the above two phases.
The best storage area network design for a customer will take into consideration
a number of critical issues: Uptime and availability, Capacity and scalability,
Security, Replication and disaster recovery.
Network-attached storage (NAS) is a file-level computer data storage server
connected to a computer network providing data access to a heterogeneous
group of clients.
A fiber channel storage area network (FC SAN) is a system that enables multiple
servers to access network storage devices.
Hybrid cloud storage is an approach to managing storage that uses both local
and off-site resources.
Internet Small Computer Systems Interface (iSCSI) is an Internet Protocol (IP)
based storage networking standard for linking data storage facilities.
A Fiber Channel over IP (FCIP) entity functions to encapsulate Fiber Channel
frames and forward them over an IP network.
With Fiber Channel over Ethernet (FCoE), Converged Network Adapters are
used in place of Ethernet adapters and allow a single channel to pass both
Ethernet and Fiber Channel encapsulated packets across a standard IP network
extending distance over an entire enterprise, regardless of geography via
Ethernet routers and bridges.
One of the most popular storage virtualization techniques is the pooling of
physical storage from multiple network storage devices into what appears to be a
single logical storage device that can be managed from a central point of control
(console).
There are two primary types of virtualization that can occur: Block level storage
virtualization and File level storage virtualization.
Block level storage virtualization is a storage service that provides a flexible,
logical arrangement of storage capacity to applications and users while
abstracting its physical location.
File level storage can be defined as a centralized location, to store files and
folders.
Object storage (also known as object-based storage) is a computer data storage
architecture that manages data as objects, as opposed to other storage
architectures like file systems which manage data as a file hierarchy and block
storage which manages data as blocks within sectors and tracks.
Characteristics of Object Storage are: 1. Performs best for big content and high
storage throughput, 2. Data can be stored across multiple regions, 3. Scales to
petabytes and beyond, and 4. Customizable metadata, not limited to a fixed set of
tags.
Advantages are Scalable capacity, Scalable performance, Durable, Low cost,
simplified management, Single Access Point, No volumes to manage/resize/etc.
Disadvantages are, 1. No random access to files, 2. The Application
Programming Interface (API), along with command line shells and utility
interfaces (POSIX utilities) do not work directly with object-storage, 3. Integration
may require modification of application and workflow logic, 4. Typically, lower
performance on a per-object basis than block storage.
The Object Storage is suited for the following: 1. Unstructured data, 2. Archival
and storage of structured and semi-structured data.
The Object Storage is not suited for the following: 1. Relational Databases, 2.
Data requiring random access/updates within objects.
---
Review Questions
Part – A
1. What is Storage Network ?
9. Define : NAS
Part – B
1. Write on architecture of storage in cloud.
7. Write on FC SANs.
9. Write on iSCSI.
Part – C
1. Write on Architecture of storage, analysis and planning.
3. Describe on NAS.
---
UNIT - V : SECURITY IN THE CLOUD
Objectives
At the end of this unit, students will be able to:
Describe encryption.
Introduction:
Cloud computing and storage provides users with capabilities to store and process their
data in third-party data centers. Organizations use the cloud in a variety of different service
models (with acronyms such as SaaS, PaaS, and IaaS) and deployment models (private,
public, hybrid, and community). Security concerns associated with cloud computing fall into
two broad categories: security issues faced by cloud providers (organizations providing
software, platform, or infrastructure-as-a-service via the cloud) and security issues faced by
their customers (companies or organizations who host applications or store data on the
cloud).
When an organization elects to store data or host applications on the public cloud, it
loses its ability to have physical access to the servers hosting its information. As a result,
potentially sensitive data is at risk from insider attacks. According to a recent Cloud Security
Alliance Report, insider attacks are the sixth biggest threat in cloud computing.
Cloud security refers to a set of policies, technologies, and controls deployed to protect data,
applications, and the associated infrastructure of cloud computing. It is a sub-domain of
computer security, network security and, more broadly, information security.
Different types of cloud computing service models provide different levels of security
services. An Infrastructure as a Service (IaaS) provider offers the least built-in security, while
a Software as a Service (SaaS) provider offers the most. The concept of a security boundary
separates the client's responsibilities from the vendor's. Data stored in the cloud must be
transferred and stored in an encrypted format, and proxy and brokerage services are used
to separate clients from direct access to shared cloud storage.
Securing the Cloud:
Cloud computing has all the vulnerabilities associated with Internet applications, and
additional vulnerabilities arise from pooled, virtualized, and outsourced resources. The
following areas of cloud computing are considered uniquely troublesome:
Auditing
Data integrity
e-Discovery for legal compliance
Privacy
Recovery
Regulatory compliance
A cloud security architecture should also address the following service categories:
Identity and Access Management: It should provide controls for assured identities
and access management. Identity and access management includes the people,
processes and systems that are used to manage access to enterprise resources.
Data Loss Prevention: The monitoring, protection and verification of the security of
data at rest, in motion and in use, both in the cloud and on-premises.
Web Security: Real-time protection offered either on-premises through
software/appliance installation or via the cloud by proxying or redirecting web traffic
to the cloud provider.
E-mail Security: It should provide control over inbound and outbound e-mail and
protect the organization from malicious attachments. Digital signatures enabling
identification and non-repudiation are features of many cloud e-mail security
solutions.
Intrusion Management: The process of using pattern recognition to detect and
react to statistically unusual events.
Security Information and Event Management: These systems accept log and
event information for correlation and analysis.
Encryption: These systems typically consist of algorithms that are computationally
difficult or infeasible to break, together with the processes and procedures for
managing encryption and decryption, hashing, digital signatures, certificate
generation and renewal, and key exchange (see the sketch after this list).
Network Security: It consists of security services that allocate access, distribute,
monitor and protect the underlying resource services.
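As a small illustration of the hashing and keyed message-authentication primitives
mentioned in the Encryption item above, the following sketch uses only Python's standard
library; the message and key are throwaway demonstration values, not part of any
provider's mechanism.

    # Hashing and keyed message authentication with the standard library.
    # The secret key here is a throwaway demonstration value.
    import hashlib, hmac

    message = b"audit log entry 42"
    digest = hashlib.sha256(message).hexdigest()               # integrity fingerprint

    key = b"demo-secret-key"
    tag = hmac.new(key, message, hashlib.sha256).hexdigest()   # keyed MAC

    # The verifier recomputes the MAC and compares in constant time.
    assert hmac.compare_digest(tag, hmac.new(key, message, hashlib.sha256).hexdigest())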
IaaS supplies the infrastructure; PaaS adds application development frameworks,
transactions and control structures; and SaaS is an operating environment with applications,
management and the user interface. Each type of cloud service delivery model creates a
security boundary, at which the service provider's responsibilities end and the customer's
responsibilities begin. Any security mechanism below the security boundary must be built
into the system, and any security mechanism above it must be maintained by the customer.
As you move up the stack, the type and level of security provided becomes part of the
Service Level Agreement.
In the SaaS model, the vendor provides security, compliance, governance, and liability
levels for the entire stack. In the PaaS model, the security boundary may include the
software framework and middleware layer. In the IaaS model, which has the least built-in
security, software of any kind is the customer's responsibility. A private cloud may be
internal or external to the organization, while a public cloud is most often external only.
Fig 5.2: CSA Cloud Reference Model
Securing Data
Securing data sent to, received from, and stored in the cloud is the single largest security
concern that most organizations have with cloud computing. As with any WAN traffic, the
data can be intercepted and modified, so traffic to a cloud service provider and data stored
off-premises should be encrypted. This applies to general data as well as to any account
IDs and passwords. Whatever service model is chosen, it should provide mechanisms in
the following four areas:
Access control
Auditing
Authentication
Authorization
All the data an organization stores in the cloud resides on the cloud service provider's
systems, which are also used to transfer data being sent and received; unlike a traditional
data center, the client has no physical system under its own control that serves this
purpose. The way to protect cloud storage is to isolate data from direct client access. Two
services are created: a broker with full access to storage but no access to the client, and a
proxy with no access to storage but access to both the client and the broker. These two
important services sit in the direct data path between the client and the data stored in the
cloud.
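A minimal sketch of this separation, with invented class and credential names: the
client-facing proxy holds no storage credential, while the broker holds the trusted key but is
never exposed to clients.

    # Illustrative proxy/broker separation: only the broker holds the storage
    # credential; the client-facing proxy never touches storage directly.
    CLOUD_STORAGE = {"report.txt": "encrypted-bytes..."}   # stand-in for cloud storage

    class Broker:
        def __init__(self, storage_key):
            self._storage_key = storage_key       # trusted key, never leaves the broker

        def fetch(self, name):
            assert self._storage_key == "trusted-key"      # authorises storage access
            return CLOUD_STORAGE[name]

    class Proxy:
        def __init__(self, broker):
            self._broker = broker                 # no storage key of its own

        def handle_client_request(self, name):
            return self._broker.fetch(name)       # all requests go through the broker

    proxy = Proxy(Broker("trusted-key"))
    print(proxy.handle_client_request("report.txt"))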
Under this system, when a client makes a request for data, the request goes to the proxy,
the proxy passes it to the broker, and only the broker retrieves the data from cloud storage;
the data is then returned along the same path. Even if the proxy service is compromised,
that service does not have access to the trusted key that is necessary to access the cloud
storage. The multi-key solution does not eliminate all internal service endpoints, but it does
eliminate the exposure of a proxy service running at a reduced trust level. The creation of
storage zones with associated encryption keys can further protect cloud storage from
unauthorized access.
Fig 5.4: Storage zones with associated encryption keys
A cloud broker provides three categories of services: aggregation, arbitrage and
intermediation.
Aggregation: The cloud broker combines and integrates multiple services into one or more
new services.
Arbitrage: This is similar to service aggregation, except that the services being aggregated
are not fixed.
Intermediation: The cloud broker enhances a given service by improving some specific
capability and providing value-added services to cloud consumers. The improvement can
be managing access to cloud services, identity management, performance reporting,
enhanced security, etc.
Benefits of using a cloud broker for a business or technical purpose include the following:
Cloud interoperability - Integration between several cloud offerings.
Cloud portability - Move application between different cloud vendors.
Increase business continuity by reducing dependency on a single cloud provider.
Cost savings.
Because data stored in the cloud usually sits alongside data from multiple tenants, each
vendor has its own method for segregating one customer's data from another's. It is
important to understand how the specific service provider maintains data segregation and
who is given privileged access to storage. Most cloud service providers store data in an
encrypted form as part of their security mechanism, so that it cannot be accessed by
unauthorized users.
It is also important to know what impact a disaster or interruption would have on the stored
data. Since data is stored across multiple sites, it may not always be possible to recover it
in a timely manner.
Encryption
Cloud encryption is the transformation of a cloud service customer's data into ciphertext.
It is commonly used to prevent unauthorized access to private information and to protect
sensitive data stored in the cloud. Cloud customers must take time to learn about the
provider's policies and procedures for encryption and encryption key management. The
cloud encryption capabilities of the service provider need to match the level of sensitivity of
the data being hosted.
Strong encryption technology is a core technology for protecting data in transit to and from
the cloud as well as data stored in the cloud. The goal of encrypted cloud storage is to
create a virtual private storage system that maintains confidentiality and data integrity.
Encryption should be applied separately to stored data (data at rest) and data in transit.
Capabilities depend on the particular cloud provider: Microsoft, for example, allows up to
five security accounts per client, and these different accounts can be used to create
different zones, while on Amazon Web Services we can create multiple keys and rotate
those keys during different sessions.
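Key rotation of this kind can be pictured with a short sketch. The example below uses the
open-source Python 'cryptography' package (assumed to be installed) rather than any
provider's own tooling: a token written under an old key is re-encrypted under a newer key
while the old key remains accepted for reads.

    # Key rotation sketch with the 'cryptography' package: re-encrypting a
    # token under a newer key while still accepting the old one.
    from cryptography.fernet import Fernet, MultiFernet

    old_key = Fernet(Fernet.generate_key())
    new_key = Fernet(Fernet.generate_key())

    token = old_key.encrypt(b"customer record")     # data written under the old key

    f = MultiFernet([new_key, old_key])             # newest key listed first
    rotated = f.rotate(token)                       # now encrypted under new_key

    assert f.decrypt(rotated) == b"customer record"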
Although encryption protects data from unauthorized access, it does nothing to prevent data
loss. One way to lose encrypted data is to lose the keys that provide access to it. Therefore,
key management needs to be approached seriously. Schemes used to protect keys include
the creation of secure key stores with restricted, role-based access, automated key-store
backup, and recovery techniques.
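As a toy illustration of a key store with restricted, role-based access, the sketch below
invents two roles and two zone keys; it is not a production key-management design.

    # Toy key store with role-based access control (roles and keys are invented).
    KEY_STORE = {"storage-zone-1": b"key-material-1",
                 "storage-zone-2": b"key-material-2"}
    ROLE_GRANTS = {"backup-operator": {"storage-zone-1"},
                   "storage-admin": {"storage-zone-1", "storage-zone-2"}}

    def get_key(role, key_id):
        """Release key material only to roles explicitly granted that key."""
        if key_id not in ROLE_GRANTS.get(role, set()):
            raise PermissionError(f"role '{role}' may not read '{key_id}'")
        return KEY_STORE[key_id]

    print(get_key("storage-admin", "storage-zone-2"))   # allowed
    # get_key("backup-operator", "storage-zone-2")      # would raise PermissionError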
Cloud Computing Security Challenges:
The cloud security challenges can be classified into the following categories:
1. User authentication: Only authorized persons are allowed to access the data
that rests in the cloud. The integrity of user authentication must be ensured, so
that only authenticated users can access the data.
2. Data protection: Protecting the data must be considered in two aspects: data
at rest and data in transit.
3. Contingency planning: Beyond securing the data, this covers the measures the
Cloud Service Provider (CSP) implements to assure the integrity and availability
of the data.
4. Interoperability: Applied to cloud computing, this is, at its simplest, the
requirement for the components of a processing system to work together to
achieve their intended result.
A policy is one of those terms that can mean several things. For example, there are security
policies on firewalls, which refer to the access control and routing list information. Standards,
procedures, and guidelines are also referred to as policies in the larger sense of a global
information security policy.
A policy, for example, can literally be a lifesaver during a disaster, or it might be a
requirement of a governmental or regulatory function. A policy can also provide protection
from liability due to an employee's actions, or it can control access to trade secrets.
Policy Types:
Policies are the first and highest level of documentation, from which the lower-level elements
of standards, procedures, and guidelines flow. These higher-level policies, which reflect the
organization's more general policies and statements, should be created first for strategic
reasons; the more tactical elements can then follow. Management should ensure the high
visibility of a formal security policy, because all employees at all levels will in some way be
affected, major organizational resources will be addressed, and many new terms,
procedures, and activities will be introduced. There are four types of policy.
Senior Management Statement of Policy
The first policy of any policy creation process is the senior management statement of
policy. This is a high-level policy that acknowledges the importance of computing resources
to the business model, states support for information security throughout the enterprise,
and commits to authorizing and managing the definition of the lower-level standards,
procedures, and guidelines.
Regulatory Policies
Regulatory policies are security policies that an organization must implement due to
compliance, regulation, or other legal requirements. Such as companies might be financial
institutions, public utilities, or some other type of organization that operates in the public
interest.
Advisory Policies
Advisory policies are security policies that are not mandated but strongly suggested,
perhaps with serious consequences defined for failure to follow them (such as termination, a
job action warning, and so forth).
Informative Policies
Informative policies are policies that exist simply to inform the reader. There are no implied
or specified requirements, and the audience for this information could be certain internal
(within the organization) or external parties.
Virtualization Security
Hypervisors are broadly classified as Type 1 (running directly on the hardware) or Type 2
(hosted on top of a conventional operating system), although these classifications are
somewhat ambiguous in the IT community at large. The most important thing to remember
from a security perspective is that there is a more significant impact when a host OS with
user applications and interfaces is running outside of a VM at a level lower than the other
VMs (i.e., a Type 2 architecture). Because of its architecture, the Type 2 environment
increases the potential risk of attacks against the host OS. For example, a laptop running
VMware with a Linux VM on a Windows XP system inherits the attack surface of both OSs,
plus the virtualization code (VMM).
Three roles are typically assumed by administrators: the Virtualization Server Administrator,
the Virtual Machine Administrator, and the Guest Administrator. These roles are configured
in VMS and are defined to provide role responsibilities.
1. Virtual Server Administrator — This role is responsible for installing and
configuring the ESX Server hardware, storage, physical and virtual networks,
service console, and management applications.
2. Virtual Machine Administrator — This role is responsible for creating and
configuring virtual machines, virtual networks, virtual machine resources, and
security policies. The Virtual Machine Administrator creates, maintains, and
provisions virtual machines.
3. Guest Administrator — This role is responsible for managing a guest virtual
machine. Tasks typically performed by Guest Administrators include connecting
virtual devices, adding system updates, and managing applications that may
reside on the operating system.
Virtual Threats:
Some threats to virtualized systems are general in nature, as they are inherent
threats to all computerized systems (such as denial-of-service, or DoS, attacks). Other
threats and vulnerabilities, however, are unique to virtual machines. Many VM vulnerabilities
stem from the fact that vulnerability in one VM system can be exploited to attack other VM
systems or the host systems, as multiple virtual machines share the same physical
hardware.
SUMMARY
Cloud security refers to a set of policies, technologies, and controls deployed to
protect data, applications, and the associated infrastructure of cloud computing.
Cloud Security Alliance (CSA) is a nonprofit organization that promotes research into
best practices for securing cloud computing and the ability of cloud technologies to
secure other forms of computing.
Different types of models and services are used in cloud security, such as IaaS,
PaaS and SaaS. In the cloud reference model, IaaS supplies the infrastructure,
PaaS adds application development frameworks, transactions and control
structures, and SaaS is an operating environment with applications, management
and the user interface.
Cloud brokers provide three categories of services: Aggregation, Arbitrage and
Intermediation.
Aggregation means cloud broker combines and integrates multiple services into one or
more new services.
Arbitrage means a broker has the flexibility to choose services from multiple Providers,
depending upon the characteristics of the data or the context of the service.
Intermediation means cloud broker enhances a given service by improving some specific
capability and providing value-added services to cloud consumers.
Policies are four types. They are Senior Management Statement of Policy,
Regulatory Policies, Advisory Policies, and Informative Policies.
Virtual machine (VM) is an operating system (OS) or application environment that is
installed on software, which imitates dedicated hardware.
Virtualization security is the collective measures, procedures and processes that
ensure the protection of a virtualization infrastructure / environment.
A virtual security appliance is a computer appliance that runs inside virtual environments.
***
Review Questions
Part-A
Part – B
Part –C
***