

Unit 1

Chapter 1
Unit Structure
1.0 Objective
1.1 Introduction
1.2 Cloud computing at a glance
1.2.1 The vision of cloud computing
1.2.2 Defining a cloud
1.2.3 A closer look
1.2.4 The cloud computing reference model
1.2.5 Characteristics and benefits
1.2.6 Challenges ahead
1.3 Historical developments
1.3.1 Distributed systems
1.3.2 Virtualization
1.3.3 Service-oriented computing
1.3.4 Utility-oriented computing
1.4 Building cloud computing environments
1.4.1 Application development
1.4.2 Infrastructure and system development
1.4.3 Computing platforms and technologies
1.4.3.1 Amazon web services (AWS)
1.4.3.2 Google AppEngine
1.4.3.3 Microsoft Azure
1.4.3.4 Hadoop
1.4.3.5 Force.com and Salesforce.com
1.4.3.6 Manjrasoft Aneka
1.5 Summary
1.6 Review questions
1.7 Reference for further reading

Cloud Computing: Unedited Version pg. 1


1.0 Objective

This chapter will help you understand the following concepts:
• What is cloud computing?
• What are the characteristics and benefits of cloud computing?
• The challenges of cloud computing.
• The historical development of technologies leading to the growth of cloud computing.
• Types of cloud computing deployment models.
• The different types of services in cloud computing.
• Application development and infrastructure and system development technologies for cloud computing.
• An overview of the major cloud service providers.

1.1 Introduction

Historically, computing power was a scarce, costly resource. Today, with the emergence of cloud
computing, it is plentiful and inexpensive, causing a profound paradigm shift: a transition
from scarcity computing to abundance computing. This computing revolution accelerates the
commoditization of products, services and business models and disrupts the current information
and communications technology (ICT) industry. Computing is supplied in the same way as
water, electricity, gas, telephony and other utilities. Cloud computing offers on-demand
computing, storage, software and other IT services with usage-based metered payment. Cloud
computing helps re-invent and transform technological partnerships to improve marketing,
simplify operations, increase security, and improve stakeholder and consumer experience
while reducing costs. With cloud computing, you do not have to over-provision resources to
handle potential peak levels of business activity; instead, you provision the resources you actually
require and scale them up or down instantly as business needs evolve. This chapter offers a
brief summary of the cloud computing trend by describing its vision, addressing its key features,
and analyzing the technical advances that made it possible. The chapter also introduces some key
cloud computing technologies and some insights into cloud computing environments.

1.2 Cloud computing at a glance

The notion of computing in the "cloud" goes back to the beginnings of utility computing, a
term suggested publicly in 1961 by computer scientist John McCarthy:

“If computers of the kind I have advocated become the computers of the future, then
computing may someday be organized as a public utility just as the telephone system is a
public utility… The computer utility could become the basis of a new and important industry.”

The chief scientist of the Advanced Research Projects Agency Network (ARPANET),
Leonard Kleinrock, said in 1969:

“as of now, computer networks are still in their infancy, but as they grow up and become
sophisticated, we will probably see the spread of ‘computer utilities’ which, like present
electric and telephone utilities, will service individual homes and offices across the country.”

This vision of the computing utility took shape with the cloud computing industry in the 21st
century. Computing services are now easily available on demand, just as other utility services
such as water, electricity, telephone and gas are available in today's society. Likewise, users
(consumers) pay service providers only when they access computing resources. Instead of
maintaining their own computing systems or data centers, customers can lease access to
applications and storage from cloud service providers. The advantage of using cloud computing
services is that organizations avoid the upfront cost and difficulty of running and managing their
own IT infrastructure, and pay only for what they use. Cloud providers, in turn, benefit from
large economies of scale by offering the same services to a wide variety of customers.
Consumers can access services according to their requirements without knowing where those
services are hosted. This model is called utility computing, because users can access
infrastructure and applications as services, "in the cloud", from anywhere in the world. Hence
cloud computing can be defined as a new dynamic provisioning model for computing services
that improves the use of physical resources and data centers, using virtualization and
convergence to support multiple different systems operating on the same server platforms
simultaneously. The output achieved with different placement schemes of virtual machines
can differ considerably.
We can track the emergence of cloud computing through advances in several technologies:
hardware (virtualization, multi-core chips); Internet technologies (Web services, service-
oriented architectures, Web 2.0); distributed computing (clusters, grids); and systems
management (autonomic computing, data-center automation). Figure 1.1 shows the areas of
technology that evolved and converged, leading to the advent of cloud computing. Some of
these technologies were considered speculative at an early stage of development; however, they
later received considerable attention from academia and were adopted by major industry
players. A process of specification and standardization followed, which resulted in maturity
and wide adoption. The rise of cloud computing is closely associated with the maturity of
these technologies.

FIGURE 1.1. Convergence of various advances leading to the advent of cloud computing

1.2.1 The vision of cloud computing

Cloud computing provides virtual hardware, runtime environments and resources to users on a
pay-per-use basis. These items can be used for as long as the user needs them, with no upfront
commitment required. The whole computing stack is turned into a collection of utilities that can
be provisioned and composed together in hours rather than days, and deployed without
maintenance costs. The long-term vision of cloud computing is that IT services will be traded as
utilities on an open market, without technological or legal barriers.
We can hope that in the near future it will be possible to find a solution that clearly satisfies our
needs by entering our request on a global digital market for cloud computing services. This market
will make it possible to automate the process of discovering services and integrating them with
existing software systems. A digital cloud trading platform for services will also enable service
providers to increase their revenue. A cloud service may even become a customer of a
competitor's service in order to meet its own commitments to consumers.
Company and personal data will be accessible in structured formats everywhere, helping us to
access and communicate easily on an even larger scale. Cloud computing's security and stability
will continue to improve, making it even safer through a wide variety of techniques. Eventually
we will no longer consider the "cloud" itself to be the relevant technology, concentrating instead
on the services and applications it enables. The combination of wearables and bring your own
device (BYOD) with cloud technology and the Internet of Things (IoT) will become a common
necessity in personal and working life, such that cloud technology is overlooked as a mere enabler.



Figure 1.2. Cloud computing vision.
(Reference from “Mastering Cloud Computing Foundations and Applications Programming”
by Rajkumar Buyya)

1.2.2 Defining a cloud


"Cloud computing" is a fairly recent term in the IT industry, which came into being after many
decades of innovation in virtualization, utility computing, distributed computing, networking and
software services. A cloud is an IT environment designed to provide measured and scalable
resources remotely. It has evolved as a modern model for information exchange and Internet
services, providing more secure, flexible and scalable services for consumers. It is used as a
service-oriented architecture that reduces information overhead for the end user.
Figure 1.3 illustrates the variety of terms used in current cloud computing definitions.

FIGURE 1.3 Cloud computing technologies, concepts, and ideas.



(Reference from “Mastering Cloud Computing Foundations and Applications Programming”
by Rajkumar Buyya)
The Internet plays a significant role in cloud computing as the transport medium through which
cloud services are delivered and made accessible to consumers. According to the definition
given by Armbrust:
Cloud computing refers to both the applications delivered as services over the Internet and the
hardware and system software in the datacenters that provide those services.
This definition describes cloud computing as touching the entire stack, from the underlying
hardware to high-level software delivered as a service. It introduces the concept of everything
as a service (XaaS), where different parts of a system, such as IT infrastructure, application
development platforms, storage and databases, can be delivered as services to cloud consumers,
and consumers pay only for the services they want. This new paradigm affects not only how
software is developed, but also how users deploy applications and make them accessible, how
IT infrastructure is designed, and how companies allocate the costs of their IT needs. The
approach encompasses cloud computing from a global point of view: a single user can upload
documents to the cloud, while on the other side a company owner may deploy an entire
infrastructure in the public cloud. According to the definition proposed by the U.S. National
Institute of Standards and Technology (NIST):
Cloud computing is a model for enabling ubiquitous, convenient, on-demand network access to a
shared pool of configurable computing resources (e.g., networks, servers, storage, applications,
and services) that can be rapidly provisioned and released with minimal management effort or
service provider interaction.
Another view of cloud computing is "utility computing", in which cloud computing focuses on
delivering services under a pricing model called the "pay-per-use" strategy. Cloud computing
makes all resources available online: you can lease storage or virtual hardware, or use resources
for application development, and users pay according to their usage, with no or minimal upfront
cost. All of these operations can be performed, and the bill paid, by simply entering credit card
details and accessing the services through a web browser. George Reese has defined three
criteria for deciding whether a particular service is a cloud service:
• The service is accessible via a web browser (nonproprietary) or a web services API.
• Zero capital expenditure is necessary to get started.
• You pay only for what you use as you use it.
Many cloud service providers offer some services to users free of charge, but enterprise-class
services are provided under specific pricing schemes: users subscribe with the provider under a
service level agreement (SLA) that defines quality parameters agreed between the provider and
the user, and the provider must deliver the services in accordance with that SLA.
Rajkumar Buyya defined cloud computing based on the nature of utility computing:
A cloud is a type of parallel and distributed system consisting of a collection of interconnected and
virtualized computers that are dynamically provisioned and presented as one or more unified
computing resources based on service-level agreements established through negotiation between
the service provider and consumers.
1.2.3 A closer look
Cloud computing is useful to governments, enterprises, public and private institutions, and
research organizations, enabling more effective and demand-driven computing service systems.
A number of specific examples demonstrate emerging applications of cloud computing in both
established companies and startups. These cases illustrate the value proposition of viable cloud
computing solutions and the benefits businesses have gained from these services.
New York Times: One of the most widely known examples of cloud computing adoption comes
from the New York Times. The Times had collected a large number of high-resolution scanned
images of historical newspapers spanning 1851 to 1922, and wanted to process this set of images
into separate articles in PDF format. Using 100 EC2 instances, they completed the processing
within 24 hours at a total cost of $890 ($240 for EC2 computation time and $650 for S3 data
transfer and storage, covering 4.0 TB of source images and 1.5 TB of output PDFs). Derek
Gottfrid pointed out: "Actually, it worked so well that we ran it twice, because after the
completion we found an error in the PDF."
The New York Times was able to use 100 servers for 24 hours at the low standard cost of ten
cents an hour per server. Had the Times bought even a single server for this task, the cost would
likely have exceeded the $890 for the hardware alone, before considering the cost of
administration, power and cooling, and the processing would have taken more than three months
on one server. Had the Times bought four servers, as Derek Gottfrid had considered, the job
would still have taken almost a month of computation time. The quick turnaround (fast enough
to run the job twice) and vastly lower cost strikingly illustrate the value of cloud services.
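The economics behind this comparison are simple pay-per-use arithmetic. The sketch below reproduces the $890 total from the figures quoted above; the $0.10 per instance-hour rate and the $650 S3 charge are the historical figures from this account, not current AWS pricing:

```python
# Pay-per-use cost check for the New York Times archive-conversion job.
# All figures come from the account above; amounts are kept in integer
# cents to avoid floating-point rounding surprises.
RATE_CENTS_PER_INSTANCE_HOUR = 10   # $0.10/hour per EC2 instance (historical)
S3_COST_CENTS = 65_000              # $650 for S3 data transfer and storage

instances = 100
hours = 24

compute_cents = instances * hours * RATE_CENTS_PER_INSTANCE_HOUR
total_cents = compute_cents + S3_COST_CENTS

print(f"compute: ${compute_cents / 100:.2f}")  # compute: $240.00
print(f"total:   ${total_cents / 100:.2f}")    # total:   $890.00
```

The same arithmetic explains the comparison that follows: a single purchased server would exceed $890 in hardware alone, before power, cooling and administration, and would run for months rather than a day.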
Washington Post: In a related but more recent event, the Washington Post was able to transform
17,481 pages of scanned document images into a searchable database in just a day using Amazon
EC2. On March 19th at 10 am, Hillary Clinton's official White House schedule from 1993-2001
was released to the public as a large collection of scanned images (in PDF format, but non-
searchable). Washington Post programmer Peter Harkins used 200 Amazon EC2 instances to
run OCR (Optical Character Recognition) on the scanned files and produce searchable text: "I
used 1,407 hours of virtual machine time with a total cost of $144.62. We find it a positive proof
of concept."
DISA: Federal Computer Week reported that the Defense Information Systems Agency (DISA)
compared the cost of using Amazon EC2 with internally maintained servers: "In a recent test,
the Defense Information Systems Agency compared the cost of developing a simple application
called the Tech Early Bird on $30,000 worth of in-house servers and software with the cost of
developing the same application using the Amazon Elastic Compute Cloud from Amazon Web
Services. Amazon charged 10 cents an hour for the service, and DISA paid a total of $5 to
develop an application that matched the performance of the in-house application."
SmugMug: SmugMug, a photo sharing and hosting site like Flickr, stores a substantial portion of
its photo data in Amazon's S3 cloud storage service. In 2006, the company saved "$500,000 in
planned disk drive expenditures" and cut its disk storage array costs in half through the use of
Amazon S3. According to SmugMug's CEO, the company could "easily save more than
$1 million" in the following year through the use of S3. The CEO noted that the company's
growth rate at the time required about $80,000 worth of new hardware, and that the recurring
costs rise considerably after adding "power, cooling, the data center space, along with the
manpower needed to manage them." In contrast, Amazon S3 costs around $23,000 per month
for equivalent storage, all-inclusive (power, maintenance, cooling, and so on are figured into
the cost of the storage).
Eli Lilly: Eli Lilly, one of the largest pharmaceutical companies, has begun to use Amazon's
storage and compute clouds to provide on-demand high-performance computing for research
purposes. John Foley highlights that "it used to take Eli Lilly seven and a half weeks to deploy
a server internally," whereas Amazon can provision a virtual server in 3 minutes. Furthermore,
"a 64-node Linux cluster could be online in 5 minutes (compared against 90 days internally)."
Amazon's cloud services not only deliver on-demand scaling and usage-based billing; they also
enable Eli Lilly to respond with considerably greater agility, eliminating time-consuming
hardware acquisition and deployment processes.
Best Buy's Giftag: Best Buy's Giftag is an online wish-list service hosted on Google App
Engine. In a video interview, the developers said that they had begun building the platform
with a different technology and moved to Google App Engine for its superior speed of
development and scaling advantages. As one developer put it, "a lot of the work that none of us
even wants to do is [already] completed for us." The developers also praised App Engine's
design for effortless scaling; App Engine-based web apps inherit Google's best-in-class
technologies and expertise in running large-scale websites. At the end of the day, App Engine
lets developers focus on building site-specific, differentiating features: "Not being worried
with the operational aspects of an application frees you to create excellent code or evaluate
your code better."
TC3: TC3 (Total Claims Capture & Control) is a healthcare services company providing claims
management solutions. TC3 uses Amazon's cloud services to allow on-demand scaling of
resources and to lower infrastructure costs. TC3's CTO notes: "We're making use of Amazon
S3, EC2, and SQS to permit our claims processing capacity to increase and decrease as
required to satisfy our service level agreements (SLAs). There are times we require massive
quantities of computing resources that far exceed our system capacity. When these conditions
occurred in the past, our natural response was to call our hardware vendor for a quote. Now, by
using AWS products, we can dramatically reduce our processing time from weeks or months
down to days or hours and pay much less than purchasing, housing and maintaining the servers
ourselves." Another notable feature of TC3's activities is that, because it provides US health-
related services, it is obligated to comply with HIPAA (the Health Insurance Portability and
Accountability Act). Regulatory compliance is one of the main obstacles facing corporate
adoption of cloud infrastructure; the fact that TC3 is able to comply with HIPAA on Amazon's
platform is significant.
How is all of this computing made possible? Cloud computing delivers IT services on demand,
such as computing power, storage and runtime environments for application development, on a
pay-as-you-go basis. It not only provides an easy way to access IT services as demand requires,
but also introduces a new way of thinking about IT services and resources: as utilities.
Figure 1.4 provides a bird's-eye view of cloud computing.

FIGURE 1.4 A bird’s-eye view of cloud computing


(Reference from “Mastering Cloud Computing Foundations and Applications Programming”
by Rajkumar Buyya)
There are three deployment models for accessing cloud computing services: public, private and
hybrid clouds (see Figure 1.5). The public cloud is the most common deployment model, in
which computing services are offered by third-party vendors and consumers can access and
purchase resources via the public Internet. These services can be free or on-demand, meaning
that consumers pay per use for their CPU cycles, storage or bandwidth. Public clouds save
companies the expensive procurement, management and on-site maintenance of hardware and
application infrastructure; all management and maintenance of the system is the responsibility
of the cloud service provider. Public clouds can also be deployed faster than on-site
infrastructure, on a platform that is almost infinitely scalable. Although public cloud
implementations have raised security concerns, when implemented correctly the public cloud
can be as secure as the most efficiently operated private cloud deployment. A private cloud is
essentially a cloud service used by a single organization. With a private cloud, the advantages
of cloud computing are gained without sharing resources with other organizations. A private
cloud can exist within an organization, or be managed remotely by a third party and accessed
via the Internet (but, unlike a public cloud, it is not shared with others). Private cloud combines
many of the advantages of cloud computing, including elasticity, scalability and easy service
delivery, with on-site control, security and resource customization. Many companies select
private cloud over public cloud (cloud computing services delivered over multi-customer
infrastructure) because private cloud is an easier (or the only) way to satisfy their regulatory
compliance requirements. Others prefer private cloud because their workloads deal with
confidential information, intellectual property, personally identifiable information (PII),
medical records, financial data or other sensitive data. Hybrid cloud is an infrastructure that
links a user's own cloud (typically referred to as a "private cloud") with a third-party cloud
(typically referred to as a "public cloud").
While the private and public parts of a hybrid cloud are linked, they remain distinct. This
allows a hybrid cloud to offer the advantages of several deployment models simultaneously.
Hybrid clouds vary widely in sophistication; some, for example, only connect on-site
infrastructure to public clouds, leaving the operations and application teams responsible for all
the difficulties inherent in the two different infrastructures.

FIGURE 1.5 Major deployment models for cloud computing.


(Reference from “Mastering Cloud Computing Foundations and Applications Programming”
by Rajkumar Buyya)
1.2.4 The cloud computing reference model
The cloud reference model is a model that characterizes and standardizes the functions of a
cloud computing environment; it serves as a basic benchmark for cloud computing
development. The growing popularity of cloud computing has multiplied the definitions of
cloud computing architectures. The cloud market has a wide range of vendors and many
overlapping offer definitions, which makes evaluating their services very hard. With such
complexity of implementation, the way the cloud functions and interacts with other technology
can be confusing. A standard cloud reference model is required for architects, software
engineers, security experts and businesses to realize the potential of cloud computing; the
cloud reference model organizes this landscape. Figure 1.6 displays various cloud providers
and their innovations across the cloud service models available on the market.

FIGURE 1.6 The Cloud Computing Reference Model.


(Reference from “Mastering Cloud Computing Foundations and Applications Programming”
by Rajkumar Buyya)
Cloud computing is an all-encompassing term for all resources that are hosted on the Internet.
These services are classified under three main categories: infrastructure as a service (IaaS),
platform as a service (PaaS) and software as a service (SaaS). These categories are mutually related
as outlined in Figure 1.6 which gives an organic view of cloud computing. The model structures the
broad variety of cloud computing services in a layered view from the base to the top of the
computing stack.
At the foundation of the stack, Infrastructure as a Service (IaaS) is the most common cloud
computing service model, offering the basic infrastructure of virtual servers, networks,
operating systems and storage drives. It provides the flexibility, reliability and scalability many
companies seek from the cloud and eliminates the need for hardware in the office, which
makes it a good way to support business growth for SMEs looking for a cost-effective
approach to IT. IaaS is a completely outsourced pay-per-use service that can run on public,
private or hybrid infrastructure.
The next step in the stack is platform-as-a-service (PaaS) solutions. Cloud providers deploy the
software and infrastructure framework, but companies can develop and run their own apps. Web
applications can easily and quickly be created via PaaS with the flexibility and robustness of the
service to support it. PaaS solutions are scalable and suitable if multiple developers work on a
single project. It is also useful when using an established data source (such as a CRM tool).
At the top of the stack is Software as a Service (SaaS). This cloud computing solution involves
delivering Internet-based software to companies that pay via subscription or a pay-per-use
model. It is an important tool for CRM and for applications that require a great deal of web or
mobile access, such as mobile sales management software. SaaS is managed from a central
location, so companies need not worry about maintaining it themselves, and it is ideal for
short-term projects. The big difference between PaaS and IaaS is how much control users get:
essentially, with PaaS the provider manages almost everything, while IaaS calls for more
customer management. In general, a company that already has a software package or
application for a specific purpose should choose to install and run it in the cloud on IaaS rather
than PaaS.
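The split of management responsibility between provider and customer across the three service models can be made concrete with a small sketch. The layer names below follow the common "who manages what" convention and are not taken from this text; the assignment shown is illustrative:

```python
# Who manages each layer of the computing stack under each service model.
# "provider" = cloud service provider, "customer" = consumer organization.
# Layer names and the exact split are a common convention, used here as
# an illustration, not a standard taken from this chapter.
LAYERS = ["application", "data", "runtime", "os", "virtualization",
          "servers", "storage", "networking"]

MANAGED_BY = {
    # IaaS: provider runs the physical/virtual infrastructure only.
    "iaas": {layer: ("provider" if layer in ("virtualization", "servers",
                                             "storage", "networking")
                     else "customer")
             for layer in LAYERS},
    # PaaS: customer keeps only the application and its data.
    "paas": {layer: ("customer" if layer in ("application", "data")
                     else "provider")
             for layer in LAYERS},
    # SaaS: provider manages the entire stack.
    "saas": {layer: "provider" for layer in LAYERS},
}

def customer_managed(model):
    """Return the layers the customer must manage under a service model."""
    return [layer for layer in LAYERS if MANAGED_BY[model][layer] == "customer"]

print(customer_managed("iaas"))  # ['application', 'data', 'runtime', 'os']
print(customer_managed("paas"))  # ['application', 'data']
print(customer_managed("saas"))  # []
```

Reading the output top to bottom mirrors the point above: moving up the stack from IaaS to SaaS, the customer's management burden shrinks to nothing.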
1.2.5 Characteristics and benefits
As cloud computing services mature both commercially and technologically, it will become
easier for companies to maximize their potential benefits. However, it is equally important to
know what cloud computing is and what it does.

FIGURE 1. 7 Features of Cloud Computing


Following are the characteristics of Cloud Computing:
1. Resource Pooling
The cloud provider uses a multi-tenant model to deliver computing resources to multiple
customers, with physical and virtual resources dynamically assigned and reassigned according
to customer demand. In general, the customer has no control over or knowledge of the exact
location of the provided resources, but may be able to specify location at a higher level of
abstraction.

2. On-Demand Self-Service
This is one of the most useful features of cloud computing: the user can provision capabilities
as needed and can track server uptime, capacity and network storage on an ongoing basis. The
user can also monitor computing functionality with this feature.
3. Easy Maintenance
Servers are easy to maintain and downtime is minimal, or in some cases zero. Cloud computing
services are updated regularly, and each update further improves them; updates are more
system-friendly and patch bugs faster than older versions.
4. Broad Network Access
The user can use any device with an Internet connection to access data in the cloud or upload
data to it from anywhere. These capabilities are available across the network and accessed
through the Internet.
5. Availability
The capabilities of the cloud can be adjusted and extended according to usage. This allows a
consumer to buy additional cloud storage, if necessary, for a very small price.
6. Automatic System
Cloud computing automatically analyzes the data needed and supports metering capabilities at
some level of service. Usage can be tracked, managed and reported, providing accountability
to both the host and the customer.
7. Economical
It is a one-time investment: the company (host) buys the storage once and can make it
available to many companies, saving them monthly or annual costs. Only the amount spent on
basic maintenance and some small additional costs remain.
8. Security
Cloud security is one of cloud computing's best features. Providers keep snapshots of the
stored data, so that the data is not lost even if one of the servers is damaged. The data is stored
on storage devices that other people cannot hack or use, and the storage service is fast and
reliable.
9. Pay As You Go
In cloud computing, users pay only for the service or space they use. There are no hidden or
additional charges. The service is economical, and some space is often allocated free of
charge.
10. Measured Service
The resources a company uses are monitored and recorded by the cloud provider and analyzed
with charge-per-use capabilities. This means that resource usage can be measured and reported
by the service provider, for example per virtual server instance running in the cloud, and the
customer is billed according to actual consumption.
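Measured service and pay-as-you-go billing amount to a simple metering loop: the provider records per-resource usage and bills on actual consumption. The meter names and rates in the sketch below are hypothetical, chosen only to illustrate the mechanism:

```python
# Minimal sketch of metered, pay-per-use billing.
# Meter names and rates are hypothetical, for illustration only.
RATES = {
    "instance_hours": 0.10,    # $ per virtual-server hour
    "storage_gb_month": 0.02,  # $ per GB-month stored
    "egress_gb": 0.05,         # $ per GB transferred out
}

def bill(usage):
    """Compute a bill in dollars from metered usage; unknown meters raise."""
    total_cents = 0
    for meter, quantity in usage.items():
        rate = RATES[meter]  # KeyError for a resource that is not metered
        total_cents += round(quantity * rate * 100)  # keep cents as integers
    return total_cents / 100

# A month of usage: one server running continuously, some storage, some traffic.
usage = {"instance_hours": 2400, "storage_gb_month": 500, "egress_gb": 40}
print(bill(usage))  # 252.0
```

The key property, as the characteristic above states, is that both sides can inspect the same meter readings: the bill is reproducible from recorded usage rather than from a fixed upfront fee.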

1.2.6 Challenges ahead


Every technology has both advantages and challenges. Having seen many features of cloud
computing, it is time to examine its challenges, along with tips and techniques for recognizing
them yourself. Let us therefore explore the risks and challenges of cloud computing. Nearly all
companies use cloud computing because they need to store data; they generate and store a
tremendous amount of it, and thus face many security issues. Companies need to establish
processes to streamline, optimize and manage cloud computing.



The following is a list of common cloud computing threats and challenges:
1. Security & Privacy
2. Interoperability & Portability
3. Reliable and flexible
4. Cost
5. Downtime
6. Lack of resources
7. Dealing with Multi-Cloud Environments
8. Cloud Migration
9. Vendor Lock-In
10. Privacy and Legal issues

1. Security and Privacy of Cloud


Data stored in the cloud must be secure and confidential, and clients depend heavily on the
cloud provider for this. In other words, the cloud provider must take the security measures
necessary to secure customer data. Security is also the customer's responsibility: customers
must choose strong passwords, avoid sharing passwords with others, and update passwords
regularly. If data sits outside the firewall, certain problems may occur that the cloud provider
must address. Hacking and malware are also among the biggest problems, because they can
affect many customers at once; they can result in data loss, compromise of the encrypted file
system and several other disruptions.
2. Interoperability and Portability
Customers should be provided with services for migrating into and out of the cloud. There
should be no lock-in period, as it can hamper customers. The cloud should also be able to work
with on-premises facilities. Remote access is another aspect of the cloud: the provider should
make the cloud accessible from anywhere.
3. Reliability and Flexibility
Reliability and flexibility are difficult for cloud customers to verify. Providers must prevent leakage of the data entrusted to the cloud and demonstrate their trustworthiness. To overcome this challenge, third-party monitoring services can be used to supervise the performance, robustness, and dependability of providers.
4. Cost
Cloud computing is affordable, but adapting the cloud platform to changing customer demand can sometimes be expensive, which can hinder small businesses in particular. Furthermore, transferring data from the cloud back to on-premises systems is sometimes costly.
5. Downtime
Downtime is the most commonly cited cloud computing challenge, as no cloud provider guarantees a platform free from downtime. The Internet connection also plays an important role: a company with an unreliable connection will experience outages of its own.
6. Lack of resources
The cloud industry also faces a shortage of resources and expertise, which many businesses hope to overcome by hiring new, more experienced employees. These employees not only help solve the company's challenges but also train existing staff. At present, many IT employees are still working to build up their cloud computing skills, which is difficult for executives because few employees are fully qualified. Employees with exposure to the latest innovations and associated technologies will therefore become increasingly valuable to businesses.



7. Dealing with Multi-Cloud Environments
Today, hardly any business runs its entire operation on a single cloud. According to the RightScale 2019 report, almost 84 percent of enterprises follow a multi-cloud strategy, and 58 percent combine public and private clouds in a hybrid approach. On average, organizations use almost five different public and private clouds.

FIGURE 1. 8 RightScale 2019 report revelation


This makes it harder for IT infrastructure teams to make long-term predictions about the future of cloud computing technology. Professionals have suggested several strategies to address the problem, such as rethinking processes, training personnel, choosing the right tools, actively managing vendor relationships, and conducting further research.

8. Cloud Migration
While it is very simple to launch a new application in the cloud, transferring an existing application to a cloud computing environment is harder. According to one report, 62% of respondents said their cloud migration projects were harder than expected; 64% of migration projects took longer than planned and 55% exceeded their budgets. In particular, organizations migrating applications to the cloud reported migration downtime (37%), data synchronization issues before cutover (40%), trouble getting migration tools to work well (40%), slow migration of data (44%), security configuration issues (40%), and time-consuming troubleshooting (47%). To solve these problems, close to 42% of IT experts said they wanted larger budgets, around 45% wanted an in-house professional on the project, 50% wanted more time for the project, and 56% wanted more pre-migration testing.
9. Vendor lock-in
Vendor lock-in in cloud computing means that clients become dependent on (i.e., locked in to) a single cloud provider's implementation and cannot switch to another vendor in the future without significant costs, regulatory constraints, or technological incompatibilities. Seen through the lens of a software developer, the lock-in situation appears in applications built for specific cloud platforms, such as Amazon EC2 or Microsoft Azure, which cannot easily be moved to another cloud platform and whose users remain exposed to any changes their providers make. In practice, the lock-in issue arises when, for example, a company decides to change cloud providers (or to integrate services from different providers) but cannot move its applications or data across cloud services, because the semantics of the providers' resources and services do not correspond. This heterogeneity of cloud semantics and APIs creates technological incompatibility, which in turn challenges interoperability and portability and makes it very complicated and difficult to interoperate, cooperate, port, manage, and maintain data and services. For these reasons, it is important from the company's point of view to retain the flexibility to change providers according to business needs, or even to keep in-house certain components that are too critical to safety to risk. The vendor lock-in problem hinders interoperability and portability between cloud providers; resolving it would make both providers and clients more competitive.
10. Privacy and Legal issues
The main problem regarding cloud privacy and data security is the data breach. A data breach can be defined generically as the loss of electronically stored personal information. A breach can lead to a multitude of losses for both the provider and the customer: identity theft and debit/credit card fraud for the customer; loss of credibility and future lawsuits for the provider. In the event of a data breach, American law requires that the affected persons be notified, and nearly every state in the USA now requires data breaches to be reported to those affected. Problems arise when data are subject to several jurisdictions whose data privacy laws differ. For example, the European Union's Data Protection Directive explicitly states that data may only leave the EU for a country that ensures an "adequate level of protection." This rule, while simple to state, limits the movement of data and thus reduces the capacity of the cloud. These EU regulations are enforceable.
1.3 Historical developments

Cloud computing is not a brand-new, state-of-the-art technology. It developed through various phases, including grid computing, utility computing, application service provision, and software as a service. But the overall concept of delivering computing resources through a global network began in the 1960s. By 2020, the cloud computing market was projected to exceed 241 billion dollars. The history of cloud computing is the story of how we got there and where it all started. That history is not very old: the first business and consumer cloud computing websites were launched in 1999 (Salesforce.com and Google). Cloud computing is directly connected to the development of the Internet and of corporate technology, since cloud computing is the answer to the question of how the Internet can improve corporate technology. Business technology has a rich and interesting history, almost as long as business itself, but the development that has influenced cloud computing most directly begins with the emergence of computers as providers of real business solutions.

History of Cloud Computing

Cloud computing is one of today's most significant breakthrough technologies. What follows is a brief history of cloud computing.

FIGURE 1. 9 History of Cloud Computing [*Gartner, **Constellation Research]

EARLY 1960S
Computer scientist John McCarthy proposed the concept of time-sharing, which allows many users in an organization to use an expensive mainframe at the same time. This concept is described as a major contribution to the development of the Internet and a precursor of cloud computing.

IN 1969
J.C.R. Licklider, instrumental in the creation of the Advanced Research Projects Agency Network (ARPANET), proposed the idea of an "Intergalactic Computer Network" or "Galactic Network" (a computer networking concept similar to today's Internet). His vision was to connect everyone around the world and provide access to programs and data from anywhere.

IN 1970
Virtualization tools came into use, making it possible to run more than one operating system simultaneously in isolated environments: a completely different computer (a virtual machine) could be operated inside another operating system. (VMware later popularized this approach on commodity hardware.)

IN 1997
Prof. Ramnath Chellappa, in Dallas in 1997, gave what seems to be the first known definition of "cloud computing": "a computing paradigm where the boundaries of computing will be determined by economic rationale rather than technical limits alone."

IN 1999
Salesforce.com was launched in 1999 as the pioneer of delivering client applications through a simple website. The firm showed both specialist and mainstream software companies that applications could be provided via the Internet.

IN 2003
The first public release of Xen, a software system that enables multiple virtual guest operating systems to run simultaneously on a single machine. Such a system is also known as a virtual machine monitor (VMM) or hypervisor.

IN 2006
The Amazon cloud service was launched in 2006. First, its Elastic Compute Cloud (EC2) allowed people to rent virtual computers and run their own applications on them. Simple Storage Service (S3) was released next. Both incorporated the pay-as-you-go model, which has since become standard practice for users and the industry as a whole.

IN 2013
The worldwide market for public cloud services grew by 18.5% to a total of £78 billion, with IaaS one of the fastest-growing services on the market.

IN 2014
Global business spending on cloud-related technology and services was estimated at £103.8 billion in 2014, up 20% from 2013 (Constellation Research).

Figure 1.10 traces the development of the distributed computing technologies behind cloud computing. In tracking these historical developments, we briefly review five core technologies that have played a significant role in cloud computing: distributed systems, virtualization, Web 2.0, service orientation, and utility computing.

FIGURE 1.10: The evolution of distributed computing technologies, 1950s- 2010s.


(Reference: "Mastering Cloud Computing: Foundations and Applications Programming" by Rajkumar Buyya)

1.3.1 Distributed systems

Distributed computing is a computing concept that usually refers to multiple computer systems working on a single problem. The problem is broken down into many parts, and a different computer solves each part. Because the computers are interconnected, they can communicate with each other to resolve the problem. Done properly, the collection of computers functions as a single entity.

The ultimate goal of distributed computing is to improve overall performance through cost-effective, transparent, and secure connections between users and IT resources. It also ensures fault tolerance, providing access to resources even when one component fails.

There is really nothing new about distributing resources over a computer network. The practice began with mainframe terminals, moved on to minicomputers, and is now common with personal computers and multi-tier client-server architectures.

A distributed computing architecture consists of a number of very lightweight client machines, each running an agent, together with one or more dedicated management servers. The client agent normally detects when its machine is idle and notifies the management server that the machine is available; the agent then requests an application package. When the client receives the package from the management server, it runs the application software whenever it has free CPU cycles and returns the results to the management server. When the user returns, the management server releases the resources it had been using to perform tasks in the user's absence.
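The divide-and-combine pattern at the heart of distributed computing can be sketched in miniature. Here separate operating-system processes stand in for the networked machines, which is an illustrative simplification:

```python
# A single problem split into parts, each handled by a different worker,
# with the partial results combined: the essence of distributed computing.
# Separate OS processes stand in here for separate machines on a network.
from multiprocessing import Pool

def partial_sum(chunk):
    """Work done independently by one node on its part of the problem."""
    return sum(x * x for x in chunk)

if __name__ == "__main__":
    data = list(range(1_000))
    chunks = [data[i::4] for i in range(4)]       # divide the problem 4 ways
    with Pool(processes=4) as pool:
        partials = pool.map(partial_sum, chunks)  # each part solved separately
    total = sum(partials)                         # combine into one answer
    print(total == sum(x * x for x in data))      # → True
```

The coordinating process plays the role of the management server, handing out work packages and combining the results that come back.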
Distributed systems exhibit heterogeneity, openness, scalability, transparency, concurrency, continuous availability, and independent failures. These properties characterize clouds to some extent, especially scalability, concurrency, and continuous availability.
Three major milestones have contributed to cloud computing: mainframe computing, cluster computing, and grid computing.

Mainframes: A mainframe is a powerful computer that often serves as the main data repository in an organization's IT infrastructure. Users connect to it through less powerful devices such as workstations or terminals. Centralizing data in a single mainframe repository makes it easier to manage, update, and protect the integrity of the data. Mainframes are generally used for large-scale processes that require greater availability and security than smaller machines can offer. They are primarily machines used by large organizations for essential bulk data processing, for example census data, industry and consumer statistics, enterprise resource planning, and transaction processing. In the late 1950s, mainframes had only a basic interactive interface, using punched cards, paper tape, or magnetic tape to transfer data and programs. They worked in batch mode to support back-office functions such as payroll and customer billing, mostly based on repetitive tape-and-merge operations followed by line printing onto continuous pre-printed stationery. Interactive user interfaces, when introduced, were used almost solely to execute applications (e.g., airline booking) rather than to develop software. Typewriter and Teletype devices remained the standard operator control consoles into the early 1970s, although they were eventually replaced by keyboard-and-display terminals.

FIGURE 1.11 Mainframes

Cluster computing: The computer clustering approach typically connects a number of computing nodes (personal computers used as servers) over a fast local area network (LAN). The activity of the computing nodes is coordinated by "clustering middleware," a software layer sitting atop the nodes that enables users to treat the cluster as a whole, via a single system image concept. A cluster is thus a type of parallel or distributed computer system consisting of a collection of interconnected stand-alone computers that work together as a single integrated computing resource, combining independent computers through software and networking. Clusters are usually deployed to provide greater computational power than a single computer can offer, for high availability, greater reliability, or high-performance computing. Compared with other technologies, the cluster technique is economical in terms of power and processing speed, because it uses off-the-shelf hardware and software components, whereas mainframe computers use custom-built, proprietary hardware and software. Multiple computers in a cluster work together to deliver unified, faster processing. A cluster can also be upgraded to a higher specification or extended by adding nodes, unlike a mainframe computer. Redundant machines that take over processing on failure continuously minimize single points of failure; mainframe deployments lack this kind of redundancy.

PVM and MPI are the two methods most widely used in cluster communication.

PVM stands for Parallel Virtual Machine. It was developed at Oak Ridge National Laboratory around 1989. Installed directly on each node, it provides a set of libraries that turn the nodes into a single "parallel virtual machine," offering a runtime environment for resource and task management, error reporting, and message passing. User programs written in C, C++, or Fortran can use PVM.

MPI stands for Message Passing Interface. Created in the 1990s, it superseded PVM. Its design drew on various commercially available systems of the time, and implementations typically use TCP/IP and socket connections. It is currently the most widely used communication system for clusters, supporting parallel programming in C, Fortran, Python, and other languages.
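The message-passing style that PVM and MPI provide can be illustrated with the standard library. A real MPI program would use an actual MPI implementation and a launcher such as mpirun; this sketch only mirrors the blocking send/receive idea:

```python
# MPI-style point-to-point message passing between two cluster "nodes",
# sketched with the standard library. A real MPI program would use an MPI
# implementation and a launcher such as mpirun; this only mirrors the idea.
from multiprocessing import Process, Pipe

def handle(data):
    """The work the receiving node performs on the message payload."""
    return sum(data)

def worker(conn):
    data = conn.recv()         # blocking receive, like MPI_Recv
    conn.send(handle(data))    # reply to the sender, like MPI_Send
    conn.close()

if __name__ == "__main__":
    parent, child = Pipe()
    node = Process(target=worker, args=(child,))
    node.start()
    parent.send([1, 2, 3, 4])  # dispatch work to the other "node"
    print(parent.recv())       # → 10
    node.join()
```

In a real cluster the two endpoints would run on different machines, with the middleware routing messages over the LAN instead of a local pipe.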

FIGURE 1.12 Cluster computing

Grid computing: Grid computing is a processor architecture that combines computing resources from different domains to achieve a common goal. In grid computing, networked computers can work together on a task, acting as a supercomputer. In general, a grid works on several tasks within a network, but it can also be dedicated to specific applications. It is intended to solve problems that are too large for a supercomputer while retaining the ability to handle many smaller problems. Computing grids thus provide a multi-user infrastructure that meets intermittent, large-scale information processing requirements.

A grid is composed of computer clusters, each running an operating system such as Linux or other free software, using parallel nodes. Clusters can vary in size from a small work group to multiple networks. Through its pooled computing resources, the technology is applied to a broad variety of applications, such as mathematical, research, or educational tasks. It is often used in structural analysis, in web services such as ATM banking and back-office infrastructure, and in scientific or marketing research. Grid computing consists of applications run in a parallel networking environment to solve computational problems: it connects each PC and combines the information into one computational application.
Grids draw on a range of resources built from different software and hardware structures, computer languages, and frameworks, whether within one network or across networks, using open standards with clear guidelines to achieve common goals and objectives.

Generally, grid operations are divided into two categories:


Data Grid: a system that handles large distributed data sets, used to manage data and share it among users. It builds virtual environments that support scattered, organized research. One example of a data grid is the Southern California Earthquake Center, which uses a middleware framework to construct a digital library, a distributed file system, and a persistent archive.

CPU Scavenging Grids: A cycle-scavenging system moves projects from one PC to another as needed. A familiar example of a CPU scavenging grid is the search for extraterrestrial intelligence computation, involving more than three million computers. The detection of radio signals in the Search for Extraterrestrial Intelligence (SETI) is one of radio astronomy's most exciting applications. The first SETI team used a radio astronomy dish in the late 1950s. A few years later, the privately funded SETI Institute was established to perform further searches with several American radio telescopes. Today, in cooperation with radio astronomy engineers and researchers at various observatories and universities, the SETI Institute again builds its program on private funds. SETI's vast computing requirements led to a unique grid computing concept that has since been extended to many applications.

SETI@home is a scientific experiment that uses Internet-connected computers to download and analyze radio telescope data for the SETI program. A free software program harnesses the power of millions of computers, running in the background on otherwise idle capacity. Its more than 5.2 million participants have contributed over two million years of combined processing time.

Grid computing is used in biology, medicine, Earth sciences, physics, astronomy, chemistry, and mathematics. The Berkeley Open Infrastructure for Network Computing (BOINC) is free, open-source software for volunteer and desktop grid computing. Using the BOINC platform, users can divide their machine's time between several grid computing projects and choose to give each only a percentage of CPU time.

FIGURE 1.13 Grid computing Environment

1.3.2 Virtualization

Virtualization is a process that makes the use of physical computer hardware more efficient, and it forms the basis of cloud computing. Virtualization uses software to create a layer of abstraction over computer hardware, enabling the elements of a single computer (processors, memory, storage, and more) to be divided among multiple virtual computers, usually referred to as virtual machines (VMs). Each VM runs its own operating system and behaves like an independent computer, even though it runs on only a portion of the underlying hardware.

Virtualization therefore enables a much more efficient use of physical computer hardware, allowing a greater return on an organization's hardware investment.



Virtualization is today a common practice in enterprise IT architecture. It is also the technology that underpins the business of cloud computing: virtualization allows cloud providers to serve consumers from their physical computing hardware, and allows cloud users to purchase only the computing resources they need, when they need them, scaling cost-effectively as their workloads grow.

Virtualization involves creating a virtual version of something, including virtual computer hardware, virtual storage devices, and virtual computer networks.

Software called a hypervisor is used for hardware virtualization. The hypervisor is a layer of software incorporated into the server hardware component; its role is to control the physical hardware shared between the guests and the provider. Hardware virtualization is thus performed by a Virtual Machine Monitor (VMM), which abstracts away the physical hardware. Several processor extensions exist to speed up virtualization activities and increase hypervisor performance. When this virtualization is applied to a server platform, it is called server virtualization.
The hypervisor creates an abstraction layer between the software and the hardware in use. Once a hypervisor is installed, workloads see virtual representations such as virtual processors rather than the physical processors. Popular hypervisors include ESXi-based VMware vSphere and Hyper-V.

FIGURE 1.14 Hardware Virtualization

Virtual machine instances are typically represented by one or more files, which can easily be transported across physical systems. They are also self-contained, since they have no dependencies for their use other than the virtual machine manager.

A process virtual machine, sometimes known as an application virtual machine, runs inside a host OS as an ordinary application and supports a single process. It is created when the process starts and destroyed when the process exits. Its aim is to provide a platform-independent programming environment that abstracts away the details of the underlying hardware and operating system, allowing a program to run in the same way on any platform. For example, the Wine software on Linux helps you run Windows applications.

A process VM provides a high-level abstraction, that of a high-level programming language (compared with the low-level ISA abstraction of a system VM). Process VMs are implemented by means of an interpreter; just-in-time compilation achieves performance comparable to compiled programming languages.

This form of VM became popular with the Java programming language, which is implemented on the Java Virtual Machine. Another example is the .NET Framework, which runs on a VM called the Common Language Runtime (CLR).
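The interpreter approach mentioned above can be illustrated with a toy stack-based process VM. The instruction set here is invented purely for illustration; real bytecode formats such as the JVM's are far richer:

```python
# Toy stack-based interpreter in the spirit of a process VM: the same
# "bytecode" runs unchanged on any host that has the interpreter.
# The instruction set is invented purely for illustration.

def run(bytecode):
    stack = []
    for op, *args in bytecode:
        if op == "PUSH":
            stack.append(args[0])       # push a literal value
        elif op == "ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)         # replace top two values with sum
        elif op == "MUL":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)         # replace top two values with product
    return stack.pop()

# (2 + 3) * 4, expressed as platform-independent instructions:
program = [("PUSH", 2), ("PUSH", 3), ("ADD",), ("PUSH", 4), ("MUL",)]
print(run(program))  # → 20
```

Because the program is expressed against the interpreter's instruction set rather than any physical CPU, it runs identically wherever the interpreter runs; this is exactly the portability property the text attributes to the JVM and the CLR.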



FIGURE 1.15 Process virtual machine design
(Reference: "Mastering Cloud Computing: Foundations and Applications Programming" by Rajkumar Buyya)

Web 2.0

"Websites which emphasize user-generated content, user-friendliness, participatory culture, and


interoperability for end users" or participatory, or participative / activist and social websites. Web
2.0 is a new concept that was first used in common usage in 1999 about 20 years ago. It was first
coined by Darcy DiNucci and later popularized during a conference held in 2004 by Tim O'Reilly
and Dale Doughtery. It is necessary to remember that Web 2.0 frameworks deal only with website
design and use without placing the designers with technical requirements.

Web 2.0 is the term used for the range of websites and applications that allow anyone to create and share information or material online. A key feature of the technology is that it lets people create, share, and communicate. Web 2.0 differs from other kinds of sites in that it does not require web design or publishing skills to participate, making it easy for people to create, publish, and communicate their work to the world. The design makes it simple and popular to share knowledge, whether with a small community or with a much wider audience. A university can use these tools to communicate with students, staff, and the wider university community; they can also be a good way for students and colleagues to communicate and interact.

Web 2.0 represents the evolution of the World Wide Web: web applications that enable interactive data sharing, user-centered design, and worldwide collaboration. It is a collective term for web-based technologies that include blogs and wikis, online networking platforms, podcasting, social networks, social bookmarking websites, and Really Simple Syndication (RSS) feeds. The main idea behind Web 2.0 is to enhance the connectivity of web applications and enable users to access the web easily and efficiently. Cloud computing services are, in essence, web applications that deliver computing services on demand over the Internet. Consequently, cloud computing follows the Web 2.0 methodology: it provides key infrastructure for Web 2.0 and is in turn improved by the Web 2.0 framework. Beneath Web 2.0 lies a set of web technologies that have recently appeared or moved to a new stage of maturity, notably RIAs (Rich Internet Applications). Among the most prominent of these, and a quasi-standard of the web, is AJAX (Asynchronous JavaScript and XML); others include RSS (Really Simple Syndication), widgets (plug-in modular components), and web services (e.g., SOAP, REST).
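As an illustration of one of these technologies, an RSS feed is just structured XML and can be parsed with a few lines of standard-library code. The feed below is a deliberately minimal sample; real feeds carry more elements (pubDate, guid, namespaces) than shown here:

```python
# Parsing a deliberately minimal RSS 2.0 feed with the standard library.
# Real feeds carry more elements (pubDate, guid, namespaces) than this sample.
import xml.etree.ElementTree as ET

FEED = """<rss version="2.0"><channel>
  <title>Example Blog</title>
  <item><title>Hello Web 2.0</title><link>http://example.com/1</link></item>
  <item><title>Second post</title><link>http://example.com/2</link></item>
</channel></rss>"""

def item_titles(feed_xml):
    """Return the title of every <item> in the feed, in document order."""
    root = ET.fromstring(feed_xml)
    return [item.findtext("title") for item in root.iter("item")]

print(item_titles(FEED))  # → ['Hello Web 2.0', 'Second post']
```

A feed reader repeats exactly this step for each subscribed feed, which is what makes syndicated content so easy to aggregate and re-share.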



FIGURE 1.15 Components of the social web (Web 2.0)

1.3.3 Service-oriented computing

Service-oriented computing (SOC) is the computing paradigm that uses services as the fundamental building blocks of applications and solutions. Services are self-describing, platform-agnostic components that enable the rapid and low-cost composition of distributed applications. Services perform functions ranging from simple requests to complex business processes. Using common XML-based languages and protocols, services allow organizations to expose their core competencies programmatically over the Internet or an intranet and to invoke them through an open-standard, self-describing interface.

Because services deliver information uniformly and ubiquitously to a wide variety of computing devices (e.g., handheld computers, PDAs, cell phones, or appliances) and software platforms (e.g., UNIX and Windows), they represent the next major step in distributed computing technology. Services are offered by service providers, organizations that implement the service, supply its description, and provide related technical and business support. Since services may be offered by different enterprises and communicate over the Internet, they provide a distributed infrastructure for both intra- and cross-enterprise application integration and collaboration. Service clients may be other applications within the same company or external applications, processes, or end users.

Consequently, to satisfy these requirements services should be:

• Technology neutral: services must be invocable through standardized, lowest-common-denominator technologies that are available in almost all IT environments. This implies that the invocation mechanisms (protocols, descriptions, and discovery mechanisms) should comply with widely accepted standards.
• Loosely coupled: neither the client nor the service side should require knowledge of the other's internal structures or conventions (context).
• Support location transparency: services should have their definitions and location information stored in a repository such as UDDI, accessible to a range of clients that can locate and invoke them regardless of their location.

Web-service interactions use the Web Services Description Language (WSDL) as the common (XML) standard for describing services and the Simple Object Access Protocol (SOAP), carrying XML data, for invoking them. WSDL is used to publish web services: it describes port types (the abstract description of operations and message exchanges) and the bindings of ports to addresses (the concrete specification of which packaging and transport protocols, for instance SOAP, are used to link two conversational end-points). The UDDI standard is a directory service that stores service publications and enables clients to find and learn about candidate services.
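To make the SOAP side of this concrete, here is a sketch of what a SOAP-style request body looks like on the wire. The operation and parameter names ("GetQuote", "symbol") are hypothetical, and real SOAP adds namespaces, headers, and WSDL-described types omitted here:

```python
# Sketch of a SOAP-style call: an XML envelope whose body names the
# operation and carries its parameters. Operation and parameter names
# are hypothetical; real SOAP adds namespaces, headers, and WSDL types.
import xml.etree.ElementTree as ET

def make_envelope(operation, params):
    """Build a bare-bones request envelope for the named operation."""
    env = ET.Element("Envelope")
    body = ET.SubElement(env, "Body")
    op = ET.SubElement(body, operation)
    for name, value in params.items():
        ET.SubElement(op, name).text = str(value)
    return ET.tostring(env, encoding="unicode")

request = make_envelope("GetQuote", {"symbol": "ACME"})
print(request)
# → <Envelope><Body><GetQuote><symbol>ACME</symbol></GetQuote></Body></Envelope>

# The receiving service parses the envelope to dispatch the operation:
parsed = ET.fromstring(request)
print(parsed.find("Body")[0].tag)  # → GetQuote
```

Because both sides agree on this XML structure (and, in real deployments, on the WSDL description of it), client and service can interoperate without sharing a platform or programming language.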

The software-as-a-service concept advocated by service-oriented computing (SOC) was pioneered by, and first appeared in, the Application Service Provider (ASP) software model. An Application Service Provider (ASP) is an entity that deploys, hosts, and manages access to a packaged application for third parties, providing clients with software-based services and solutions from a central data center across a wide area network. Applications are delivered over networks on a subscription or rental basis. In essence, ASPs gave businesses a way to outsource some or all parts of their IT needs.

The ASP retains responsibility for managing the application in its own infrastructure, using the Internet as the link between each customer and the centrally hosted application. What this means for an organization is that the ASP maintains and guarantees the availability of the program and data whenever needed, including the related infrastructure and the customer's data.

While the ASP model first introduced the software-as-a-service concept, it could not deliver complete, customizable applications, owing to numerous inherent constraints such as its inability to support highly interactive applications. The result was monolithic architectures and fragile, customer-specific integrations based on tight-coupling principles.
Today we are in the middle of another significant development: a software-as-a-service architecture based on asynchronous, loosely coupled interactions over XML standards, intended to make it easier for applications to access and communicate with one another over the Internet. The SOC model extends the software-as-a-service idea to the provision of complicated business processes and transactions as services, and allows applications to be composed on the fly and services to be reused everywhere and by everybody. Many ASPs are accordingly moving toward digital infrastructures and business models similar to those of cloud service providers, to gain the relative advantages of Internet technology.

Web services have both functional and non-functional attributes. The non-functional
attributes are collectively called quality of service (QoS). Following the quality definition
of ISO 8402, QoS is the set of non-functional characteristics that consumers selecting a
web service from a repository rely on: the ability of the web service to fulfill its specified
or implied needs in an end-to-end way. Examples of QoS attributes
include performance, reliability, security, accessibility, usability, discoverability, adaptability
and composability. A QoS requirement between clients and providers is established by an
SLA that identifies the minimum values (or acceptable ranges) of the QoS attributes to be
complied with when the service is called.
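As an illustrative sketch, the QoS limits fixed in an SLA can be modeled as simple data and checked each time the service is called. The attribute names and threshold values below are hypothetical, chosen only to make the idea concrete.

```python
# Hypothetical SLA: minimum/maximum acceptable values for QoS attributes.
sla = {
    "max_response_time_ms": 500,   # performance
    "min_availability": 0.999,     # accessibility
    "require_tls": True,           # security
}

def violates_sla(measured):
    """Return the names of QoS attributes that break the SLA."""
    violations = []
    if measured["response_time_ms"] > sla["max_response_time_ms"]:
        violations.append("response_time")
    if measured["availability"] < sla["min_availability"]:
        violations.append("availability")
    if sla["require_tls"] and not measured["tls"]:
        violations.append("security")
    return violations

print(violates_sla({"response_time_ms": 620, "availability": 0.9995, "tls": True}))
```

In a real deployment the measured values would come from monitoring infrastructure, and a violation would typically trigger a penalty clause defined in the SLA.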

What is Service Oriented Architecture?


Service-Oriented Architecture, or SOA, is, as the name suggests, an architecture
oriented around services. Services are discrete software components implemented using
well-defined interface standards. Once a service has been created and validated, it is
published to a directory or registry so that other developers can access it. The registry also
provides a repository holding information about the published service, for example how
to invoke the interface, what levels of service are required, and so on.
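The publish-and-discover cycle described above can be sketched with a toy, in-memory registry. This is an illustration of the idea only, not a real UDDI or WSDL implementation, and the service name and endpoint are invented.

```python
# A toy, in-memory service registry sketch (not a real UDDI implementation).
registry = {}

def publish(name, interface, endpoint):
    """Publish a validated service together with its interface description."""
    registry[name] = {"interface": interface, "endpoint": endpoint}

def discover(name):
    """Look up a published service so that a consumer can bind to it."""
    return registry.get(name)

publish("quote-service",
        interface={"operation": "getQuote", "input": "symbol", "output": "price"},
        endpoint="https://services.example.com/quote")  # hypothetical endpoint
print(discover("quote-service")["endpoint"])
```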

FIGURE 1.16 Service-oriented Architecture

SOA benefits
SOA services allow for agility of business. By integrating existing services, developers can
create applications quickly.

The services are distinct entities and can be invoked without a platform or programming
language knowledge at run-time.

The services follow a series of standards, such as the Web Services Description Language
(WSDL), Representational State Transfer (REST) and the Simple Object Access Protocol
(SOAP), which facilitate their integration with both existing and new applications.

Security through quality of service (QoS). Elements of QoS include authentication and
authorization, reliable and consistent messaging, permission policies, and so on.

Service components are independent of one another.
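Because these standards are language-neutral, any client able to parse the agreed format can consume a service, regardless of the platform that produced it. The minimal sketch below parses a hand-written SOAP-style response envelope with Python's standard library; the operation name and value are invented for illustration.

```python
import xml.etree.ElementTree as ET

# A minimal, hand-written SOAP-style response envelope (illustrative only).
envelope = """\
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <getQuoteResponse>
      <price>42.50</price>
    </getQuoteResponse>
  </soap:Body>
</soap:Envelope>"""

ns = {"soap": "http://schemas.xmlsoap.org/soap/envelope/"}
root = ET.fromstring(envelope)
body = root.find("soap:Body", ns)
# Any SOAP-aware client, written in any language, extracts the same value.
price = body.find("getQuoteResponse/price").text
print(price)
```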

SOA and cloud computing challenges


The network dependency of both of these technologies is one of the major challenges.
In addition, dependence on the cloud provider, contracts and service level agreements are
challenges specific to cloud computing.

One of the challenges for SOA today is handling requests to improve or change the services
provided by SOA service providers.

Does Cloud Computing compete with SOA?

Some see cloud computing as a descendant of SOA. That is not completely untrue, as
service-orientation principles apply to both cloud computing and SOA. The
following illustration shows how cloud computing services overlap with SOA:

Cloud Computing
• Software as a Service (SaaS)
• Utility Computing
• Terabytes on Demand
• Data Distributed in a Cloud
• Platform as a Service
• Standards Evolving for Different Layers of the Stack

Overlap via Web Services
• Application Layer Components/Services
• Network Dependence
• Cloud/IP Wide Area Network (WAN)-supported Service Invocations
• Leveraging Distributed Software Assets
• Producer/Consumer Model

SOA
• System of Systems Integration Focus
• Driving Consistency of Integration
• Enterprise Application Integration (EAI)
• Reasonably Mature Implementing Standards (REST, SOAP, WSDL, UDDI, etc.)

It is very important to realize that while cloud computing overlaps with SOA, the two focus
on different kinds of implementation projects. SOA implementations are primarily used to
exchange information between systems and across networks of systems. Cloud computing,
on the other hand, aims to leverage the network across the whole range of IT functions.

It is not that SOA is unsuitable for cloud computing; rather, the two are complementary
activities. Providers need a very good service-oriented architecture to be able to deliver
cloud services effectively.

SOA and cloud computing share many features, but they are not the same, and they can
coexist. SOA appears to have matured in its requirements for the delivery of digital
services. Cloud computing and its services are newer, as are the many vendors with their
public, community, hybrid and private cloud offerings, and they are still growing.

1.3.4 Utility-oriented computing


The term utility computing refers to a business model in which a service provider offers
IT services to its customers and charges them for consumption. Computing power, storage
and applications are examples of such IT services. In this scenario, the company's own
data center may act as the service provider and the individual divisions of the company as
its customers.

The term utility refers to the services offered by a utility provider, such as
electricity, telephone, water and gas. As with electricity or telephone service, in utility
computing the consumed computing power is metered and paid for on the basis of a
shared computer network.

Utility computing relies heavily on virtualization, so that the total volume of web
storage and computing capacity available to customers is much greater than that of any
single machine. Several networked backend servers are often used to make this kind of
web service possible. Dedicated web servers in explicitly built clusters can also be leased
to end users. Distributed computing is the approach used to spread a single
'calculation' across multiple web servers.

FIGURE 1.17 Cloud Computing Technology – Utility Computing


Properties of utility computing

Although definitions of utility computing vary, they usually include the following
five characteristics.

Scalability

Utility computing must ensure that adequate IT resources are available in all
situations; increased demand for a service must not degrade its quality (e.g. response time).

Demand-based pricing
Until now, companies have had to purchase their own computing power, such as hardware
and software, paying for this IT infrastructure up front regardless of its future use. In
utility computing, payment follows use: for instance, a technology provider may base the
leasing rate for its servers on how many CPUs the client has enabled. If the computing
capacity actually used by the individual departments of a company can be measured, IT
costs can be attributed directly to each individual unit at internal cost rates. Additional
forms of usage-based IT cost allocation are possible.
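A minimal sketch of this billing model, with invented resource rates, shows how metered consumption translates directly into departmental charges:

```python
# Hypothetical pay-per-use rates, analogous to an electricity tariff.
RATES = {"cpu_hours": 0.05, "gb_storage_month": 0.02}

def monthly_bill(usage):
    """Charge a department only for what it actually consumed."""
    return sum(RATES[resource] * amount for resource, amount in usage.items())

# Two departments with different consumption pay different amounts.
print(monthly_bill({"cpu_hours": 1000, "gb_storage_month": 500}))  # 60.0
print(monthly_bill({"cpu_hours": 100, "gb_storage_month": 50}))    # 6.0
```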

Standardized Utility Computing Services

The utility computing service provider offers a collection of standardized services. These
may differ in their service level agreements (quality of service and price). The consumer
has no influence on the underlying infrastructure, such as the server platform.

Utility Computing and Virtualization

Virtualization technologies can be used to share web and other resources in a common
pool of machines. The network is divided into logical rather than physical resources. An
application is assigned not to a predetermined server or storage device, but to a free
server or free memory from the pool at run time.

Automation
Repetitive management activities, such as setting up new servers or installing updates,
can be automated. In addition, the allocation of resources to services and the management
of IT services should be optimized, taking into account service level agreements and the
operating costs of the IT resources.

Advantages of Utility Computing

Utility computing lowers IT costs while increasing the flexibility of resource use.
Expenses become transparent and can be allocated directly to the different departments of
an organization, and fewer people are needed for operational activities in the IT
department.

Companies become more flexible because their IT resources can be adapted to fluctuating
demand more quickly and easily. All in all, the entire IT landscape is simpler to manage,
because applications are no longer tied to a particular IT infrastructure.

1.4 Building cloud computing environments

Application development in a cloud computing environment takes place on platforms and
frameworks that provide various types of services, from bare-metal infrastructure to
customized applications serving specific purposes.

1.4.1 Application development

Cloud computing provides a powerful model that enables users to consume applications
on demand. One of the classes of applications that benefits most from this feature is Web
applications, whose performance is largely influenced by workloads that vary with
specific user demands across a broad range of cloud services.
Several factors have facilitated the rapid diffusion of Web 2.0. First, Web 2.0 builds on a
variety of technological developments and advances that allow users to easily create
rich and complex applications, including enterprise applications, leveraging the Internet
as the main utility and user-interaction platform. Such applications are characterized by
significant complexity of the processes triggered by user interactions and by interactions
among the multiple tiers behind the Web front end. These are the applications most
sensitive to improper infrastructure and service deployment sizing and to workload
variability.

Resource-intensive applications represent another class of applications that could
potentially benefit greatly from cloud computing. These applications may be
compute-intensive or data-intensive; in both cases, significant resources are required to
complete execution in a reasonable time. It should be noted that such huge quantities of
resources are not needed constantly or for long periods. Scientific applications, for
example, may require huge computational capacity to conduct large-scale experiments
once in a while, so that purchasing the infrastructure supporting them is not cost-effective.
Cloud computing is the solution in this case. Resource-intensive applications are not
interactive, and are characterized mainly by batch processing.

1.4.2 Infrastructure and system development

1.4.3 Computing platforms and technologies

Cloud application development involves leveraging platforms and frameworks that offer
different services, from bare-metal infrastructure to personalized applications that serve
specific purposes.

1.4.3.1 Amazon web services (AWS)

Amazon Web Services (AWS) is a cloud computing platform offering functionalities such
as database storage, content delivery and secure IT infrastructure for companies, among
others. It is known for its on-demand services, namely Elastic Compute Cloud (EC2) and
Simple Storage Service (S3). Amazon EC2 and Amazon S3 are essential tools to
understand if you want to make the most of the AWS cloud.

Amazon EC2, short for Elastic Compute Cloud, is a service for running cloud servers.
Amazon launched EC2 in 2006; it allowed companies to spin up servers in the cloud
rapidly and easily, instead of having to buy, set up and manage their own servers on
premises.

While bare-metal EC2 instances are also available, most Amazon EC2 server instances are
virtual machines hosted on Amazon's infrastructure. The cloud provider operates the
hardware, so you do not need to set up or maintain it. (Bare-metal instances let you host a
workload on a physical computer rather than a virtual machine.) A vast number of EC2
instance types are available at different prices; generally speaking, the more computing
capacity you need, the larger the EC2 instance. Certain Amazon EC2 instance types are
optimized for particular uses, such as GPU instances for the parallel processing of
big-data workloads.

Beyond making server deployment simpler and quicker, EC2 offers functionality such as
auto-scaling, which automates the process of increasing or decreasing the compute
resources available to a given workload. Auto-scaling thus helps to optimize costs and
performance, especially for workloads with significant variations in volume.
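The decision logic behind auto-scaling can be sketched as a simple rule that sizes the instance pool to the observed load. The capacity and limit values below are invented; real EC2 auto-scaling policies are configured through the AWS APIs.

```python
import math

# Hypothetical scaling parameters for illustration only.
REQUESTS_PER_INSTANCE = 100      # requests/s one instance can handle
MIN_INSTANCES, MAX_INSTANCES = 1, 20

def desired_instances(current_load):
    """Return how many instances a workload of current_load requests/s needs,
    clamped between the configured minimum and maximum pool sizes."""
    needed = math.ceil(current_load / REQUESTS_PER_INSTANCE)
    return max(MIN_INSTANCES, min(MAX_INSTANCES, needed))

print(desired_instances(250))   # 3: scale out under heavy traffic
print(desired_instances(30))    # 1: scale in when demand drops
```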

Amazon S3 (as its full name, Simple Storage Service, suggests) is a storage service
operating on the AWS cloud. It enables users to store virtually any form of data in the
cloud and to access that storage through a web interface, the AWS Command Line
Interface, or the AWS API. To use S3 you create what Amazon calls a 'bucket', a
container object used to store and retrieve data. You can set up many buckets if you like.

Amazon S3 is an object storage system that works especially well for massive,
unstructured or highly dynamic data.
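The bucket/object semantics can be illustrated with an in-memory model. This mimics only the put/get behavior; a real client would call the AWS API over HTTP, and the bucket and key names below are invented.

```python
# In-memory model of S3-style object storage: named buckets holding
# key/object pairs (illustration only, not the AWS API).
buckets = {}

def create_bucket(name):
    """Create an empty bucket to store and retrieve objects."""
    buckets[name] = {}

def put_object(bucket, key, data):
    """Store an object in a bucket under the given key."""
    buckets[bucket][key] = data

def get_object(bucket, key):
    """Retrieve an object from a bucket by its key."""
    return buckets[bucket][key]

create_bucket("my-backups")  # hypothetical bucket name
put_object("my-backups", "2024/notes.txt", b"hello cloud")
print(get_object("my-backups", "2024/notes.txt"))
```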

1.4.3.2 Google AppEngine

Google App Engine (GAE) is a cloud computing service (belonging to the platform-as-a-
service (PaaS) category) for building and hosting web applications in Google's data
centers. GAE web applications are sandboxed and run across many redundant servers,
allowing resources to be scaled up according to current traffic requirements. App
Engine assigns additional resources to servers to handle increased load.

Google App Engine is Google's platform for developers and businesses to build and run
apps on Google's advanced infrastructure. These apps must be written in one of the few
supported languages, namely Java, Python, PHP and Go. GAE also requires the use of
Google's query language, with Google Bigtable as the database. Applications must
comply with these standards, so they must either be developed with GAE in mind or
modified to comply.

GAE is a platform for running and hosting web apps, whether for mobile devices or the
Web. Without this all-in-one function, developers would be responsible for building their
own servers, database software and APIs and making them all work together correctly.
GAE takes this burden off developers so that they can concentrate on the app's front end
and on features that enhance the user experience.

1.4.3.3 Microsoft Azure

Microsoft Azure is a platform as a service (PaaS) for developing and managing
applications using Microsoft's products and data centers. It is a complete suite of cloud
products that lets users build business-class applications without having to build their
own infrastructure.

Three cloud-centric products are available on the Azure cloud platform: Windows Azure,
SQL Azure and the Azure AppFabric controller. These provide the infrastructure hosting
facility for applications.

In Azure, a cloud service role is a set of managed, load-balanced, platform-as-a-service
virtual machines that work together to accomplish tasks. Cloud service roles are managed
by the Azure fabric controller and provide the ideal combination of scalability, control
and customization.

Web Role is an Azure cloud service role that is configured and adapted to run web
applications developed with programming languages and technologies supported by
Internet Information Services (IIS), such as ASP.NET, PHP, Windows Communication
Foundation and FastCGI.

Worker Role is any Azure role that runs applications and services that do not generally
require IIS; in worker roles, IIS is not enabled by default. They are mainly used to support
background processes for web roles and to perform tasks such as automatically
compressing uploaded images, running scripts when something changes in the database,
getting new messages from a queue and processing them, and more.

VM Role: the VM role is a type of Azure platform role that supports the automated
management of already-installed service packages, patches, updates and applications on
Windows Azure.

The principal difference is that a Web Role automatically deploys and hosts the
application via IIS, while a Worker Role does not use IIS and runs the application
standalone. If they are deployed and delivered through the Azure Service Platform, the
two can be managed similarly and can run on the same Azure instances.

In some cases, Web Role and Worker Role instances work together and are used
concurrently by an application. For example, a web role instance can accept requests
from users and then pass them to a worker role instance for processing.
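This hand-off between the two roles can be sketched with a simple queue. The task names are invented, and a real Azure application would use a durable queue service rather than an in-process one.

```python
import queue

# Sketch of the web-role/worker-role pattern: the web role accepts
# requests and enqueues them; the worker role drains the queue in the
# background (illustration only).
task_queue = queue.Queue()
results = []

def web_role_accept(request):
    """Web role: accept a user request and hand it off via the queue."""
    task_queue.put(request)

def worker_role_drain():
    """Worker role: process queued tasks, e.g. compressing uploaded images."""
    while not task_queue.empty():
        task = task_queue.get()
        results.append(f"processed:{task}")

web_role_accept("resize image-1.png")
web_role_accept("resize image-2.png")
worker_role_drain()
print(results)
```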

1.4.3.4 Hadoop

Apache Hadoop is an open-source software framework for the storage and large-scale
processing of data sets on clusters of commodity hardware. Hadoop is a top-level Apache
project built and maintained by a global community of contributors and users. It is
released under the Apache License 2.0.

A MapReduce job runs in two phases, Map and Reduce. Map tasks deal with splitting and
mapping the data, while Reduce tasks shuffle and reduce the data.
Hadoop can run MapReduce programs written in a variety of languages, such as Java,
Ruby, Python and C++. MapReduce programs are parallel in nature and thus very useful
for large-scale data analysis across multiple machines in a cluster.

The input to each phase is a set of key-value pairs. In addition, every programmer needs
to specify two functions: a map function and a reduce function.
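The two phases can be sketched in plain Python for the classic word-count example. A real Hadoop job would distribute the map and reduce tasks across a cluster rather than run them in a single process.

```python
from itertools import groupby
from operator import itemgetter

# Pure-Python sketch of the two MapReduce phases for word counting.

def map_phase(document):
    """Map: split the input and emit (key, value) pairs."""
    return [(word, 1) for word in document.split()]

def reduce_phase(pairs):
    """Reduce: group pairs by key (the shuffle step) and sum the values."""
    pairs.sort(key=itemgetter(0))
    return {key: sum(v for _, v in group)
            for key, group in groupby(pairs, key=itemgetter(0))}

counts = reduce_phase(map_phase("to be or not to be"))
print(counts)  # {'be': 2, 'not': 1, 'or': 1, 'to': 2}
```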

1.4.3.5 Force.com and Salesforce.com

To understand the difference between Salesforce.com and Force.com, the fundamental
concepts of cloud computing must be understood.

Salesforce is a company, and Salesforce.com is its customer relationship management
(CRM) application, built on the software-as-a-service (SaaS) model. The Force.com
platform helps developers and business users create successful business applications.

Salesforce.com is a SaaS product that includes out-of-the-box (OOB) features built into a
CRM system for sales automation, marketing, service automation and so on. Other SaaS
examples are Dropbox, Google Apps and GoToMeeting, which move software from your
computer to the cloud.



Force.com is a PaaS (platform-as-a-service) product: it provides a framework, including a
development environment, that allows you to build applications. Force.com lets you
customize the user interface, functionality and business logic.

Simply put, Salesforce.com is like an iPhone app that saves contacts, text messages, calls
and other standard functions, while Force.com is the platform on which such applications
are built and run. Salesforce.com runs on Force.com just as the iPhone dialer runs on
iPhone OS.

1.4.3.6 Manjrasoft Aneka

Manjrasoft Pvt. Ltd. is an organization that works on cloud computing technology,
developing software compatible with distributed networks across multiple servers. Its
aims are to:

• Create scalable, customizable building blocks essential to cloud computing platforms.
• Build software that accelerates applications designed for networked multi-core
computers.
• Provide quality of service (QoS) and service level agreement (SLA)-based solutions
that allow the scheduling, dispatching, pricing and accounting of application services
in private and/or public computing network environments.
• Enable the rapid development of legacy and new applications using innovative
parallel and distributed programming models.
• Enable organizations to use computing resources to speed up "compute"- or
"data"-intensive applications.

1.5 Summary
In this chapter we explored the goals, advantages and challenges associated with cloud
computing, which emerged as a consequence of the development and convergence of many
of its supporting models and technologies, especially distributed computing, Web 2.0,
virtualization, service orientation and utility computing. We examined various definitions,
meanings and interpretations of the concept. The component shared by all the different
views of cloud computing is the dynamic provisioning of IT services (whether virtual
infrastructure, runtime environments or application services) and the adoption of a
utility-based cost model to price such services. This approach is applied across the entire
computing stack and allows the dynamic provisioning of IT and runtime resources in the
form of cloud-hosted platforms for building scalable applications and their services. This
concept is captured by the cloud computing reference model, which identifies three major
segments of the cloud computing industry and its service offerings: Infrastructure-as-a-
Service (IaaS), Platform-as-a-Service (PaaS) and Software-as-a-Service (SaaS). These
segments directly map the broad categories of the various types of cloud computing
services.

1.6 Review questions

1. What is cloud computing’s innovative characteristic?


2. What are the technologies that are supported by cloud computing?
3. Provide a brief characterization of a distributed system.
4. Define cloud computing and Identify the main features of cloud computing.
5. What are the most important distributed technologies that have contributed to cloud
computing?
6. What is virtualization?
7. Explain the major revolution introduced by web 2.0
8. Give examples of applications for Web 2.0.
9. Describe the main features of the service orientation.
10. Briefly summarize the Cloud Computing Reference Model.
11. What is the major advantage of cloud computing?
12. Explain the different types of models in cloud computing.
13. Explain the three cloud services in cloud computing.
14. What are Web services? Explain the different types of web services.

