Chapter 1
Unit Structure
1.0 Objective
1.1 Introduction
1.2 Cloud computing at a glance
1.2.1 The vision of cloud computing
1.2.2 Defining a cloud
1.2.3 A closer look
1.2.4 The cloud computing reference model
1.2.5 Characteristics and benefits
1.2.6 Challenges ahead
1.3 Historical developments
1.3.1 Distributed systems
1.3.2 Virtualization
1.3.3 Service-oriented computing
1.3.4 Utility-oriented computing
1.4 Building cloud computing environments
1.4.1 Application development
1.4.2 Infrastructure and system development
1.4.3 Computing platforms and technologies
1.4.3.1 Amazon web services (AWS)
1.4.3.2 Google AppEngine
1.4.3.3 Microsoft Azure
1.4.3.4 Hadoop
1.4.3.5 Force.com and Salesforce.com
1.4.3.6 Manjrasoft Aneka
1.5 Summary
1.6 Review questions
1.7 Reference for further reading
1.0 Objective
This chapter will help you understand the following concepts:
• What is cloud computing?
• What are the characteristics and benefits of cloud computing?
• Its challenges.
• The historical development of the technologies behind the growth of cloud computing.
• Types of cloud computing models.
• The different types of services in cloud computing.
• Application development, and infrastructure and system development technologies, for cloud computing.
• An overview of different cloud service providers.
1.1 Introduction
Historically, computing power was a scarce and costly resource. Today, with the emergence of cloud
computing, it is plentiful and inexpensive, causing a profound paradigm shift: a transition from
scarcity to abundance in computing. This revolution accelerates the commoditization of products,
services and business models and disrupts the current information and communications technology
(ICT) industry. It supplies computing services in the same way that water, electricity, gas and
telephony are supplied. Cloud computing offers on-demand computing, storage, software and other IT
services with usage-based metered payment. It helps re-invent and transform technological
partnerships to improve marketing, simplify and strengthen security, and increase stakeholder
interest and the consumer experience, all while reducing costs. With cloud computing, you do not
have to over-provision resources to handle potential peak levels of business activity; instead, you
provision only the resources you actually require and scale them to expand or shrink capacity
instantly as business needs evolve. This chapter offers a brief overview of the cloud computing
trend by describing its vision, discussing its key features, and analyzing the technical advances
that made it possible. The chapter also introduces some key cloud computing technologies and gives
some insight into cloud computing environments.
The notion of computing in the "cloud" goes back to the beginnings of utility computing, a
term suggested publicly in 1961 by computer scientist John McCarthy:
“If computers of the kind I have advocated become the computers of the future, then
computing may someday be organized as a public utility just as the telephone system is a
public utility… The computer utility could become the basis of a new and important industry.”
The chief scientist of the Advanced Research Projects Agency Network (ARPANET),
Leonard Kleinrock, said in 1969:
“as of now, computer networks are still in their infancy, but as they grow up and become
sophisticated, we will probably see the spread of ‘computer utilities’ which, like present
electric and telephone utilities, will service individual homes and offices across the country.”
This vision of a computing utility has taken shape with the cloud computing industry of the 21st
century. Computing services are now available on demand, just as other utility services such as
water, electricity, telephone and gas are available in today's society. Likewise, users (consumers)
pay service providers only when they access the computing resources. Instead of maintaining their
own computing systems or data centers, customers can lease access to applications and storage from
cloud service providers. The advantage of using cloud computing services is that organizations
avoid the upfront cost and complexity of owning and managing their own IT infrastructure and pay
only for what they use. Cloud providers, in turn, benefit from significant economies of scale by
delivering the same services to a wide variety of customers.
In this scenario, consumers can access services according to their requirements without knowing
where those services are hosted. This model has been referred to as utility computing or, more
recently, cloud computing, because users access the infrastructure and applications as services,
as a "cloud", from anywhere in the world. Hence cloud computing can be defined as a new dynamic
provisioning model for computing services that improves the use of physical resources and data
centers, using virtualization and convergence to support multiple different systems operating on
server platforms simultaneously. The output achieved with different placement schemes of virtual
machines can differ considerably.
The rise of cloud computing can be tracked through advances in several technologies, especially in
hardware (virtualization, multi-core chips), Internet technologies (Web services, service-oriented
architectures, Web 2.0), distributed computing (clusters, grids), and autonomic computing
(automation of the data center). Figure 1.1 shows the convergence of the technology areas that have
evolved and led to the advent of cloud computing. Some of these technologies were regarded as mere
speculation at an early stage of development; later, however, they received considerable attention
from academia and were sanctioned by major industry players. A process of specification and
standardization followed, which resulted in their maturity and wide adoption. The rise of cloud
computing is closely associated with the maturity of these technologies.
FIGURE 1.1. Convergence of various advances leading to the advent of cloud computing
The vision of cloud computing is to provide hardware, runtime environments and services to users on
a pay-per-use basis. These items can be used for as long as the user needs them, with no upfront
commitment required. The whole collection of computing systems is transformed into a set of
utilities that can be provisioned and composed together in hours rather than days, and with no
maintenance costs. The long-term vision of cloud computing is that IT services will be traded as
utilities on an open market, without technological and legal barriers.
In the near future, we can expect to enter our application requirements into a global digital
market of cloud computing services and identify the solution that best satisfies our needs. Such a
market will make it possible to automate the discovery of services and their integration with
existing software systems. A digital cloud trading platform for services will also enable service
providers to increase their revenue. A cloud service may even become a customer of a competing
service in order to meet its own commitments to consumers.
Corporate and personal data will be accessible everywhere in structured formats, helping us access
and communicate at an even larger scale. The security and stability of cloud computing will
continue to improve, making it even safer through a wide variety of techniques. We will no longer
regard the "cloud" itself as the most relevant technology; instead we will concentrate on the
services and applications it enables. The combination of wearables and bring-your-own-device (BYOD)
with cloud technology and the Internet of Things (IoT) will become a common feature of personal and
working life, to the point where cloud technology is simply taken for granted as an enabler.
2. On-Demand Self-Service
This is one of the most important and useful features of cloud computing: the user can provision
and track server uptime, capacity and network storage on an ongoing basis, without requiring human
interaction with the service provider. The user can also monitor computing functionality through
this feature.
3. Easy Maintenance
The servers are easily maintained and downtime is low, with no downtime at all except in some
cases. Cloud computing delivers regular updates that continually improve the service; the updates
are more compatible with the systems involved and, with bugs patched, perform faster than older
versions.
4. Large Network Access
The user can employ any device with an Internet connection to access data in the cloud or upload
data to the cloud from anywhere. These capabilities are available across the network and are
accessed through the Internet.
5. Availability
The capabilities of the cloud can be modified and extended according to usage. This allows the
consumer to buy additional cloud storage, if necessary, for a very small price.
6. Automatic System
Cloud computing automatically analyzes the data required and supports a metering capability at some
level of service. Usage can be monitored, controlled and reported, providing transparency and
accountability for both the host and the customer.
7. Economical
It is a one-off investment, since the company (the host) buys the storage once and can make it
available to many companies, saving them from monthly or annual recurring costs. Apart from the
amount spent on basic maintenance, any additional costs are much smaller.
8. Security
Cloud security is one of the best features of cloud computing. It keeps a snapshot of the stored
data so that the data is not lost even if one of the servers is damaged. The information is held on
storage devices that no other person can hack into or use, and the storage service is fast and
reliable.
9. Pay as you go
In cloud computing, users only pay for the service or the space they actually use. There are no
hidden or additional charges. The service is economical, and a certain amount of space is often
allocated free of charge.
10. Measured Service
The cloud computing resources that a company uses are monitored and recorded. This resource usage
is analyzed with charge-per-use capabilities: resource consumption can be measured and reported by
the service provider, for example per virtual server instance running in the cloud. You pay
according to your actual consumption, as illustrated in the sketch below.
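The following short Python sketch illustrates the charge-per-use idea described above. The resource
names and hourly rates are invented for the example and do not come from any real provider's price
list.

# Hypothetical usage-based billing: resource names and rates are invented.
HOURLY_RATES = {
    "vm_small": 0.02,      # price per instance-hour
    "storage_gb": 0.0001,  # price per gigabyte-hour
}

def monthly_bill(usage_hours):
    """Sum metered usage (resource -> hours consumed) times the unit rate."""
    return sum(HOURLY_RATES[resource] * hours
               for resource, hours in usage_hours.items())

# One small VM running for a 730-hour month plus 50 GB stored all month.
print(round(monthly_bill({"vm_small": 730, "storage_gb": 50 * 730}), 2))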
8. Cloud Migration
While it is fairly simple to launch a new application in the cloud, transferring an existing
application to a cloud computing environment is harder. According to one survey report, 62% of
respondents said their cloud migration projects were harder than expected; 64% of migration
projects took longer than expected and 55% exceeded their budgets. In particular, organizations
that migrated their applications to the cloud reported downtime during migration (37%), problems
synchronizing data before cutover (40%), difficulty getting migration tooling to work well (40%),
slow migration of data (44%), security configuration issues (40%), and time-consuming
troubleshooting (47%). To solve these problems, close to 42% of the IT experts said they wanted a
bigger budget, around 45% wanted an in-house professional to work on the migration, 50% wanted a
longer project timeline, and 56% wanted more pre-migration testing.
9. Vendor lock-in
The problem of vendor lock-in in cloud computing is that clients become dependent (i.e. locked in)
on a single cloud provider's implementation and cannot switch to another vendor in the future
without significant costs, regulatory restrictions or technological incompatibilities. The lock-in
situation is evident in applications written for specific cloud platforms, such as Amazon EC2 or
Microsoft Azure, that are not easily moved to any other cloud platform, leaving their users
vulnerable to changes made by the provider; this is particularly clear from the perspective of a
software developer. In practice, the issue of lock-in arises when, for example, a company decides
to change cloud providers (or perhaps to integrate services from different providers) but cannot
move its applications or data across different cloud services, because the semantics of the
providers' resources and services do not correspond. This heterogeneity of cloud semantics and
cloud APIs creates technological incompatibility, which in turn leads to challenges in
interoperability and portability. It makes it very complicated and difficult to interoperate,
cooperate, port, handle and maintain data and services across providers. For these reasons, from
the company's point of view it is important to retain the flexibility to change providers according
to business needs, or even to keep in-house certain components that are less critical to safety,
because of the risks. The issue of vendor lock-in stands in the way of interoperability and
portability between cloud providers, and resolving it is the way for cloud providers and clients to
become more competitive.
10. Privacy and Legal issues
Apparently, the main problem regarding cloud privacy and data security is the "data breach". A data
breach can be generically defined as the loss of electronically stored personal information. A
breach can lead to a multitude of losses both for the provider and for the customer: identity
theft and debit/credit card fraud for the customer, loss of credibility and future prosecutions
for the provider, and so on. In the event of a data breach, American law requires notifying the
affected persons; nearly every state in the USA now requires that data breaches be reported to
those affected. Problems arise when data are subject to several jurisdictions whose data privacy
laws differ. For example, the European Union's Data Privacy Directive explicitly states that data
can leave the EU only if it goes to a country that provides an "adequate level of protection".
This rule, while simple in appearance, restricts the movement of data and thus reduces data
capacity, and the EU's regulations can be enforced.
1.3 Historical developments
Cloud computing is one of today's most significant technological breakthroughs. A brief history of
cloud computing follows.
EARLY 1960S
Computer scientist John McCarthy proposed the time-sharing concept, which allows an organization to
use an expensive mainframe simultaneously among many users. This idea is described as a major
contribution to the development of the Internet and as a forerunner of cloud computing.
IN 1969
J.C.R. Licklider, instrumental in the creation of the Advanced Research Projects Agency Network
(ARPANET), proposed the idea of an "Intergalactic Computer Network" or "Galactic Network" (a
computer networking concept similar to today's Internet). His vision was to connect everyone around
the world and provide access to programs and data from anywhere.
IN 1970
Usage of tools for virtualization, such as VMware. More than one operating system could be run
simultaneously in separate environments, so that a completely different computer (a virtual
machine) could operate inside a different operating system.
IN 1997
Prof. Ramnath Chellappa, in Dallas in 1997, gave what seems to be the first known definition of
"cloud computing": "a computing paradigm in which the boundaries of computing will be determined by
economic rationale rather than technical limits alone."
IN 1999
Salesforce.com was launched in 1999 as the pioneer of delivering client applications through a
simple website. The firm showed both specialist and mainstream software companies that applications
could be delivered over the Internet.
IN 2003
The first public release of Xen, a software system that enables multiple virtual guest operating
systems to run simultaneously on a single machine. Such a system is also known as a Virtual Machine
Monitor (VMM) or hypervisor.
IN 2006
The Amazon cloud service was launched in 2006. First, its Elastic Compute Cloud (EC2) allowed
people to access computers and run their own cloud applications on them. Simple Storage Service
(S3) was then released. This introduced the pay-as-you-go model, which has since become the
standard pricing model for both users and the industry as a whole.
IN 2013
The worldwide market for public cloud services reached a total of £78 billion, growing by 18.5%
over 2012, with IaaS one of the fastest-growing services on the market.
IN 2014
Global business spending on cloud-related technology and services was estimated at £103.8 billion
in 2014, up 20% from 2013 (Constellation Research).
1.3.1 Distributed systems
Distributed computing is a computing concept that, most of the time, refers to multiple computer
systems working on a single problem. In distributed computing, a single problem is broken down into
many parts, and each part is solved by a different computer. As long as the computers are
interconnected, they can communicate with each other to resolve the problem. If done properly, the
computers perform like a single entity.
The ultimate goal of distributed computing is to improve overall performance through cost-
effective, transparent and secure connections between users and IT resources. It also ensures fault
tolerance and provides access to resources in the event that one component fails. There is really
nothing new about distributing resources over a computer network: it began with the use of
mainframe terminals, moved on to minicomputers, and is now possible with personal computers and
multi-tier client-server architectures.
Mainframes: A mainframe is a powerful computer that often serves as the main data repository for an
organization's IT infrastructure. It is connected to users via less powerful devices such as
workstations or terminals. Centralizing data in a single mainframe repository makes it easier to
manage, update and protect the integrity of the data. Mainframes are generally used for large-scale
processes that require greater availability and security than smaller machines can offer. They are
primarily used by large organizations for essential applications: bulk data processing such as
census, industry and consumer statistics, enterprise resource planning, and transaction processing.
During the late 1950s, mainframes had only a basic interactive interface, using punched cards,
paper tape or magnetic tape to transfer data and programs. They worked in batch mode to support
back-office functions, such as payroll and customer billing, mostly based on repetitive tape
sorting and merging operations followed by line printing to pre-printed continuous stationery. When
interactive user interfaces were introduced, they were used almost solely to execute applications
(e.g. airline booking) rather than to develop software. Typewriter and Teletype machines were the
standard control consoles for network operators in the early 1970s, although they were largely
replaced by keyboard devices.
Cluster computing: The computer clustering approach typically connects a number of readily
available computing nodes (personal computers used as servers) via a fast local area network (LAN).
The activities of the computing nodes are coordinated by "clustering middleware", a software layer
that sits atop the nodes and enables users to treat the cluster as one large, cohesive computing
unit through a single system image concept. A cluster is a type of parallel or distributed computer
system consisting of a collection of interconnected, independent computers that work together as a
single, highly integrated computing resource, combining software and networking with the
independent machines. Clusters are usually deployed to provide greater computational power than a
single computer can offer, for high availability, greater reliability, or high-performance
computing. Compared with other technologies, the cluster technique is economical in terms of power
and processing speed, because it uses off-the-shelf hardware and software components, in contrast
to mainframe computers, which use custom-built proprietary hardware and software. The multiple
computers in a cluster work together to deliver unified processing and faster throughput. A cluster
can be upgraded to a higher specification or extended by adding additional nodes, unlike a
mainframe computer. Redundant machines that take over processing when a component fails
continuously minimize single points of failure; this kind of redundancy is absent in mainframe
systems.
PVM and MPI are the two methods most widely used for cluster communication.
PVM stands for Parallel Virtual Machine. It was developed at the Oak Ridge National Laboratory
around 1989. It is installed directly on each node and provides a set of libraries that turn the
node into a "parallel virtual machine". It offers a runtime environment for resource and task
management, error reporting and message passing. User programs written in C, C++ or Fortran may use
PVM.
MPI stands for Message Passing Interface. It was created in the 1990s and has largely superseded
PVM. Its design drew on various commercially available systems of the time. It is typically
implemented over TCP/IP and socket connections. It is currently the most widely used communication
system and allows parallel programming in C, Fortran, Python and other languages; a minimal Python
sketch follows.
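As a minimal illustration of message passing between two cluster nodes, the sketch below uses the
third-party mpi4py bindings (an assumption: the chapter mentions Python support for MPI but does
not name a specific library).

# Run with, for example:  mpiexec -n 2 python ping.py
from mpi4py import MPI

comm = MPI.COMM_WORLD        # communicator spanning all launched processes
rank = comm.Get_rank()       # this process's identifier within the communicator

if rank == 0:
    # Node 0 hands a sub-problem to node 1 and waits for the partial result.
    comm.send({"payload": [1, 2, 3]}, dest=1, tag=11)
    result = comm.recv(source=1, tag=22)
    print("node 0 received:", result)
elif rank == 1:
    # Node 1 does its share of the work and sends the answer back.
    data = comm.recv(source=0, tag=11)
    comm.send(sum(data["payload"]), dest=0, tag=22)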
Grid computing: A grid is similar to a computer cluster running on an operating system such as
Linux or other free software, using many loosely coupled nodes. A grid can vary in size from a
small network to multiple networks. The technology is applied to a broad variety of applications
through the use of several computing resources, such as mathematical, scientific or educational
tasks. It is often used in structural analysis as well as in web services such as ATM banking,
back-office infrastructure, and scientific or marketing research. Grid computing consists of
applications that run in a parallel networking environment to solve computational problems; it
connects each participating PC and combines the information into one computational application.
Grids draw on a range of resources based on different software and hardware structures, computer
languages and frameworks, whether within one network or through open standards with clear
guidelines, in order to achieve common goals and objectives.
CPU Scavenging Grids: A cycle-scavenging system moves projects from one PC to another as needed.
The search for extraterrestrial intelligence computation, involving more than 3 million computers,
is a familiar example of a CPU scavenging grid. The detection of radio signals in the Search for
Extra-Terrestrial Intelligence (SETI) is one of radio astronomy's most exciting applications. The
first SETI team used a radio astronomy dish in the late 1950s. A few years later, the privately
funded SETI Institute was established to perform further searches with several American radio
telescopes. Today, in cooperation with radio astronomy engineers and researchers from various
observatories and universities, the SETI Institute again conducts its own searches using private
funds. SETI's vast need for computing capacity led to a unique grid computing concept that has
since been extended to many other applications.
Grid computing is now used for biology, medicine, Earth sciences, physics, astronomy, chemistry and
mathematics. The Berkeley Open Infrastructure for Network Computing (BOINC) is free, open-source
software for volunteer desktop grid computing. Using the BOINC platform, users can divide their
spare computing time between several grid computing projects and decide what percentage of CPU time
to give each one.
1.3.2 Virtualization
Virtualization is a process that makes the use of physical computer hardware more effective and
forms the basis of cloud computing. Virtualization uses software to create a layer of abstraction
over the computer hardware, enabling the hardware elements of a single computer (processors,
memory, storage and more) to be divided among multiple virtual computers, usually referred to as
virtual machines (VMs). Each VM runs its own operating system and behaves like an independent
computer, even though it is running on only a portion of the underlying hardware.
Virtualization therefore allows a much more effective use of physical computer hardware, giving a
larger return on an organization's hardware investment.
Virtualization involves creating a virtual version of something, including virtual computer
hardware, virtual storage devices and virtual computer networks.
Software called a hypervisor is used for hardware virtualization. With the help of a hypervisor, a
virtual machine software layer is incorporated on top of the server hardware. The role of the
hypervisor is to control the physical hardware that is shared between the guests and the host.
Hardware virtualization can be performed using a Virtual Machine Monitor (VMM) to abstract away the
physical hardware. Several processor extensions exist that help speed up virtualization activities
and increase hypervisor performance. When this virtualization is done for a server platform, it is
called server virtualization.
The hypervisor creates an abstraction layer between the software and the hardware in use. Once a
hypervisor is installed, workloads see virtual representations of the hardware, such as virtual
processors, rather than using the physical processors directly. Popular hypervisors include
ESXi-based VMware vSphere and Hyper-V.
Virtual machine instances are typically represented by one or more files, which can easily be
transported across physical systems. They are also self-contained, since they have no dependencies
for their use other than the virtual machine manager.
A process virtual machine, sometimes called an application virtual machine, runs inside a host OS
as an ordinary application and supports a single process. It is created when the process starts and
destroyed when it ends. Its aim is to provide a platform-independent programming environment that
abstracts away the details of the underlying hardware or operating system and allows a program to
run in the same way on any platform. For example, the Wine software on Linux helps you run Windows
applications.
A process VM provides a high-level abstraction, that of a high-level programming language (compared
with the low-level ISA abstraction of a system VM). Process VMs are implemented by means of an
interpreter; performance comparable to compiled programming languages is achieved through
just-in-time compilation.
This form of VM became popular with the Java programming language, which is executed by the Java
virtual machine. The .NET Framework, which runs on a VM called the Common Language Runtime, is
another example. A small illustration follows.
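CPython itself is an everyday example of a process VM: source code is compiled to platform-
independent bytecode that the interpreter executes, much as Java bytecode runs on the JVM or CIL on
the CLR. The small sketch below (an illustration, not something from the chapter) uses the
standard-library dis module to make that bytecode visible.

import dis

def add(a, b):
    return a + b

# Print the bytecode instructions the Python virtual machine will execute;
# the same instructions run unchanged wherever a CPython interpreter is available.
dis.dis(add)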
Web 2.0
Web 2.0 is the term used to describe a range of websites and applications that allow anyone to
create and share information or material they have produced online. A key feature of the technology
is that it lets people create, share and communicate. Web 2.0 differs from other kinds of websites
because it does not require any web design or publishing skills to participate, making it easy for
people to create, publish and communicate their work to the world. The design makes it simple and
popular to share knowledge, whether with a small community or a much wider audience. A university
may use these tools to communicate with students, staff and the wider university community; they
can also be a good way for students and colleagues to communicate and collaborate.
Web 2.0 represents the evolution of the World Wide Web towards web applications that enable
interactive data sharing, user-centered design and worldwide collaboration. Web 2.0 is a collective
term for Web-based technologies that include blogs and wikis, online networking platforms,
podcasting, social networks, social bookmarking websites and Really Simple Syndication (RSS) feeds.
The main idea behind Web 2.0 is to enhance the connectivity of Web applications and enable users to
access the Web easily and efficiently. Cloud computing services are essentially Web applications
that deliver computing services over the Internet on demand. As a consequence, cloud computing
adopts the Web 2.0 approach: it is considered to provide key Web 2.0 infrastructure, and it both
facilitates and is improved by the Web 2.0 framework. Beneath Web 2.0 lies a set of web
technologies that have recently appeared or moved to a new stage of maturity, namely Rich Internet
Applications (RIAs). Among them, the Web's most prominent technology and quasi-standard is AJAX
(Asynchronous JavaScript and XML); other technologies include RSS (Really Simple Syndication),
widgets (plug-in modular components) and Web services (e.g. SOAP, REST). A small RSS parsing sketch
follows.
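As a small, self-contained illustration of one of the Web 2.0 building blocks just listed, RSS, the
sketch below parses a feed with Python's standard library; the feed content is an inline sample
rather than a real URL.

import xml.etree.ElementTree as ET

SAMPLE_RSS = """<rss version="2.0"><channel>
  <title>Example blog</title>
  <item><title>First post</title><link>http://example.com/1</link></item>
  <item><title>Second post</title><link>http://example.com/2</link></item>
</channel></rss>"""

root = ET.fromstring(SAMPLE_RSS)
for item in root.iter("item"):          # each <item> element is one feed entry
    print(item.findtext("title"), "->", item.findtext("link"))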
1.3.3 Service-oriented computing
Service-oriented computing (SOC) is the computing paradigm that uses services as the fundamental
building blocks of applications and solutions. Services are self-describing, platform-independent
components that enable the easy and cost-effective composition of distributed applications.
Services perform functions ranging from simple requests to complex business processes. They permit
organizations to expose their core competencies programmatically over the Internet or an intranet
using common XML-based languages and protocols, and to invoke them via an open-standard,
self-describing interface.
Because services provide a uniform and ubiquitous means of distributing information to a wide
variety of computing devices (e.g. handheld computers, PDAs, cell phones or appliances) and
software platforms (e.g. UNIX and Windows), they are the next major step in distributed computing
technology. Services are supplied by service providers, the organizations that implement the
service, publish its description, and supply the related technical and business support. Since
different services may be offered by different companies and communicated over the Internet, they
provide a distributed infrastructure for intra- and cross-company application integration and
collaboration. Service clients can be other companies' or end users' applications, whether these
are external applications, processes or customers/users.
Web-service interactions use the Web Service Description Language (WSDL) as the common (XML)
standard for describing services, and the Simple Object Access Protocol (SOAP), which carries XML
data, for calling them. WSDL is used for publishing web services, describing port types (the
abstract description of operations and the messages they exchange) and the binding of ports to
addresses (the concrete specification of which packaging and transport protocols, for instance
SOAP, are used to link two conversational end-points). The UDDI standard defines a directory
service that holds service publications and enables customers to find and learn about candidate
services. A hedged Python sketch of calling such a service follows.
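The sketch below shows, in hedged form, how such a WSDL-described service might be consumed from
Python using the third-party zeep library; the library choice, the WSDL URL and the operation name
are assumptions made purely for illustration.

from zeep import Client

# zeep downloads and parses the WSDL document (port types, messages, bindings)
# and exposes the advertised operations as callable proxies.
client = Client("http://example.com/weather?wsdl")    # hypothetical endpoint

# The call is packaged as a SOAP envelope (XML) and sent over HTTP.
response = client.service.GetForecast(city="Mumbai")  # hypothetical operation
print(response)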
The application service provider (ASP) retains responsibility for managing the application in its
own infrastructure, using the Internet as the connection between each customer and the core
software application, which is hosted centrally. For an organization, this means the ASP maintains
and guarantees that the program and the data are accessible whenever needed, along with the related
infrastructure and the customer's data.
While the ASP model first introduced the software-as-a-service concept, it could not deliver
complete, customizable applications, owing to numerous inherent constraints such as its inability
to support highly interactive applications. The result was monolithic architectures and highly
fragile integration of applications, based on tight-coupling principles, within customer-specific
architectures.
Today we are in the middle of another significant development: the evolution of software-as-a-
service towards an architecture of asynchronous, loosely coupled interactions based on XML
standards, with the intention of making it easier for applications to be accessed and to
communicate over the Internet. The SOC model extends the software-as-a-service idea to the
provision of complicated business processes and transactions as services, and allows applications
to be composed on the fly and services to be reused everywhere and by everybody. Many ASPs are
consequently moving towards digital infrastructures and business models similar to those of cloud
service providers, to gain the relative advantages of Internet technology.
Web services have both functional and non-functional attributes. The non-functional attributes are
collectively known as quality of service (QoS). Following the quality definition of ISO 8402, QoS
is a set of non-functional characteristics of entities, relevant when moving from a web service
repository to consumers, that reflect the ability of a web service to fulfill its stated or implied
needs in an end-to-end way. Examples of QoS attributes include performance, reliability, security,
accessibility, usability, discoverability, adaptability and composability. A QoS requirement
between clients and providers is established through an SLA that identifies the minimum (or the
acceptable range of) values for the QoS attributes to be met when the service is invoked, as in the
small sketch below.
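A minimal sketch of that idea follows; the attribute names and threshold values are invented for
illustration, not taken from any real SLA.

SLA = {
    "max_response_time_ms": 300,  # performance threshold
    "min_availability": 0.999,    # availability threshold
}

def sla_violations(measured):
    """Return the names of the QoS attributes that fail the agreed thresholds."""
    violations = []
    if measured["response_time_ms"] > SLA["max_response_time_ms"]:
        violations.append("response time")
    if measured["availability"] < SLA["min_availability"]:
        violations.append("availability")
    return violations

print(sla_violations({"response_time_ms": 450, "availability": 0.9995}))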
SOA benefits
SOA services enable business agility: by composing existing services, developers can build
applications quickly.
The services are distinct entities and can be invoked at run time without knowledge of the
underlying platform or programming language.
The services follow a series of standards, such as the Web Services Description Language (WSDL),
Representational State Transfer (REST) and the Simple Object Access Protocol (SOAP), which
facilitate their integration with both existing and new applications.
Security through quality of service (QoS). Elements of QoS include authentication and
authorization, reliable and consistent messaging, permission policies, and so on.
Service components are independent of one another.
One of the challenges for SOA today is handling requests to improve or change the services provided
by SOA service providers.
Some see cloud computing as a descendant of SOA. This is not entirely wrong, as the principles of
service orientation apply to both cloud computing and SOA. The following illustration shows how
cloud computing services overlap with SOA.
It is very important to realize that while cloud computing overlaps with SOA, the two focus on
different kinds of projects. SOA implementations are used primarily to exchange information between
systems and networks of systems, whereas cloud computing aims to leverage the network across the
whole range of IT functions.
SOA is not a substitute for cloud computing; rather, they are complementary activities. Providers
need a very good service-oriented architecture to be able to deliver cloud services effectively.
SOA and cloud computing share many features, yet they are not the same and can coexist. SOA appears
to have matured in its requirements for delivering digital services, while cloud computing and its
services are newer, as are the many vendors offering public, community, hybrid and private clouds;
both continue to grow.
1.3.4 Utility-oriented computing
The term utility refers to the services offered by a utility provider, such as electricity,
telephone, water and gas. In utility computing, analogous to electricity or telephony, the
computing power a consumer receives over a shared computer network is measured and paid for on a
usage basis.
Utility computing usually relies on some form of virtualization, so that the total volume of web
storage and computing capacity available to customers is much greater than that of a single
computer. Several networked back-end servers are often used to make this kind of web service
possible. Dedicated web servers, in purpose-built and leased cluster configurations, can be used
for end users, and distributed computing is the approach used to spread a single "calculation"
across multiple web servers.
Even though definitions of utility computing vary, they usually include the following five
characteristics.
Scalability
Utility computing must ensure that adequate IT resources are available under all conditions:
increased demand for a service must not cause its quality (e.g. response time) to suffer.
Price of demand
Until now, companies had to purchase their own computing power, both hardware and software, and pay
for this IT infrastructure in advance, irrespective of its future use. Utility providers break this
link: for instance, the leasing rate for servers can depend on how many CPUs the client has
actually enabled. If the computing capacity actually used by individual divisions of a company can
be measured, IT costs can be attributed directly to each individual unit at internal cost; other
ways of linking IT costs to usage are also possible.
Virtualization technologies can be used to share web and other resources in a common pool of
machines. The network is divided into logical rather than physical resources: an application is
assigned not to a predetermined server or storage device, but to any free server or storage from
the pool.
Automation
Repetitive management activities, such as setting up new servers or installing updates, can be
automated. In addition, the allocation of resources to services and the optimization of IT service
management must be considered, along with service level agreements and the operating costs of IT
resources.
Utility computing lowers IT costs while keeping resources flexible. Expenses become transparent and
can be allocated directly to the different departments of an organization, and fewer people are
required for operational activities in the IT department. Companies also become more flexible,
because their IT resources can be adapted to fluctuating demand more quickly and easily. All in
all, the entire IT landscape is simpler to manage, because applications no longer depend on a
dedicated IT infrastructure.
1.4 Building cloud computing environments
1.4.1 Application development
Cloud computing provides a powerful computing model that allows users to consume applications on
demand. One of the classes of applications that benefits most from this feature is that of Web
applications. Their performance is influenced by the broad range of workloads that specific user
demands can generate across the various cloud services they use. Several factors have facilitated
the rapid diffusion of Web 2.0. First, Web 2.0 builds on a variety of technological developments
and advances that allow users to easily create rich and complex applications, including enterprise
applications, leveraging the Internet as the main utility and user-interaction platform. Such
applications are characterized by complex processes triggered by user interactions and by
interactions between multiple steps behind the Web front-end. These are the applications most
sensitive to inappropriate sizing of infrastructure and service deployment, and to workload
variability.
1.4.3 Computing platforms and technologies
Cloud application development involves leveraging platforms and frameworks that provide different
kinds of services, from bare-metal infrastructure to customized applications serving specific
purposes.
1.4.3.1 Amazon Web Services (AWS)
Amazon Web Services (AWS) is a cloud computing platform offering functionality such as database
storage, content delivery, and secure IT infrastructure for companies, among others. It is best
known for its on-demand services, namely Elastic Compute Cloud (EC2) and Simple Storage Service
(S3). Amazon EC2 and Amazon S3 are essential tools to understand if you want to make the most of
the AWS cloud.
Amazon EC2, short for Elastic Compute Cloud, is a service for running cloud servers. Amazon
launched EC2 in 2006; it allowed companies to spin up servers in the cloud rapidly and easily,
instead of having to buy, set up, and manage their own servers on premises.
Although bare-metal EC2 instances are also available, most Amazon EC2 server instances are virtual
machines hosted on Amazon's infrastructure. The servers are operated by the cloud provider, and you
do not need to set up or maintain the hardware. A vast number of EC2 instance types are available
at different prices; generally speaking, the more computing capacity you use, the larger the EC2
instance you need. (Bare-metal cloud instances let you host a workload on a physical machine rather
than a virtual machine.) Certain Amazon EC2 instance types are optimized for particular kinds of
applications, such as GPU instances for the parallel processing of big data workloads.
Beyond making server deployment simpler and quicker, EC2 offers features such as auto-scaling,
which automates the process of increasing or decreasing the compute resources available to a given
workload. Auto-scaling thus helps to optimize costs and efficiency, especially for workloads with
significant variations in volume. A hedged sketch of launching an instance programmatically
follows.
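The sketch below launches a single EC2 instance using the boto3 SDK for Python; the AMI identifier
and instance type are placeholders, and valid AWS credentials and a region must already be
configured for the call to succeed.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder machine image id
    InstanceType="t3.micro",          # a small, inexpensive instance class
    MinCount=1,
    MaxCount=1,
)
print(response["Instances"][0]["InstanceId"])   # id of the newly started server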
Amazon S3 (its full name is Simple Storage Service) is a storage service operating in the AWS
cloud. It enables users to store virtually any form of data in the cloud and access it over a web
interface, the AWS Command Line Interface, or the AWS API. To use S3, you create what Amazon calls
a "bucket", a container object through which you store and retrieve data; you can set up as many
buckets as you like. Amazon S3 is an object storage system that works especially well for massive,
unstructured or highly dynamic data, as in the sketch below.
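A corresponding boto3 sketch of the S3 workflow, creating a bucket and then storing and retrieving
an object through it, is given below; the bucket and object names are placeholders and credentials
are again assumed to be configured.

import boto3

s3 = boto3.client("s3")
bucket = "my-example-bucket-12345"     # bucket names must be globally unique

s3.create_bucket(Bucket=bucket)                                # the "bucket" that holds objects
s3.upload_file("report.csv", bucket, "reports/report.csv")     # store an object
s3.download_file(bucket, "reports/report.csv", "copy.csv")     # retrieve it again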
1.4.3.2 Google AppEngine
Google App Engine (GAE) is a cloud computing service (belonging to the platform-as-a-service (PaaS)
category) for building and hosting web applications in Google's data centers. GAE web applications
are sandboxed and run across many redundant servers, so that resources can be scaled up according
to current traffic requirements; App Engine assigns additional resources to servers to handle
increased load.
Google App Engine is Google's platform for developers and businesses to create and run applications
on Google's advanced infrastructure. Applications must be written in one of the few supported
languages, namely Java, Python, PHP or Go, and may also require the use of Google's query language,
with Google Bigtable as the underlying datastore. Applications must comply with these standards, so
they must either be developed with GAE in mind or modified to comply.
GAE is a platform for running and hosting Web apps, whether accessed from mobile devices or from
the Web. Without this all-in-one capability, developers would have to build their own servers,
database software and APIs and make them all work together correctly. GAE takes this burden off
developers so that they can concentrate on the app's front end and features and enhance the user
experience. A minimal sketch of a GAE application appears below.
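Below is a minimal sketch of an App Engine (standard environment) application in Python. The use of
the Flask micro-framework is an assumption for illustration; the chapter only lists Python among
the supported languages. An accompanying app.yaml file containing a line such as runtime: python39
tells GAE how to run the app, and gcloud app deploy uploads it to Google's infrastructure.

from flask import Flask

app = Flask(__name__)

@app.route("/")
def hello():
    # App Engine routes incoming web requests to this handler.
    return "Hello from Google App Engine"

if __name__ == "__main__":
    # Local testing only; in production App Engine serves the app for you.
    app.run(host="127.0.0.1", port=8080)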
1.4.3.3 Microsoft Azure
Microsoft Azure is a platform as a service (PaaS) for developing and managing applications using
Microsoft's products and data centers. It is a complete suite of cloud products that allows users
to build business-class applications without having to build their own infrastructure.
Three cloud-centric products are available on the Azure cloud platform: Windows Azure, SQL Azure
and the Azure AppFabric controller. These provide the hosting infrastructure for applications.
In Azure, a cloud service role is a set of managed, load-balanced, platform-as-a-service virtual
machines that work together to accomplish particular tasks. Cloud service roles are controlled by
the Azure fabric controller and provide an effective combination of scalability, control and
customization.
Web Role is an Azure cloud service role that is configured and customized to run web applications
developed with programming languages and technologies supported by Internet Information Services
(IIS), such as ASP.NET, PHP, Windows Communication Foundation and FastCGI.
Worker Role is any Azure role that runs applications and services that generally do not require
IIS; IIS is not enabled by default in worker roles. Worker roles are mainly used to support
background processing for web applications and to perform tasks such as automatically compressing
uploaded images, running scripts when something changes in the database, fetching new messages from
a queue and processing them, and more.
VM Role: The VM role is a type of Azure platform role that supports automated management of
already-installed service packages, fixes, updates and applications for Windows Azure.
A web role automatically deploys and hosts the application via IIS, while a worker role does not
use IIS and runs the program independently. The two can be managed similarly and can run on the
same Azure instances if they are deployed and provisioned through the Azure Service Platform.
In many cases, web role and worker role instances work together and are used concurrently by an
application. For example, a web role instance can accept requests from users and then pass them on
to a worker role instance for processing against a database.
1.4.3.4 Hadoop
Apache Hadoop is an open-source software framework for the storage and large-scale processing of
data sets on clusters of commodity hardware. Hadoop is a top-level Apache project built and
maintained by a global community of contributors and users, released under the Apache License 2.0.
A MapReduce job has two phases, Map and Reduce. Map tasks deal with splitting and mapping the data,
while Reduce tasks shuffle and reduce the data. Hadoop can run MapReduce programs written in a
variety of languages, such as Java, Ruby, Python and C++. MapReduce programs are parallel in nature
and thus very useful for large-scale data analysis across multiple cluster machines. The input to
each phase is a set of key-value pairs, and the programmer needs to specify two functions: a map
function and a reduce function, as in the minimal sketch below.
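The following framework-free Python sketch mimics the two phases: the map function emits key-value
pairs, a shuffle step groups values by key, and the reduce function combines each group. Hadoop
would execute these steps in parallel across a cluster; here they run sequentially only to show the
contract.

from collections import defaultdict

def map_phase(line):
    for word in line.split():
        yield (word.lower(), 1)        # emit one (key, value) pair per word

def reduce_phase(key, values):
    return key, sum(values)            # combine all counts for a single word

lines = ["the cloud is elastic", "the cloud is metered"]

# "Shuffle": group every emitted value under its key, as Hadoop does between phases.
groups = defaultdict(list)
for line in lines:
    for key, value in map_phase(line):
        groups[key].append(value)

print(dict(reduce_phase(k, v) for k, v in groups.items()))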
1.4.3.5 Force.com and Salesforce.com
Salesforce is a SaaS product that includes out-of-the-box (OOB) features built into a CRM system
for sales automation, marketing, service automation and more. Other SaaS examples are Dropbox,
Google Apps and GoToMeeting, which all move software from your computer to the cloud.
Simply put, Salesforce.com provides the finished functionality, much as the iPhone's standard
applications provide contacts, text messages and calls, whereas Force.com is the platform on which
such applications are constructed and run. Salesforce.com runs on Force.com in the same way that
the iPhone dialer runs on the iPhone OS.
1.4.3.6 Manjrasoft Aneka
Manjrasoft Pvt. Ltd. is an organization that works on cloud computing technology, developing
software (Aneka) that is compatible with distributed networks spanning multiple servers.
1.5 Summary
In this chapter, we explored the goals, advantages and challenges associated with cloud computing.
Cloud computing has emerged as a consequence of the development and convergence of many of its
supporting models and technologies, especially distributed computing, Web 2.0, virtualization,
service orientation and utility computing. We examined various definitions, meanings and
implementations of the concept. The component shared by all the different views of cloud computing
is the dynamic provisioning of IT services (whether virtual infrastructure, runtime environments or
application services) and the adoption of a utility-based cost model to price these services. This
approach is applied across the entire computing stack and enables the dynamic provisioning of IT
infrastructure and runtime resources in the form of cloud-hosted platforms for building scalable
applications and their services. The cloud computing reference model captures this concept. It
defines three major segments of the cloud computing industry and the services on offer:
Infrastructure-as-a-Service (IaaS), Platform-as-a-Service (PaaS), and Software-as-a-Service (SaaS).
These segments map directly to the broad categories of the various types of cloud computing
services.