
CS8791 Cloud Computing Unit I


KGiSL INSTITUTE OF TECHNOLOGY

CS8791 CLOUD COMPUTING


(2017 R)
YEAR/SEM: IV YEAR-VII SEM

Presented by,
Mr. S. JEEVANANDHAM,
Assistant Professor/IT
OBJECTIVES:
• To understand the concept of cloud computing.
• To appreciate the evolution of the cloud from existing technologies.
• To have knowledge of the various issues in cloud computing.
• To be familiar with the lead players in the cloud.
• To appreciate the emergence of the cloud as the next-generation computing paradigm.
UNIT I - INTRODUCTION

Introduction to Cloud Computing – Definition of Cloud – Evolution of Cloud Computing – Underlying Principles of Parallel and Distributed Computing – Cloud Characteristics – Elasticity in Cloud – On-demand Provisioning.
1.1 Introduction to Cloud Computing

• What Is the Cloud?
• The term cloud has historically been used as a metaphor for the Internet.
• This usage was originally derived from its common depiction in network diagrams as an outline of a cloud, used to represent the transport of data across carrier backbones (which owned the cloud) to an endpoint location on the other side of the cloud.
• The concept dates back as early as 1961, when Professor John McCarthy suggested that computer time-sharing technology might lead to a future in which computing power, and even specific applications, might be sold through a utility-type business model.
Cloud Computing Vision
Introduction to Cloud Computing
• This idea became very popular in the late 1960s, but by the mid-1970s it had faded away, as it became clear that the IT-related technologies of the day were unable to sustain such a futuristic computing model.
• Since the turn of the millennium, however, the concept has been revitalized, and it was during this revitalization that the term cloud computing came into common use.
Cloud Computing
• Definition:
• “A cloud is a pool of virtualized computer resources. A cloud can
host a variety of different workloads, including batch-style
backend jobs and interactive and user-facing applications”
• The cloud allows workloads to be deployed and scaled out quickly
through rapid provisioning of virtual or physical machines.
• The cloud supports redundant, self-recovering, highly scalable
programming models that allow workloads to recover from many
unavoidable hardware/software failures.
• Finally, the cloud system should be able to monitor resource use in
real time to enable rebalancing of allocations when needed.
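To make the monitoring idea concrete, here is a minimal sketch in Python; the node names, the 0.75 utilization threshold, and the needs_rebalance helper are illustrative assumptions, not part of any cloud platform's API.

from statistics import mean

def needs_rebalance(utilization, threshold=0.75):
    # True when some node is both above the threshold and well above the pool average
    avg = mean(utilization.values())
    return any(u > threshold and u > 1.5 * avg for u in utilization.values())

# Example readings: fraction of CPU in use per (hypothetical) node
readings = {"node-1": 0.92, "node-2": 0.31, "node-3": 0.28}
if needs_rebalance(readings):
    print("rebalance: shift work off the hottest node")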
Cloud Computing Technologies, Concepts
and Ideas
The Emergence of Cloud Computing

• Utility computing can be defined as the provision of computational and storage resources as a metered service, similar to those provided by a traditional public utility company.
• Companies have begun to extend the model to a cloud computing paradigm providing virtual servers that IT departments and users can access on demand.
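As a rough illustration of the metered-service model, a minimal Python sketch with invented rates; the prices and the monthly_bill helper are assumptions, not any provider's actual tariff.

RATE_PER_CPU_HOUR = 0.05    # hypothetical price, dollars per CPU-hour
RATE_PER_GB_MONTH = 0.02    # hypothetical price, dollars per GB-month of storage

def monthly_bill(cpu_hours, storage_gb):
    # Pay only for what was actually consumed, like a metered utility
    return cpu_hours * RATE_PER_CPU_HOUR + storage_gb * RATE_PER_GB_MONTH

print(monthly_bill(cpu_hours=720, storage_gb=100))   # one small server for a month plus 100 GB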
The Global Nature of the Cloud

• Internet Clouds
• Cloud computing applies a virtualized platform
with elastic resources on demand by provisioning
hardware, software, and data sets dynamically.
• The idea is to move desktop computing to a
service-oriented platform using server clusters
and huge databases at data centers.
• Cloud computing leverages its low cost and
simplicity to benefit both users and providers.
The Cloud Landscape

• Traditional systems have encountered several performance bottlenecks: constant system maintenance, poor utilization, and the increasing cost associated with hardware/software upgrades.
• Cloud computing, as an on-demand computing paradigm, resolves or relieves us from these problems.
CLOUD SERVICE MODELS:
• Infrastructure as a Service (IaaS): This model puts together the infrastructure demanded by users, namely servers, storage, networks, and the data center fabric.
• Platform as a Service (PaaS): This model enables the user to deploy user-built applications onto a virtualized cloud platform. PaaS includes middleware, databases, development tools, and some runtime support such as Web 2.0 and Java.
• Software as a Service (SaaS): The SaaS model applies to business processes, industry applications, customer relationship management (CRM), enterprise resource planning (ERP), human resources (HR), and collaborative applications.
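To make the division of responsibility across the three models concrete, a small illustrative sketch in Python; the request fields below are hypothetical and do not correspond to any real provider's API.

# What the user specifies at each service level; field names are illustrative only.
iaas_request = {   # IaaS: the user asks for raw infrastructure
    "vcpus": 4, "memory_gb": 16, "disk_gb": 200, "image": "ubuntu-22.04",
}
paas_request = {   # PaaS: the user supplies an application; the platform runs it
    "runtime": "java17", "artifact": "shop.war", "instances": 2,
}
saas_request = {   # SaaS: the user simply subscribes to a finished application
    "product": "crm", "seats": 25,
}
for name, request in (("IaaS", iaas_request), ("PaaS", paas_request), ("SaaS", saas_request)):
    print(name, "->", request)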
CLOUD SERVICE REFERENCE MODELS:
CLOUD SERVICE MODELS
• Cloud computing is a general term used to describe a new class of network-based computing that takes place over the Internet; it is basically a step on from utility computing.
• A collection/group of integrated and networked hardware, software, and Internet infrastructure is called a platform.
1.2 Evolution of Cloud Computing
• Underlying Principles of Parallel and Distributed Computing:
• Distributed computing is a field of computer science that studies distributed systems. A distributed system is a model in which components located on networked computers communicate and coordinate their actions by passing messages.
• The components interact with each other in order to achieve
a common goal.
• Three significant characteristics of distributed systems are:
concurrency of components, lack of a global clock, and
independent failure of components.
Distributed Computing:
• Examples of distributed systems vary from SOA-
based systems to massively multiplayer online games
to peer-to-peer applications.
• A computer program that runs in a distributed
system is called a distributed program, and
distributed programming is the process of writing
such programs.
• There are many alternatives for the message passing
mechanism, including pure HTTP, RPC-like connectors
and message queues.
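A minimal message-passing sketch in Python, using a queue between two processes as the transport; this is just one of the mechanisms listed above, and a real system might use HTTP or an RPC-style connector instead.

from multiprocessing import Process, Queue

def worker(inbox, outbox):
    message = inbox.get()               # receive a message from the coordinator
    outbox.put("echo: " + message)      # reply with another message

if __name__ == "__main__":
    inbox, outbox = Queue(), Queue()
    node = Process(target=worker, args=(inbox, outbox))
    node.start()
    inbox.put("hello from the coordinator")
    print(outbox.get())                 # -> echo: hello from the coordinator
    node.join()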
HISTORY
• The use of concurrent processes that communicate by message-passing has its
roots in operating system architectures studied in the 1960s.
• The first widespread distributed systems were local area networks such as
Ethernet, which was invented in the 1970s.
• ARPANET (Advanced Research Projects Agency Network), the predecessor of the
Internet, was introduced in the late 1960s, and ARPANET email was invented in
the early 1970s. E-mail became the most successful application of ARPANET, and
it is probably the earliest example of a large-scale distributed application.
• In addition to ARPANET, and its successor, the Internet, other early worldwide
computer networks included Usenet and FidoNet from the 1980s, both of which
were used to support distributed discussion systems.
• The study of distributed computing became its own branch of computer science
in the late 1970s and early 1980s.
• The first conference in the field, Symposium on Principles of Distributed
Computing (PODC), dates back to 1982, and its European counterpart
International Symposium on Distributed Computing (DISC) was first held in 1985.
SCALABLE COMPUTING OVER THE INTERNET

• Over the past 60 years, computing technology has undergone a series of platform and environment changes.
• Evolutionary changes in machines include architecture, operating system, platform, network connectivity, and application workload.
• Instead of using a centralized computer to solve computational problems, a parallel and distributed computing system uses multiple computers to solve large-scale problems over the Internet. Thus, distributed computing becomes data-intensive and network-centric.
• 2.1 The Age of Internet Computing
• Billions of people use the Internet every day.
• Supercomputer sites and large data centers must provide high-performance computing services to
huge numbers of Internet users concurrently.
• The Linpack Benchmark for high-performance computing (HPC) applications is no longer optimal for
measuring system performance.
• The emergence of computing clouds instead demands high-throughput computing (HTC) systems
built with parallel and distributed computing technologies.
• The purpose is to advance network-based computing and web services with the emerging new
technologies.
• The Platform Evolution
• Computer technology has gone through five generations of development, with each generation
lasting from 10 to 20 years.
• 1950 to 1970: mainframes, e.g., the IBM 360 and CDC 6400.
• 1960 to 1980: lower-cost minicomputers, e.g., the DEC PDP 11 and VAX series.
• 1970 to 1990: personal computers built with VLSI microprocessors.
• 1980 to 2000: portable computers and pervasive devices.
• 1990 to the present: HPC and HTC systems in the form of clusters, grids, and Internet clouds.
High-Performance Computing
• For many years, HPC systems emphasized raw speed performance.
• The speed of HPC systems increased from Gflops in the early 1990s to Pflops by 2010.
• This improvement was driven mainly by the demands from scientific,
engineering, and manufacturing communities.
High-Throughput Computing
– The development of market-oriented high-end computing systems is
undergoing a strategic change from an HPC paradigm to an HTC paradigm.
– The main application for high-flux computing is in Internet searches and
web services by millions or more users simultaneously.
– The performance goal thus shifts to measuring high throughput, that is, the number of tasks completed per unit of time (a minimal measurement is sketched after this list).
– HTC technology needs to not only improve in terms of batch processing
speed, but also address the acute problems of cost, energy savings,
security, and reliability at many data and enterprise computing centers.
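A minimal sketch of this throughput metric in Python, assuming a toy handle_request task that stands in for a real search or web-service request; the worker count and task duration are arbitrary.

import time
from concurrent.futures import ThreadPoolExecutor

def handle_request(n):
    time.sleep(0.01)        # stand-in for serving one search or web request
    return n

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=8) as pool:
    completed = sum(1 for _ in pool.map(handle_request, range(200)))
elapsed = time.perf_counter() - start
print(f"completed {completed} tasks, throughput = {completed / elapsed:.1f} tasks/second")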
• Three New Computing Paradigms:
– The maturity of radio-frequency identification (RFID), Global Positioning
System (GPS), and sensor technologies has triggered the development of
the Internet of Things (IoT).
Computing Paradigm Distinctions
• The following list defines these terms more clearly; their architectural and operational differences are discussed below.
• Centralized computing: This is a computing paradigm by which all computer resources are
centralized in one physical system. All resources (processors, memory, and storage) are fully
shared and tightly coupled within one integrated OS. Many data centers and supercomputers
are centralized systems, but they are used in parallel, distributed, and cloud computing
applications.
• Parallel computing: In parallel computing, all processors are either tightly coupled with centralized shared memory or loosely coupled with distributed memory. Some authors refer to this discipline as parallel processing. Interprocessor communication is accomplished through shared memory or via message passing (a minimal contrast of the two coordination styles is sketched after this list).
• Distributed computing: A distributed system consists of multiple autonomous computers,
each having its own private memory, communicating through a computer network.
Information exchange in a distributed system is accomplished through message passing. A
computer program that runs in a distributed system is known as a distributed program. The
process of writing distributed programs is referred to as distributed programming.
• Cloud computing: An Internet cloud of resources can be either a centralized or a distributed
computing system. The cloud applies parallel or distributed computing, or both. Clouds can
be built with physical or virtualized resources over large data centers that are centralized or
distributed. Some authors consider cloud computing to be a form of utility
computing or service computing.
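A minimal Python contrast of the two coordination styles described above, under simplifying assumptions: the threads share one address space (parallel, shared memory), while the worker process communicates only through messages (distributed, private memory).

import threading
from multiprocessing import Process, Pipe

def add_chunk(chunk, state, lock):
    with lock:                          # coordination through shared memory
        state["total"] += sum(chunk)

def remote_node(conn):
    numbers = conn.recv()               # coordination only through messages
    conn.send(sum(numbers))

if __name__ == "__main__":
    # Parallel: threads share one address space and a lock-protected total
    state, lock = {"total": 0}, threading.Lock()
    threads = [threading.Thread(target=add_chunk, args=(range(i, i + 25), state, lock))
               for i in range(0, 100, 25)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print("parallel (shared memory):", state["total"])

    # Distributed (simulated): the worker process keeps its own private memory
    parent_end, child_end = Pipe()
    node = Process(target=remote_node, args=(child_end,))
    node.start()
    parent_end.send(list(range(100)))
    print("distributed (message passing):", parent_end.recv())
    node.join()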
Distributed system families
• The system efficiency is decided by speed, programming, and energy factors (i.e., throughput per watt of energy consumed). Meeting these goals requires the following design objectives:
• Efficiency
• Dependability
• Adaptation in the programming model
• Flexibility in application deployment
The Trend toward Utility Computing
Characteristics
• Ubiquitous
• Reliability
• Scalability
• Autonomic
• Self-organized to support dynamic discovery.
• Utility vision - Utility computing focuses on a business
model in which customers receive computing
resources from a paid service provider. All grid/cloud
platforms are regarded as utility service providers.
Cloud Characteristics
• Centralization of infrastructure and lower costs
• Increased peak-load capacity
• Efficiency improvements for systems that are
often underutilized
• Dynamic allocation of CPU, storage, and
network bandwidth
• Consistent performance that is monitored by
the provider of the service
Cloud Characteristics
• No up-front commitments
• On-demand access
• Nice pricing
• Simplified application acceleration and scalability
• Efficient resource allocation
• Energy efficiency
• Seamless creation and use of third-party services
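On-demand access and efficient resource allocation are usually realized through elastic scaling. A minimal sketch of one possible scaling rule in Python; the desired_workers helper and its thresholds are illustrative assumptions, not any platform's actual autoscaler.

def desired_workers(pending_requests, target_per_worker=50, min_workers=1, max_workers=20):
    # Scale out when the backlog grows, scale back in when it shrinks
    needed = -(-pending_requests // target_per_worker)    # ceiling division
    return max(min_workers, min(max_workers, needed))

for backlog in (40, 400, 1200, 90):
    print(backlog, "pending requests ->", desired_workers(backlog), "workers")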