
Virtualization and Cloud Computing
CSE 423

Computing is being transformed into a model consisting of services that are commoditized and delivered in a manner similar to utilities. Let's break it down.

Overview of distributed computing
Virtualization techniques
Introduction to cloud computing
Migrating into a cloud
Understanding cloud architecture
Cloud mechanisms
Evaluations

Three evaluations

Tests 1 and 2 will be analytical MCQ questions based on scenarios and the technology covered in the syllabus.

Term Paper: the student has to write an original paper on the latest technological advancements.
The Internet plays a significant role in cloud computing: it serves as the transport medium through which cloud services are delivered to, and accessed by, cloud consumers.
History
The last decades have reinforced the idea that information processing can be done more
efficiently centrally, on large farms of computing and storage systems accessible via the
Internet.
When computing resources in distant data centers are used rather than local computing
systems, we talk about network-centric computing and network-centric content.
Advancements in networking and other areas are responsible for the acceptance of the two
new computing models and led to the grid computing movement in the early 1990s and,
since 2005, to utility computing and cloud computing.
In utility computing the hardware and software resources are concentrated in large data
centers and users can pay as they consume computing, storage, and communication
resources.
Cloud computing is a path to utility computing embraced by major IT companies such as
Amazon, Apple, Google, HP, IBM, Microsoft, Oracle, and others.
Network-centric computing

The concepts and technologies for network-centric computing and content evolved
through the years and led to several large-scale distributed system developments:
• The Web and the semantic Web are expected to support the composition of
services (not necessarily computational services) available on the Web.
• The Grid, initiated in the early 1990s by National Laboratories and Universities, is
used primarily for applications in the area of science and engineering.
• Computer clouds, promoted since 2005 as a form of service-oriented computing by
large IT companies, are used for enterprise computing, high-performance computing,
Web hosting, and storage for network-centric content.
Peer-to-Peer Systems
The user-centric model, in place since the early 1960s, was
challenged in the 1990s by the peer-to-peer (P2P) model. P2P
systems can be regarded as one of the precursors of today’s
clouds. This new model for distributed computing promoted the
idea of low-cost access to storage and central processing unit
(CPU) cycles provided by participant systems; in this case, the
resources are located in different administrative domains.

P2P systems exploit the network infrastructure to provide access to distributed computing resources. Decentralized applications were developed as early as the 1980s, such as the Simple Mail Transfer Protocol (SMTP), a protocol for email distribution.
Utility Computing
Utility computing is a model in which computing resources
are provided to the customer based on specific demand.
The service provider charges exactly for the services
provided, instead of a flat rate.
The foundational concept is that users or businesses pay
the providers of utility computing for the amenities used,
such as computing capabilities, storage space, and
application services. The customer is thus absolved of
the responsibility of maintaining and managing the
hardware. Consequently, the financial outlay for the
organization is minimal.
Utility computing helps eliminate data redundancy, as huge
volumes of data are distributed across multiple servers or
backend systems. The client, however, can access the
data anytime and from anywhere.
Parallel Systems
The concepts introduced in this section are very important in practice. Communication
protocols support coordination of distributed processes and transport information through
communication channels.
Parallel computing allows us to solve large problems by splitting them into smaller ones
and solving them concurrently. Parallel hardware and software systems allow us to solve
problems demanding more resources than those provided by a single system and, at the
same time, to reduce the time required to obtain a solution.
The speed-up measures the effectiveness of parallelization; in the general case, the
speed-up of a parallel computation is given by Amdahl's Law.
Coordination of concurrent computations can be quite challenging and involves
overhead, which ultimately reduces the speed-up of parallel computations. Often a
parallel computation involves multiple stages, and all concurrent activities must finish one
stage before starting the execution of the next one.
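To make the speed-up bound concrete, here is a minimal sketch in Python of Amdahl's Law. The parallel fraction p = 0.95 and the processor counts are illustrative values chosen for this example, not figures from the lecture.

# Amdahl's Law: speed-up S = 1 / ((1 - p) + p / n), where p is the fraction of
# the work that can be parallelized and n is the number of processors.
def amdahl_speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

# Even with 95% of the work parallelized, adding processors gives diminishing
# returns: the serial 5% caps the speed-up at 20x.
for n in (2, 16, 1024):
    print(n, round(amdahl_speedup(0.95, n), 2))   # ~1.9, ~9.14, ~19.64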
Elements of the Parallel Computing Model

Modularity: A complex system is made of components, or modules, with well-defined functions. Modularity supports the separation of concerns, encourages specialization, improves maintainability, reduces costs, and decreases the development time of a system.
Client-Server Paradigm: The ubiquitous client-server paradigm is based on enforced modularity; this means that the modules are forced to interact only by sending and receiving messages.
Remote Procedure Call (RPC): RPC is often used for the implementation of client-server interactions (see the sketch after this list).
Atomic Actions: A multistep operation should be allowed to proceed to completion without any interruption; such an operation is said to be atomic.
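As a rough illustration of the client-server paradigm built on RPC, the sketch below uses Python's standard-library xmlrpc module. The service name add, the port 8000, and the single-process setup are assumptions made for the example; a real deployment would run the server and the client on different machines.

import threading
from xmlrpc.server import SimpleXMLRPCServer
from xmlrpc.client import ServerProxy

# Server module: exposes one well-defined function. Enforced modularity means
# the client can interact with it only by sending request messages.
server = SimpleXMLRPCServer(("localhost", 8000), logRequests=False)
server.register_function(lambda a, b: a + b, "add")
threading.Thread(target=server.serve_forever, daemon=True).start()

# Client module: invokes the remote procedure as if it were a local call.
client = ServerProxy("http://localhost:8000")
print(client.add(2, 3))   # -> 5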
Distributed Systems
A distributed system is a collection of autonomous computers that are connected through
a network and distribution software called middleware, which enables computers to
coordinate their activities and to share the resources of the system. A distributed system’s
users perceive the system as a single integrated computing facility.
A distributed system has several characteristics: Its components are autonomous,
scheduling and other resource management and security policies are implemented by
each system, there are multiple points of control and multiple points of failure, and the
resources may not be accessible at all times.
Distributed systems can be scaled by adding additional resources and can be designed to
maintain availability even at low levels of hardware/software/network reliability.
Distributed systems have been around for several decades. For example, distributed file
systems and network file systems have been used for user convenience and to improve
reliability and functionality of file systems for many years.
Distributed Systems (continued)

The remote procedure call (RPC) supports inter-process communication and allows a
procedure on a system to invoke a procedure running in a different address space,
possibly on a remote system.
A communication channel provides the means for processes or threads to communicate
with one another and coordinate their actions by exchanging messages.
These two abstractions allow us to concentrate on critical properties of distributed
systems without the need to discuss the detailed physical properties of the entities
involved.
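The following sketch illustrates the communication-channel abstraction with two local processes that coordinate only by exchanging messages, using Python's multiprocessing queues as the channels. The message values are arbitrary, and in a real distributed system the channels would span a network rather than a single machine.

from multiprocessing import Process, Queue

def worker(requests, replies):
    # Receive a request over one channel and send the result back over another;
    # the two processes share no memory, only messages.
    task = requests.get()
    replies.put(task * 2)

if __name__ == "__main__":
    requests, replies = Queue(), Queue()
    p = Process(target=worker, args=(requests, replies))
    p.start()
    requests.put(21)        # send a request message
    print(replies.get())    # -> 42, received over the reply channel
    p.join()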
Key Advantages
Distributed computing makes all computers in the cluster work together as if they were
one computer. While there is some complexity in this multi-computer model, it offers
substantial benefits:
Scalability. Distributed computing clusters are easy to scale through a “scale-out
architecture” in which higher loads can be handled by simply adding new hardware
(versus replacing existing hardware).
Performance. Through parallelism, in which each computer in the cluster
simultaneously handles a subset of an overall task, the cluster can achieve high levels
of performance through a divide-and-conquer approach (see the sketch after this list).
Resilience. Distributed computing clusters typically copy or “replicate” data across all
computer servers to ensure there is no single point of failure. Should a computer fail,
copies of the data on that computer are stored elsewhere so that no data is lost.
Cost-effectiveness. Distributed computing typically leverages low-cost, commodity
hardware, making initial deployments as well as cluster expansions very economical.
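As a small, hedged sketch of the divide-and-conquer idea, the Python snippet below splits a dataset into chunks and sums each chunk in a separate worker before combining the partial results. The data, chunk size, and use of local processes (rather than separate cluster nodes) are assumptions made for illustration.

from multiprocessing import Pool

def chunk_sum(chunk):
    # Each worker handles one subset of the overall task.
    return sum(chunk)

if __name__ == "__main__":
    data = list(range(1_000_000))
    chunks = [data[i:i + 250_000] for i in range(0, len(data), 250_000)]
    with Pool(processes=4) as pool:
        partials = pool.map(chunk_sum, chunks)   # scatter the chunks to workers
    print(sum(partials))                         # gather and combine the results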
Difference Between Parallel Computing and Distributed Computing

Parallel computing: Multiple processors perform multiple tasks assigned to them simultaneously; these tasks are broken down from a single main problem.
Distributed computing: Multiple networked computers perform tasks at the same time to achieve a single decided goal.

Parallel computing: Done using a single computer.
Distributed computing: Requires multiple computers.

Parallel computing: Memory is either shared by or distributed between the processors.
Distributed computing: Each computer has its own memory.

Parallel computing: Multiple processors perform the processing.
Distributed computing: Multiple computers perform multiple operations.

Parallel computing: Significantly increases the performance of the system, provides concurrency, and saves time.
Distributed computing: Allows scalability, sharing of resources, and efficient execution of computation tasks.
Difference Between Parallel Computing and Distributed Computing (continued)

Parallel computing: To provide synchronization, all processors share a single master clock.
Distributed computing: There is no global clock; synchronization algorithms are used.

Parallel computing: Environments are tightly coupled.
Distributed computing: Environments might be loosely or tightly coupled.

Parallel computing: Used where very high and fast processing power is required.
Distributed computing: Used when computers are located at different geographical locations and raw speed is not the primary concern.

Parallel computing example: supercomputers.
Distributed computing example: Facebook.
