Unit1 Parallel and Distributed
09/08/2023
UNIT – I, PART - C
OVERVIEW OF DISTRIBUTED COMPUTING
Abhishek Bhattacherjee
In distributed computing, multiple networked computers are independent systems but operate as a single system. A distributed system's computers can be physically
close together and linked by a local network or geographically distant and linked by a wide
area network (WAN). A distributed system can be made up of any number of different
configurations, such as mainframes, PCs, workstations, and minicomputers. The main aim
of distributed computing is to make a network work as a single computer.
INTRODUCTION
FEATURES OF DISTRIBUTED SYSTEMS
No Common Physical Clock
This is an important assumption because it introduces the element of “distribution” into the system and gives rise to the inherent asynchrony amongst the processors.
No Shared Memory
A key feature that requires message passing for communication.
This feature also implies the absence of a common physical clock.
Geographical Separation:
The wider apart the processors are geographically, the more representative the system is of a distributed system.
However, it is not necessary for the processors to be on a wide area network.
Autonomy and Heterogeneity: The processors are “loosely coupled”. They have different speeds and each can be running a different operating system. They are not part of dedicated systems, but cooperate with one another by offering services or by solving a problem jointly.
RELATION TO COMPUTER SYSTEM COMPONENTS
A typical distributed system is organized as follows: each computer has a memory-processing unit, and the computers are connected by a communication network. The software components that run on each of the computers use the local operating system and network protocol stack for functioning.
The distributed software is also termed “middleware”.
A distributed execution is the execution of processes across the distributed system to collaboratively achieve a common goal.
An execution is also termed a “computation” or a “run”.
RELATION TO COMPUTER SYSTEM COMPONENTS
Various primitives and calls to functions defined in the libraries of the middleware layer are embedded in the user program code.
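As a rough illustration (not taken from the slides), the sketch below shows how message-passing primitives from a communication layer can be embedded directly in the user program; Python's standard multiprocessing Pipe stands in here for a middleware send/receive library.

    # Minimal sketch: message-passing primitives embedded in user program code.
    # Python's multiprocessing Pipe stands in for a middleware send/recv library.
    from multiprocessing import Process, Pipe

    def worker(conn):
        # Receive a request through the messaging primitive and send back a reply.
        request = conn.recv()
        conn.send("processed: " + request)
        conn.close()

    if __name__ == "__main__":
        parent_conn, child_conn = Pipe()
        p = Process(target=worker, args=(child_conn,))
        p.start()
        parent_conn.send("task-1")    # send primitive embedded in user code
        print(parent_conn.recv())     # receive primitive embedded in user code
        p.join()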
PARALLEL COMPUTING
INTRODUCTION
It is the use of multiple processing elements simultaneously to solve a problem.
A problem is broken down into instructions that are solved concurrently, with each resource applied to the work operating at the same time.
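A minimal sketch of this idea (assuming a standard Python interpreter): the sum of a large list is split into chunks that worker processes compute concurrently, and the partial results are combined at the end.

    # Minimal sketch: a problem (summing a list) broken into parts solved concurrently.
    from multiprocessing import Pool

    def chunk_sum(chunk):
        return sum(chunk)

    if __name__ == "__main__":
        data = list(range(1_000_000))
        n_workers = 4
        size = len(data) // n_workers
        chunks = [data[i * size:(i + 1) * size] for i in range(n_workers)]
        with Pool(n_workers) as pool:
            partial_sums = pool.map(chunk_sum, chunks)  # chunks processed in parallel
        print(sum(partial_sums))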
Advantages (over Serial Computing)
It saves time and money as many resources working
together will reduce the time and cut potential costs.
It can be impractical to solve large problems using serial computing.
It can take advantage of non-local resources when the local resources are finite.
Serial computing ‘wastes’ potential computing power; parallel computing makes better use of the hardware.
PARALLEL COMPUTING
TYPES OF PARALLELISM
Bit Level Parallelism
It is a form of parallel computing based on increasing the processor’s word size.
It reduces the number of instructions that the system must execute in order to perform a task on large-sized data.
For example, an 8-bit processor needs two add instructions (plus carry handling) to add two 16-bit integers, whereas a 16-bit processor can do it with a single instruction.
Task Parallelism
It employs the decomposition of a task into subtasks and then allocates each subtask to a processor for execution.
The processors execute the subtasks concurrently.
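A minimal sketch of task parallelism (hypothetical, not from the slides): two different subtasks are submitted to separate worker processes and run concurrently.

    # Minimal sketch: a task decomposed into two different subtasks run concurrently.
    from concurrent.futures import ProcessPoolExecutor

    def count_words(text):
        return len(text.split())

    def count_chars(text):
        return len(text)

    if __name__ == "__main__":
        text = "parallel computing uses multiple processing elements simultaneously"
        with ProcessPoolExecutor(max_workers=2) as pool:
            words = pool.submit(count_words, text)   # subtask 1
            chars = pool.submit(count_chars, text)   # subtask 2
            print(words.result(), chars.result())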
BLP example:
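As a rough stand-in for a worked BLP example, the sketch below emulates one 64-bit addition using 32-bit word operations; on a 32-bit processor this costs two additions plus carry handling, whereas a 64-bit processor does it with a single instruction.

    # Minimal sketch: why word size matters for bit-level parallelism.
    MASK32 = 0xFFFFFFFF

    def add64_with_32bit_words(a, b):
        lo = (a & MASK32) + (b & MASK32)      # first 32-bit addition (low halves)
        carry = lo >> 32
        hi = (a >> 32) + (b >> 32) + carry    # second 32-bit addition (high halves + carry)
        return ((hi & MASK32) << 32) | (lo & MASK32)

    print(hex(add64_with_32bit_words(0xFFFFFFFF, 1)))   # 0x100000000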
PARALLEL COMPUTING
TYPES OF PARALLELISM
Data Level Parallelism
Instructions from a single stream operate concurrently on several data items.
It is limited by non-regular data-manipulation patterns and by memory bandwidth.
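A minimal sketch of data-level parallelism (assuming the third-party NumPy library is installed): a single vectorized expression applies the same operation across many data elements instead of looping over them one by one.

    # Minimal sketch: one operation applied across many data elements at once.
    import numpy as np

    a = np.arange(1_000_000, dtype=np.float64)
    b = np.arange(1_000_000, dtype=np.float64)

    c = a * 2.0 + b        # a single expression operates on all elements
    print(c[:5])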
WHY PARALLEL COMPUTING?
Real-world data needs more dynamic simulation and modeling, and for achieving the same, parallel
computing is the key.
Parallel computing provides concurrency and saves time and money.
Complex, large datasets and their management can be organized only by using a parallel computing approach.
It ensures the effective utilization of the resources.
APPLICATIONS
Databases and Data mining.
Real-time simulation of systems
Advanced graphics, augmented reality, and virtual reality.
PARALLEL COMPUTING
LIMITATIONS OF PARALLEL COMPUTING
It addresses issues such as communication and synchronization between multiple sub-tasks and processes, which are difficult to achieve.
The algorithms must be designed in such a way that they can be handled by a parallel mechanism.
The algorithms or programs must have low coupling and high cohesion, but it is difficult to create such programs.
More technically skilled and expert programmers are required to code a parallelism-based program well.
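As a rough illustration of the synchronization issue listed above (hypothetical, not from the slides): concurrent threads updating shared state need an explicit lock, which adds both programming effort and runtime overhead.

    # Minimal sketch: synchronizing concurrent updates to shared state with a lock.
    import threading

    counter = 0
    lock = threading.Lock()

    def increment(n):
        global counter
        for _ in range(n):
            with lock:          # without this lock, some updates could be lost
                counter += 1

    threads = [threading.Thread(target=increment, args=(100_000,)) for _ in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print(counter)              # 400000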
PARALLEL COMPUTING ARCHITECTURE
Parallel Architecture Types:
The parallel computer architecture is classified based on the following:
Multiprocessors
Multi Computers
Models based on shared-memory multiprocessors:
1. Uniform Memory Access (UMA)
All the processors share the physical memory uniformly.
All the processors have equal access time to all the memory words.
Each processor may have a private cache memory.
The same rule is followed for the peripheral devices.
PARALLEL COMPUTING ARCHITECTURE
Parallel Architecture Types:
1. Uniform Memory Access (UMA)
When all the processors have equal access to all the peripheral devices, the system is called
a symmetric multiprocessor.
When only one or a few processors can access the peripheral devices, the system is called
an asymmetric multiprocessor.
PARALLEL COMPUTING ARCHITECTURE
Parallel Architecture Types:
2. Non-Uniform Memory Access (NUMA)
In the NUMA multiprocessor model, the access time varies with the location of the memory word.
In this model, the shared memory is physically distributed among all the processors in the form of local memories.
The collection of all local memories forms a global address space that can be accessed by all the
processors.
DISTRIBUTED SYSTEM TYPES
Distributed System Types
The nodes in the distributed system can be arranged in the form of client/server or peer-to-peer
systems.
Client/Server Model
In this model, the client requests the resources and the server provides them, with centralized control.
A server may serve multiple clients at the same time, while a client is in contact with only one server.
The client and the server communicate with each other via computer networks.
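A minimal client/server sketch (assuming a standard Python interpreter; the local address and port are arbitrary): the server listens for a request and returns a resource, and the client connects over the network to obtain it.

    # Minimal sketch: a server providing a "resource" to a requesting client.
    import socket
    import threading

    HOST, PORT = "127.0.0.1", 50007            # arbitrary local address for this demo
    srv = socket.create_server((HOST, PORT))   # server side: listen for clients

    def handle_one_client():
        conn, _ = srv.accept()                 # wait for a client request
        with conn:
            request = conn.recv(1024).decode()
            conn.sendall(("resource for " + request).encode())

    threading.Thread(target=handle_one_client, daemon=True).start()

    with socket.create_connection((HOST, PORT)) as client:   # client side
        client.sendall(b"client-1")
        print(client.recv(1024).decode())

    srv.close()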
Peer-to-Peer Model
They contain nodes that are equal participants in data sharing.
All the tasks are equally divided between all the nodes.
The nodes interact with each other as required and share the resources.
DISTRIBUTED SYSTEM – ADVANTAGES/DISADVANTAGES
Advantages of Distributed System
All the nodes in the distributed system are connected to each other. So nodes can share data with
other nodes
More nodes can easily be added to the distributed system, i.e. it can scale as required.
Failure of one node does not lead to the failure of an entire distributed system. Other nodes can still
communicate with each other.
Resources like printers can be shared with multiple nodes rather than being restricted to just one.
Disadvantages of Distributed System
It is difficult to provide adequate security in distributed systems because the nodes as well as the connections need to be secured.
Some messages and data can be lost in the network while traveling from one node to another
The database connected to the distributed system is quite complicated and difficult to handle as
compared to a single user system
Overloading may occur in the network if all the nodes try to send data at once.
DISTRIBUTED COMPUTING MODELS
The Models are:
Virtualization
Service-oriented Architecture (SOA)
Grid Computing
Utility Computing
Virtualization
It is a technique that allows sharing a single physical instance of an application or resource among multiple organizations or tenants.
DISTRIBUTED COMPUTING MODELS
Service-Oriented Architecture
It allows an application to be used as a service by other applications, regardless of the type of vendor, product, or technology.
It is possible to exchange data between applications of different vendors without additional programming or making changes to the service.
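A minimal sketch of the idea (hypothetical, not from the slides): one program exposes functionality as an HTTP service and another consumes it, needing only the service's URL and data format rather than any knowledge of how it is implemented.

    # Minimal sketch: an application exposed as a service and consumed over HTTP.
    import json
    import threading
    import urllib.request
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class OrderService(BaseHTTPRequestHandler):
        def do_GET(self):
            body = json.dumps({"order": self.path.strip("/"), "status": "shipped"})
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body.encode())

        def log_message(self, *args):          # keep the demo output quiet
            pass

    server = HTTPServer(("127.0.0.1", 8000), OrderService)
    threading.Thread(target=server.serve_forever, daemon=True).start()

    # The consumer only depends on the service interface, not its implementation.
    with urllib.request.urlopen("http://127.0.0.1:8000/orders-42") as response:
        print(json.loads(response.read().decode()))

    server.shutdown()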
DISTRIBUTED COMPUTING MODELS
Grid Computing
It refers to distributed computing, in which a group of computers from multiple locations are
connected with each other to achieve a common objective. These computer resources are
heterogeneous and geographically distributed.
It breaks complex tasks into smaller pieces, which are distributed to CPUs that reside within the grid.
Utility Computing
It is based on a pay-per-use model.
It offers computational resources on-demand as a metered service.
Cloud computing, grid computing, and managed IT services are based on the concept of utility computing.
COMPARISON OF PARALLEL & DISTRIBUTED COMPUTING
Definition: Parallel computing is a computation type in which multiple processors execute multiple tasks simultaneously. Distributed computing is a computation type in which multiple computers execute common tasks while communicating with each other using message passing.
Number of computers required: Parallel computing occurs on one computer. Distributed computing occurs between multiple computers.
Processing mechanism: In parallel computing, multiple processors perform the processing. In distributed computing, computers rely on message passing.
Synchronization: In parallel computing, all processors share a single master clock for synchronization. In distributed computing, there is no global clock; synchronization algorithms are used instead.