Parallel and Distributed Computing

Q#2 Describe the Architectural Model of Distributed Systems with a neat diagram.

Distributed Systems:

- A distributed system consists of multiple machines.
- All computation work is divided among the different systems.
- Performance is high because the workload is divided among different computers, making efficient use of their capacity.
- Backup systems are present, so if the main system fails, work does not stop.

A distributed system contains multiple nodes that are physically separate but connected together by communication networks.

Architecture Styles:
Architectural styles describe the different ways the components of a distributed system can be arranged and how they interact.

1. Layered Architecture:

In layered architecture, different components are organised into layers. Each layer communicates with its adjacent layer by sending requests downwards and receiving responses upwards. The layered architecture separates components into well-defined units, which makes communication orderly: a layer cannot communicate directly with a non-adjacent layer. Instead, a layer talks only to its neighbouring layer, which in turn passes the information on to the next layer, and so on.
In some cases, layered architectures allow cross-layer coordination, where a layer may skip an adjacent layer and interact with a lower layer directly for better performance. Requests flow from top to bottom (downwards) and responses flow from bottom to top (upwards). The advantage of layered architecture is that calls always follow a predetermined path, and each layer can be modified or replaced independently without affecting the system as a whole. This type of architecture is used in the Open Systems Interconnection (OSI) model.
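
As a rough illustration, here is a minimal Python sketch of the request/response flow (the layer names and the operation are invented for the example):

```python
# A minimal sketch of layered request/response flow.
# Each layer only talks to the layer directly below it.

class Layer:
    def __init__(self, name, below=None):
        self.name = name
        self.below = below          # the only layer this one may call

    def handle(self, request):
        print(f"{self.name}: request goes down")
        if self.below is not None:
            response = self.below.handle(request)   # pass request downwards
        else:
            response = f"result({request})"         # bottom layer does the work
        print(f"{self.name}: response comes up")
        return response                             # pass response upwards

# Build a three-layer stack: application -> transport -> network.
stack = Layer("application", Layer("transport", Layer("network")))
print(stack.handle("data"))
```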

2. Object-Oriented Architecture:

In this type of architecture, components are treated as objects which convey information to each other. Object-oriented architecture contains an arrangement of loosely coupled objects. Objects communicate with one another through method invocations, typically carried over a Remote Procedure Call (RPC) or Remote Method Invocation (RMI) mechanism. REST API calls, Web Services, and Java RMI are a few well-known examples.
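
As a hedged sketch of the RPC idea, here is a small example using Python's standard xmlrpc library (the Calculator service and its add method are made up for illustration):

```python
# Server side: expose an object's method for remote invocation.
from xmlrpc.server import SimpleXMLRPCServer

class Calculator:
    def add(self, a, b):
        return a + b

server = SimpleXMLRPCServer(("localhost", 8000), allow_none=True)
server.register_instance(Calculator())   # remote callers can now invoke add()
# server.serve_forever()                 # uncomment to actually run the server

# Client side: the proxy object makes a remote call look like a local one.
import xmlrpc.client
proxy = xmlrpc.client.ServerProxy("http://localhost:8000")
# print(proxy.add(2, 3))                 # -> 5, computed on the server
```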
3. Data Centered Architecture:

Data-centered architecture is a type of architecture in which a common data space is present at the centre. It contains all the required data in one place: a shared data space. All the components are connected to this data space and they follow a publish/subscribe style of communication. The central repository stores the data, and the required data is then delivered to the components. Distributed file systems, producer-consumer systems, and web-based data services are a few well-known examples.
For example, in a producer-consumer system the producer publishes data into the common data space and consumers request data from it.
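
A minimal producer-consumer sketch in Python, using a queue as the shared data space (the item counts and names are illustrative only):

```python
# The shared data space (here a thread-safe queue) decouples the component
# that produces data from the one that consumes it.
import queue
import threading

data_space = queue.Queue()          # the central shared data space

def producer():
    for item in range(3):
        data_space.put(item)        # publish data into the shared space
        print(f"produced {item}")

def consumer():
    for _ in range(3):
        item = data_space.get()     # request data from the shared space
        print(f"consumed {item}")

threading.Thread(target=producer).start()
threading.Thread(target=consumer).start()
```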
4. Event-Based Architecture:

Event-based architecture is similar to data-centered architecture, except that events take the place of data. Events sit at the centre, on an event bus, and are delivered to the components that need them. In this architecture, the entire communication is done through events: when an event occurs, the system notifies every component that has registered an interest in it. Data, URLs, etc. are transmitted inside events. The components of this system are loosely coupled, which is why it is easy to add, remove, and modify them. One significant benefit is that heterogeneous components can communicate through the bus using any protocol; a dedicated bus or an Enterprise Service Bus (ESB) can handle any kind of incoming request and respond appropriately.
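
A minimal event-bus sketch in Python (the event type and payloads are invented for the example):

```python
# Components subscribe to event types and are notified when a matching
# event is published; publisher and subscribers never reference each other.

class EventBus:
    def __init__(self):
        self.subscribers = {}       # event type -> list of callbacks

    def subscribe(self, event_type, callback):
        self.subscribers.setdefault(event_type, []).append(callback)

    def publish(self, event_type, payload):
        for callback in self.subscribers.get(event_type, []):
            callback(payload)       # notify every interested component

bus = EventBus()
bus.subscribe("order_placed", lambda p: print("billing saw:", p))
bus.subscribe("order_placed", lambda p: print("shipping saw:", p))
bus.publish("order_placed", {"order_id": 42})
```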

Q#3 Explain Lamport's Distributed Mutual Exclusion Algorithm.

Lamport’s Distributed Mutual Exclusion Algorithm is a permission-based algorithm proposed by Lamport as an illustration of his clock synchronization scheme for distributed systems. In permission-based algorithms, timestamps are used to order critical section requests and to resolve any conflict between requests. In Lamport’s algorithm, critical section requests are executed in increasing order of timestamps, i.e. a request with a smaller timestamp is given permission to execute the critical section before a request with a larger timestamp. In this algorithm:
- Three types of messages (REQUEST, REPLY and RELEASE) are used, and communication channels are assumed to follow FIFO order.
- A site sends a REQUEST message to all other sites to get their permission to enter the critical section.
- A site sends a REPLY message to the requesting site to give its permission to enter the critical section.
- A site sends a RELEASE message to all other sites upon exiting the critical section.
- Every site Si keeps a queue, request_queuei, to store critical section requests ordered by their timestamps.
- A timestamp is given to each critical section request using Lamport’s logical clock.
- Timestamps determine the priority of critical section requests: a smaller timestamp gets higher priority than a larger one, so critical section requests always execute in timestamp order.
Algorithm:
To enter the critical section:
- When a site Si wants to enter the critical section, it sends a request message REQUEST(tsi, i) to all other sites and places the request on request_queuei. Here, tsi denotes the timestamp of site Si.
- When a site Sj receives the request message REQUEST(tsi, i) from site Si, it returns a timestamped REPLY message to site Si and places the request of site Si on request_queuej.
To execute the critical section:
- A site Si can enter the critical section if it has received a message with timestamp larger than (tsi, i) from all other sites and its own request is at the top of request_queuei.
To release the critical section:
- When a site Si exits the critical section, it removes its own request from the top of its request queue and sends a timestamped RELEASE message to all other sites.
- When a site Sj receives the timestamped RELEASE message from site Si, it removes the request of Si from its request queue.
Message Complexity: Lamport's algorithm requires 3(N - 1) messages per critical section execution. These 3(N - 1) messages involve:
- (N - 1) REQUEST messages
- (N - 1) REPLY messages
- (N - 1) RELEASE messages
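
Below is a simplified, single-process Python sketch of this bookkeeping (the class and method names are hypothetical). Real sites exchange messages over FIFO network channels; here "sending" is a direct method call, which is enough to show the clock and queue logic:

```python
import heapq

class Site:
    def __init__(self, site_id, all_sites):
        self.id = site_id
        self.clock = 0              # Lamport logical clock
        self.queue = []             # request_queue ordered by (timestamp, id)
        self.replies = set()        # ids of sites that have replied to us
        self.all_sites = all_sites  # shared registry standing in for the network

    def tick(self, received_ts=0):
        self.clock = max(self.clock, received_ts) + 1

    def request_cs(self):           # REQUEST: broadcast and enqueue locally
        self.tick()
        self.my_request = (self.clock, self.id)
        heapq.heappush(self.queue, self.my_request)
        self.replies.clear()
        for s in self.all_sites:
            if s is not self:
                s.on_request(self.my_request, self)

    def on_request(self, request, sender):
        self.tick(request[0])
        heapq.heappush(self.queue, request)
        self.tick()
        sender.on_reply(self.clock, self.id)    # timestamped REPLY

    def on_reply(self, ts, sender_id):
        self.tick(ts)
        self.replies.add(sender_id)

    def can_enter_cs(self):
        # every other site has replied (their replies carry larger timestamps,
        # by the FIFO assumption) and our request heads our own queue
        return (len(self.replies) == len(self.all_sites) - 1
                and self.queue[0] == self.my_request)

    def release_cs(self):           # RELEASE: dequeue locally and broadcast
        heapq.heappop(self.queue)
        self.tick()
        for s in self.all_sites:
            if s is not self:
                s.on_release(self.my_request, self.clock)

    def on_release(self, request, ts):
        self.tick(ts)
        self.queue.remove(request)
        heapq.heapify(self.queue)

sites = []
sites.extend(Site(i, sites) for i in range(3))
sites[0].request_cs()
print(sites[0].can_enter_cs())      # True: S0's request heads every queue
sites[0].release_cs()
```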

Q#4 Explain Matrix Multiplication on SIMD Architecture

Matrix multiplication on a SIMD (Single Instruction, Multiple Data) architecture is a parallel computing technique that leverages the simultaneous execution of identical instructions on multiple data elements. This optimization is particularly useful for matrix operations, which are fundamental in various scientific and machine learning applications.

Here's a step-by-step explanation of matrix multiplication on SIMD:


1. Data Alignment: Matrices are divided into smaller blocks or sub-matrices, ensuring that each
block fits into the SIMD register width (e.g., 128-bit or 256-bit).

2. Vectorization: Each block is transformed into a vector, allowing SIMD instructions to operate
on multiple elements simultaneously.

3. Matrix Multiplication: SIMD instructions perform the matrix multiplication, executing the
same instruction on multiple data elements in parallel. This is typically done using a combination
of addition and multiplication operations.

4. Parallel Execution: SIMD executes the instructions on multiple data elements concurrently,
significantly reducing computation time.

5. Results Collection: The resulting vectors are combined to form the final matrix product.

SIMD instruction sets such as SSE (Streaming SIMD Extensions), AVX (Advanced Vector Extensions), and ARM NEON provide vector load, multiply, and add (and fused multiply-add) instructions from which efficient, optimized matrix multiplication kernels are built.
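
To make the idea concrete, here is a rough pure-Python model (not real SIMD code, just a sketch assuming a 4-wide register and matrix sizes that are multiples of 4) in which each 4-element slice stands for one SIMD register:

```python
LANES = 4  # elements per SIMD register, e.g. four 32-bit floats in 128 bits

def simd_fma(acc, a, b):
    # models one SIMD fused multiply-add: acc[k] += a[k] * b[k] in every lane
    return [acc[k] + a[k] * b[k] for k in range(LANES)]

def matmul_simd(A, B, n):
    # assumes n is a multiple of LANES so every row splits into whole registers
    C = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for k in range(n):
            a = [A[i][k]] * LANES            # broadcast A[i][k] into all lanes
            for j in range(0, n, LANES):     # one register-width of row i at a time
                C[i][j:j+LANES] = simd_fma(C[i][j:j+LANES], a, B[k][j:j+LANES])
    return C

A = [[1, 2, 3, 4]] * 4
B = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
print(matmul_simd(A, B, 4))   # identity on the right: C equals A
```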

By harnessing the power of SIMD, matrix multiplication can be significantly accelerated, leading to improved performance in various applications, including scientific simulations, machine learning, and data analytics.

Q#5 Explain dataflow computer with examples

A data flow computer is a type of computer architecture that processes data as it becomes
available, rather than following a predetermined sequence of instructions. It's based on the
concept of data flow graphs, where data flows through a network of nodes, each performing a
specific operation.

Here's an example to illustrate how a data flow computer works:

Example: Calculating the average of two numbers


Nodes:

1. Input 1 and Input 2: Provide the two numbers to be averaged.
2. + (Addition): Adds the two numbers.
3. / (Division): Divides the sum by 2.
4. 2 (Constant): Provides the divisor (2).
5. Output: Displays the calculated average.

Data Flow:

1. Input 1 and Input 2 provide their values to the + node.
2. The + node adds the values and passes the result to the / node.
3. The / node receives the result and the constant 2, and divides the sum by 2.
4. The Output node receives the calculated average and displays it.

In a data flow computer, the nodes are executed as soon as the required input data becomes
available, without a predetermined sequence. This allows for efficient processing and
parallelism, making it suitable for applications like scientific simulations, image processing, and
machine learning.
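
Below is a hypothetical Python sketch of this graph as a dataflow interpreter: a node fires as soon as all of its input tokens have arrived, with no predetermined instruction order (the constant 2 is modelled as a token fed directly into the division node):

```python
class Node:
    def __init__(self, op, n_inputs, downstream):
        self.op = op                  # function applied when the node fires
        self.n_inputs = n_inputs
        self.downstream = downstream  # list of (target_node, input_slot) edges
        self.slots = {}               # input slot -> token received so far

    def receive(self, slot, value, ready):
        self.slots[slot] = value
        if len(self.slots) == self.n_inputs:    # all operands present: fire
            args = [self.slots[i] for i in range(self.n_inputs)]
            ready.append((self, self.op(*args)))

def run(x, y):
    out = Node(lambda v: print("average =", v), 1, [])
    div = Node(lambda s, c: s / c, 2, [(out, 0)])
    add = Node(lambda a, b: a + b, 2, [(div, 0)])
    ready = []
    add.receive(0, x, ready)          # token from Input 1
    add.receive(1, y, ready)          # token from Input 2
    div.receive(1, 2, ready)          # token from the constant-2 node
    while ready:                      # propagate results until nothing can fire
        node, value = ready.pop()
        for target, slot in node.downstream:
            target.receive(slot, value, ready)

run(10, 4)                            # prints: average = 7.0
```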

Q#6 How is Pipeline Architecture different from an Array Processor?

Here are the differences between Pipeline Architecture and Array Processor:

Pipeline Architecture:

- A single instruction is processed in a series of stages, with each stage completing a specific function before passing the instruction to the next stage.
- Each stage operates independently, allowing for parallel processing and increased efficiency.
- Used inside a CPU (Central Processing Unit) for tasks such as instruction decoding, arithmetic, and memory access.
Array Processor (or Vector Processor):

- A type of parallel processing where a single instruction is applied to multiple data sets simultaneously.
- Uses a number of Arithmetic Logic Units (ALUs) to process all elements of an array at the same time.
- Suitable for tasks that require the same instruction to be applied to large amounts of data, such as graphics processing and scientific simulations.
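
A back-of-the-envelope Python model of the difference (idealized: no pipeline stalls, one stage advance per cycle, and enough lanes to cover the data; all numbers are illustrative):

```python
def pipeline_cycles(n_instructions, n_stages):
    # The first instruction needs n_stages cycles; after that the stages
    # overlap, so one more instruction completes every cycle.
    return n_stages + (n_instructions - 1)

def array_processor_cycles(n_elements, n_lanes, op_latency):
    # One instruction covers n_lanes elements at once; -(-a // b) is ceiling division.
    vector_instructions = -(-n_elements // n_lanes)
    return vector_instructions * op_latency

print(pipeline_cycles(100, 3))              # 102: 100 instructions through a 3-stage pipe
print(array_processor_cycles(100, 100, 3))  # 3: all 100 elements in one vector instruction
```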
