
Parallel and Distributed Computing
COMP3139
Reference Books
1. Parallel Programming: Techniques and Applications Using Networked Workstations and Parallel Computers, B. Wilkinson and M. Allen, 1st edition
2. Distributed Systems: Principles and Paradigms, A. S. Tanenbaum and M. van Steen, Prentice Hall, 2nd edition, 2007
Lecture 1 and 2
Computing
• Computing is the process of using computer technology to complete a given goal-
oriented task.

• Computing may encompass the design and development of software and hardware systems for a broad range of purposes, often structuring, processing, and managing any kind of information.

• For example, cloud computing, social computing, ubiquitous computing, parallel computing, and grid computing all fall under the umbrella of the general meaning of computing.
History of Computing
Four Decades

• Batch Era

• Time-Sharing Era

• Desktop Era

• Network Era

Batch Era

• Batch processing is when a computer processes several tasks that it has collected in a
group.

• It is designed to be a completely automated process, without human intervention.

• It can also be called workload automation (WLA) and job scheduling.

• Batch processing is an incredibly cost-effective way to process large volumes of work.

• A good example of batch processing is how credit card companies do their billing.
Time-Sharing Era

• Time-sharing, in data processing, is a method of operation in which multiple users with different programs interact nearly simultaneously with a single computer.

• Commonly used time-sharing techniques include multiprocessing, parallel operation, and multiprogramming.

• The goal of this era was to develop systems that support multiple users at a time.
Desktop Era

• Desktop computers became the predominant type of personal computer, the most popular being the IBM PC.

• Early personal computers, like the original IBM Personal Computer, were enclosed in a "desktop case".

• These cases had to be sturdy enough to support the weight of the CRT displays that were widespread at the time.
Network Era

• A computer network is a collection of computers capable of transmitting, receiving, and exchanging voice, data, and video traffic.

• Systems with:

• Shared Resources

• Distributed Memory

• Examples of parallel computers: Intel iPSC, nCUBE
Flynn's Taxonomy of Computer Architecture

 Data

• Data refers to values such as numbers, text, and images.

• It is the information that a program is processing, manipulating, or storing.

 Instruction

• Instructions are the specific step-by-step commands that tell the processor how to execute a program.
Flynn's Taxonomy of Computer Architecture

 Single-instruction, single-data (SISD) systems

• An SISD computing system is a uniprocessor machine that executes a single instruction stream operating on a single data stream.

• In SISD, machine instructions are processed sequentially; computers adopting this model are popularly called sequential computers.
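To make the sequential model concrete, here is a minimal Python sketch (the data values are arbitrary): one instruction stream operating on one data stream, step by step.

# A minimal sketch of the SISD model: a single instruction stream
# operating on a single data stream, executed sequentially.
data = [3, 1, 4, 1, 5, 9]
total = 0
for x in data:        # instructions execute one after another
    total += x        # each step operates on a single data element
print(total)          # 23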
Flynn's Taxonomy of Computer Architecture

 Single-instruction, multiple-data (SIMD) systems

• An SIMD system is a multiprocessor machine capable of executing the same instruction on all its CPUs, with each CPU operating on a different data stream.

• Information can be passed to all the processing elements (PEs) organized as data elements of vectors, as the sketch below illustrates.
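Here is a minimal Python sketch of the SIMD idea, assuming NumPy is installed: one logical operation is applied across whole vectors of data elements at once (NumPy dispatches such operations to vectorized machine code where available).

# SIMD in spirit: the same "add" operation applied elementwise
# across many data elements, rather than one element at a time.
import numpy as np

a = np.arange(1_000_000, dtype=np.float64)
b = np.arange(1_000_000, dtype=np.float64)

c = a + b             # one operation, many data elements at once
print(c[:3])          # [0. 2. 4.]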

Flynn's Taxonomy of Computer Architecture

 Multiple-instruction, single-data (MISD) systems

• An MISD computing system is a multiprocessor machine capable of executing different instructions on different PEs, with all of them operating on the same data set.
• Machines built using the MISD model are not useful in most applications.
Flynn's Taxonomy of Computer Architecture

 Multiple-instruction, multiple-data (MIMD) systems

• An MIMD system is a multiprocessor machine that can execute multiple instruction streams on multiple data sets.
• Machines built using this model are capable of handling any kind of application.
Parallel Computing?

1. Serial Computing
In serial computing, a problem is broken into a series of instructions that are executed one after another on a single processor.
Parallel Computing?

2. Parallel Computing
Parallel computing is a computing architecture that divides a problem into smaller tasks and runs them concurrently, as in the sketch below.
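A minimal Python sketch, using the standard library's concurrent.futures module: the problem (summing a large range) is divided into chunks that run concurrently on separate worker processes (the chunk size is an arbitrary choice for the example).

# Parallel computing sketch: divide a problem into smaller tasks
# and run them concurrently on a pool of worker processes.
from concurrent.futures import ProcessPoolExecutor

def partial_sum(bounds):
    lo, hi = bounds
    return sum(range(lo, hi))

if __name__ == "__main__":
    chunks = [(i, i + 250_000) for i in range(0, 1_000_000, 250_000)]
    with ProcessPoolExecutor() as pool:
        total = sum(pool.map(partial_sum, chunks))
    print(total)  # same answer as sum(range(1_000_000)), computed in parallel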

Why We Use Parallel Computing

 Increased Performance
• By distributing tasks across multiple processing units, parallel computing can handle complex calculations more quickly.
 Scalability
• Parallel computing offers excellent scalability: it can efficiently handle larger workloads as the number of processing units increases.
• Parallel computing can take full advantage of additional processing units, enabling faster and more efficient processing of data and tasks.
Why We Use Parallel Computing

 Real-time Processing
• Certain applications, such as video processing, real-time simulations, and
online gaming, require rapid and continuous processing of data.
 Speed
• Parallel computing can perform computations much faster than traditional, serial computing.
• The more processors available, the faster the speed.
Distributed Computing

• Distributed computing is the method of making multiple computers work together to solve a
common problem.
• It makes a computer network appear as a powerful single computer that provides large-scale
resources to deal with complex challenges.
• Distributed systems, distributed programming, and distributed algorithms are some other terms
that all refer to distributed computing.
• Examples include computer networks, the World Wide Web, and multiplayer video games.
Advantages of Distributed Computing
 Scalability
• You can add new nodes, that is, more computing devices, to the distributed computing network
when they are needed.
 Availability
• Your distributed computing system will not crash if one of the computers goes down; the design shows fault tolerance.
 Consistency
• The system automatically manages data consistency across all the different computers.

Advantages of Distributed Computing

 Transparency
• Distributed computing systems provide logical separation between the user and the physical devices.
 Efficiency
• Distributed systems offer faster performance with optimum resource use of the underlying hardware.
 Decentralization
• Decentralization in distributed systems means spreading out control and decision-making across
many nodes instead of having one main authority. This helps make the system more reliable

Advantages of Distributed Computing
 Fault Tolerance
• Fault tolerance is about how well a distributed system can handle things going wrong.
• It means the system can find out when something’s not working right, fix it, and keep running
smoothly.
 Performance Optimization
• Performance optimization means making a distributed system work faster and better by improving
how data is stored, how computers talk to each other, and how tasks are done.
• This includes using efficient ways for computers to communicate, such as sending messages in a smart order to reduce delays.
Types Of Distributed System Architecture

In distributed computing, you design applications that can run on several computers instead of on just one computer. You achieve this by designing the software so that different computers perform different functions and communicate to develop the final solution. There are four main types of distributed architecture.
Types Of Distributed System Architecture
Client-server Architecture

• Client-server is the most common method of software organization on a distributed system.


• The functions are separated into two categories: clients and servers.
• Clients
• Clients have limited information and processing ability. Instead, they make requests to the servers, which manage most of the data and other resources.
• You can make requests to the client, and it communicates with the server on your behalf.
• Servers
• Server computers synchronize and manage access to resources.
• They respond to client requests with data or status information, as in the sketch below.
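As a concrete illustration, here is a minimal client-server sketch in Python; the host, port, and message format are invented for the example. The server manages a resource (a counter) and responds to requests; the client keeps no data of its own and only asks.

# Minimal client-server sketch: the server owns the resource,
# the client makes requests and receives responses.
import socket
import threading
import time

HOST, PORT = "127.0.0.1", 5000    # arbitrary choices for this example

def server():
    counter = 0                   # resource managed by the server
    with socket.create_server((HOST, PORT)) as srv:
        while True:
            conn, _ = srv.accept()
            with conn:
                conn.recv(1024)                      # read the client's request
                counter += 1
                conn.sendall(str(counter).encode())  # reply with status

threading.Thread(target=server, daemon=True).start()
time.sleep(0.2)                   # give the server a moment to start listening

# Client side: make a request and let the server do the work.
with socket.create_connection((HOST, PORT)) as c:
    c.sendall(b"increment")
    print("server replied:", c.recv(1024).decode())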
Types Of Distributed System Architecture
Client-server Architecture

 Benefits and limitations


• Client-server architecture gives the benefits of security and ease of ongoing management.
• You only have to focus on securing the server computers. Similarly, any changes to the database systems require changes to the server only.
• The limitation of client-server architecture is that servers can cause communication bottlenecks,
especially when several machines make requests simultaneously.

 Three-tier architecture
• In three-tier distributed systems, client machines remain as the first tier you access. Server machines, on the other hand, are further divided into two categories:
Types Of Distributed System Architecture
Three-tier Architecture

 Application servers
• Application servers act as the middle tier for communication.
• They contain the application logic or the core functions that you design the distributed system for.
 Database servers
• Database servers act as the third tier to store and manage the data.
• They are responsible for data retrieval and data consistency.
• By dividing server responsibility, three-tier distributed systems reduce communication bottlenecks and
improve distributed computing performance.

Types Of Distributed System Architecture
N-tier Architecture

• N-tier models include several different client-server systems communicating with each other to solve
the same problem.
• Most modern distributed systems use an n-tier architecture with different enterprise applications
working together as one system behind the scenes.
 Peer-to-peer architecture
• Peer-to-peer architecture assigns equal responsibilities to all networked computers.
• There is no separation between client and server computers, and any computer can perform all responsibilities.
HOW DOES DISTRIBUTED COMPUTING WORK?

• Distributed computing works by computers passing messages to each other within the distributed systems architecture.
• Communication protocols or rules create a dependency between the components of the system.
• This interdependence is called coupling, and there are two main types of coupling.
HOW DOES DISTRIBUTED COMPUTING WORK?
LOOSE COUPLING

• Components are weakly connected so that changes to one component do not affect the other.
• For example, client and server computers can be loosely coupled by time.
• Messages from the client are added to a server queue, and the client can continue to perform other functions until the server responds to its message, as in the sketch below.
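A minimal sketch of loose coupling by time, assuming an in-process queue stands in for a real message queue: the client adds its message and continues working, and the server picks the message up when it is ready.

# Loose coupling sketch: client and server communicate through a queue,
# so the client is not blocked waiting for the server.
import queue
import threading
import time

requests = queue.Queue()                    # stands in for the server's message queue

def server():
    while True:
        msg = requests.get()                # server takes work when it is ready
        time.sleep(0.1)                     # simulate processing delay
        print("server handled:", msg)
        requests.task_done()

threading.Thread(target=server, daemon=True).start()

requests.put("bill customer 42")            # client sends its message and moves on
print("client continues with other work")   # not blocked on the server
requests.join()                             # wait only when the result is required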

HOW DOES DISTRIBUTED COMPUTING WORK?
TIGHT COUPLING

• High-performing distributed systems often use tight coupling.
• Fast local area networks typically connect several computers, which creates a cluster.
• Central control systems, called clustering middleware, control and schedule the tasks and coordinate communication between the different computers.

Parallel Computing Vs. Distributed Computing

• Parallel computing is a particularly tightly coupled form of distributed computing.
• In parallel processing, all processors have access to shared memory for exchanging information between them.
• On the other hand, in distributed processing, each processor has private memory (distributed memory), and processors use message passing to exchange information. The sketch below contrasts the two styles.
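A minimal Python sketch contrasting the two models. Here both halves run on one machine with the multiprocessing module, so the "distributed" half only illustrates the message-passing style rather than a real multi-machine system.

# Shared memory (parallel style) vs. message passing (distributed style).
from multiprocessing import Pipe, Process, Value

def add_shared(counter):
    # Parallel style: processes exchange information via shared memory.
    with counter.get_lock():
        counter.value += 1

def add_message(conn):
    # Distributed style: no shared state; send a message instead.
    conn.send(1)
    conn.close()

if __name__ == "__main__":
    counter = Value("i", 0)                 # shared memory visible to both processes
    p = Process(target=add_shared, args=(counter,))
    p.start(); p.join()

    parent, child = Pipe()                  # message-passing channel (private memory)
    q = Process(target=add_message, args=(child,))
    q.start()
    received = parent.recv()
    q.join()

    print(counter.value, received)          # 1 1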

THANK YOU
