Parallel and Distributed Computing
Lectures 1 & 2
COMP3139
Reference Books
1. "Parallel Programming: Techniques and Applications Using Networked Workstations and Parallel Computers", B. Wilkinson and M. Allen, 1st edition
2. "Distributed Systems: Principles and Paradigms", A. S. Tanenbaum and M. V. Steen, Prentice Hall, 2nd edition, 2007
Computing
• Computing is the process of using computer technology to complete a given goal-
oriented task.
• Computing may encompass the design and development of software and hardware
systems for a broad range of purposes.
History of Computing
Four Decades
• Batch Era
• Time-Sharing Era
• Desktop Era
• Network Era
Batch Era
• In batch processing, a computer collects several tasks into a group (a batch) and processes them together, without user interaction.
• A good example of batch processing is how credit card companies do their billing: transactions accumulate over the billing period and are then processed as one batch.
Time-Sharing Era
• In time-sharing, many users share a single computer interactively: the operating system switches the processor rapidly among their tasks, giving each user the impression of a dedicated machine.
Desktop Era
• Desktop computers became the predominant type of computer, the most popular being the IBM PC.
• Early personal computers, like the original IBM Personal Computer, were enclosed in a "desktop case".
• These cases had to be sturdy enough to support the weight of the CRT displays that were widespread at the time.
Network Era
• Systems with:
• Shared Resources
• Distributed Memory
FLYNN’s Taxonomy of Computer Architecture
• Flynn’s taxonomy classifies computer architectures by how many instruction streams and how many data streams they handle at once.
Data
• Data is the information that the processor’s instructions operate on.
Instruction
• Instructions are the specific step-by-step commands that tell the processor how to execute a program.
• Crossing the two dimensions yields four classes: SISD (Single Instruction, Single Data), SIMD (Single Instruction, Multiple Data), MISD (Multiple Instruction, Single Data), and MIMD (Multiple Instruction, Multiple Data); a loose software analogy follows below.
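As a loose software analogy, not a hardware classification from the slides: a plain Python loop behaves like SISD, applying one instruction to one data item per step, while a NumPy vectorized operation behaves like SIMD, applying one instruction across many data items at once (using NumPy here is an illustrative assumption):

```python
import numpy as np

data = np.arange(1_000)

# SISD-style: one instruction stream processes one data item per step.
sisd_result = [x * 2 for x in data]

# SIMD-style: one instruction is applied to many data items at once
# (NumPy dispatches the whole operation to vectorized machine code).
simd_result = data * 2

assert (np.array(sisd_result) == simd_result).all()
```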
Parallel Computing?
1. Serial Computing
• In serial computing, a problem is broken into a stream of instructions that are executed one after another on a single processor.
2. Parallel Computing
• Parallel computing is a computing architecture that divides a problem into smaller tasks and runs them concurrently, as in the sketch below.
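A minimal sketch of that division of work, using Python's standard-library multiprocessing module (the task itself, squaring numbers, is an arbitrary illustrative choice):

```python
from multiprocessing import Pool

def square(n):
    # One of the smaller tasks the problem is divided into.
    return n * n

if __name__ == "__main__":
    numbers = list(range(10))

    # Serial version: one processor executes the tasks one after another.
    serial_results = [square(n) for n in numbers]

    # Parallel version: a pool of worker processes runs the same tasks
    # concurrently across the available CPU cores.
    with Pool() as pool:
        parallel_results = pool.map(square, numbers)

    assert serial_results == parallel_results
    print(parallel_results)  # [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
```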
Why We Use Parallel Computing
Increased Performance
• By distributing tasks across multiple processing units, parallel computing can handle complex calculations that would be impractical on a single processor.
Scalability
• Parallel computing offers excellent scalability, meaning it can efficiently handle larger workloads as the number of processing units increases.
• As more processing units become available, parallel computing can take full advantage of them, enabling faster and more efficient processing of data and tasks.
Why We Use Parallel Computing
Real-time Processing
• Certain applications, such as video processing, real-time simulations, and online gaming, require rapid and continuous processing of data.
Speed
• Parallel computing can perform computations much faster than traditional, serial computing.
• In general, the more processors available, the faster the computation, although the achievable speedup is limited by the portion of the program that must still run serially (see the formula below).
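That limit is usually stated as Amdahl's law (a standard result, not covered explicitly in these slides). If a fraction f of a program can be parallelized and p processors are available, the overall speedup is bounded by:

```latex
S(p) = \frac{1}{(1 - f) + \frac{f}{p}}
```

Even as p grows without bound, the speedup approaches 1/(1 - f): a program that is 90% parallelizable can never run more than 10x faster, no matter how many processors are added.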
Distributed Computing
• Distributed computing is the method of making multiple computers work together to solve a common problem.
• It makes a computer network appear as a single powerful computer that provides large-scale resources to deal with complex challenges.
• Distributed systems, distributed programming, and distributed algorithms are some other terms that all refer to distributed computing.
• Examples include computer networks, the World Wide Web, and multiplayer video games; a minimal code sketch follows below.
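As a minimal sketch of two computers cooperating on a problem, the following pair of programs uses Python's standard-library XML-RPC modules; the port number and the add function are illustrative assumptions, not anything prescribed by the slides. Run the server first, then the client (here both run on one machine, but the client could equally point at another host):

```python
# server.py -- one node in the system exposes work it can do.
from xmlrpc.server import SimpleXMLRPCServer

def add(a, b):
    # Work that a remote client delegates to this node.
    return a + b

server = SimpleXMLRPCServer(("localhost", 8000))
server.register_function(add, "add")
server.serve_forever()
```

```python
# client.py -- another node calls the remote function as if it were local.
from xmlrpc.client import ServerProxy

proxy = ServerProxy("http://localhost:8000")
print(proxy.add(2, 3))  # 5, computed on the server node
```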
Advantages of Distributed Computing
Scalability
• You can add new nodes, that is, more computing devices, to the distributed computing network
when they are needed.
Availability
• Your distributed computing system will not crash if one of the computers goes down. The design shows fault tolerance.
Consistency
• The system automatically manages data consistency across all the different computers.
Advantages of Distributed Computing
Transparency
• Distributed computing systems provide logical separation between the user and the physical devices.
Efficiency
• Distributed systems offer faster performance with optimum resource use of the underlying hardware.
Decentralization
• Decentralization in distributed systems means spreading out control and decision-making across many nodes instead of having one main authority. This helps make the system more reliable.
Advantages of Distributed Computing
Fault Tolerance
• Fault tolerance is about how well a distributed system can handle things going wrong.
• It means the system can find out when something’s not working right, fix it, and keep running
smoothly.
Performance Optimization
• Performance optimization means making a distributed system work faster and better by improving
how data is stored, how computers talk to each other, and how tasks are done.
• This includes using efficient ways for computers to communicate, such as sending messages in a smart order to reduce delays.
Types Of Distributed System Architecture
Client-server Architecture
• In a client-server architecture, client machines send requests to a central server, which carries out the work and returns the results.
Three-tier Architecture
• In three-tier distributed systems, client machines remain the first tier you access. Server machines, on the other hand, are further divided into two categories:
Application servers
• Application servers act as the middle tier for communication.
• They contain the application logic or the core functions that you design the distributed system for.
Database servers
• Database servers act as the third tier to store and manage the data.
• They are responsible for data retrieval and data consistency.
• By dividing server responsibility, three-tier distributed systems reduce communication bottlenecks and improve distributed computing performance, as the sketch below illustrates.
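A minimal single-process sketch of the three tiers (hypothetical function and data names; in a real system each tier runs on its own server and communicates over the network):

```python
# Tier 3: database server -- stores and manages the data.
DATABASE = {"alice": 100, "bob": 250}

def db_get_balance(user):
    # Responsible for data retrieval (and, in a real system, consistency).
    return DATABASE[user]

# Tier 2: application server -- holds the application logic.
def app_get_report(user):
    balance = db_get_balance(user)  # only this tier talks to the database
    return f"{user} has a balance of {balance}"

# Tier 1: client -- only ever talks to the application tier.
if __name__ == "__main__":
    print(app_get_report("alice"))  # alice has a balance of 100
```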
N-tier Architecture
• N-tier models include several different client-server systems communicating with each other to solve the same problem.
• Most modern distributed systems use an n-tier architecture, with different enterprise applications working together as one system behind the scenes.
Peer-to-peer Architecture
• Peer-to-peer architectures assign equal responsibilities to all networked computers.
• There is no separation between client and server computers, and any computer can perform all responsibilities, as the sketch below shows.
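A minimal sketch of peer-to-peer symmetry, assuming plain TCP sockets from Python's standard library: every peer runs the same code and can act as both server and client (the port number is an arbitrary illustrative choice):

```python
import socket
import threading
import time

def serve(port):
    # Every peer runs the same server code...
    with socket.socket() as s:
        s.bind(("localhost", port))
        s.listen()
        conn, _ = s.accept()
        with conn:
            conn.sendall(b"hello from peer on port %d" % port)

def request(port):
    # ...and the same client code; there is no dedicated server machine.
    with socket.socket() as s:
        s.connect(("localhost", port))
        return s.recv(1024).decode()

if __name__ == "__main__":
    t = threading.Thread(target=serve, args=(9001,))
    t.start()
    time.sleep(0.2)  # give the peer a moment to start listening
    print(request(9001))  # hello from peer on port 9001
    t.join()
```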
HOW DOES DISTRIBUTED COMPUTING WORK?
Loose Coupling
• In a loosely coupled system, the components are largely independent: they share no memory and communicate by passing messages over a network, so individual nodes can join, fail, or be replaced without bringing down the rest.
Tight Coupling
• In a tightly coupled system, the processors are closely interconnected, typically sharing memory and coordinating constantly; communication is fast, but the components are far less independent. A sketch contrasting the two follows below.
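A hedged sketch of the contrast, using threads with shared memory to stand in for tight coupling and message-passing processes to stand in for loose coupling (a single-machine analogy for what, in a real distributed system, happens across the network):

```python
from multiprocessing import Process, Queue
import threading

# --- Tight coupling: workers share one address space directly. ---
shared_total = 0
lock = threading.Lock()

def tight_worker(values):
    global shared_total
    for v in values:
        with lock:  # constant coordination is needed over the shared state
            shared_total += v

# --- Loose coupling: workers share nothing and exchange messages. ---
def loose_worker(values, queue):
    queue.put(sum(values))  # send a result message; no shared memory

if __name__ == "__main__":
    data = list(range(100))

    t = threading.Thread(target=tight_worker, args=(data,))
    t.start()
    t.join()
    print("tight:", shared_total)  # 4950

    q = Queue()
    p = Process(target=loose_worker, args=(data, q))
    p.start()
    print("loose:", q.get())  # 4950
    p.join()
```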
Parallel Computing Vs. Distributed Computing
• Parallel computing typically uses multiple processors within a single machine, sharing memory and working on one problem; distributed computing uses multiple independent machines, each with its own memory, cooperating over a network.
• Parallel computing is aimed mainly at raw speed, while distributed computing also targets scalability, availability, and fault tolerance.
THANK YOU