Week 3: Parallel Algorithms

Parallel Algorithms

Parallel algorithms are designed to take advantage of multiple processing units to solve problems faster. They leverage the power of parallelism to break down complex tasks into smaller, independent units that can be executed concurrently.

by DARWIN VARGAS
Common Parallel Algorithms
1. Sorting Algorithms
Parallel sorting algorithms, such as merge sort and quicksort, efficiently sort large datasets by dividing them into smaller sub-problems that can be processed concurrently.

2. Searching Algorithms
Parallel search algorithms, such as parallel binary search, accelerate the process of finding specific elements within large datasets by dividing the search space among multiple processors.

3. Graph Algorithms
Parallel graph algorithms, such as parallel breadth-first search and minimum spanning tree algorithms, enable efficient processing of large graphs by distributing the workload across multiple processors.

4. Numerical Algorithms
Parallel numerical algorithms, such as parallel matrix multiplication and linear algebra algorithms, leverage parallelism to speed up computationally intensive numerical operations.
Shared Memory vs. Distributed Memory Models
Shared Memory Model
In the shared memory model, multiple processors have access to a common memory space. This allows processors to directly share data and synchronize their operations.

Distributed Memory Model
In the distributed memory model, each processor has its own private memory. Processors communicate with each other through message passing, exchanging data and coordinating their actions.
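A minimal Python sketch of the contrast, with all function and variable names illustrative rather than taken from the slides: threads share one address space and update the same data directly, while separate processes have private memory and must exchange data through explicit messages.

```python
import threading
import multiprocessing as mp

# Shared memory: every thread reads and writes the same list directly.
def shared_memory_demo():
    counter = [0]                        # memory visible to all threads
    lock = threading.Lock()

    def worker():
        for _ in range(1000):
            with lock:                   # synchronize access to shared data
                counter[0] += 1

    threads = [threading.Thread(target=worker) for _ in range(4)]
    for t in threads: t.start()
    for t in threads: t.join()
    print("shared counter:", counter[0])

# Distributed memory: each process has private memory, so data must be
# sent and received explicitly over a message channel.
def _child(conn):
    data = conn.recv()                   # receive a message
    conn.send(sum(data))                 # send a result back
    conn.close()

def message_passing_demo():
    parent, child = mp.Pipe()
    p = mp.Process(target=_child, args=(child,))
    p.start()
    parent.send([1, 2, 3, 4])            # explicit message, not shared state
    print("partial sum:", parent.recv())
    p.join()

if __name__ == "__main__":
    shared_memory_demo()
    message_passing_demo()
```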
Parallelism and Concurrency

Parallelism
Parallelism refers to the simultaneous execution of multiple tasks or operations, leveraging multiple processing units to speed up computation.

Concurrency
Concurrency refers to the ability of a system to handle multiple tasks or requests at the same time, even if those tasks are not executing simultaneously. This often involves managing shared resources efficiently.
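One way to see the distinction in standard-library Python (a sketch; the task functions are made up for illustration): threads in CPython give concurrency, since tasks interleave while waiting, whereas separate processes give true parallelism on multiple cores.

```python
from concurrent.futures import ThreadPoolExecutor, ProcessPoolExecutor
import time

def io_task(i):
    time.sleep(0.1)         # waiting, not computing: threads overlap this
    return i

def cpu_task(n):
    return sum(range(n))    # pure computation: processes run this in parallel

if __name__ == "__main__":
    # Concurrency: many in-flight tasks inside one interpreter.
    with ThreadPoolExecutor(max_workers=8) as pool:
        print(list(pool.map(io_task, range(8))))

    # Parallelism: separate processes executing on multiple cores at once.
    with ProcessPoolExecutor(max_workers=4) as pool:
        print(list(pool.map(cpu_task, [10**6] * 4)))
```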
Divide-and-Conquer Strategies

1. Divide
The problem is divided into smaller, independent sub-problems that can be solved concurrently.

2. Conquer
Each sub-problem is solved independently by a separate processor or processing unit.

3. Combine
The solutions to the sub-problems are combined to produce the final solution to the original problem.
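A minimal sketch of the three steps, summing a list in parallel (names like parallel_sum are illustrative, not from the slides):

```python
from concurrent.futures import ProcessPoolExecutor

def parallel_sum(data, workers=4):
    # Divide: split the problem into independent sub-problems.
    step = max(1, len(data) // workers)
    chunks = [data[i:i + step] for i in range(0, len(data), step)]

    # Conquer: solve each sub-problem on a separate processor.
    with ProcessPoolExecutor(max_workers=workers) as pool:
        partials = pool.map(sum, chunks)

    # Combine: merge the partial results into the final answer.
    return sum(partials)

if __name__ == "__main__":
    print(parallel_sum(list(range(1_000_001))))  # 500000500000
```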
Load Balancing and Scheduling
Load Balancing
Distributing tasks evenly among available processors to ensure optimal performance and prevent any single processor from becoming overloaded.

Scheduling
Determining the order in which tasks are executed on each processor, taking into account factors such as task dependencies and processor
availability.

Dynamic Allocation
Adjusting the allocation of tasks to processors dynamically based on changing workload and system conditions, ensuring efficient resource utilization.
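A sketch of dynamic allocation using the standard library (task durations here are invented for illustration): the executor hands each worker a new task as soon as it finishes its previous one, so uneven task sizes do not leave processors idle.

```python
from concurrent.futures import ProcessPoolExecutor, as_completed
import time, random

def task(size):
    time.sleep(size)            # stand-in for an uneven amount of work
    return size

if __name__ == "__main__":
    sizes = [random.uniform(0.01, 0.1) for _ in range(20)]
    with ProcessPoolExecutor(max_workers=4) as pool:
        futures = [pool.submit(task, s) for s in sizes]
        # Results arrive in completion order, not submission order:
        # workers that finish a fast task immediately pick up a new one.
        for f in as_completed(futures):
            print(f"done: {f.result():.3f}s")
```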
Parallel Sorting Algorithms
Merge Sort
Divides the input list into halves, recursively sorts each half, and then merges the sorted halves.

Quick Sort
Partitions the input list around a pivot element and recursively sorts the partitions. Parallelism is achieved by partitioning and sorting sub-lists concurrently.
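A minimal sketch in the spirit of parallel merge sort, assuming an illustrative helper named parallel_sort: the list is split into chunks, each chunk is sorted in its own process, and the sorted runs are merged to form the result.

```python
from concurrent.futures import ProcessPoolExecutor
from heapq import merge

def parallel_sort(data, workers=4):
    # Divide the input into roughly equal chunks, one per worker.
    step = max(1, len(data) // workers)
    chunks = [data[i:i + step] for i in range(0, len(data), step)]

    # Sort all chunks concurrently (the "conquer" step).
    with ProcessPoolExecutor(max_workers=workers) as pool:
        runs = list(pool.map(sorted, chunks))

    # Merge the sorted runs into one sorted list (the "combine" step).
    return list(merge(*runs))

if __name__ == "__main__":
    import random
    xs = [random.randint(0, 999) for _ in range(100_000)]
    assert parallel_sort(xs) == sorted(xs)
```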
Parallel Graph Algorithms

Breadth-First Search
Traverses a graph level by level, exploring all neighbors of a node before moving to the next level. Parallelism can be achieved by exploring different parts of the graph concurrently.

Minimum Spanning Tree
Finds the minimum-weight set of edges that connects all vertices in a graph. Parallelism can be achieved by exploring different edges concurrently and merging partial solutions.
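A sketch of level-synchronous parallel BFS on a toy adjacency list (the graph and function names are made up for illustration): at each level, the neighbors of all frontier vertices are expanded concurrently, then the partial results are merged into the next frontier.

```python
from concurrent.futures import ThreadPoolExecutor

graph = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2, 4], 4: [3]}

def bfs_levels(source):
    visited = {source}
    frontier = [source]
    level = 0
    with ThreadPoolExecutor(max_workers=4) as pool:
        while frontier:
            print(f"level {level}: {sorted(frontier)}")
            # Expand every frontier vertex concurrently.
            neighbor_lists = pool.map(lambda v: graph[v], frontier)
            # Combine the partial results into the next frontier.
            next_frontier = set()
            for nbrs in neighbor_lists:
                next_frontier.update(n for n in nbrs if n not in visited)
            visited.update(next_frontier)
            frontier = list(next_frontier)
            level += 1

if __name__ == "__main__":
    bfs_levels(0)
```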
Parallel Numerical Algorithms

1. Matrix Multiplication
Multiplies two matrices by dividing them into blocks and performing the multiplication of each block concurrently.

2. Linear Algebra
Solves systems of linear equations, performs matrix decompositions, and solves eigenvalue problems, leveraging parallelism for efficient computation.
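A minimal sketch of block-parallel matrix multiplication, where each worker computes one row-block of the product C = A x B. Pure-Python lists keep the example self-contained; real code would use a numerical library, and the helper names are illustrative.

```python
from concurrent.futures import ProcessPoolExecutor

def row_block(args):
    a_rows, b = args
    # Multiply a horizontal block of A by all of B.
    return [[sum(a[k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))]
            for a in a_rows]

def parallel_matmul(a, b, workers=2):
    step = max(1, len(a) // workers)
    blocks = [(a[i:i + step], b) for i in range(0, len(a), step)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        # Each block is computed concurrently; results are concatenated
        # in order to form the full product.
        return [row for part in pool.map(row_block, blocks) for row in part]

if __name__ == "__main__":
    A = [[1, 2], [3, 4]]
    B = [[5, 6], [7, 8]]
    print(parallel_matmul(A, B))   # [[19, 22], [43, 50]]
```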
Challenges and Considerations
Communication Overhead
The time required for processors to communicate with each other can significantly impact performance.

Synchronization
Ensuring that processors access shared resources and update shared data in a consistent and coordinated manner.

Data Locality
Minimizing the amount of data that needs to be transferred between processors, ensuring that each processor has access to the data it needs locally.

Scalability
Designing algorithms that can effectively utilize increasing numbers of processors, maintaining performance and efficiency as the problem size grows.
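A small sketch of the synchronization challenge (all names illustrative): without coordination, concurrent read-modify-write updates to shared data can be lost, while a lock makes each update atomic.

```python
import threading

counter = 0
lock = threading.Lock()

def increment(times, use_lock):
    global counter
    for _ in range(times):
        if use_lock:
            with lock:
                counter += 1    # consistent: one thread updates at a time
        else:
            counter += 1        # racy: interleaved updates can be lost

def run(use_lock):
    global counter
    counter = 0
    threads = [threading.Thread(target=increment, args=(100_000, use_lock))
               for _ in range(4)]
    for t in threads: t.start()
    for t in threads: t.join()
    return counter

if __name__ == "__main__":
    print("with lock:   ", run(True))    # always 400000
    print("without lock:", run(False))   # may be less than 400000
```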
