Parallel Programming: Lecture #9
Agenda
Principles of Parallel Algorithm Design
o Introduction
• Decomposition, Tasks, & Dependency Graphs
• Granularity, concurrency & task interaction
• Processes & mapping
o Decomposition Techniques
1. Recursive Decomposition
2. Data Decomposition
3. Exploratory decomposition
4. Speculative decomposition
o Algorithm development is a critical component of solving a problem using a computer
o Sequential Algorithm → a sequence of basic steps for solving a given problem on a
serial computer
Decomposition, Tasks, &
Dependency Graphs
Decomposition
o The process of dividing a computation into smaller parts (tasks) that can be
executed in parallel
Tasks
o Programmer-defined units of computation
o Tasks may be of the same, different, or even indeterminate sizes.
Dependency Graphs
o An abstraction used to express dependencies among tasks (their order of
execution). It is a directed graph with nodes corresponding to tasks and
edges indicating that the result of one task is required by the next
Example: Multiplying a Dense Matrix with a Vector.
oObservations: While tasks share data (namely, the vector b), they do not
have any control dependencies, i.e., no task needs to wait for the (partial)
completion of any other.
oAll tasks are of the same size in terms of number of operations.
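As a minimal sketch (not from the slides), the decomposition above can be written with one task per row of A: every task reads the shared vector b, but no task waits on any other. The thread pool and the 2×2 example values are illustrative assumptions.

```python
# Sketch: decompose dense y = A*b into one task per row.
# All tasks share b; there are no control dependencies between them.
from concurrent.futures import ThreadPoolExecutor

def row_task(row, b):
    # Task i: dot product of row i of A with the shared vector b.
    return sum(a * x for a, x in zip(row, b))

def mat_vec(A, b):
    with ThreadPoolExecutor() as pool:
        # Tasks are independent, so they may all run concurrently.
        return list(pool.map(lambda row: row_task(row, b), A))

A = [[1, 2], [3, 4]]
b = [5, 6]
print(mat_vec(A, b))  # [17, 39]
```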
Granularity, Concurrency & Task Interaction
Granularity
o The number of tasks into which a problem is decomposed determines
its granularity.
Fine-grained: a decomposition into a large number of small tasks
Coarse-grained: a decomposition into a small number of large tasks
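The two granularities can be contrasted on the matrix-vector example: fine-grained assigns one task per row, while coarse-grained groups rows into blocks. This is a sketch under that assumption; the block-splitting helper is illustrative.

```python
# Fine-grained: one task per row (many small tasks).
def fine_tasks(n):
    return [[i] for i in range(n)]

# Coarse-grained: split n rows into p contiguous blocks
# (block sizes differ by at most 1).
def coarse_tasks(n, p):
    q, r = divmod(n, p)
    tasks, start = [], 0
    for k in range(p):
        size = q + (1 if k < r else 0)
        tasks.append(list(range(start, start + size)))
        start += size
    return tasks

print(len(fine_tasks(8)))   # 8 small tasks
print(coarse_tasks(8, 2))   # [[0, 1, 2, 3], [4, 5, 6, 7]]
```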
Example: Database Query Processing
Consider a relational database of vehicles. Each row is a record containing the data of one
vehicle, such as ID, model, year, color, …
Consider the computation needed to execute the query:
MODEL = ``CIVIC'' AND YEAR = 2001 AND (COLOR = ``GREEN'' OR COLOR = ``WHITE'')
o The previous query is processed by creating a number of intermediate
tables (e.g., 4 tables)
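A hypothetical sketch of this decomposition: independent selection tasks each build an intermediate table, and later tasks combine them. The record layout (ID, MODEL, YEAR, COLOR) and the sample rows are assumptions for illustration only.

```python
# Sample records: (ID, MODEL, YEAR, COLOR) -- assumed layout.
vehicles = [
    (1, "CIVIC", 2001, "GREEN"),
    (2, "CIVIC", 2000, "WHITE"),
    (3, "COROLLA", 2001, "WHITE"),
    (4, "CIVIC", 2001, "WHITE"),
]

# Level 1: four independent selection tasks (intermediate tables of IDs).
civics = {r[0] for r in vehicles if r[1] == "CIVIC"}
y2001  = {r[0] for r in vehicles if r[2] == 2001}
greens = {r[0] for r in vehicles if r[3] == "GREEN"}
whites = {r[0] for r in vehicles if r[3] == "WHITE"}

# Level 2: combine intermediate tables (these depend on level 1).
green_or_white = greens | whites
civic_2001 = civics & y2001

# Level 3: final result of the query.
result = civic_2001 & green_or_white
print(sorted(result))  # [1, 4]
```

The dependency structure is exactly a task dependency graph: the level-2 tasks cannot start before the level-1 tables they consume are complete.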
Degree of Concurrency
o The maximum number of tasks that can be executed in parallel at a given time
o Maximum degree of concurrency <= total number of tasks … why?
Because of the dependencies among tasks
Critical path:
oThe longest directed path between any pair of start node (node with
no incoming edges) and finish node (node with no outgoing edges).
Degree of Concurrency
o The average degree of concurrency is the average number of tasks
that can be processed in parallel over the execution of the program.
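A minimal sketch tying these notions together, assuming unit-cost tasks: the critical path is the longest start-to-finish path in the dependency graph, and the average degree of concurrency equals total work divided by critical-path length. The seven-task graph below is an invented example.

```python
from functools import lru_cache

# deps[t] lists the tasks t must wait for (an assumed 7-task graph):
# four independent leaves, two combine tasks, one final task.
deps = {
    "A": [], "B": [], "C": [], "D": [],
    "E": ["A", "B"], "F": ["C", "D"],
    "G": ["E", "F"],
}

@lru_cache(maxsize=None)
def path_len(t):
    # Number of tasks on the longest dependency path ending at t.
    return 1 + max((path_len(d) for d in deps[t]), default=0)

critical = max(path_len(t) for t in deps)   # longest path has 3 tasks
total_work = len(deps)                      # 7 unit-cost tasks
avg_concurrency = total_work / critical     # 7 / 3 ≈ 2.33
print(critical, round(avg_concurrency, 2))  # 3 2.33
```

Note also that the maximum degree of concurrency here is 4 (the four leaves), which is less than the total of 7 tasks because of the dependencies.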
Task Interaction Graphs:
Are the dependency graph and the interaction graph the same?
oNo: even when all tasks are independent, they still need access to specific data
oReturning to the matrix example: all tasks need access to vector b, so in a
distributed-memory setting they must send and receive messages to access the
entire vector
Example:
oConsider the problem of multiplying a sparse matrix A with a vector
b. The following observations can be made:
oA matrix is said to be sparse if
1. A significant number of its entries are zero
2. The zero entries do not conform to a predefined structure or pattern
Assume that task i is responsible for sending b[i]; hence task 4 is responsible
for sending b[4] to tasks 0, 5, 8, 9, and so on. The resulting task interaction
graph is shown in the figure
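As a sketch of this idea (with an assumed sparsity pattern, not the slides' figure): if task i owns row i and b[i], then task i must receive b[j] from task j whenever A[i][j] is nonzero, so the interaction graph mirrors the nonzero structure of A.

```python
# Sparse rows stored as {column: value} dicts (assumed example pattern).
A = {
    0: {0: 1.0, 4: 2.0},            # row 0 needs b[0] and b[4]
    4: {0: 1.0, 4: 3.0, 5: 2.0},    # row 4 needs b[0], b[4], b[5]
    5: {4: 4.0, 5: 1.0},            # row 5 needs b[4] and b[5]
}

def interaction_edges(A):
    # Edge (i, j): task i needs b[j], which is owned by task j (j != i).
    return {(i, j) for i, row in A.items() for j in row if j != i}

print(sorted(interaction_edges(A)))  # [(0, 4), (4, 0), (4, 5), (5, 4)]
```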
Processes and Mapping
o Process → a computing agent that performs tasks
Objectives:
o Maximize concurrency: Task dependency graphs can be used
to ensure that work is equally spread across all processes at any
point (minimum idling and optimal load balance).
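One common way to pursue this load-balance objective is a greedy mapping that always assigns the next (largest) task to the least-loaded process. This is an illustrative sketch, not the slides' algorithm; the task costs are invented.

```python
import heapq

def greedy_map(task_costs, n_procs):
    # Min-heap of (current load, process id); place largest tasks first.
    heap = [(0, p) for p in range(n_procs)]
    heapq.heapify(heap)
    mapping = {}
    for task, cost in sorted(task_costs.items(), key=lambda kv: -kv[1]):
        load, p = heapq.heappop(heap)      # least-loaded process
        mapping[task] = p
        heapq.heappush(heap, (load + cost, p))
    return mapping

tasks = {"t1": 4, "t2": 3, "t3": 3, "t4": 2}
m = greedy_map(tasks, 2)
# Both processes end up with a load of 6: work is evenly spread.
print(m)
```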