08 Parallel algorithms approaches
• Design of parallel computers: design them so that they scale to a large number of processors and are capable of supporting fast communication and data sharing among processors.
• Design of efficient algorithms: designing parallel algorithms is different from designing serial algorithms. A significant amount of work is being done on both numerical and non-numerical parallel algorithms.
• Methods of evaluating parallel algorithms: given a parallel computer and a parallel algorithm, we need to evaluate the performance of the resulting system — how fast the problem is solved and how efficiently the processors are used.
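The two evaluation questions above are usually quantified as speedup and efficiency. A minimal sketch (the timing figures below are hypothetical, chosen only for illustration):

```python
def speedup(t_serial, t_parallel):
    """Speedup S = T_serial / T_parallel: how much faster the problem is solved."""
    return t_serial / t_parallel

def efficiency(t_serial, t_parallel, p):
    """Efficiency E = S / p: the fraction of ideal linear speedup achieved,
    i.e. how effectively the p processors are used."""
    return speedup(t_serial, t_parallel) / p

# Hypothetical example: 100 s serially, 16 s on 8 processors.
s = speedup(100.0, 16.0)        # 6.25
e = efficiency(100.0, 16.0, 8)  # 0.78125: processors busy ~78% of the time
print(s, e)
```

An efficiency close to 1 means the processors spend little time idle or communicating; values well below 1 signal overheads that grow with the processor count.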
Issues in Parallel Computing
• According to Flynn's taxonomy [1], computer architectures are classified into one of four basic types. These are:
• Single instruction, single data (SISD): single scalar processor
• Single instruction, multiple data (SIMD): Thinking Machines CM-2
• Multiple instruction, single data (MISD): various special-purpose machines
• Multiple instruction, multiple data (MIMD): Nearly all parallel machines
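The SIMD category can be illustrated in software: one instruction (here an add) is applied to every element of the data in lockstep. This is only an emulation of the model, not real vector hardware:

```python
def simd_add(a, b):
    """Apply the same 'add' instruction to every lane of two data vectors,
    emulating the single-instruction, multiple-data execution model."""
    assert len(a) == len(b)
    return [x + y for x, y in zip(a, b)]

print(simd_add([1, 2, 3, 4], [10, 20, 30, 40]))  # [11, 22, 33, 44]
```

On a real SIMD machine such as the CM-2, the four additions would execute simultaneously on separate processing elements rather than in a sequential loop.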
[Figure: shared-memory architecture — processors (P) connected to memory modules (M) through a bus / crossbar interconnect]
Pipelining and Superscalar Execution
• Pipelining overlaps the various stages of instruction execution to achieve parallelism, and hence performance.
• True Data Dependency: the result of one operation is an input to the next.
In a more aggressive model, instructions can be issued out of order. In this case, if the second instruction has a data dependency on the first but the third instruction does not, the first and third instructions can be co-scheduled. This is also called dynamic issue.
Superscalar Execution: Performance Considerations
• Not all functional units can be kept busy at all times.
• If, during a cycle, no functional unit is utilized, this is referred to as vertical waste.
• If, during a cycle, only some of the functional units are utilized, this is referred to as horizontal waste.
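The two kinds of waste can be counted from a per-cycle utilization trace. The 4-unit machine and the trace below are hypothetical, chosen only to make the distinction concrete:

```python
UNITS = 4  # assumed number of functional units in this sketch
trace = [
    [1, 1, 1, 1],  # all units busy: no waste
    [1, 0, 1, 0],  # partial use: horizontal waste (2 idle issue slots)
    [0, 0, 0, 0],  # nothing issued: vertical waste (all 4 slots idle)
]

# Vertical waste: slots lost in cycles where no unit issues at all.
vertical = sum(UNITS for cycle in trace if not any(cycle))
# Horizontal waste: idle slots in cycles where at least one unit issues.
horizontal = sum(cycle.count(0) for cycle in trace if any(cycle))
print(vertical, horizontal)  # 4 2
```

Together, vertical plus horizontal waste measures how far the processor falls short of its peak issue rate (here, 6 of 12 possible issue slots are wasted).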