08 Parallel algorithms approaches

The document discusses parallel computing, its types, and the challenges associated with it, including the design of parallel computers, efficient algorithms, and evaluation methods. It outlines Flynn's taxonomy of parallel architectures: SISD, SIMD, MISD, and MIMD, along with techniques like pipelining and superscalar execution. Additionally, it highlights the importance of programming languages and tools for implementing parallel algorithms effectively.


Recap

• What is Parallel Computing
• Why Parallel Computing
• Types of Parallelism
• Amdahl’s Law (see the sketch after this list)
• Effect of multiple processors on run time
• Effect of multiple processors on speedup
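
A minimal sketch (plain Python, hypothetical numbers) of what Amdahl’s Law predicts: if a fraction p of a program is parallelizable, the speedup on n processors is bounded by 1 / ((1 - p) + p / n), so the serial part quickly dominates.

    def amdahl_speedup(p: float, n: int) -> float:
        """Upper bound on speedup when a fraction p of the work
        is parallelizable and runs on n processors."""
        return 1.0 / ((1.0 - p) + p / n)

    # Even with 95% of the work parallelized, speedup saturates:
    for n in (2, 4, 16, 256):
        print(n, round(amdahl_speedup(0.95, n), 2))
    # 2 -> 1.9, 4 -> 3.48, 16 -> 9.14, 256 -> 18.62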
Issues in Parallel Computing

• Design of Parallel Computers: Machines must be designed so that they scale to a large number of processors and are capable of supporting fast communication and data sharing among processors.
• Design of Efficient Algorithms: Designing parallel algorithms is different from designing serial algorithms. A significant amount of work is being done on numerical and non-numerical parallel algorithms.
• Methods of Evaluating Parallel Algorithms: Given a parallel computer and a parallel algorithm, we need to evaluate the performance of the resulting system: how fast the problem is solved and how efficiently the processors are used.
Issues in Parallel Computing (continued)

• Parallel Computer Languages: Parallel algorithms are implemented in a programming language. This language must be flexible enough to allow efficient implementation, must be easy to program in, and must use the hardware efficiently.
• Parallel Programming Tools: Tools (compilers, libraries, debuggers, and other monitoring or performance-evaluation tools) must shield users from low-level machine characteristics.
• Portable Parallel Programs: This is one of the main problems with current parallel computers. Programs written for one parallel computer require extensive work to port to another parallel computer.
Types of Parallel Computing

• According to Flynn’s taxonomy [1], computer architectures are classified into one of
four basic types. These are:
• Single instruction, single data (SISD): single scalar processor
• Single instruction, multiple data (SIMD): Thinking Machines CM-2
• Multiple instruction, single data (MISD): various special-purpose machines
• Multiple instruction, multiple data (MIMD): Nearly all parallel machines

[1] M. J. Flynn, "Some computer organizations and their effectiveness," IEEE Transactions on Computers, 21(9), 1972, pp. 948–960.
Single Instruction, Single Data (SISD)
• Uniprocessor machines, such as a PC with a single control unit, fall in this category.

• Instructions are executed one at a time, following the so-called machine cycle (fetch-decode-execute) sequence.

• Pipelined processors and superscalar processors are common examples found in most modern SISD computers.

• Pipelining and superscalar execution are techniques for implementing instruction-level parallelism within a single processor.

• Both exploit instruction-level parallelism.

Single Instruction, Multiple Data (SIMD)
• Single instruction, multiple data (SIMD) is a class of parallel computers in Flynn's taxonomy.

• It describes computers with multiple processing elements that perform the same operation on multiple data streams simultaneously.

• SIMD machines exploit data-level parallelism.

• Vector processors (array processors) and GPUs are in this category.

• The number of data elements processed in parallel at the same time depends on the size of the elements and the capacity of the data-processing resources available.
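
A minimal sketch of SIMD-style data parallelism using NumPy (my choice of tool, not one named in the slides): a single high-level operation is applied to every element pair at once, and NumPy dispatches it to vectorized (often SIMD) machine code rather than a one-element-at-a-time Python loop.

    import numpy as np

    a = np.arange(1_000_000, dtype=np.float32)
    b = np.ones(1_000_000, dtype=np.float32)

    # One logical "instruction" applied to a million data elements:
    # the same add is performed across all element pairs.
    c = a + b
    print(c[:3])  # [1. 2. 3.]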
Multiple Instruction, Single Data (MISD)

• MISD is a type of parallel computing architecture where many functional units perform different operations (instructions) on the same data.

• This is an uncommon architecture, generally used for fault tolerance; for example, redundant flight-control computers can run different programs on the same sensor data and vote on the result.
Multiple Instruction, Multiple Data (MIMD)
• MIMD is a technique employed to achieve
parallelism.

• Computers in this category are known as multi-core machines or multiprocessors (parallel computers), where a number of autonomous processors simultaneously execute different instructions on different data.

• MIMD machines work with both shared-memory and distributed-memory architectures.

• Distributed systems and cluster computing are generally recognized to be MIMD architectures.

[Diagram: distributed memory — processors P, each with a local memory M, connected by a network; shared memory — processors P sharing one memory through a bus / crossbar.]
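
A minimal sketch of MIMD-style execution using Python's multiprocessing module (my choice; the slides prescribe no language): autonomous processes run different instruction streams on different data at the same time.

    from multiprocessing import Process, Queue

    def summer(data, out):
        # One instruction stream: accumulate a sum.
        out.put(("sum", sum(data)))

    def maxer(data, out):
        # A different instruction stream on different data: find a maximum.
        out.put(("max", max(data)))

    if __name__ == "__main__":
        out = Queue()
        procs = [
            Process(target=summer, args=(range(1_000_000), out)),
            Process(target=maxer, args=(range(500_000), out)),
        ]
        for p in procs:
            p.start()
        results = [out.get() for _ in procs]  # one result per process
        for p in procs:
            p.join()
        print(results)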
Pipelining and Superscalar Execution
• Pipelining overlaps various stages of instruction execution to achieve performance (parallelism).

• At a high level of abstraction, one instruction can be executed while the next is being decoded and the one after that is being fetched.

• Pipelining, however, has several limitations.

• The speed of a pipeline is eventually limited by the slowest stage.

• For this reason, conventional processors rely on very deep pipelines (20-stage pipelines in state-of-the-art Pentium processors), splitting the work into many short stages so that no single stage is slow.

• One simple way of alleviating these bottlenecks is to use multiple pipelines.
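
A small worked computation (hypothetical stage and instruction counts) of the standard pipeline timing model: n instructions on a k-stage pipeline take roughly k + (n - 1) cycles, versus n * k cycles without pipelining.

    def pipelined_cycles(n: int, k: int) -> int:
        # The first instruction fills the pipeline (k cycles); after
        # that, one instruction completes per cycle.
        return k + (n - 1)

    n, k = 1000, 5
    print(n * k)                   # 5000 cycles unpipelined
    print(pipelined_cycles(n, k))  # 1004 cycles pipelined, ~5x speedup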
Pipelining and Superscalar Execution: Dependency
Scheduling of instructions is determined by a number of factors:

• True Data Dependency: The result of one operation is an input to the next.

• Resource Dependency: Two operations require the same resource.
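
A minimal illustration (hypothetical straight-line code, not a real instruction set) of the two kinds of dependency:

    a, b, c, d, e, f = 1, 2, 3, 4, 5, 6

    # True data dependency: y reads x, which the previous line writes,
    # so the two operations cannot be overlapped.
    x = a + b
    y = x * 2

    # Resource dependency: p and q are independent in data, but on a
    # machine with a single multiply unit they would compete for it.
    p = c * d
    q = e * f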


Superscalar Execution: Issue Mechanism
 In the simpler model, instructions can be issued only in the order in which they
are encountered. That is, if the second instruction cannot be issued because it
has a data dependency with the first, only one instruction is issued in the cycle.
This is called in-order issue.

 In a more aggressive model, instructions can be issued out of order. In this case,
if the second instruction has data dependencies with the first, but the third
instruction does not, the first and third instructions can be co-scheduled. This is
also called dynamic issue.
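
A toy sketch (hypothetical three-instruction program on a dual-issue machine) contrasting the two issue models: with in-order issue, the dependent second instruction also blocks the independent third one; with dynamic (out-of-order) issue, the first and third instructions are co-scheduled.

    # Each instruction: (name, registers it reads, register it writes).
    prog = [
        ("i1", set(),  "r1"),  # r1 = load a
        ("i2", {"r1"}, "r2"),  # r2 = r1 + 1  (depends on i1)
        ("i3", set(),  "r3"),  # r3 = load b  (independent)
    ]
    WIDTH = 2  # instructions that can issue per cycle

    def issue(prog, in_order):
        pending, done, cycles = list(prog), set(), []
        while pending:
            issued, written = [], set()
            for name, srcs, dst in pending:
                if srcs <= done and not (srcs & written) and len(issued) < WIDTH:
                    issued.append(name)
                    written.add(dst)
                elif in_order:
                    break  # a stalled instruction blocks all later ones
            pending = [i for i in pending if i[0] not in issued]
            done |= written
            cycles.append(issued)
        return cycles

    print(issue(prog, in_order=True))   # [['i1'], ['i2', 'i3']]
    print(issue(prog, in_order=False))  # [['i1', 'i3'], ['i2']]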
Superscalar Execution: Performance Consideration
 Not all functional units can be kept busy at all times.
 If, during a cycle, no functional units are utilized, this is referred to as vertical waste.

 If, during a cycle, only some of the functional units are utilized, this is referred to as horizontal waste.

 Due to limited parallelism in typical instruction traces, dependencies, or the inability of the scheduler to extract parallelism, the performance of superscalar processors is eventually limited.
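
A small sketch (hypothetical issue trace for a 2-way superscalar machine) that makes the two kinds of waste concrete: cycles that issue nothing are vertical waste, while partially filled cycles contribute horizontal waste.

    WIDTH = 2
    # Instructions issued per cycle over six cycles (made-up trace).
    trace = [2, 1, 0, 2, 1, 0]

    slots = WIDTH * len(trace)                                   # 12 issue slots
    vertical = sum(WIDTH for c in trace if c == 0)               # 4: empty cycles
    horizontal = sum(WIDTH - c for c in trace if 0 < c < WIDTH)  # 2: partial cycles
    used = sum(trace)                                            # 6 slots used

    print(used, vertical, horizontal)  # 6 4 2
    assert used + vertical + horizontal == slots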
