LLNL Introduction To Parallel Computing
Table of Contents
1. Abstract
2. Overview
   1. What is Parallel Computing?
   2. Why Use Parallel Computing?
3. Concepts and Terminology
   1. von Neumann Computer Architecture
   2. Flynn's Classical Taxonomy
   3. Some General Parallel Terminology
4. Parallel Computer Memory Architectures
   1. Shared Memory
   2. Distributed Memory
   3. Hybrid Distributed-Shared Memory
5. Parallel Programming Models
   1. Overview
   2. Shared Memory Model
   3. Threads Model
   4. Message Passing Model
   5. Data Parallel Model
   6. Other Models
6. Designing Parallel Programs
   1. Automatic vs. Manual Parallelization
   2. Understand the Problem and the Program
   3. Partitioning
   4. Communications
   5. Synchronization
   6. Data Dependencies
   7. Load Balancing
   8. Granularity
   9. I/O
   10. Limits and Costs of Parallel Programming
   11. Performance Analysis and Tuning
7. Parallel Examples
   1. Array Processing
   2. PI Calculation
   3. Simple Heat Equation
   4. 1-D Wave Equation
8. References and More Information
Abstract
This tutorial covers the very basics of parallel computing, and is intended for someone who is just
becoming acquainted with the subject. It begins with a brief overview, including concepts and
terminology associated with parallel computing. The topics of parallel memory architectures and
programming models are then explored. These topics are followed by a discussion on a number of issues
related to designing parallel programs. The tutorial concludes with several examples of how to parallelize
simple serial programs.
Level/Prerequisites: None
Overview

What is Parallel Computing?
In the simplest sense, parallel computing is the simultaneous use of multiple compute resources to
solve a computational problem:
To be run using multiple CPUs
A problem is broken into discrete parts that can be solved concurrently
Each part is further broken down to a series of instructions
Instructions from each part execute simultaneously on different CPUs
The computational problem usually demonstrates characteristics such as the ability to be:
Broken apart into discrete pieces of work that can be solved simultaneously;
Executed as multiple program instructions at any moment in time;
Solved in less time with multiple compute resources than with a single compute resource.
Parallel computing is an evolution of serial computing that attempts to emulate what has always
been the state of affairs in the natural world: many complex, interrelated events happening at the
same time, yet within a sequence. For example:
Galaxy formation
Planetary movement
Weather and ocean patterns
Tectonic plate drift
Rush hour traffic
Automobile assembly line
Building a space shuttle
Ordering a hamburger at the drive through
The Real World is Massively Parallel
Historically, parallel computing has been considered to be "the high end of computing", and has
been used to model difficult scientific and engineering problems found in the real world. Some
examples:
Atmosphere, Earth, Environment
Physics - applied, nuclear, particle, condensed matter, high pressure, fusion, photonics
Bioscience, Biotechnology, Genetics
Chemistry, Molecular Sciences
Geology, Seismology
Mechanical Engineering - from prosthetics to spacecraft
Electrical Engineering, Circuit Design, Microelectronics
Computer Science, Mathematics
Today, commercial applications provide an equal or greater driving force in the development of
faster computers. These applications require the processing of large amounts of data in
sophisticated ways. For example:
Databases, data mining
Oil exploration
Web search engines, web based business services
Medical imaging and diagnosis
Pharmaceutical design
Management of national and multi-national corporations
Financial and economic modeling
Advanced graphics and virtual reality, particularly in the entertainment industry
Networked video and multi-media technologies
Collaborative work environments
Why Use Parallel Computing?
Save time and/or money: In theory, throwing more resources at a task will shorten its time to
completion, with potential cost savings. Parallel clusters can be built from cheap, commodity
components.
Solve larger problems: Many problems are so large and/or complex that it is impractical or
impossible to solve them on a single computer, especially given limited computer memory. For
example:
"Grand Challenge" (en.wikipedia.org/wiki/Grand_Challenge) problems requiring
PetaFLOPS and PetaBytes of computing resources.
Web search engines/databases processing millions of transactions per second
Provide concurrency: A single compute resource can only do one thing at a time. Multiple
computing resources can be doing many things simultaneously. For example, the Access Grid
(www.accessgrid.org) provides a global collaboration network where people from around the
world can meet and conduct work "virtually".
Use of non-local resources: Using compute resources on a wide area network, or even the
Internet when local compute resources are scarce. For example:
SETI@home (setiathome.berkeley.edu) uses over 330,000 computers for a compute power
over 528 TeraFLOPS (as of August 04, 2008)
Folding@home (folding.stanford.edu) uses over 340,000 computers for a compute power of
4.2 PetaFLOPS (as of November 4, 2008)
Limits to serial computing: Both physical and practical reasons pose significant constraints to
simply building ever faster serial computers:
Transmission speeds - the speed of a serial computer is directly dependent upon how fast
data can move through hardware. Absolute limits are the speed of light (30 cm/nanosecond)
and the transmission limit of copper wire (9 cm/nanosecond). Increasing speeds necessitate
increasing proximity of processing elements.
Limits to miniaturization - processor technology is allowing an increasing number of
transistors to be placed on a chip. However, even with molecular or atomic-level
components, a limit will be reached on how small components can be.
Economic limitations - it is increasingly expensive to make a single processor faster. Using a
larger number of moderately fast commodity processors to achieve the same (or better)
performance is less expensive.
Current computer architectures are increasingly relying upon hardware level parallelism to
improve performance:
Top500.org provides statistics on parallel computing users - the charts below are just a sample.
Some things to note:
Sectors may overlap - for example, research may be classified research. Respondents have to
choose between the two.
"Not Specified" is by far the largest application - probably means multiple applications.
The Future:
During the past 20 years, the trends indicated by ever faster networks, distributed systems, and
multi-processor computer architectures (even at the desktop level) clearly show that parallelism is
the future of computing.
Concepts and Terminology

Flynn's Classical Taxonomy

Flynn's taxonomy distinguishes multi-processor computer architectures according to how they can
be classified along the two independent dimensions of Instruction and Data. Each of these
dimensions can have only one of two possible states: Single or Multiple.
SISD - Single Instruction, Single Data
SIMD - Single Instruction, Multiple Data
MISD - Multiple Instruction, Single Data
MIMD - Multiple Instruction, Multiple Data
Single Instruction, Multiple Data (SIMD):
Examples: ILLIAC IV, MasPar, Cray X-MP, Cray Y-MP, Thinking Machines CM-2, Cell Processor (GPU)
Multiple Instruction,
Single Data (MISD):
Multiple Instruction,
Multiple Data (MIMD):
Some General Parallel Terminology

Task
A logically discrete section of computational work. A task is typically a program or program-like
set of instructions that is executed by a processor.
Parallel Task
A task that can be executed by multiple processors safely (yields correct results)
Serial Execution
Execution of a program sequentially, one statement at a time. In the simplest sense, this is what
happens on a one processor machine. However, virtually all parallel tasks will have sections of a
parallel program that must be executed serially.
Parallel Execution
Execution of a program by more than one task, with each task being able to execute the same or
different statement at the same moment in time.
Pipelining
Breaking a task into steps performed by different processor units, with inputs streaming through,
much like an assembly line; a type of parallel computing.
Shared Memory
From a strictly hardware point of view, describes a computer architecture where all processors
have direct (usually bus based) access to common physical memory. In a programming sense, it
describes a model where parallel tasks all have the same "picture" of memory and can directly
address and access the same logical memory locations regardless of where the physical memory
actually exists.
Distributed Memory
In hardware, refers to network based memory access for physical memory that is not common. As
a programming model, tasks can only logically "see" local machine memory and must use
communications to access memory on other machines where other tasks are executing.
Communications
Parallel tasks typically need to exchange data. There are several ways this can be accomplished,
such as through a shared memory bus or over a network; however, the actual event of data
exchange is commonly referred to as communications regardless of the method employed.
Synchronization
The coordination of parallel tasks in real time, very often associated with communications. Often
implemented by establishing a synchronization point within an application where a task may not
proceed further until another task(s) reaches the same or logically equivalent point.
Synchronization usually involves waiting by at least one task, and can therefore cause a parallel
application's wall clock execution time to increase.
Granularity
In parallel computing, granularity is a qualitative measure of the ratio of computation to
communication.
Coarse: relatively large amounts of computational work are done between communication
events
Fine: relatively small amounts of computational work are done between communication
events
Observed Speedup
Observed speedup of a code which has been parallelized, defined as:

    wall-clock time of serial execution
    -------------------------------------
    wall-clock time of parallel execution

One of the simplest and most widely used indicators for a parallel program's performance.
Parallel Overhead
The amount of time required to coordinate parallel tasks, as opposed to doing useful work. Parallel
overhead can include factors such as:
Task start-up time
Synchronizations
Data communications
Software overhead imposed by parallel compilers, libraries, tools, operating system, etc.
Task termination time
Massively Parallel
Refers to the hardware that comprises a given parallel system - having many processors. The
meaning of "many" keeps increasing, but currently, the largest parallel computers can be
comprised of processors numbering in the hundreds of thousands.
Embarrassingly Parallel
Solving many similar, but independent tasks simultaneously; little to no need for coordination
between the tasks.
Scalability
Refers to a parallel system's (hardware and/or software) ability to demonstrate a proportionate
increase in parallel speedup with the addition of more processors. Factors that contribute to
scalability include:
Multi-core Processors
Multiple processors (cores) on a single chip.
Cluster Computing
Use of a combination of commodity units (processors, networks or SMPs) to build a parallel
system.
Parallel Computer Memory Architectures

Shared Memory
General Characteristics:

Shared memory parallel computers vary widely, but generally have in common the ability for all processors to access all memory as global address space.

Changes in a memory location effected by one processor are visible to all other processors.

Shared memory machines can be divided into two main classes based upon memory access times: UMA and NUMA.

[Figures: Shared Memory (UMA) and Shared Memory (NUMA)]

Uniform Memory Access (UMA):

Most commonly represented today by Symmetric Multiprocessor (SMP) machines
Identical processors
Equal access and access times to memory
Sometimes called CC-UMA - Cache Coherent UMA. Cache coherent means if one processor updates a location in shared memory, all the other processors know about the update. Cache coherency is accomplished at the hardware level.

Non-Uniform Memory Access (NUMA):

Often made by physically linking two or more SMPs
One SMP can directly access memory of another SMP
Not all processors have equal access time to all memories
Memory access across link is slower
If cache coherency is maintained, then may also be called CC-NUMA - Cache Coherent NUMA
Advantages:

Global address space provides a user-friendly programming perspective to memory.
Data sharing between tasks is both fast and uniform due to the proximity of memory to CPUs.
Disadvantages:
Primary disadvantage is the lack of scalability between memory and CPUs. Adding more CPUs
can geometrically increase traffic on the shared memory-CPU path and, for cache coherent
systems, geometrically increase traffic associated with cache/memory management.
Programmer responsibility for synchronization constructs that ensure "correct" access of global
memory.
Expense: it becomes increasingly difficult and expensive to design and produce shared memory
machines with ever increasing numbers of processors.
Distributed Memory
General Characteristics:
Like shared memory systems, distributed memory systems vary widely but share a common
characteristic. Distributed memory systems require a communication network to connect inter-
processor memory.
Processors have their own local memory. Memory addresses in one processor do not map to
another processor, so there is no concept of global address space across all processors.
Because each processor has its own local memory, it operates independently. Changes it makes to
its local memory have no effect on the memory of other processors. Hence, the concept of cache
coherency does not apply.
When a processor needs access to data in another processor, it is usually the task of the
programmer to explicitly define how and when data is communicated. Synchronization between
tasks is likewise the programmer's responsibility.
The network "fabric" used for data transfer varies widely, though it can can be as simple as
Ethernet.
Advantages:
Memory is scalable with number of processors. Increase the number of processors and the size of
memory increases proportionately.
Each processor can rapidly access its own memory without interference and without the overhead
incurred with trying to maintain cache coherency.
Cost effectiveness: can use commodity, off-the-shelf processors and networking.
Disadvantages:
The programmer is responsible for many of the details associated with data communication
between processors.
It may be difficult to map existing data structures, based on global memory, to this memory
organization.
Non-uniform memory access (NUMA) times
Hybrid Distributed-Shared Memory

The largest and fastest computers in the world today employ both shared and distributed memory
architectures.
The shared memory component is usually a cache coherent SMP machine. Processors on a given
SMP can address that machine's memory as global.
The distributed memory component is the networking of multiple SMPs. SMPs know only about
their own memory - not the memory on another SMP. Therefore, network communications are
required to move data from one SMP to another.
Current trends seem to indicate that this type of memory architecture will continue to prevail and
increase at the high end of computing for the foreseeable future.
Advantages and Disadvantages: whatever is common to both shared and distributed memory
architectures.
Parallel Programming Models

Overview
There are several parallel programming models in common use:
Shared Memory
Threads
Message Passing
Data Parallel
Hybrid
Parallel programming models exist as an abstraction above hardware and memory architectures.
Although it might not seem apparent, these models are NOT specific to a particular type of
machine or memory architecture. In fact, any of these models can (theoretically) be implemented
on any underlying hardware. Two examples:
1. Shared memory model on a distributed memory machine: Kendall Square Research (KSR)
ALLCACHE approach.
Machine memory was physically distributed, but appeared to the user as a single shared
memory (global address space). Generically, this approach is referred to as "virtual shared
memory". Note: although KSR is no longer in business, there is no reason to suggest that a
similar implementation will not be made available by another vendor in the future.
2. Message passing model on a shared memory machine: MPI on SGI Origin.
The SGI Origin employed the CC-NUMA type of shared memory architecture, where every
task has direct access to global memory. However, the ability to send and receive messages
with MPI, as is commonly done over a network of distributed memory machines, is not only
implemented but is very commonly used.
Which model to use is often a combination of what is available and personal choice. There is no
"best" model, although there certainly are better implementations of some models over others.
The following sections describe each of the models mentioned above, and also discuss some of
their actual implementations.
Shared Memory Model

In the shared-memory programming model, tasks share a common address space, which they read
and write asynchronously.
Various mechanisms such as locks / semaphores may be used to control access to the shared
memory.
An advantage of this model from the programmer's point of view is that the notion of data
"ownership" is lacking, so there is no need to specify explicitly the communication of data
between tasks. Program development can often be simplified.
Implementations:
On shared memory platforms, the native compilers translate user program variables into actual
memory addresses, which are global.
Threads Model
In the threads model of parallel programming, a single process can have multiple, concurrent
execution paths.
Perhaps the simplest analogy that can be used to describe threads is the concept of a single
program that includes a number of subroutines:
A thread's work may best be described as a subroutine within the main program. Any thread
can execute any subroutine at the same time as other threads.
Threads communicate with each other through global memory (updating address locations).
This requires synchronization constructs to ensure that two threads are not updating
the same global address at the same time.
Threads can come and go, but a.out remains present to provide the necessary shared
resources until the application has completed.
Threads are commonly associated with shared memory architectures and operating systems.
Implementations:
Threaded implementations are not new in computing. Historically, hardware vendors have
implemented their own proprietary versions of threads. These implementations differed
substantially from each other making it difficult for programmers to develop portable threaded
applications.
Unrelated standardization efforts have resulted in two very different implementations of threads:
POSIX Threads and OpenMP.
POSIX Threads
Library based; requires parallel coding
Specified by the IEEE POSIX 1003.1c standard (1995).
C Language only
Commonly referred to as Pthreads.
Most hardware vendors now offer Pthreads in addition to their proprietary threads
implementations.
Very explicit parallelism; requires significant programmer attention to detail.
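As an illustration of this library-based, very explicit style, below is a minimal Pthreads sketch in C. It simply creates a few threads that each run the same routine and then waits for them to finish; the thread count and the work routine are arbitrary choices for the example, not part of the Pthreads standard itself.

#include <pthread.h>
#include <stdio.h>

#define NUM_THREADS 4                          /* arbitrary thread count for the example */

void *thread_work(void *arg)                   /* the "subroutine" each thread executes  */
{
    long id = (long)arg;                       /* thread index passed in by the creator  */
    printf("Hello from thread %ld\n", id);
    return NULL;
}

int main(void)
{
    pthread_t threads[NUM_THREADS];

    for (long t = 0; t < NUM_THREADS; t++)     /* create (spawn) the threads */
        pthread_create(&threads[t], NULL, thread_work, (void *)t);

    for (long t = 0; t < NUM_THREADS; t++)     /* wait for all threads to complete */
        pthread_join(threads[t], NULL);

    return 0;
}

Compiling typically requires linking the pthread library, for example: cc -pthread file.c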
OpenMP
Compiler directive based; can use serial code
Jointly defined and endorsed by a group of major computer hardware and software vendors.
The OpenMP Fortran API was released October 28, 1997. The C/C++ API was released in
late 1998.
Portable / multi-platform, including Unix and Windows NT platforms
Available in C/C++ and Fortran implementations
Can be very easy and simple to use - provides for "incremental parallelism"
Microsoft has its own implementation for threads, which is not related to the UNIX POSIX
standard or OpenMP.
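To contrast with the explicit Pthreads style, the sketch below shows the directive-based, incremental approach described in the OpenMP bullets above: a single pragma turns an otherwise serial loop into a parallel one. The array size is an arbitrary value chosen for illustration.

#include <stdio.h>

#define N 1000000                           /* arbitrary problem size */

static double a[N];

int main(void)
{
    #pragma omp parallel for                /* the only change from the serial version:  */
    for (int i = 0; i < N; i++)             /* loop iterations are split among threads   */
        a[i] = 2.0 * i;

    printf("a[N-1] = %f\n", a[N - 1]);
    return 0;
}

Compiled with an OpenMP flag (e.g. -fopenmp) the loop runs threaded; compiled without it, the directive is simply ignored and the code remains valid serial C - the "incremental parallelism" property mentioned above.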
More Information:
Message Passing Model

In this model, tasks use their own local memory during computation and exchange data by explicitly sending and receiving messages.

Implementations:
Historically, a variety of message passing libraries have been available since the 1980s. These
implementations differed substantially from each other making it difficult for programmers to
develop portable applications.
In 1992, the MPI Forum was formed with the primary goal of establishing a standard interface for
message passing implementations.
Part 1 of the Message Passing Interface (MPI) was released in 1994. Part 2 (MPI-2) was released
in 1996. Both MPI specifications are available on the web at http://www-unix.mcs.anl.gov/mpi/.
MPI is now the "de facto" industry standard for message passing, replacing virtually all other
message passing implementations used for production work. Most, if not all of the popular parallel
computing platforms offer at least one implementation of MPI. A few offer a full implementation
of MPI-2.
For shared memory architectures, MPI implementations usually don't use a network for task
communications. Instead, they use shared memory (memory copies) for performance reasons.
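As a minimal sketch of the explicit send/receive style standardized by MPI, the C program below passes one integer from task 0 to task 1. It assumes the job is started with at least two tasks (e.g. mpirun -np 2); the value and message tag are arbitrary.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int rank, value;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        value = 42;                                           /* data to send             */
        MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);   /* to task 1, message tag 0 */
    } else if (rank == 1) {
        MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("task 1 received %d from task 0\n", value);
    }

    MPI_Finalize();
    return 0;
}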
More Information:
Data Parallel Model

In this model, a set of tasks works collectively on a common data structure, with each task performing the same operation on its own partition of the data.

On shared memory architectures, all tasks may have access to the data structure through global
memory. On distributed memory architectures the data structure is split up and resides as "chunks"
in the local memory of each task.
Implementations:
Programming with the data parallel model is usually accomplished by writing a program with data
parallel constructs. The constructs can be calls to a data parallel subroutine library or compiler
directives recognized by a data parallel compiler.
Compiler Directives: Allow the programmer to specify the distribution and alignment of data.
Fortran implementations are available for most common parallel platforms.
Distributed memory implementations of this model usually have the compiler convert the program
into standard code with calls to a message passing library (MPI usually) to distribute the data to all
the processes. All message passing is done invisibly to the programmer.
Other Models
Other parallel programming models besides those previously mentioned certainly exist, and will
continue to evolve along with the ever changing world of computer hardware and software. Only
three of the more common ones are mentioned here.
Hybrid:
In this model, any two or more parallel programming models are combined.
Currently, a common example of a hybrid model is the combination of the message passing model
(MPI) with either the threads model (POSIX threads) or the shared memory model (OpenMP).
This hybrid model lends itself well to the increasingly common hardware environment of
networked SMP machines.
Another common example of a hybrid model is combining data parallel with message passing. As
mentioned in the data parallel model section previously, data parallel implementations (F90, HPF)
on distributed memory architectures actually use message passing to transmit data between tasks,
transparently to the programmer.
Single Program Multiple Data (SPMD):

SPMD is actually a "high level" programming model that can be built upon any combination of the
previously mentioned parallel programming models.
SPMD programs usually have the necessary logic programmed into them to allow different tasks to
branch or conditionally execute only those parts of the program they are designed to execute. That
is, tasks do not necessarily have to execute the entire program - perhaps only a portion of it.
Multiple Program Multiple Data (MPMD):

Like SPMD, MPMD is actually a "high level" programming model that can be built upon any
combination of the previously mentioned parallel programming models.
Designing Parallel Programs

Automatic vs. Manual Parallelization

Very often, manually developing parallel codes is a time consuming, complex, error-prone and
iterative process.
For a number of years now, various tools have been available to assist the programmer with
converting serial programs into parallel programs. The most common type of tool used to
automatically parallelize a serial program is a parallelizing compiler or pre-processor.
Fully Automatic
The compiler analyzes the source code and identifies opportunities for parallelism.
The analysis includes identifying inhibitors to parallelism and possibly a cost
weighting on whether or not the parallelism would actually improve performance.
Loops (do, for) are the most frequent target for automatic parallelization.
Programmer Directed
Using "compiler directives" or possibly compiler flags, the programmer explicitly tells
the compiler how to parallelize the code.
May be able to be used in conjunction with some degree of automatic parallelization
also.
If you are beginning with an existing serial code and have time or budget constraints, then
automatic parallelization may be the answer. However, there are several important caveats that
apply to automatic parallelization:
Wrong results may be produced
Performance may actually degrade
Much less flexible than manual parallelization
Limited to a subset (mostly loops) of code
May actually not parallelize code if the analysis suggests there are inhibitors or the code is
too complex
The remainder of this section applies to the manual method of developing parallel codes.
Understand the Problem and the Program

Before spending time in an attempt to develop a parallel solution for a problem, determine whether
or not the problem is one that can actually be parallelized.
Investigate other algorithms if possible. This may be the single most important consideration when
designing a parallel application.
Partitioning
One of the first steps in designing a parallel program is to break the problem into discrete "chunks"
of work that can be distributed to multiple tasks. This is known as decomposition or partitioning.
There are two basic ways to partition computational work among parallel tasks: domain
decomposition and functional decomposition.
Domain Decomposition:
In this type of partitioning, the data associated with a problem is decomposed. Each parallel task
then works on a portion of the data.
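A small, hedged C sketch of the bookkeeping behind a 1-D block domain decomposition: given n elements and p tasks, it computes which index range a given task owns. The function and variable names are illustrative only, not taken from any particular library.

#include <stdio.h>

/* Compute the [mystart, myend] index range owned by task "rank" when n elements
   are block-distributed over p tasks; the first n%p tasks get one extra element. */
static void block_range(int n, int p, int rank, int *mystart, int *myend)
{
    int chunk = n / p;
    int extra = n % p;

    *mystart = rank * chunk + (rank < extra ? rank : extra);
    *myend   = *mystart + chunk - 1 + (rank < extra ? 1 : 0);
}

int main(void)
{
    int start, end;
    for (int rank = 0; rank < 4; rank++) {        /* e.g. 10 elements over 4 tasks */
        block_range(10, 4, rank, &start, &end);
        printf("task %d owns elements %d..%d\n", rank, start, end);
    }
    return 0;
}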
Functional Decomposition:
In this approach, the focus is on the computation that is to be performed rather than on the data
manipulated by the computation. The problem is decomposed according to the work that must be
done. Each task then performs a portion of the overall work.
Functional decomposition lends itself well to problems that can be split into different tasks. For
example:
Ecosystem Modeling
Each program calculates the population of a given group, where each group's growth depends on
that of its neighbors. As time progresses, each process calculates its current state, then exchanges
information with the neighbor populations. All tasks then progress to calculate the state at the next
time step.
Signal Processing
An audio signal data set is passed through four distinct computational filters. Each filter is a
separate process. The first segment of data must pass through the first filter before progressing to
the second. When it does, the second segment of data passes through the first filter. By the time
the fourth segment of data is in the first filter, all four tasks are busy.
Climate Modeling
Each model component can be thought of as a separate task. Arrows represent exchanges of data
between components during computation: the atmosphere model generates wind velocity data that
are used by the ocean model, the ocean model generates sea surface temperature data that are used
by the atmosphere model, and so on.
Communications
Who Needs Communications?
The need for communications between tasks depends upon your problem:
Factors to Consider:
There are a number of important factors to consider when designing your program's inter-task
communications:
Cost of communications
Inter-task communication virtually always implies overhead.
Machine cycles and resources that could be used for computation are instead used to
package and transmit data.
Communications frequently require some type of synchronization between tasks, which can
result in tasks spending time "waiting" instead of doing work.
Competing communication traffic can saturate the available network bandwidth, further
aggravating performance problems.
Visibility of communications
With the Message Passing Model, communications are explicit and generally quite visible
and under the control of the programmer.
With the Data Parallel Model, communications often occur transparently to the programmer,
particularly on distributed memory architectures. The programmer may not even be able to
know exactly how inter-task communications are being accomplished.
Scope of communications
Knowing which tasks must communicate with each other is critical during the design stage
of a parallel code. Both of the two scopings described below can be implemented
synchronously or asynchronously.
Point-to-point - involves two tasks with one task acting as the sender/producer of data, and
the other acting as the receiver/consumer.
Collective - involves data sharing between more than two tasks, which are often specified as
being members in a common group, or collective. Some common variations (there are
more):
Efficiency of communications
Very often, the programmer will have a choice with regard to factors that can affect
communications performance. Only a few are mentioned here.
Which implementation for a given model should be used? Using the Message Passing Model
as an example, one MPI implementation may be faster on a given hardware platform than
another.
What type of communication operations should be used? As mentioned previously,
asynchronous communication operations can improve overall program performance.
Network media - some platforms may offer more than one network for communications.
Which one is best?
Synchronization
Types of Synchronization:
Barrier
Usually implies that all tasks are involved
Each task performs its work until it reaches the barrier. It then stops, or "blocks".
When the last task reaches the barrier, all tasks are synchronized.
What happens from here varies. Often, a serial section of work must be done. In other cases,
the tasks are automatically released to continue their work.
Lock / semaphore
Can involve any number of tasks
Typically used to serialize (protect) access to global data or a section of code. Only one task
at a time may use (own) the lock / semaphore / flag.
The first task to acquire the lock "sets" it. This task can then safely (serially) access the
protected data or code.
Other tasks can attempt to acquire the lock but must wait until the task that owns the lock
releases it.
Can be blocking or non-blocking
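A minimal C sketch of the lock behavior described above, using a Pthreads mutex as the lock: several threads update one shared counter, but only the thread that currently holds the mutex may execute the protected statement. Thread and iteration counts are arbitrary values for the example.

#include <pthread.h>
#include <stdio.h>

#define NUM_THREADS 4
#define NITER 100000

static long counter = 0;                                  /* shared (global) data   */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;  /* the lock protecting it */

static void *add_work(void *arg)
{
    for (int i = 0; i < NITER; i++) {
        pthread_mutex_lock(&lock);        /* acquire ("set") the lock; others must wait */
        counter++;                        /* protected (serialized) update              */
        pthread_mutex_unlock(&lock);      /* release the lock for the next task         */
    }
    return NULL;
}

int main(void)
{
    pthread_t t[NUM_THREADS];

    for (int i = 0; i < NUM_THREADS; i++)
        pthread_create(&t[i], NULL, add_work, NULL);
    for (int i = 0; i < NUM_THREADS; i++)
        pthread_join(t[i], NULL);

    printf("counter = %ld (expected %d)\n", counter, NUM_THREADS * NITER);
    return 0;
}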
Data Dependencies
Definition:
A dependence exists between program statements when the order of statement execution affects
the results of the program.
A data dependence results from multiple use of the same location(s) in storage by different tasks.
Dependencies are important to parallel programming because they are one of the primary
inhibitors to parallelism.
Examples:
DO 500 J = MYSTART,MYEND
A(J) = A(J-1) * 2.0
500 CONTINUE
The value of A(J-1) must be computed before the value of A(J), therefore A(J) exhibits a data
dependency on A(J-1). Parallelism is inhibited.
If Task 2 has A(J) and task 1 has A(J-1), computing the correct value of A(J) necessitates:
Distributed memory architecture - task 2 must obtain the value of A(J-1) from task 1 after
task 1 finishes its computation
Shared memory architecture - task 2 must read A(J-1) after task 1 updates it
task 1        task 2
------        ------
X = 2         X = 4
  .             .
  .             .
Y = X**2      Y = X**3
As with the previous example, parallelism is inhibited. The value of Y is dependent on:
Although all data dependencies are important to identify when designing parallel programs, loop
carried dependencies are particularly important since loops are possibly the most common target of
parallelization efforts.
Load Balancing
Load balancing refers to the practice of distributing work among tasks so that all tasks are kept
busy all of the time. It can be considered a minimization of task idle time.
Load balancing is important to parallel programs for performance reasons. For example, if all tasks
are subject to a barrier synchronization point, the slowest task will determine the overall
performance.
Granularity
Computation / Communication Ratio:
Fine-grain Parallelism:
Coarse-grain Parallelism:
Which is Best?
I/O
The Bad News:
Parallel I/O systems may be immature or not available for all platforms
In an environment where all tasks see the same file space, write operations can result in file
overwriting
Read operations can be affected by the file server's ability to handle multiple read requests at the
same time
I/O that must be conducted over the network (NFS, non-local) can cause severe bottlenecks and
even crash file servers.
The Good News:

The parallel I/O programming interface specification for MPI has been available since 1996 as part
of MPI-2. Vendor and "free" implementations are now commonly available.
Some options:
If you have access to a parallel file system, investigate using it. If you don't, keep reading...
Confine I/O to specific serial portions of the job, and then use parallel communications to
distribute data to parallel tasks. For example, Task 1 could read an input file and then
communicate required data to other tasks. Likewise, Task 1 could perform the write operation
after receiving required data from all other tasks.
For distributed memory systems with shared filespace, perform I/O in local, non-shared
filespace. For example, each processor may have /tmp filespace which can be used. This is
usually much more efficient than performing I/O over the network to one's home directory.
Limits and Costs of Parallel Programming

Amdahl's Law:

Amdahl's Law states that potential program speedup is defined by the fraction of code (P) that can be parallelized:

                      1
    speedup   =   ---------
                    1 - P

If none of the code can be parallelized, P = 0 and the speedup = 1 (no speedup). If all of the code is parallelized, P = 1 and the speedup is infinite (in theory).

Introducing the number of processors performing the parallel fraction of work, the relationship can be modeled by:

                        1
    speedup   =   -------------
                    P
                   ---   +   S
                    N

where P = parallel fraction, N = number of processors and S = serial fraction.

It soon becomes obvious that there are limits to the scalability of parallelism. For example:

                          speedup
               --------------------------------
    N          P = .50    P = .90    P = .99
    ------     -------    -------    -------
    10           1.82       5.26       9.17
    100          1.98       9.17      50.25
    1000         1.99       9.91      90.99
    10000        1.99       9.99      99.02
    100000       1.99       9.99      99.90
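The table can be checked directly from the formula. The short C program below evaluates speedup = 1 / (P/N + S) for the same values of P and N; its output agrees with the table to within rounding of the last digit.

#include <stdio.h>

int main(void)
{
    double P[] = { 0.50, 0.90, 0.99 };                    /* parallel fractions */
    long   N[] = { 10, 100, 1000, 10000, 100000 };        /* processor counts   */

    printf("     N    P=.50    P=.90    P=.99\n");
    for (int i = 0; i < 5; i++) {
        printf("%6ld", N[i]);
        for (int j = 0; j < 3; j++) {
            double S = 1.0 - P[j];                        /* serial fraction    */
            printf("  %7.2f", 1.0 / (P[j] / N[i] + S));
        }
        printf("\n");
    }
    return 0;
}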
However, certain problems demonstrate increased performance by increasing the problem size. For
example, suppose a problem has the following timings:

    2D Grid Calculations     85 seconds    85%
    Serial fraction          15 seconds    15%

We can increase the problem size by doubling the grid dimensions and halving the time step. This
results in four times the number of grid points and twice the number of time steps. The timings
then look like:

    2D Grid Calculations    680 seconds    97.84%
    Serial fraction          15 seconds     2.16%
Problems that increase the percentage of parallel time with their size are more scalable than
problems with a fixed percentage of parallel time.
Complexity:
In general, parallel applications are much more complex than corresponding serial applications,
perhaps an order of magnitude. Not only do you have multiple instruction streams executing at the
same time, but you also have data flowing between them.
The costs of complexity are measured in programmer time in virtually every aspect of the software
development cycle:
Design
Coding
Debugging
Tuning
Maintenance
Adhering to "good" software development practices is essential when when working with parallel
applications - especially if somebody besides you will have to work with the software.
Portability:
Thanks to standardization in several APIs, such as MPI, POSIX threads, HPF and OpenMP,
portability issues with parallel programs are not as serious as in years past. However...
All of the usual portability issues associated with serial programs apply to parallel programs. For
example, if you use vendor "enhancements" to Fortran, C or C++, portability will be a problem.
Even though standards exist for several APIs, implementations will differ in a number of details,
sometimes to the point of requiring code modifications in order to effect portability.
Hardware architectures are characteristically highly variable and can affect portability.
Resource Requirements:
The primary intent of parallel programming is to decrease execution wall clock time; however, in
order to accomplish this, more CPU time is required. For example, a parallel code that runs in 1
hour on 8 processors actually uses 8 hours of CPU time.
The amount of memory required can be greater for parallel codes than serial codes, due to the need
to replicate data and for overheads associated with parallel support libraries and subsystems.
For short running parallel programs, there can actually be a decrease in performance compared to a
similar serial implementation. The overhead costs associated with setting up the parallel
environment, task creation, communications and task termination can comprise a significant
portion of the total execution time for short runs.
Scalability:
The algorithm may have inherent limits to scalability. At some point, adding more resources causes
performance to decrease. Most parallel solutions demonstrate this characteristic at some point.
Parallel support libraries and subsystems software can limit scalability independent of your
application.
Performance Analysis and Tuning

A number of parallel tools for execution monitoring and program analysis are available.
A dated, but potentially useful LC whitepaper on the subject of "High Performance Tools
and Technologies" describes a large number of tools, and a number of performance related
topics applicable to code developers. Find it at:
computing.llnl.gov/tutorials/performance_tools/HighPerformanceToolsTechnologiesLC.pdf.
Performance Analysis Tools Tutorial
Parallel Examples
Array Processing
This example demonstrates calculations on 2-dimensional array elements, with the computation on
each array element being independent of the other array elements.
do j = 1,n
do i = 1,n
a(i,j) = fcn(i,j)
end do
end do
Array Processing
Parallel Solution 1
Array elements are distributed so that each processor owns a portion of the array (a subarray).
do j = mystart, myend
do i = 1,n
a(i,j) = fcn(i,j)
end do
end do
Master process initializes array, sends info to worker processes and receives results.
Worker process receives info, performs its share of computation and sends results to master.
Using the Fortran storage scheme, perform block distribution of the array.
if I am MASTER
else if I am WORKER
receive from MASTER info on part of array I own
receive from MASTER my portion of initial array
endif
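A hedged C/MPI sketch of the pseudocode above, simplified to a 1-D array so it stays short: every task computes its own block of elements from an assumed element function fcn(), and the master collects the blocks with MPI_Gather rather than the explicit per-worker sends and receives of the pseudocode. It assumes N is evenly divisible by the number of tasks.

#include <mpi.h>
#include <math.h>
#include <stdio.h>
#include <stdlib.h>

#define N 1000000                                   /* total array size (illustrative)        */

static double fcn(int i) { return sin((double)i); } /* stand-in for the real element function */

int main(int argc, char *argv[])
{
    int rank, ntasks;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &ntasks);

    int chunk = N / ntasks;                         /* assumes N divides evenly     */
    int mystart = rank * chunk;                     /* first element this task owns */

    double *mine = malloc(chunk * sizeof(double));
    double *a = (rank == 0) ? malloc(N * sizeof(double)) : NULL;  /* full array on the master */

    for (int i = 0; i < chunk; i++)                 /* each element is independent  */
        mine[i] = fcn(mystart + i);

    /* the master collects every task's subarray into the full array */
    MPI_Gather(mine, chunk, MPI_DOUBLE, a, chunk, MPI_DOUBLE, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("a[0] = %f  a[N-1] = %f\n", a[0], a[N - 1]);

    MPI_Finalize();
    free(mine);
    free(a);
    return 0;
}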
Array Processing
Parallel Solution 2: Pool of Tasks
The previous array solution demonstrated static load balancing:
Each task has a fixed amount of work to do
May be significant idle time for faster or more lightly loaded processors - the slowest task
determines the overall performance.
Static load balancing is not usually a major concern if all tasks are performing the same amount of
work on identical machines.
If you have a load balance problem (some tasks work faster than others), you may benefit by using
a "pool of tasks" scheme.
Master Process:
Worker processes do not know before runtime which portion of array they will handle or how
many tasks they will perform.
Dynamic load balancing occurs at run time: the faster tasks will get more work to do.
if I am MASTER
else if I am WORKER
endif
Discussion:
In the above pool of tasks example, each job consisted of calculating an individual array element, so the granularity is very fine.
Finely granular solutions incur more communication overhead in order to reduce task idle time.
A better solution might be to distribute more work with each job. The "right" amount of work is problem dependent.
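A hedged C/MPI sketch of one way a pool of tasks can be implemented: the master (task 0) hands out one job index at a time, and each worker sends back a result and is immediately given the next available job, so faster workers automatically end up doing more jobs. The job count and the placeholder computation are arbitrary; run with at least two tasks.

#include <mpi.h>
#include <math.h>
#include <stdio.h>

#define NJOBS    100                        /* arbitrary number of jobs in the pool */
#define TAG_WORK 1
#define TAG_STOP 2

int main(int argc, char *argv[])
{
    int rank, ntasks;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &ntasks);

    if (rank == 0) {                                          /* MASTER */
        MPI_Status st;
        double result, total = 0.0;
        int next = 0, active = 0;

        /* prime every worker with one job, or stop it if there is no work for it */
        for (int w = 1; w < ntasks && next < NJOBS; w++, next++, active++)
            MPI_Send(&next, 1, MPI_INT, w, TAG_WORK, MPI_COMM_WORLD);
        for (int w = active + 1; w < ntasks; w++)
            MPI_Send(&next, 1, MPI_INT, w, TAG_STOP, MPI_COMM_WORLD);

        while (active > 0) {                                  /* collect results, hand out more work */
            MPI_Recv(&result, 1, MPI_DOUBLE, MPI_ANY_SOURCE, MPI_ANY_TAG,
                     MPI_COMM_WORLD, &st);
            total += result;
            if (next < NJOBS) {
                MPI_Send(&next, 1, MPI_INT, st.MPI_SOURCE, TAG_WORK, MPI_COMM_WORLD);
                next++;
            } else {
                MPI_Send(&next, 1, MPI_INT, st.MPI_SOURCE, TAG_STOP, MPI_COMM_WORLD);
                active--;
            }
        }
        printf("sum of results = %f\n", total);
    } else {                                                  /* WORKER */
        MPI_Status st;
        int job;
        while (1) {
            MPI_Recv(&job, 1, MPI_INT, 0, MPI_ANY_TAG, MPI_COMM_WORLD, &st);
            if (st.MPI_TAG == TAG_STOP)
                break;
            double result = sqrt((double)job);                /* placeholder "work" */
            MPI_Send(&result, 1, MPI_DOUBLE, 0, TAG_WORK, MPI_COMM_WORLD);
        }
    }

    MPI_Finalize();
    return 0;
}

Sending a range of indices per job instead of a single index is the usual way to coarsen the granularity discussed above.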
Parallel Examples
PI Calculation
npoints = 10000
circle_count = 0
do j = 1,npoints
generate 2 random numbers between 0 and 1
xcoordinate = random1
ycoordinate = random2
if (xcoordinate, ycoordinate) inside circle
then circle_count = circle_count + 1
end do
PI = 4.0*circle_count/npoints
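A runnable C version of the serial pseudocode above; the use of rand() and a fixed seed are only for simplicity and repeatability.

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    const long npoints = 1000000;
    long circle_count = 0;

    srand(12345);                                    /* fixed seed for repeatability    */
    for (long j = 0; j < npoints; j++) {
        double x = (double)rand() / RAND_MAX;        /* random point in the unit square */
        double y = (double)rand() / RAND_MAX;
        if (x * x + y * y <= 1.0)                    /* inside the quarter circle?      */
            circle_count++;
    }

    printf("PI is approximately %f\n", 4.0 * circle_count / npoints);
    return 0;
}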
PI Calculation
Parallel Solution
Parallel strategy: break the loop into portions that can be
executed by the tasks.
npoints = 10000
circle_count = 0
p = number of tasks
num = npoints/p
do j = 1,num
generate 2 random numbers between 0 and 1
xcoordinate = random1
ycoordinate = random2
if (xcoordinate, ycoordinate) inside circle
then circle_count = circle_count + 1
end do
if I am MASTER
else if I am WORKER
endif
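A hedged C/MPI sketch of this strategy: each task generates its own share of the points and counts its own hits, and a single MPI_Reduce sums the counts on the master, which then computes PI. Seeding each task differently is an illustrative choice, not part of the pseudocode above.

#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char *argv[])
{
    const long npoints = 1000000;
    long my_count = 0, circle_count = 0;
    int rank, ntasks;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &ntasks);

    long num = npoints / ntasks;                   /* this task's share of the samples */
    srand(12345 + rank);                           /* a different seed per task        */

    for (long j = 0; j < num; j++) {
        double x = (double)rand() / RAND_MAX;
        double y = (double)rand() / RAND_MAX;
        if (x * x + y * y <= 1.0)
            my_count++;
    }

    /* combine every task's count on the master (rank 0) */
    MPI_Reduce(&my_count, &circle_count, 1, MPI_LONG, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("PI is approximately %f\n", 4.0 * circle_count / (num * ntasks));

    MPI_Finalize();
    return 0;
}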
Parallel Examples
Simple Heat Equation

The heat equation describes the temperature change over time, given an
initial temperature distribution and boundary conditions.
For the fully explicit problem, a time stepping algorithm is used. The
elements of a 2-dimensional array represent the temperature at points
on the square.
do iy = 2, ny - 1
do ix = 2, nx - 1
u2(ix, iy) =
u1(ix, iy) +
cx * (u1(ix+1,iy) + u1(ix-1,iy) - 2.*u1(ix,iy)) +
cy * (u1(ix,iy+1) + u1(ix,iy-1) - 2.*u1(ix,iy))
end do
end do
if I am MASTER
initialize array
send each WORKER starting info and subarray
else if I am WORKER
receive from MASTER starting info and subarray
endif
In the previous solution, neighbor tasks communicated border data, then each process updated its
portion of the array.
Each task could update the interior of its part of the solution array while the communication of
border data is occurring, and update its border after communication has completed.
Pseudo code for the second solution, which overlaps computation with non-blocking communications (a C sketch follows the skeleton below):
if I am MASTER
initialize array
send each WORKER starting info and subarray
end do
else if I am WORKER
receive from MASTER starting info and subarray
endif
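A hedged C/MPI sketch of this overlap idea for a row-wise block decomposition: each task keeps two extra "ghost" rows, posts non-blocking sends and receives (MPI_Isend/MPI_Irecv) for its border rows, updates its interior rows while the messages are in flight, then waits and updates the two border rows. Grid size, coefficients, step count and the initial condition are arbitrary illustrative values, and the outer boundary is simply held at zero.

#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

#define NX 100            /* columns                 */
#define ROWS 25           /* rows owned by each task */
#define NSTEPS 50
#define CX 0.1
#define CY 0.1

#define U(g, i, j) (g)[(i) * NX + (j)]              /* row-major indexing helper */

static void update_row(double *u2, const double *u1, int i)
{
    for (int j = 1; j < NX - 1; j++)
        U(u2, i, j) = U(u1, i, j)
            + CX * (U(u1, i + 1, j) + U(u1, i - 1, j) - 2.0 * U(u1, i, j))
            + CY * (U(u1, i, j + 1) + U(u1, i, j - 1) - 2.0 * U(u1, i, j));
}

int main(int argc, char *argv[])
{
    int rank, ntasks;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &ntasks);

    /* rows 0 and ROWS+1 are ghost rows; rows 1..ROWS are owned by this task */
    double *u1 = calloc((ROWS + 2) * NX, sizeof(double));
    double *u2 = calloc((ROWS + 2) * NX, sizeof(double));

    if (rank == 0)                                  /* a hot edge as initial data */
        for (int j = 0; j < NX; j++) U(u1, 1, j) = 100.0;

    int up   = (rank > 0)          ? rank - 1 : MPI_PROC_NULL;
    int down = (rank < ntasks - 1) ? rank + 1 : MPI_PROC_NULL;

    for (int step = 0; step < NSTEPS; step++) {
        MPI_Request req[4];

        /* exchange border rows with neighbors (non-blocking) */
        MPI_Irecv(&U(u1, 0, 0),        NX, MPI_DOUBLE, up,   0, MPI_COMM_WORLD, &req[0]);
        MPI_Irecv(&U(u1, ROWS + 1, 0), NX, MPI_DOUBLE, down, 1, MPI_COMM_WORLD, &req[1]);
        MPI_Isend(&U(u1, 1, 0),        NX, MPI_DOUBLE, up,   1, MPI_COMM_WORLD, &req[2]);
        MPI_Isend(&U(u1, ROWS, 0),     NX, MPI_DOUBLE, down, 0, MPI_COMM_WORLD, &req[3]);

        for (int i = 2; i < ROWS; i++)              /* interior rows: no ghost data needed */
            update_row(u2, u1, i);

        MPI_Waitall(4, req, MPI_STATUSES_IGNORE);   /* ghost rows have now arrived */
        update_row(u2, u1, 1);                      /* border rows last            */
        update_row(u2, u1, ROWS);

        double *tmp = u1; u1 = u2; u2 = tmp;        /* advance the time step       */
    }

    if (rank == 0)
        printf("u(2,%d) after %d steps: %f\n", NX / 2, NSTEPS, U(u1, 2, NX / 2));

    free(u1); free(u2);
    MPI_Finalize();
    return 0;
}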
Parallel Examples
1-D Wave Equation

The amplitude at each point along a vibrating string is updated using a finite-difference form of the 1-D wave equation, where c is a constant.
Note that the amplitude will depend on previous timesteps (t, t-1) and neighboring points (i-1, i+1).
This data dependence means that a parallel solution will involve communications.
The entire amplitude array is partitioned and distributed as subarrays to all tasks. Each task owns a
portion of the total array.
Load balancing: all points require equal work, so the points should be divided equally
A block decomposition would have the work partitioned into the number of tasks as chunks,
allowing each task to own mostly contiguous data points.
Communication need only occur on data borders. The larger the block size the less the
communication.
end do
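A hedged serial C sketch of the finite-difference update for the vibrating string, showing the role of the constant c and of the neighboring points: in the partitioned parallel version described above, the i-1 and i+1 values at the edges of each task's block would have to be received from the neighboring tasks at every time step. Problem size, step count, the value of c and the initial "pluck" are arbitrary illustrative choices.

#include <math.h>
#include <stdio.h>

#define NPOINTS 1000
#define NSTEPS  2000

int main(void)
{
    static double oldval[NPOINTS], val[NPOINTS], newval[NPOINTS];
    const double c  = 0.3;                    /* (wave speed * dt / dx)^2; <= 1 for stability */
    const double PI = 3.14159265358979323846;

    for (int i = 0; i < NPOINTS; i++)         /* initial shape: one period of a sine wave */
        oldval[i] = val[i] = sin(2.0 * PI * i / (NPOINTS - 1));

    for (int t = 0; t < NSTEPS; t++) {
        for (int i = 1; i < NPOINTS - 1; i++)             /* string endpoints stay fixed */
            newval[i] = 2.0 * val[i] - oldval[i]
                      + c * (val[i - 1] - 2.0 * val[i] + val[i + 1]);
        for (int i = 1; i < NPOINTS - 1; i++) {           /* advance the two time levels */
            oldval[i] = val[i];
            val[i]    = newval[i];
        }
    }

    printf("amplitude at midpoint after %d steps: %f\n", NSTEPS, val[NPOINTS / 2]);
    return 0;
}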
References and More Information
A search on the WWW for "parallel programming" or "parallel computing" will yield a wide
variety of information.
Recommended reading:
"Designing and Building Parallel Programs". Ian Foster.
http://www-unix.mcs.anl.gov/dbpp/
"Introduction to Parallel Computing". Ananth Grama, Anshul Gupta, George Karypis, Vipin
Kumar.
http://www-users.cs.umn.edu/~karypis/parbook/
"Overview of Recent Supercomputers". A.J. van der Steen, Jack Dongarra.
www.phys.uu.nl/~steen/web03/overview.html
Photos/Graphics have been created by the author, created by other LLNL employees, obtained
from non-copyrighted, government or public domain (such as http://commons.wikimedia.org/)
sources, or used with the permission of authors from other presentations and web pages.
History: These materials have evolved from the following sources, which are no longer maintained
or available.
Tutorials located in the Maui High Performance Computing Center's "SP Parallel
Programming Workshop".
Tutorials located at the Cornell Theory Center's "Education and Training" web page.
https://computing.llnl.gov/tutorials/parallel_comp/
Last Modified: Mon, 01 Mar 2010 21:46:39 GMT blaiseb@llnl.gov
UCRL-MI-133316