Introduction To Parallel Computing
Table of Contents
1. Abstract
2. Overview
1. What is Parallel Computing?
2. Why Use Parallel Computing?
3. Who is Using Parallel Computing?
3. Concepts and Terminology
1. von Neumann Computer Architecture
2. Flynn's Classical Taxonomy
3. Some General Parallel Terminology
4. Limits and Costs of Parallel Programming
4. Parallel Computer Memory Architectures
1. Shared Memory
2. Distributed Memory
3. Hybrid Distributed-Shared Memory
5. Parallel Programming Models
1. Overview
2. Shared Memory Model
3. Threads Model
4. Distributed Memory / Message Passing Model
5. Data Parallel Model
6. Hybrid Model
7. SPMD and MPMD
6. Designing Parallel Programs
1. Automatic vs. Manual Parallelization
2. Understand the Problem and the Program
3. Partitioning
4. Communications
5. Synchronization
6. Data Dependencies
7. Load Balancing
8. Granularity
9. I/O
10. Debugging
11. Performance Analysis and Tuning
7. Parallel Examples
1. Array Processing
2. PI Calculation
3. Simple Heat Equation
4. 1-D Wave Equation
8. References and More Information
Abstract
This tutorial is the first of eight tutorials in the 4+ day "Using LLNL's Supercomputers" workshop. It is intended to provide only a very quick overview of the
extensive and broad topic of Parallel Computing, as a lead-in for the tutorials that follow it. As such, it covers just the very basics of parallel computing, and is
intended for someone who is just becoming acquainted with the subject and who is planning to attend one or more of the other tutorials in this workshop. It is not
intended to cover Parallel Programming in depth, as this would require significantly more time. The tutorial begins with a discussion on parallel computing - what
it is and how it's used, followed by a discussion on concepts and terminology associated with parallel computing. The topics of parallel memory architectures and
programming models are then explored. These topics are followed by a series of practical discussions on a number of the complex issues related to designing and
running parallel programs. The tutorial concludes with several examples of how to parallelize simple serial programs.
Overview
What is Parallel Computing?
Parallel Computing:
In the simplest sense, parallel computing is the simultaneous use of multiple compute resources to solve a computational problem:
A problem is broken into discrete parts that can be solved concurrently
Each part is further broken down to a series of instructions
Instructions from each part execute simultaneously on different processors
An overall control/coordination mechanism is employed
Parallel Computers:
Virtually all stand-alone computers today are parallel from a hardware perspective:
Multiple functional units (L1 cache, L2 cache, branch, prefetch, decode, floating-point, graphics processing (GPU), integer, etc.)
Multiple execution units/cores
Multiple hardware threads
For example, the IBM BG/Q compute chip has 18 cores (PU) and 16 L2 cache units (L2).
Networks connect multiple stand-alone computers (nodes) to make larger parallel computer clusters.
For example, the schematic below shows a typical LLNL parallel computer cluster:
Each compute node is a multi-processor parallel computer in itself
Multiple compute nodes are networked together with an Infiniband network
Special purpose nodes, also multi-processor, are used for other purposes
The majority of the world's large parallel computers (supercomputers) are clusters of hardware produced by a handful of (mostly) well known vendors.
Source: Top500.org
Why Use Parallel Computing?
In the natural world, many complex, interrelated events are happening at the same time, yet within a temporal
sequence.
Compared to serial computing, parallel computing is much better suited for modeling, simulating and understanding
complex, real world phenomena.
Main Reasons:
PROVIDE CONCURRENCY:
A single compute resource can only do one thing at a time. Multiple compute resources can do many things simultaneously.
Example: the Access Grid (www.accessgrid.org) provides a global collaboration network where people from around the world
can meet and conduct work "virtually".
The Future:
During the past 20+ years, the trends indicated by ever faster networks, distributed systems, and multi-processor computer
architectures (even at the desktop level) clearly show that parallelism is the future of computing.
In this same time period, there has been a greater than 500,000x increase in supercomputer performance, with no end currently in
sight.
Who is Using Parallel Computing?
Historically, parallel computing has been considered to be "the high end of computing", and has been used to model
difficult problems in many areas of science and engineering:
Atmosphere, Earth, Environment
Physics - applied, nuclear, particle, condensed matter, high pressure, fusion, photonics
Bioscience, Biotechnology, Genetics
Chemistry, Molecular Sciences
Geology, Seismology
Mechanical Engineering - from prosthetics to spacecraft
Electrical Engineering, Circuit Design, Microelectronics
Computer Science, Mathematics
Defense, Weapons
Today, commercial applications provide an equal or greater driving force in the development of faster computers. These
applications require the processing of large amounts of data in sophisticated ways. For example:
Databases, data mining
Oil exploration
Web search engines, web based business services
Medical imaging and diagnosis
Pharmaceutical design
Financial and economic modeling
Management of national and multi-national corporations
Advanced graphics and virtual reality, particularly in the entertainment industry
Networked video and multi-media technologies
Collaborative work environments
Global Applications:
Parallel computing is now being used extensively around the world, in a wide variety of applications.
Source: Top500.org
Concepts and Terminology
von Neumann Computer Architecture
Also known as the "stored-program computer" - both program instructions and data are kept in electronic memory. This differs from earlier computers, which were programmed through "hard wiring".
Since then, virtually all computers have followed this basic design.
Flynn's Classical Taxonomy
One of the more widely used classifications, in use since 1966, is called Flynn's Taxonomy.
Flynn's taxonomy distinguishes multi-processor computer architectures according to how they can be classified along the two independent dimensions of
Instruction Stream and Data Stream. Each of these dimensions can have only one of two possible states: Single or Multiple.
Flynn's matrix yields four possible classifications:

SISD - Single Instruction stream, Single Data stream
SIMD - Single Instruction stream, Multiple Data streams
MISD - Multiple Instruction streams, Single Data stream
MIMD - Multiple Instruction streams, Multiple Data streams

Examples of SIMD machines include the ILLIAC IV, MasPar, Cray X-MP, Cray Y-MP, Thinking Machines CM-2 and the Cell Processor (GPU).

Multiple Instruction, Multiple Data (MIMD):
Multiple Instruction: every processor may be executing a different instruction stream
Multiple Data: every processor may be working with a different data stream
Execution can be synchronous or asynchronous, deterministic or non-deterministic
Currently, the most common type of parallel computer - most modern
supercomputers fall into this category.
Examples: most current supercomputers, networked parallel computer clusters and
"grids", multi-processor SMP computers, multi-core PCs.
Note: many MIMD architectures also include SIMD execution sub-components
Some General Parallel Terminology
Node
A standalone "computer in a box". Usually comprised of multiple CPUs/processors/cores, memory, network interfaces, etc. Nodes are networked
together to comprise a supercomputer.
Task
A logically discrete section of computational work. A task is typically a program or program-like set of instructions that is executed by a processor. A
parallel program consists of multiple tasks running on multiple processors.
Pipelining
Breaking a task into steps performed by different processor units, with inputs streaming through, much like an assembly line; a type of parallel
computing.
Shared Memory
From a strictly hardware point of view, describes a computer architecture where all processors have direct (usually bus based) access to common
physical memory. In a programming sense, it describes a model where parallel tasks all have the same "picture" of memory and can directly address
and access the same logical memory locations regardless of where the physical memory actually exists.
Distributed Memory
In hardware, refers to network based memory access for physical memory that is not common. As a programming model, tasks can only logically
"see" local machine memory and must use communications to access memory on other machines where other tasks are executing.
Communications
Parallel tasks typically need to exchange data. There are several ways this can be accomplished, such as through a shared memory bus or over a
network, however the actual event of data exchange is commonly referred to as communications regardless of the method employed.
Synchronization
The coordination of parallel tasks in real time, very often associated with communications. Often implemented by establishing a synchronization point
within an application where a task may not proceed further until another task(s) reaches the same or logically equivalent point.
Synchronization usually involves waiting by at least one task, and can therefore cause a parallel application's wall clock execution time to increase.
Granularity
In parallel computing, granularity is a qualitative measure of the ratio of computation to communication.
Coarse: relatively large amounts of computational work are done between communication events
Fine: relatively small amounts of computational work are done between communication events
Observed Speedup
Observed speedup of a code which has been parallelized, defined as:

     wall-clock time of serial execution
    -------------------------------------
    wall-clock time of parallel execution

One of the simplest and most widely used indicators for a parallel program's performance.
Parallel Overhead
The amount of time required to coordinate parallel tasks, as opposed to doing useful work. Parallel overhead can include factors such as:
Task start-up time
Synchronizations
Data communications
Software overhead imposed by parallel languages, libraries, operating system, etc.
Task termination time
Massively Parallel
Refers to the hardware that comprises a given parallel system - having many processors. The meaning of "many" keeps increasing, but currently, the
largest parallel computers can be comprised of processors numbering in the hundreds of thousands.
Embarrassingly Parallel
Solving many similar, but independent tasks simultaneously; little to no need for coordination between the tasks.
Scalability
Refers to a parallel system's (hardware and/or software) ability to demonstrate a proportionate increase in parallel speedup with the addition of more resources.
Limits and Costs of Parallel Programming
Amdahl's Law:
Amdahl's Law states that potential program speedup is defined by the fraction of code (P) that can be parallelized:

                   1
    speedup   =  -----
                 1 - P

If none of the code can be parallelized, P = 0 and the speedup = 1 (no speedup). If all of the code is parallelized, P = 1 and the speedup is infinite (in theory).

Introducing the number of processors performing the parallel fraction of work, the relationship can be modeled by:

                      1
    speedup   =  -----------
                  P/N  +  S

where P = parallel fraction, N = number of processors and S = serial fraction.

It soon becomes obvious that there are limits to the scalability of parallelism. For example:

                         speedup
             --------------------------------
        N     P = .50    P = .90    P = .99
     ------   -------    -------    -------
         10     1.82       5.26       9.17
        100     1.98       9.17      50.25
      1,000     1.99       9.91      90.99
     10,000     1.99       9.91      99.02
    100,000     1.99       9.99      99.90
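The table values follow directly from the formula. As a minimal Fortran sketch (illustrative program name), checking the P = .90, N = 100 entry:

program amdahl_check
  implicit none
  real :: p, n, s, speedup
  p = 0.90                        ! parallel fraction of the code
  n = 100.0                       ! number of processors
  s = 1.0 - p                     ! serial fraction
  speedup = 1.0 / (p/n + s)
  print *, 'speedup =', speedup   ! prints approximately 9.17, matching the table
end program amdahl_check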
However, certain problems demonstrate increased performance by increasing the problem size. For example:
We can increase the problem size by doubling the grid dimensions and halving the time step. This results in four times the number of grid points and twice
the number of time steps. The parallel portion of the work then grows much faster than the fixed serial portion, so the parallel fraction increases with the problem size.
Problems that increase the percentage of parallel time with their size are more scalable than problems with a fixed percentage of parallel time.
Complexity:
In general, parallel applications are much more complex than corresponding serial applications, perhaps an order of magnitude. Not only do you have
multiple instruction streams executing at the same time, but you also have data flowing between them.
The costs of complexity are measured in programmer time in virtually every aspect of the software development cycle:
Design
Coding
Debugging
Tuning
Maintenance
Adhering to "good" software development practices is essential when working with parallel applications - especially if somebody besides you will
have to work with the software.
Portability:
Thanks to standardization in several APIs, such as MPI, POSIX threads, and OpenMP, portability issues with parallel programs are not as serious as in years
past. However...
All of the usual portability issues associated with serial programs apply to parallel programs. For example, if you use vendor "enhancements" to Fortran, C
or C++, portability will be a problem.
Even though standards exist for several APIs, implementations will differ in a number of details, sometimes to the point of requiring code modifications in
order to effect portability.
Hardware architectures are characteristically highly variable and can affect portability.
Resource Requirements:
The primary intent of parallel programming is to decrease execution wall clock time; however, in order to accomplish this, more CPU time is required. For
example, a parallel code that runs in 1 hour on 8 processors actually uses 8 hours of CPU time.
The amount of memory required can be greater for parallel codes than serial codes, due to the need to replicate data and for overheads associated with
parallel support libraries and subsystems.
For short running parallel programs, there can actually be a decrease in performance compared to a similar serial implementation. The overhead costs
associated with setting up the parallel environment, task creation, communications and task termination can comprise a significant portion of the total
execution time for short runs.
Scalability:
The ability of a parallel program's performance to scale is a result of a number of interrelated factors. Simply adding more processors is rarely the answer.
The algorithm may have inherent limits to scalability. At some point, adding more resources causes performance to decrease. Many parallel solutions
demonstrate this characteristic at some point.
Parallel support libraries and subsystems software can limit scalability independent of your application.
Parallel Computer Memory Architectures
Shared Memory
General Characteristics:
Shared memory parallel computers vary widely, but generally have in common
the ability for all processors to access all memory as global address space.
Multiple processors can operate independently but share the same memory
resources.
Changes in a memory location effected by one processor are visible to all other
processors.
Historically, shared memory machines have been classified as UMA and NUMA,
based upon memory access times.
Disadvantages:
Primary disadvantage is the lack of scalability between memory and CPUs. Adding more CPUs can geometrically increase traffic on the shared
memory-CPU path, and for cache coherent systems, geometrically increase traffic associated with cache/memory management.
Programmer responsibility for synchronization constructs that ensure "correct" access of global memory.
Distributed Memory
General Characteristics:
Like shared memory systems, distributed memory systems vary widely but share a common characteristic. Distributed memory systems require a
communication network to connect inter-processor memory.
Processors have their own local memory. Memory addresses in one processor do not map to another processor, so there is no concept of global address space
across all processors.
Because each processor has its own local memory, it operates independently. Changes it makes to its local memory have no effect on the memory of other
processors. Hence, the concept of cache coherency does not apply.
When a processor needs access to data in another processor, it is usually the task of the programmer to explicitly define how and when data is
communicated. Synchronization between tasks is likewise the programmer's responsibility.
The network "fabric" used for data transfer varies widely, though it can be as simple as Ethernet.
Advantages:
Memory is scalable with the number of processors. Increase the number of processors and the size of memory increases proportionately.
Each processor can rapidly access its own memory without interference and without the overhead incurred with trying to maintain global cache coherency.
Cost effectiveness: can use commodity, off-the-shelf processors and networking.
Disadvantages:
The programmer is responsible for many of the details associated with data communication between processors.
It may be difficult to map existing data structures, based on global memory, to this memory organization.
Non-uniform memory access times - data residing on a remote node takes longer to access than node local data.
Hybrid Distributed-Shared Memory
The largest and fastest computers in the world today employ both shared and distributed memory architectures.
The shared memory component can be a shared memory machine and/or graphics processing units (GPU).
The distributed memory component is the networking of multiple shared memory/GPU machines, which know only about their own memory - not the
memory on another machine. Therefore, network communications are required to move data from one machine to another.
Current trends seem to indicate that this type of memory architecture will continue to prevail and increase at the high end of computing for the foreseeable
future.
Parallel Programming Models
Overview
There are several parallel programming models in common use:
Shared Memory (without threads)
Threads
Distributed Memory / Message Passing
Data Parallel
Hybrid
Single Program Multiple Data (SPMD)
Multiple Program Multiple Data (MPMD)
Parallel programming models exist as an abstraction above hardware and memory architectures.
Although it might not seem apparent, these models are NOT specific to a particular type of machine or memory architecture. In fact, any of these models can
(theoretically) be implemented on any underlying hardware. Two examples from the past are discussed below.
SHARED memory model on a DISTRIBUTED memory machine: Kendall Square Research (KSR) ALLCACHE
approach.
Machine memory was physically distributed across networked machines, but appeared to the user as a single shared
memory (global address space). Generically, this approach is referred to as "virtual shared memory".
DISTRIBUTED memory model on a SHARED memory machine: Message Passing Interface (MPI) on SGI Origin
2000.
The SGI Origin 2000 employed the CC-NUMA type of shared memory architecture, where every task has direct
access to global address space spread across all machines. However, the ability to send and receive messages using
MPI, as is commonly done over a network of distributed memory machines, was implemented and commonly used.
Which model to use? This is often a combination of what is available and personal choice. There is no "best" model, although there certainly are better
implementations of some models over others.
The following sections describe each of the models mentioned above, and also discuss some of their actual implementations.
Shared Memory Model
In this programming model, tasks share a common address space, which they read and write to asynchronously. Various mechanisms such as locks / semaphores may be used to control access to the shared memory.
An advantage of this model from the programmer's point of view is that the notion of data "ownership" is lacking, so there is no need to specify explicitly the
communication of data between tasks. Program development can often be simplified.
An important disadvantage in terms of performance is that it becomes more difficult to understand and manage data locality:
Keeping data local to the processor that works on it conserves the memory accesses, cache refreshes and bus traffic that occur when multiple processors use the same data.
Implementations:
On stand-alone shared memory machines, native operating systems, compilers and/or hardware provide support for shared memory programming. For
example, the POSIX standard provides an API for using shared memory.
On distributed shared memory machines, memory is physically distributed across a network of machines, but made global through specialized hardware and
software.
Threads Model
This programming model is a type of shared memory programming.
In the threads model of parallel programming, a single "heavy weight" process can have multiple "light weight", concurrent execution paths.
For example:
The main program a.out is scheduled to run by the native operating system. a.out loads and
acquires all of the necessary system and user resources to run. This is the "heavy weight"
process.
a.out performs some serial work, and then creates a number of tasks (threads) that can be
scheduled and run by the operating system concurrently.
Each thread has local data, but also shares the entire resources of a.out. This saves the
overhead associated with replicating a program's resources for each thread ("light weight").
Each thread also benefits from a global memory view because it shares the memory space of
a.out.
A thread's work may best be described as a subroutine within the main program. Any thread can execute any subroutine at the same time as other
threads.
Threads communicate with each other through global memory (updating address locations). This requires synchronization constructs to ensure that
more than one thread is not updating the same global address at any time.
Threads can come and go, but a.out remains present to provide the necessary shared resources until the application has completed.
Implementations:
Threads implementations commonly comprise a library of subroutines called from within parallel source code, or a set of compiler directives embedded in either serial or parallel source code. In both cases, the programmer is responsible for determining the parallelism (although compilers can sometimes help).
Threaded implementations are not new in computing. Historically, hardware vendors have implemented their own proprietary versions of threads. These
implementations differed substantially from each other making it difficult for programmers to develop portable threaded applications.
Unrelated standardization efforts have resulted in two very different implementations of threads: POSIX Threads and OpenMP.
POSIX Threads
Library based; requires parallel coding
Specified by the IEEE POSIX 1003.1c standard (1995).
C Language only
Commonly referred to as Pthreads.
Most hardware vendors now offer Pthreads in addition to their proprietary threads implementations.
Very explicit parallelism; requires significant programmer attention to detail.
OpenMP
Compiler directive based; can use serial code
Jointly defined and endorsed by a group of major computer hardware and software vendors. The OpenMP Fortran API was released October 28, 1997.
The C/C++ API was released in late 1998.
Portable / multi-platform, including Unix and Windows platforms
Available in C/C++ and Fortran implementations
Can be very easy and simple to use - provides for "incremental parallelism"
Microsoft has its own implementation for threads, which is not related to the UNIX POSIX standard or OpenMP.
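To illustrate OpenMP's directive-based, incremental style described above, here is a minimal Fortran sketch, assuming an OpenMP-capable compiler (e.g. gfortran -fopenmp). A serial loop becomes threaded by adding a single directive:

program omp_sketch
  use omp_lib
  implicit none
  integer, parameter :: n = 1000
  integer :: i
  real :: a(n)

  ! the directive below distributes the loop iterations among threads;
  ! removing it leaves a perfectly valid serial program
  !$omp parallel do
  do i = 1, n
     a(i) = sqrt(real(i))
  end do
  !$omp end parallel do

  print *, 'threads available:', omp_get_max_threads(), '  a(n) =', a(n)
end program omp_sketch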
Distributed Memory / Message Passing Model
This model demonstrates the following characteristics:
A set of tasks that use their own local memory during computation. Multiple tasks can reside on the same physical machine and/or across an arbitrary number of machines.
Tasks exchange data through communications by sending and receiving messages.
Implementations:
Historically, a variety of message passing libraries have been available since the 1980s. These implementations differed substantially from each other
making it difficult for programmers to develop portable applications.
In 1992, the MPI Forum was formed with the primary goal of establishing a standard interface for message passing implementations.
Part 1 of the Message Passing Interface (MPI) was released in 1994. Part 2 (MPI-2) was released in 1996 and MPI-3 in 2012. All MPI specifications are
available on the web at http://www.mpi-forum.org/docs/.
MPI is the "de facto" industry standard for message passing, replacing virtually all other message passing implementations used for production work. MPI
implementations exist for virtually all popular parallel computing platforms. Not all implementations include everything in MPI-1, MPI-2 or MPI-3.
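As a minimal sketch of message passing with MPI's Fortran bindings (assuming an MPI library and a launcher such as mpirun are available), one task sends a value and another receives it:

program mpi_sketch
  use mpi
  implicit none
  integer :: ierr, rank, nprocs, status(MPI_STATUS_SIZE)
  real :: value

  call MPI_Init(ierr)
  call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
  call MPI_Comm_size(MPI_COMM_WORLD, nprocs, ierr)

  ! a send must have a matching receive; run with at least 2 tasks,
  ! e.g. "mpirun -np 2 ./mpi_sketch"
  if (rank == 0) then
     value = 3.14
     call MPI_Send(value, 1, MPI_REAL, 1, 0, MPI_COMM_WORLD, ierr)
  else if (rank == 1) then
     call MPI_Recv(value, 1, MPI_REAL, 0, 0, MPI_COMM_WORLD, status, ierr)
     print *, 'task', rank, 'received', value
  end if

  call MPI_Finalize(ierr)
end program mpi_sketch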
Data Parallel Model
Most of the parallel work focuses on performing operations on a data set. The data
set is typically organized into a common structure, such as an array or cube.
A set of tasks work collectively on the same data structure, however, each task
works on a different partition of the same data structure.
Tasks perform the same operation on their partition of work, for example, "add 4 to
every array element".
On shared memory architectures, all tasks may have access to the data structure through
global memory.
On distributed memory architectures the data structure is split up and resides as "chunks"
in the local memory of each task.
Implementations:
Currently, there are several relatively popular, and sometimes developmental, parallel programming implementations based on the Data Parallel / PGAS
model.
Coarray Fortran: a small set of extensions to Fortran 95 for SPMD parallel programming. Compiler dependent. (See the sketch after this list.) More information: http://www.co-array.org/
Unified Parallel C (UPC): an extension to the C programming language for SPMD parallel programming. Compiler dependent. More information:
http://upc.lbl.gov/
Global Arrays: provides a shared memory style programming environment in the context of distributed array data structures. Public domain library with C
and Fortran77 bindings. More information: http://www.emsl.pnl.gov/docs/global/
X10: a PGAS based parallel programming language being developed by IBM at the Thomas J. Watson Research Center. More information: http://x10-lang.org/
Chapel: an open source parallel programming language project being led by Cray. More information: http://chapel.cray.com/
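A minimal Coarray Fortran sketch of the data parallel idea, assuming a Fortran 2008 compiler with coarray support (illustrative names and sizes). Every image performs the same operation - "add 4 to every array element" style - on its own partition of the data:

program caf_sketch
  implicit none
  integer :: i, me
  real :: a(4)[*]        ! each image (task) holds its own chunk of the array

  me = this_image()
  do i = 1, 4            ! same operation, applied to this image's partition
     a(i) = real(4*(me-1) + i) + 4.0
  end do

  sync all               ! barrier: wait until every image has finished
  if (me == 1) print *, 'image 1 of', num_images(), ':', a
end program caf_sketch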
Hybrid Model
A hybrid model combines more than one of the previously described programming models. Currently, a common example is the combination of the message passing model (MPI) with the threads model (OpenMP).
This hybrid model lends itself well to the increasingly common hardware
environment of clustered multi/many-core machines.
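A minimal sketch of the MPI + OpenMP combination, assuming both libraries are available: MPI handles communication between tasks (nodes), while OpenMP threads share memory within each task:

program hybrid_sketch
  use mpi
  use omp_lib
  implicit none
  integer :: ierr, rank, tid

  call MPI_Init(ierr)
  call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)

  ! threads operate inside each MPI task; MPI calls stay outside the
  ! parallel region in this simple sketch
  !$omp parallel private(tid)
  tid = omp_get_thread_num()
  print *, 'MPI task', rank, ', OpenMP thread', tid
  !$omp end parallel

  call MPI_Finalize(ierr)
end program hybrid_sketch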
SPMD and MPMD
Single Program Multiple Data (SPMD):
SPMD is actually a "high level" programming model that can be built upon any combination of the previously mentioned parallel programming models.
SINGLE PROGRAM: All tasks execute their copy of the same program simultaneously.
This program can be threads, message passing, data parallel or hybrid.
SPMD programs usually have the necessary logic programmed into them to allow different
tasks to branch or conditionally execute only those parts of the program they are designed to
execute. That is, tasks do not necessarily have to execute the entire program - perhaps only a portion of it.
The SPMD model, using message passing or hybrid programming, is probably the most commonly used parallel programming model for multi-node
clusters.
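A minimal SPMD sketch using MPI (illustrative only): every task executes this same program, and a branch on the task's rank selects which portion of it each task runs:

program spmd_sketch
  use mpi
  implicit none
  integer :: ierr, rank

  call MPI_Init(ierr)
  call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)

  ! every task runs the same executable; the branch lets each task
  ! execute only the part it is designed to execute
  if (rank == 0) then
     print *, 'task', rank, ': coordination part'
  else
     print *, 'task', rank, ': computation part'
  end if

  call MPI_Finalize(ierr)
end program spmd_sketch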
Multiple Program Multiple Data (MPMD):
Like SPMD, MPMD is actually a "high level" programming model that can be built upon any combination of the previously mentioned parallel programming models.
MULTIPLE PROGRAM: Tasks may execute different programs simultaneously.
MPMD applications are not as common as SPMD applications, but may be better suited for certain types of problems, particularly those that lend
themselves better to functional decomposition than domain decomposition (discussed later under Partitioning).
Designing Parallel Programs
Automatic vs. Manual Parallelization
Very often, manually developing parallel codes is a time consuming, complex, error-prone and iterative process.
For a number of years now, various tools have been available to assist the programmer with converting serial programs into parallel programs. The most
common type of tool used to automatically parallelize a serial program is a parallelizing compiler or pre-processor.
Fully Automatic
The compiler analyzes the source code and identifies opportunities for parallelism.
The analysis includes identifying inhibitors to parallelism and possibly a cost weighting on whether or not the parallelism would actually
improve performance.
Loops (do, for) are the most frequent target for automatic parallelization.
Programmer Directed
Using "compiler directives" or possibly compiler flags, the programmer explicitly tells the compiler how to parallelize the code.
May be used in conjunction with some degree of automatic parallelization.
The most common compiler generated parallelization is done using on-node shared memory and threads (such as OpenMP).
If you are beginning with an existing serial code and have time or budget constraints, then automatic parallelization may be the answer. However, there are
several important caveats that apply to automatic parallelization:
Wrong results may be produced
Performance may actually degrade
Much less flexible than manual parallelization
Limited to a subset (mostly loops) of code
May actually not parallelize code if the compiler analysis suggests there are inhibitors or the code is too complex
The remainder of this section applies to the manual method of developing parallel codes.
Understand the Problem and the Program
Before spending time in an attempt to develop a parallel solution for a problem, determine whether or not the problem is one that can actually be
parallelized.
Example of a parallelizable problem:
Calculate the potential energy for each of several thousand independent conformations of a molecule.
When done, find the minimum energy conformation.
This problem is able to be solved in parallel. Each of the molecular conformations is independently determinable. The calculation of the minimum
energy conformation is also a parallelizable problem.
Example of a non-parallelizable problem:
Calculation of the Fibonacci series (0,1,1,2,3,5,8,13,21,...) by use of the formula F(n) = F(n-1) + F(n-2).
This is a non-parallelizable problem because the calculation of the Fibonacci sequence as shown would entail dependent calculations rather than
independent ones. The calculation of the F(n) value uses those of both F(n-1) and F(n-2). These three terms cannot be calculated independently and
therefore, not in parallel.
Investigate other algorithms if possible. This may be the single most important
consideration when designing a parallel application.
Take advantage of optimized third party parallel software and highly optimized math libraries available from leading vendors (IBM's ESSL, Intel's MKL, AMD's ACML, etc.).
Partitioning
One of the first steps in designing a parallel program is to break the problem into discrete "chunks" of work that can be distributed to multiple tasks. This is
known as decomposition or partitioning.
There are two basic ways to partition computational work among parallel tasks: domain decomposition and functional decomposition.
Domain Decomposition:
In this type of partitioning, the data associated with a problem is decomposed. Each parallel task then works on a portion of the data. A minimal sketch of a block partitioning computation follows (illustrative names and sizes): each task derives the contiguous slice of the data it owns.
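program block_partition
  implicit none
  integer, parameter :: n = 10   ! columns in the domain (illustrative)
  integer, parameter :: p = 4    ! number of tasks (illustrative)
  integer :: rank, chunk, mystart, myend

  chunk = (n + p - 1) / p        ! ceiling division: columns per task
  do rank = 0, p - 1
     mystart = rank * chunk + 1
     myend   = min(n, (rank + 1) * chunk)
     print *, 'task', rank, 'owns columns', mystart, 'through', myend
  end do
end program block_partition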
Functional Decomposition:
In this approach, the focus is on the computation that is to be performed rather than on the data manipulated by the computation. The problem is decomposed
according to the work that must be done. Each task then performs a portion of the overall work.
Functional decomposition lends itself well to problems that can be split into different tasks. For example:
Ecosystem Modeling
Each program calculates the population of a given group, where each group's growth depends on that of its neighbors. As time progresses, each process
calculates its current state, then exchanges information with the neighbor populations. All tasks then progress to calculate the state at the next time step.
Signal Processing
An audio signal data set is passed through four distinct computational filters. Each filter is a separate process. The first segment of data must pass through
the first filter before progressing to the second. When it does, the second segment of data passes through the first filter. By the time the fourth segment of
data is in the first filter, all four tasks are busy.
Climate Modeling
Each model component can be thought of as a separate task. Arrows represent exchanges of data between components during computation: the atmosphere
model generates wind velocity data that are used by the ocean model, the ocean model generates sea surface temperature data that are used by the
atmosphere model, and so on.
Communications
Who Needs Communications?
The need for communications between tasks depends upon your problem. Some types of problems can be decomposed and executed in parallel with virtually no need for tasks to share data ("embarrassingly parallel"). Most parallel applications, however, do require tasks to share data with each other.
Factors to Consider:
There are a number of important factors to consider when designing your program's inter-task communications:
Cost of communications
Visibility of communications
With the Message Passing Model, communications are explicit and generally quite visible and under the control of the programmer.
With the Data Parallel Model, communications often occur transparently to the programmer, particularly on distributed memory architectures. The
programmer may not even be able to know exactly how inter-task communications are being accomplished.
Scope of communications
Knowing which tasks must communicate with each other is critical during the design stage of a parallel code. Both of the two scopings described
below can be implemented synchronously or asynchronously.
Point-to-point - involves two tasks with one task acting as the sender/producer of data, and the other acting as the receiver/consumer.
Collective - involves data sharing between more than two tasks, which are often specified as being members in a common group, or collective. Common variations include broadcast, scatter, gather and reduction.
Efficiency of communications
Very often, the programmer will have a choice with regard to factors that can affect communications performance. Only a few are mentioned here.
Which implementation for a given model should be used? Using the Message Passing Model as an example, one MPI implementation may be faster on
a given hardware platform than another.
What type of communication operations should be used? As mentioned previously, asynchronous communication operations can improve overall
program performance.
Network media - some platforms may offer more than one network for communications. Which one is best?
Synchronization
Managing the sequence of work and the tasks performing it is a critical design consideration for most parallel programs.
Types of Synchronization:
Barrier
Usually implies that all tasks are involved
Each task performs its work until it reaches the barrier. It then stops, or "blocks".
When the last task reaches the barrier, all tasks are synchronized.
What happens from here varies. Often, a serial section of work must be done. In other cases, the tasks are automatically released to continue their
work.
Lock / semaphore
Can involve any number of tasks
Typically used to serialize (protect) access to global data or a section of code. Only one task at a time may use (own) the lock / semaphore / flag.
The first task to acquire the lock "sets" it. This task can then safely (serially) access the protected data or code.
Other tasks can attempt to acquire the lock but must wait until the task that owns the lock releases it.
Can be blocking or non-blocking
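A minimal sketch of the lock mechanism just described, using OpenMP's lock routines (assuming an OpenMP-capable compiler): each thread must acquire the lock before updating the shared counter.

program lock_sketch
  use omp_lib
  implicit none
  integer(omp_lock_kind) :: lock
  integer :: counter

  counter = 0
  call omp_init_lock(lock)

  !$omp parallel
  call omp_set_lock(lock)        ! first thread to arrive "sets" (acquires) it
  counter = counter + 1          ! protected (serialized) access to shared data
  call omp_unset_lock(lock)      ! release so a waiting thread can acquire it
  !$omp end parallel

  call omp_destroy_lock(lock)
  print *, 'counter =', counter  ! equals the number of threads
end program lock_sketch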
Data Dependencies
Definition:
A dependence exists between program statements when the order of statement execution affects the results of the program.
A data dependence results from multiple use of the same location(s) in storage by different tasks.
Dependencies are important to parallel programming because they are one of the primary inhibitors to parallelism.
Examples:
DO 500 J = MYSTART,MYEND
A(J) = A(J-1) * 2.0
500 CONTINUE
The value of A(J-1) must be computed before the value of A(J), therefore A(J) exhibits a data dependency on A(J-1). Parallelism is inhibited.
If Task 2 has A(J) and task 1 has A(J-1), computing the correct value of A(J) necessitates:
Distributed memory architecture - task 2 must obtain the value of A(J-1) from task 1 after task 1 finishes its computation
Shared memory architecture - task 2 must read A(J-1) after task 1 updates it
task 1 task 2
------ ------
X = 2 X = 4
. .
. .
Y = X**2 Y = X**3
As with the previous example, parallelism is inhibited. The value of Y is dependent on:
Distributed memory architecture - if or when the value of X is communicated between the tasks.
Shared memory architecture - which task last stores the value of X.
Although all data dependencies are important to identify when designing parallel programs, loop carried dependencies are particularly important since loops
are possibly the most common target of parallelization efforts.
Load Balancing
Load balancing refers to the practice of distributing approximately equal amounts of work among tasks so that all tasks are kept busy all of the time. It can
be considered a minimization of task idle time.
Load balancing is important to parallel programs for performance reasons. For example, if all tasks are subject to a barrier synchronization point, the slowest
task will determine the overall performance.
Granularity
Computation / Communication Ratio:
Periods of computation are typically separated from periods of communication by synchronization events.
Fine-grain Parallelism:
Relatively small amounts of computational work are done between communication events
Implies high communication overhead and less opportunity for performance enhancement
If granularity is too fine it is possible that the overhead required for communications and synchronization between tasks takes
longer than the computation.
Coarse-grain Parallelism:
Relatively large amounts of computational work are done between communication/synchronization events
Which is Best?
The most efficient granularity is dependent on the algorithm and the hardware environment in which it runs.
In most cases the overhead associated with communications and synchronization is high relative to execution speed so it is
advantageous to have coarse granularity.
I/O
The Bad News:
Read operations can be affected by the file server's ability to handle multiple read requests at the same time.
I/O that must be conducted over the network (NFS, non-local) can cause severe bottlenecks and even crash file servers.
The Good News:
The parallel I/O programming interface specification for MPI has been available since 1996 as part of MPI-2. Vendor and "free" implementations are now
commonly available.
A few pointers:
Writing large chunks of data rather than small chunks is usually significantly more efficient.
Confine I/O to specific serial portions of the job, and then use parallel communications to distribute data to parallel tasks. For example, Task 1 could
read an input file and then communicate required data to other tasks. Likewise, Task 1 could perform write operation after receiving required data from
all other tasks.
Aggregate I/O operations across tasks - rather than having many tasks perform I/O, have a subset of tasks perform it.
Debugging
Debugging parallel codes can be incredibly difficult, particularly as codes scale upwards.
The good news is that there are some excellent debuggers available to assist:
Threaded - pthreads and OpenMP
MPI
GPU / accelerator
Hybrid
Livermore Computing users have access to several parallel debugging tools installed on LC's clusters:
TotalView from RogueWave Software
DDT from Allinea
Inspector from Intel
Stack Trace Analysis Tool (STAT) - locally developed
All of these tools have a learning curve associated with them - some more than others.
Performance Analysis and Tuning
Fortunately, there are a number of excellent tools for parallel program performance analysis and tuning.
Livermore Computing users have access to several such tools, most of which are available on all production clusters.
mpiP: http://mpip.sourceforge.net/
memP: http://memp.sourceforge.net/
Parallel Examples
Array Processing
This example demonstrates calculations on 2-dimensional array elements, with the computation on each
array element being independent from other array elements.
do j = 1,n
do i = 1,n
a(i,j) = fcn(i,j)
end do
end do
The calculation of elements is independent of one another, leading to an embarrassingly parallel situation.
Array Processing
Parallel Solution 1
Array elements are distributed so that each processor owns a portion of an array (subarray).
Independent calculation of array elements ensures there is no need for communication between tasks.
Distribution scheme is chosen by other criteria, e.g. unit stride (stride of 1) through the subarrays. Unit
stride maximizes cache/memory usage.
Since it is desirable to have unit stride through the subarrays, the choice of a distribution scheme
depends on the programming language. See the Block - Cyclic Distributions Diagram for the options.
After the array is distributed, each task executes the portion of the loop corresponding to the data it
owns. For example, with Fortran block distribution:
do j = mystart, myend
do i = 1,n
a(i,j) = fcn(i,j)
end do
end do
Notice that only the outer loop variables are different from the serial solution.
Master process initializes array, sends info to worker processes and receives results.
Worker process receives info, performs its share of computation and sends results to master.
Using the Fortran storage scheme, perform block distribution of the array.
if I am MASTER
  initialize the array
  send each WORKER info on part of array it owns
  send each WORKER its portion of initial array
  receive results from each WORKER
else if I am WORKER
  receive from MASTER info on part of array I own
  receive from MASTER my portion of initial array
  # calculate my portion of array
  do j = mystart, myend
    do i = 1, n
      a(i,j) = fcn(i,j)
    end do
  end do
  send MASTER results
endif
Array Processing
Parallel Solution 2: Pool of Tasks
The previous array solution demonstrated static load balancing:
Each task has a fixed amount of work to do
May be significant idle time for faster or more lightly loaded processors - the slowest task determines overall performance.
Static load balancing is not usually a major concern if all tasks are performing the same amount of work on identical machines.
If you have a load balance problem (some tasks work faster than others), you may benefit by using a "pool of tasks" scheme.
Master Process:
  Holds pool of tasks for worker processes to do
  Sends worker a task when requested
  Collects results from workers
Worker Process: repeatedly does the following
  Gets task from master process
  Performs computation
  Sends results to master
Worker processes do not know before runtime which portion of array they will handle or how many tasks they will perform.
Dynamic load balancing occurs at run time: the faster tasks will get more work to do.
Pseudo code solution:
if I am MASTER
  do until no more jobs
    if request send to WORKER next job
    else receive results from WORKER
  end do
  tell WORKER no more jobs
else if I am WORKER
  do until no more jobs
    request job from MASTER
    receive from MASTER next job
    calculate array element: a(i,j) = fcn(i,j)
    send results to MASTER
  end do
endif
Discussion:
In the above pool of tasks example, each task calculated an individual array element as a job. The computation to communication ratio is finely granular.
Finely granular solutions incur more communication overhead in order to reduce task idle time.
A more optimal solution might be to distribute more work with each job. The "right" amount of work is problem dependent.
Parallel Examples
PI Calculation
The value of PI can be calculated in a number of ways. Consider the following method
of approximating PI
1. Inscribe a circle in a square
2. Randomly generate points in the square
3. Determine the number of points in the square that are also in the circle
4. Let r be the number of points in the circle divided by the number of points in the
square
5. PI ~ 4 r
6. Note that the more points generated, the better the approximation
npoints = 10000
circle_count = 0
do j = 1,npoints
generate 2 random numbers between 0 and 1
xcoordinate = random1
ycoordinate = random2
if (xcoordinate, ycoordinate) inside circle
then circle_count = circle_count + 1
end do
PI = 4.0*circle_count/npoints
Note that most of the time in running this program would be spent executing the loop.
PI Calculation
Parallel Solution
Parallel strategy: break the loop into portions that can be executed by the tasks.
npoints = 10000
circle_count = 0
p = number of tasks
num = npoints/p
do j = 1,num
generate 2 random numbers between 0 and 1
xcoordinate = random1
ycoordinate = random2
if (xcoordinate, ycoordinate) inside circle
then circle_count = circle_count + 1
end do
if I am MASTER
  receive from WORKERS their circle_counts
  compute PI (use MASTER and WORKER calculations)
else if I am WORKER
  send to MASTER circle_count
endif
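The send/receive steps of the pseudocode can also be expressed with a collective reduction. A minimal MPI Fortran sketch (illustrative; per-task random seeding is omitted for brevity, so a real run would seed each task differently):

program pi_mpi
  use mpi
  implicit none
  integer, parameter :: npoints = 10000
  integer :: ierr, rank, p, num, j, circle_count, total
  real :: x, y

  call MPI_Init(ierr)
  call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
  call MPI_Comm_size(MPI_COMM_WORLD, p, ierr)

  num = npoints / p                      ! each task tests its share of points
  circle_count = 0
  do j = 1, num
     call random_number(x)               ! two random numbers between 0 and 1
     call random_number(y)
     if (x*x + y*y <= 1.0) circle_count = circle_count + 1
  end do

  ! collective reduction sums every task's count on the MASTER (rank 0)
  call MPI_Reduce(circle_count, total, 1, MPI_INTEGER, MPI_SUM, 0, &
                  MPI_COMM_WORLD, ierr)
  if (rank == 0) print *, 'PI is approximately', 4.0*real(total)/real(num*p)

  call MPI_Finalize(ierr)
end program pi_mpi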
Parallel Examples
Simple Heat Equation
Most problems in parallel computing require communication among the tasks. A number of
common problems require communication with "neighbor" tasks.
The heat equation describes the temperature change over time, given initial temperature
distribution and boundary conditions.
The initial temperature is zero on the boundaries and high in the middle.
For the fully explicit problem, a time stepping algorithm is used. The elements of a
2-dimensional array represent the temperature at points on the square.
do iy = 2, ny - 1
  do ix = 2, nx - 1
    u2(ix, iy) = u1(ix, iy) + &
                 cx * (u1(ix+1,iy) + u1(ix-1,iy) - 2.*u1(ix,iy)) + &
                 cy * (u1(ix,iy+1) + u1(ix,iy-1) - 2.*u1(ix,iy))
  end do
end do
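A minimal runnable serial driver for the kernel above, with illustrative grid sizes and coefficients, showing the time stepping algorithm:

program heat_serial
  implicit none
  integer, parameter :: nx = 10, ny = 10, nsteps = 50   ! illustrative sizes
  real, parameter :: cx = 0.1, cy = 0.1                 ! illustrative coefficients
  real :: u1(nx,ny), u2(nx,ny)
  integer :: ix, iy, t

  u1 = 0.0
  u1(nx/2, ny/2) = 100.0      ! zero on the boundaries, high in the middle

  do t = 1, nsteps
     do iy = 2, ny - 1
        do ix = 2, nx - 1
           u2(ix,iy) = u1(ix,iy) + &
                       cx * (u1(ix+1,iy) + u1(ix-1,iy) - 2.0*u1(ix,iy)) + &
                       cy * (u1(ix,iy+1) + u1(ix,iy-1) - 2.0*u1(ix,iy))
        end do
     end do
     u1(2:nx-1, 2:ny-1) = u2(2:nx-1, 2:ny-1)   ! advance to the next time step
  end do

  print *, 'center temperature after', nsteps, 'steps:', u1(nx/2, ny/2)
end program heat_serial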
The entire array is partitioned and distributed as subarrays to all tasks. Each task owns a portion of the
total array.
Master process sends initial info to workers, and then waits to collect results from all workers
Worker process calculates solution within specified number of time steps, communicating as necessary
with neighbor processes
if I am MASTER
initialize array
send each WORKER starting info and subarray
receive results from each WORKER
else if I am WORKER
receive from MASTER starting info and subarray
do t = 1, nsteps
update time
send neighbors my border info
receive from neighbors their border info
  update my portion of solution array
end do
send MASTER results
endif
Parallel Examples
1-D Wave Equation
In this example, the amplitude A along a uniform, vibrating string is calculated as a function of position i and time step t:

A(i,t+1) = (2.0 * A(i,t)) - A(i,t-1) + (c * (A(i-1,t) - (2.0 * A(i,t)) + A(i+1,t)))

where c is a constant
Note that amplitude will depend on previous timesteps (t, t-1) and neighboring points (i-1, i+1). Data dependence will mean that a parallel solution will
involve communications.
The entire amplitude array is partitioned and distributed as subarrays to all tasks. Each task owns a portion of the total array.
Load balancing: all points require equal work, so the points should be divided equally
A block decomposition would have the work partitioned into the number of tasks as chunks, allowing each task to own mostly contiguous data points.
Communication need only occur on data borders. The larger the block size the less the communication.
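A minimal serial sketch of the update, with illustrative sizes and a sine-wave initial displacement; a parallel version would additionally exchange the border points (i-1, i+1) between neighbor tasks:

program wave_serial
  implicit none
  integer, parameter :: npts = 100, nsteps = 200   ! illustrative sizes
  real, parameter :: c = 0.1                       ! the constant in the equation
  real, parameter :: pi = 3.141593
  real :: aold(npts), a(npts), anew(npts)
  integer :: i, t

  do i = 1, npts                                   ! one sine wave along the string
     a(i) = sin(2.0 * pi * real(i-1) / real(npts-1))
  end do
  aold = a                                         ! string initially at rest

  do t = 1, nsteps
     do i = 2, npts - 1
        anew(i) = (2.0 * a(i)) - aold(i) + (c * (a(i-1) - (2.0 * a(i)) + a(i+1)))
     end do
     anew(1) = 0.0                                 ! endpoints held fixed
     anew(npts) = 0.0
     aold = a                                      ! rotate time levels: t-1 <- t
     a = anew                                      !                     t   <- t+1
  end do

  print *, 'midpoint amplitude after', nsteps, 'steps:', a(npts/2)
end program wave_serial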
References and More Information
A search on the WWW for "parallel programming" or "parallel computing" will yield a wide variety of information.
Recommended reading:
"Designing and Building Parallel Programs". Ian Foster.
http://www-unix.mcs.anl.gov/dbpp/
"Introduction to Parallel Computing". Ananth Grama, Anshul Gupta, George Karypis, Vipin Kumar.
http://www-users.cs.umn.edu/~karypis/parbook/
"Overview of Recent Supercomputers". A.J. van der Steen, Jack Dongarra.
OverviewRecentSupercomputers.2008.pdf
Photos/Graphics have been created by the author, created by other LLNL employees, obtained from non-copyrighted, government or public domain (such as
http://commons.wikimedia.org/) sources, or used with the permission of authors from other presentations and web pages.
History: These materials have evolved from the following sources, which are no longer maintained or available.
Tutorials located in the Maui High Performance Computing Center's "SP Parallel Programming Workshop".
Tutorials located at the Cornell Theory Center's "Education and Training" web page.
https://computing.llnl.gov/tutorials/parallel_comp/
Last Modified: 07/14/2014 22:28:47 blaiseb@llnl.gov
UCRL-MI-133316