Programming On Parallel Machines
Norm Matloff
University of California, Davis
Author's Biographical Sketch
Dr. Norm Matloff is a professor of computer science at the University of California at Davis, and
was formerly a professor of statistics at that university. He is a former database software developer
in Silicon Valley, and has been a statistical consultant for firms such as the Kaiser Permanente
Health Plan.
Dr. Matloff was born in Los Angeles, and grew up in East Los Angeles and the San Gabriel Valley.
He has a PhD in pure mathematics from UCLA, specializing in probability theory and statistics. He
has published numerous papers in computer science and statistics, with current research interests
in parallel processing, statistical computing, and regression methodology.
Prof. Matloff is a former appointed member of IFIP Working Group 11.3, an international committee concerned with database software security, established under UNESCO. He was a founding
member of the UC Davis Department of Statistics, and participated in the formation of the UCD
Computer Science Department as well. He is a recipient of the campuswide Distinguished Teaching
Award and Distinguished Public Service Award at UC Davis.
Dr. Matloff is the author of two published textbooks, and of a number of widely-used Web tutorials
on computer topics, such as the Linux operating system and the Python programming language.
He and Dr. Peter Salzman are authors of The Art of Debugging with GDB, DDD, and Eclipse.
Prof. Matloff's book on the R programming language, The Art of R Programming, was published
in 2011. His book, Parallel Computation for Data Science, will come out in 2014. He is also the
author of several open-source textbooks, including From Algorithms to Z-Scores: Probabilistic and
Statistical Modeling in Computer Science (http://heather.cs.ucdavis.edu/probstatbook), and
Programming on Parallel Machines (http://heather.cs.ucdavis.edu/~matloff/ParProcBook.pdf).
Like all my open source textbooks, this one is constantly evolving. I continue to add new topics,
new examples and so on, and of course fix bugs and improve the exposition. For that reason, it
is better to link to the latest version, which will always be at http://heather.cs.ucdavis.edu/~matloff/158/PLN/ParProcBook.pdf, rather than to copy it.

Feedback is highly appreciated. I wish to thank Stuart Ambler, Matt Butner,
Stuart Hansen, Bill Hsu, Sameer Khan, Mikel McDaniel, Richard Minner, Lars Seeman, Marc
Sosnick, and Johan Wikström for their comments. I'm also very grateful to Professor Hsu for
making advanced GPU-equipped machines available to me.
You may also be interested in my open source textbook on probability and statistics, at
http://heather.cs.ucdavis.edu/probstatbook.
This work is licensed under a Creative Commons Attribution-No Derivative Works 3.0 United
States License. Copyright is retained by N. Matloff in all non-U.S. jurisdictions, but permission to
use these materials in teaching is still granted, provided the authorship and licensing information
here is displayed in each unit. I would appreciate being notified if you use this book for teaching,
just so that I know the materials are being put to use, but this is not required.
Chapter 1

Introduction to Parallel Processing

1.1 Why Use Parallel Systems?

1.1.1 Execution Speed
There is an ever-increasing appetite among some types of computer users for faster and faster
machines. This was epitomized in a statement by the late Steve Jobs, founder/CEO of Apple and
Pixar. He noted that when he was at Apple in the 1980s, he was always worried that some other
company would come out with a faster machine than his. But later at Pixar, whose graphics work
requires extremely fast computers, he was always hoping someone would produce faster machines,
so that he could use them!
A major source of speedup is the parallelizing of operations. Parallel operations can be either
within-processor, such as with pipelining or having several ALUs within a processor, or between-processor,
in which many processors work on different parts of a problem in parallel. Our focus here
is on between-processor operations.
For example, the Registrar's Office at UC Davis uses shared-memory multiprocessors for processing
its on-line registration work. Online registration involves an enormous amount of database computation. In order to handle this computation reasonably quickly, the program partitions the work
to be done, assigning different portions of the database to different processors. The database field
has contributed greatly to the commercial success of large shared-memory machines.
As the Pixar example shows, highly computation-intensive applications like computer graphics also
have a need for these fast parallel computers. No one wants to wait hours just to generate a single
image, and the use of parallel processing machines can speed things up considerably. For example,
consider ray tracing operations. Here our code follows the path of a ray of light in a scene,
accounting for reflection and absorption of the light by various objects. Suppose the image is to
consist of 1,000 rows of pixels, with 1,000 pixels per row. In order to attack this problem in a
parallel processing manner with, say, 25 processors, we could divide the image into 25 squares of
size 200x200, and have each processor do the computations for its square.
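To make that 25-square decomposition concrete, here is a minimal sketch; it is not code from any later chapter, and renderPixel() is just a stand-in name for the real per-pixel ray-tracing computation:

// blocksketch.c -- divide a 1000x1000 image into a 5x5 grid of 200x200 blocks,
// one block per worker; everything here is illustrative only
#include <stdio.h>

#define DIM   1000            // image is DIM x DIM pixels
#define NPROC 25              // 25 workers, arranged as a 5x5 grid of blocks
#define BLK   (DIM / 5)       // each block is 200x200

static float img[DIM][DIM];

// stand-in for the real, expensive per-pixel ray tracing computation
float renderPixel(int r, int c) { return (float) (r + c); }

// the work assigned to worker number me, 0 <= me < NPROC
void doMyBlock(int me)
{  int r0 = (me / 5) * BLK, c0 = (me % 5) * BLK;   // upper-left corner of my block
   for (int r = r0; r < r0 + BLK; r++)
      for (int c = c0; c < c0 + BLK; c++)
         img[r][c] = renderPixel(r, c);
}

int main()
{  // serial driver here; in a parallel run, each call would be done by one worker
   for (int me = 0; me < NPROC; me++) doMyBlock(me);
   printf("%f\n", img[DIM-1][DIM-1]);
   return 0;
}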
Note, though, that it may be much more challenging than this implies. First of all, the computation
will need some communication between the processors, which hinders performance if it is not done
carefully. Second, if one really wants good speedup, one may need to take into account the fact
that some squares require more computation work than others. More on this below.
We are now in the era of Big Data, which requires Big Computation, thus again generating a major
need for parallel processing.
1.1.2 Memory
Yes, execution speed is the reason that comes to most peoples minds when the subject of parallel
processing comes up. But in many applications, an equally important consideration is memory
capacity. Parallel processing applications often tend to use huge amounts of memory, and in many
cases the amount of memory needed is more than can fit on one machine. If we have many machines
working together, especially in the message-passing settings described below, we can accommodate
the large memory needs.
1.1.3 Distributed Processing
In the above two subsections we've hit the two famous issues in computer science: time (speed)
and space (memory capacity). But there is a third reason to do parallel processing, which actually
has its own name, distributed processing. In a distributed database, for instance, parts of the
database may be physically located in widely dispersed sites. If most transactions at a particular
site arise locally, then storing that site's portion of the data locally makes more efficient use of the
network, and so on.
1.1.4 Our Focus Here

1.2 Parallel Processing Hardware
This is a common scenario: Someone acquires a fancy new parallel machine, and excitedly writes a
program to run on it, only to find that the parallel code is actually slower than the original serial
version! This is due to lack of understanding of how the hardware works, at least at a high level.
This is not a hardware book, but since the goal of using parallel hardware is speed, the efficiency of
our code is a major issue. That in turn means that we need a good understanding of the underlying
hardware that we are programming. In this section, we give an overview of parallel hardware.
1.2.1 Shared-Memory Systems

1.2.1.1 Basic Architecture
Here many CPUs share the same physical memory. This kind of architecture is sometimes called
MIMD, standing for Multiple Instruction (different CPUs are working independently, and thus
typically are executing different instructions at any given instant), Multiple Data (different CPUs
are generally accessing different memory locations at any given time).
Until recently, shared-memory systems cost hundreds of thousands of dollars and were affordable
only by large companies, such as in the insurance and banking industries. The high-end machines
are indeed still quite expensive, but now multicore machines, in which two or more CPUs share
a common memory,1 are commonplace in the home and even in cell phones!
1.2.1.2 Multiprocessor Topologies
The multicore setup is effectively the same as SMP, except that the processors are all on one chip,
attached to the bus.
So-called NUMA architectures will be discussed in Chapter 3.
1. The terminology gets confusing here. Although each core is a complete processor, people in the field tend to call
the entire chip a processor, referring to the cores as, well, cores. In this book, the term processor will generally
include cores, e.g. a dual-core chip will be considered to have two processors.
1.2.1.3

1.2.2 Message-Passing Systems

1.2.2.1 Basic Architecture
Here we have a number of independent CPUs, each with its own independent memory. The various
processors communicate with each other via networks of some kind.
1.2.2.2 Example: Clusters
Here one has a set of commodity PCs and networks them for use as a parallel processing system. The
PCs are of course individual machines, capable of the usual uniprocessor (or now multiprocessor)
applications, but by networking them together and using parallel-processing software environments,
we can form very powerful parallel systems.
One factor which can be key to the success of a cluster is the use of a fast network, fast both in terms
of hardware and network protocol. Ordinary Ethernet and TCP/IP are fine for the applications
envisioned by the original designers of the Internet, e.g. e-mail and file transfer, but are slow in the
cluster context. A good network for a cluster is, for instance, Infiniband.
Clusters have become so popular that there are now recipes on how to build them for the specific
purpose of parallel processing. The term Beowulf has come to mean a cluster of PCs, usually with
a fast network connecting them, used for parallel processing. Software packages such as ROCKS
(http://www.rocksclusters.org/wordpress/) have been developed to make it easy to set up
and administer such systems.
1.2.3 SIMD

1.3 Programmer World Views

1.3.1 Example: Matrix-Vector Multiply
To explain the paradigms, we will use the term nodes, where roughly speaking one node corresponds
to one processor, and use the following example: we wish to multiply an n x 1 vector X by an n x n
matrix A, placing the product in an n x 1 vector Y.
In all the forms of parallelism, each node could be assigned some of the rows of A, and that node
would multiply X by those rows, thus forming part of Y.
Note that in typical applications, the matrix A would be very large, say thousands of rows, possibly even millions. Otherwise the computation could be done quite satisfactorily in a serial, i.e.
nonparallel, manner, making parallel processing unnecessary.
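To make the running example concrete, here is a sketch in serial C of the work one node would do; the row range myfirstrow/mylastrow and the function name are mine, purely for illustration:

// one node's share of Y = A X: multiply X by the rows of A assigned to this node
// (myfirstrow through mylastrow); names here are illustrative only
void myrows(int n, double a[][n], double x[], double y[],
            int myfirstrow, int mylastrow)
{  for (int i = myfirstrow; i <= mylastrow; i++) {
      y[i] = 0;
      for (int j = 0; j < n; j++)
         y[i] += a[i][j] * x[j];
   }
}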
1.3.2 Shared-Memory

1.3.2.1 Programmer View
In implementing the matrix-vector multiply example of Section 1.3.1 in the shared-memory paradigm,
the arrays for A, X and Y would be held in common by all nodes. If for instance node 2 were to
execute
Y[3] = 12;

then the new value of Y[3] would be visible to all the other nodes, with no further action needed on
node 2's part; this is the essence of the shared-memory world view. The most popular way to exploit
shared-memory hardware is to program with threads.
On a uniprocessor system, the threads of a program take turns executing, so that there is only an
illusion of parallelism. But on a multiprocessor system, one can genuinely have threads running
in parallel.3 Whenever a processor becomes available, the OS will assign some ready thread to it.
So, among other things, this says that a thread might actually run on different processors during
different turns.
Important note: Effective use of threads requires a basic understanding of how processes take
turns executing. See Section A.1 in the appendix of this book for this material.
One of the most popular threads systems is Pthreads, whose name is short for POSIX threads.
POSIX is a Unix standard, and the Pthreads system was designed to standardize threads programming on Unix. It has since been ported to other platforms.
1.3.2.2 Example: Pthreads Prime Numbers Finder
// PrimesThreads.c

// Unix compilation:

// usage:  primesthreads n num_threads

#include <stdio.h>
#include <math.h>
#include <pthread.h>

// shared variables
int nthreads,          // number of threads (not counting main())
    n,                 // range to check for primeness
    prime[MAX_N+1],    // in the end, prime[i] = 1 if i prime, else 0
    nextbase;          // next sieve multiplier to be used
// lock for the shared variable nextbase
pthread_mutex_t nextbaselock = PTHREAD_MUTEX_INITIALIZER;
// ID structs for the threads
pthread_t id[MAX_THREADS];
3. There may be other processes running too. So the threads of our program must still take turns with other
processes running on the machine.
// report results
nprimes = 1;
for (i = 3; i <= n; i++)
   if (prime[i]) {
      nprimes++;
   }
printf("the number of primes found was %d\n",nprimes);
To make our discussion concrete, suppose we are running this program with two threads. Suppose
also that both threads are running simultaneously most of the time. This will occur if they aren't
competing for turns with other threads, say if there are no other threads, or more generally if the
number of other threads is less than or equal to the number of processors minus two. (Actually,
the original thread is main(), but it lies dormant most of the time, as you'll see.)
Note the global variables:
int nthreads, // number of threads (not counting main())
n, // range to check for primeness
prime[MAX_N+1], // in the end, prime[i] = 1 if i prime, else 0
nextbase; // next sieve multiplier to be used
pthread_mutex_t nextbaselock = PTHREAD_MUTEX_INITIALIZER;
pthread_t id[MAX_THREADS];
This will require some adjustment for those who've been taught that global variables are evil.
In most threaded programs, all communication between threads is done via global variables.4 So
even if you consider globals to be evil, they are a necessary evil in threads programming.
Personally I have always thought the stern admonitions against global variables are overblown anyway; see http://heather.cs.ucdavis.edu/~matloff/globals.html. But as mentioned, those
admonitions are routinely ignored in threaded programming. For a nice discussion on this, see the
paper by a famous MIT computer scientist on an Intel Web page, at
http://software.intel.com/en-us/articles/global-variable-reconsidered/?wapkw=%28parallelism%29.
As mentioned earlier, the globals are shared by all processors.5 If one processor, for instance,
assigns the value 0 to prime[35] in the function crossout(), then that variable will have the value
0 when accessed by any of the other processors as well. On the other hand, local variables have
different values at each processor; for instance, the variable i in that function has a different value
at each processor.

4. Technically one could use locals in main() (or whatever function it is where the threads are created) for this
purpose, but this would be so unwieldy that it is seldom done.

5. Technically, we should say "shared by all threads" here, as a given thread does not always execute on the same
processor, but at any instant in time each executing thread is at some processor, so the statement is all right.
Note that in the statement
pthread_mutex_t nextbaselock = PTHREAD_MUTEX_INITIALIZER;
the right-hand side is not a constant. It is a macro call, and is thus something which is executed.
In the code
pthread_mutex_lock(&nextbaselock);
base = nextbase;
nextbase += 2;
pthread_mutex_unlock(&nextbaselock);
we see a critical section: a section of code that only one thread at a time may execute. A common
term used for this is that we wish the actions in the critical section to collectively be atomic,
meaning not divisible among threads. The calls to pthread_mutex_lock() and pthread_mutex_unlock()
ensure this. If thread A is currently executing inside the critical section and thread B tries to lock
the lock by calling pthread_mutex_lock(), the call will block until thread A executes
pthread_mutex_unlock().
Here is why this is so important: Say currently nextbase has the value 11. What we want to
happen is that the next thread to read nextbase will cross out all multiples of 11. But if we
allow two threads to execute the critical section at the same time, the following may occur, in order:
thread A reads nextbase, setting its value of base to 11
thread B reads nextbase, setting its value of base to 11
thread A adds 2 to nextbase, so that nextbase becomes 13
thread B adds 2 to nextbase, so that nextbase becomes 15
Two problems would then occur:
Both threads would do crossing out of multiples of 11, duplicating work and thus slowing
down execution speed.
We will never cross out multiples of 13.
Thus the lock is crucial to the correct (and speedy) execution of the program.
Note that these problems could occur either on a uniprocessor or multiprocessor system. In the
uniprocessor case, thread A's turn might end right after it reads nextbase, followed by a turn by
B which executes that same instruction. In the multiprocessor case, A and B could literally be
running simultaneously, but still with the action by B coming an instant after A.
This problem frequently arises in parallel database systems. For instance, consider an airline
reservation system. If a flight has only one seat left, we want to avoid giving it to two different
customers who might be talking to two agents at the same time. The lines of code in which the
seat is finally assigned (the commit phase, in database terminology) are then a critical section.
A critical section is always a potential bottleneck in a parallel program, because its code is serial
instead of parallel. In our program here, we may get better performance by having each thread
work on, say, five values of nextbase at a time. Our line
nextbase += 2;
would become
nextbase += 10;
That would mean that any given thread would need to go through the critical section only one-fifth
as often, thus greatly reducing overhead. On the other hand, near the end of the run, this may
result in some threads being idle while other threads still have a lot of work to do.
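For instance, the relevant part of the worker loop might look something like the following. This is only a sketch; CHUNK and the details of the crossing-out loop are mine, not the full program's code:

#define CHUNK 5   // number of sieve bases to grab per lock acquisition (my choice)

// grab a block of bases under the lock, then work on them lock-free
pthread_mutex_lock(&nextbaselock);
int base = nextbase;            // first base in my block
nextbase += 2 * CHUNK;          // reserve the odd values base, base+2, ..., base+2*(CHUNK-1)
pthread_mutex_unlock(&nextbaselock);
for (int k = 0; k < CHUNK; k++) {
   int b = base + 2*k;
   if (b > n) break;            // past the end of the range
   if (prime[b]) crossout(b);   // cross out multiples of b, as in the original code
}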
Note this code.
for (i = 0; i < nthreads; i++) {
pthread_join(id[i],&work);
printf("%d values of base done\n",work);
}
The main thread waits here for each of the other threads to finish, so this loop acts as a barrier
of sorts. Without it, main() might start counting the primes before some threads are done crossing
out composites, which would result in possibly wrong output.
Actually, we could have used Pthreads' built-in barrier function. We need to declare a barrier
variable, e.g.

pthread_barrier_t barr;
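Here is a quick sketch of how such a barrier might be used; these are generic Pthreads calls, not code taken from the primes program:

#include <pthread.h>

pthread_barrier_t barr;          // shared by all threads

// main() would call this once, before creating its nthreads worker threads
void setupbarrier(int nthreads)
{  pthread_barrier_init(&barr, NULL, nthreads);  }

// each worker calls this at the synchronization point; no thread gets past
// the call until all nthreads threads have reached it
void waitatbarrier()
{  pthread_barrier_wait(&barr);  }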
Note that pthread_join() waits for the given thread to terminate, i.e. to exit; the worker threads
do not themselves wait for each other. Thus some may argue that this is not really a true barrier.
Barriers are very common in shared-memory programming, and will be discussed in more detail in
Chapter 3.
1.3.2.3 Role of the OS
Let's again ponder the role of the OS here. What happens when a thread tries to lock a lock:
The lock call will ultimately cause a system call, causing the OS to run.
The OS keeps track of the locked/unlocked status of each lock, so it will check that status.
Say the lock is unlocked (a 0). Then the OS sets it to locked (a 1), and the lock call returns.
The thread enters the critical section.
When the thread is done, the unlock call unlocks the lock, similar to the locking actions.
If the lock is locked at the time a thread makes a lock call, the call will block. The OS will
mark this thread as waiting for the lock. When whatever thread currently using the critical
section unlocks the lock, the OS will relock it and unblock the lock call of the waiting thread.
If several threads are waiting, of course only one will be unblocked.
Note that main() is a thread too, the original thread that spawns the others. However, it is
dormant most of the time, due to its calls to pthread_join().
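Just to illustrate the idea of a lock being a 0/1 status word, here is a toy user-level spinlock using the GCC/Clang atomic builtins. This is emphatically not how Pthreads locks are actually implemented; as just described, real locks involve the OS, which can put a waiting thread to sleep rather than have it spin:

// toy spinlock sketch: 0 = unlocked, 1 = locked
static int lockword = 0;

void toylock()
{  // atomically set lockword to 1 and fetch its old value; if the old value
   // was already 1, some other thread holds the lock, so try again
   while (__atomic_exchange_n(&lockword, 1, __ATOMIC_ACQUIRE) == 1)
      ;   // spin
}

void toyunlock()
{  __atomic_store_n(&lockword, 0, __ATOMIC_RELEASE);  }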
Finally, keep in mind that although the global variables are shared, the locals are not. Recall that
local variables are stored on a stack. Each thread (just like each process in general) has its own
stack. When a thread begins a turn, the OS prepares for this by pointing the stack pointer register
to this thread's stack.
1.3.2.4 Debugging Threads Programs

Most debugging tools include facilities for threads. Here's an overview of how it works in GDB.
First, as you run a program under GDB, the creation of new threads will be announced, e.g.

(gdb) r 100 2
Starting program: /debug/primes 100 2
[New Thread 16384 (LWP 28653)]
[New Thread 32769 (LWP 28676)]
[New Thread 16386 (LWP 28677)]
[New Thread 32771 (LWP 28678)]
You can do backtrace (bt) etc. as usual. Here are some threads-related commands:
info threads (gives information on all current threads)
thread 3 (change to thread 3)
break 88 thread 3 (stop execution when thread 3 reaches source line 88)
break 88 thread 3 if x==y (stop execution when thread 3 reaches source line 88 and the
variables x and y are equal)
Of course, many GUI IDEs use GDB internally, and thus provide the above facilities with a GUI
wrapper. Examples are DDD, Eclipse and NetBeans.
1.3.2.5 Higher-Level Threads
The OpenMP library gives the programmer a higher-level view of threading. The threads are there,
but rather hidden by higher-level abstractions. We will study OpenMP in detail in Chapter 4, and
use it frequently in the succeeding chapters, but below is an introductory example.
1.3.2.6 Example: Sampling Bucket Sort
// OpenMP introductory example:  sampling bucket sort

// compile:  gcc -fopenmp -o bsort bucketsort.c

// set the number of threads via the environment variable
// OMP_NUM_THREADS, e.g. in the C shell

// setenv OMP_NUM_THREADS 8

#include <omp.h>   // required
#include <stdlib.h>

// needed for call to qsort()
int cmpints(int *u, int *v)
{  if (*u < *v) return -1;
   if (*u > *v) return 1;
   return 0;
}

// adds xi to the part array, increments npart, the length of part
void grab(int xi, int *part, int *npart)
{
   part[*npart] = xi;
   *npart += 1;
}

// finds the min and max in y, length ny,
// placing them in *miny and *maxy
void findminmax(int *y, int ny, int *miny, int *maxy)
{  int i, yi;
   *miny = *maxy = y[0];
   for (i = 1; i < ny; i++) {
      yi = y[i];
      if (yi < *miny) *miny = yi;
      else if (yi > *maxy) *maxy = yi;
   }
}

// sort the array x of length n
void bsort(int *x, int n)
{  // these are local to this function, but shared among the threads
   float *bdries;  int *counts;
   #pragma omp parallel
   // entering this block activates the threads, each executing it
   {  // variables declared below are local to each thread
      int me = omp_get_thread_num();
      // have to do the next call within the block, while the threads
      // are active
      int nth = omp_get_num_threads();
      int i, xi, minx, maxx, start;
      int *mypart;
      float increm;
      int SAMPLESIZE;
      // now determine the bucket boundaries; nth - 1 of them, by
      // sampling the array to get an idea of its range
      #pragma omp single   // only 1 thread does this, implied barrier at end
      {
         if (n > 1000) SAMPLESIZE = 1000;
         else SAMPLESIZE = n / 2;
         findminmax(x, SAMPLESIZE, &minx, &maxx);
         bdries = malloc((nth-1) * sizeof(float));
         increm = (maxx - minx) / (float) nth;
         for (i = 0; i < nth - 1; i++)
            bdries[i] = minx + (i+1) * increm;
         // array to serve as the count of the numbers of elements of x
         // in each bucket
         counts = malloc(nth * sizeof(int));
      }
      // now have this thread grab its portion of the array; thread 0
      // takes everything below bdries[0], thread 1 everything between
      // bdries[0] and bdries[1], etc., with thread nth-1 taking
      // everything over bdries[nth-1]
      mypart = malloc(n * sizeof(int));  int nummypart = 0;
      for (i = 0; i < n; i++) {
         if (me == 0) {
            if (x[i] <= bdries[0]) grab(x[i], mypart, &nummypart);
         }
         else if (me < nth - 1) {
            if (x[i] > bdries[me-1] && x[i] <= bdries[me])
               grab(x[i], mypart, &nummypart);
         } else
            if (x[i] > bdries[me-1]) grab(x[i], mypart, &nummypart);
      }
      // now record how many this thread got
      counts[me] = nummypart;
      // sort my part
      qsort(mypart, nummypart, sizeof(int), cmpints);
      #pragma omp barrier   // other threads need to know all of counts
      // copy sorted chunk back to the original array; first find start point
      start = 0;
      for (i = 0; i < me; i++) start += counts[i];
      for (i = 0; i < nummypart; i++) {
         x[start+i] = mypart[i];
      }
   }
   // implied barrier here; main thread won't resume until all threads
   // are done
}

int main(int argc, char **argv)
Details on OpenMP are presented in Chapter 4. Here is an overview of a few of the OpenMP
constructs available:
#pragma omp for
In our example above, we wrote our own code to assign specific threads to do specific parts
of the work. An alternative is to write an ordinary for loop that iterates over all the work to
be done, and then ask OpenMP to assign specific iterations to specific threads. To do this,
insert the above pragma just before the loop.
#pragma omp critical
The block that follows is implemented as a critical section. OpenMP sets up the locks etc.
for you, alleviating you of work and alleviating your code of clutter.
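As a tiny illustration of these two pragmas (my own toy example, not code from the chapters to come), the following sums an array, with OpenMP dividing the loop iterations among the threads and a critical section protecting the final update:

// sumsketch.c:  compile with  gcc -fopenmp sumsketch.c
#include <stdio.h>
#include <omp.h>

int main()
{  int n = 1000000, i;
   double sum = 0.0;
   static double x[1000000];
   for (i = 0; i < n; i++) x[i] = 1.0;
   #pragma omp parallel
   {  double mysum = 0.0;        // private partial sum for this thread
      #pragma omp for            // OpenMP splits these iterations among the threads
      for (i = 0; i < n; i++)
         mysum += x[i];
      #pragma omp critical       // only one thread at a time updates sum
      sum += mysum;
   }
   printf("sum = %f\n", sum);    // should print 1000000.000000
   return 0;
}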
1.3.2.7 Debugging OpenMP
Since there are threads underlying the OpenMP execution, you should be able to use your debugging
tool's threads facilities. Note, though, that this may not work perfectly well.

Some versions of GCC/GDB, for instance, do not display some local variables. Let's consider two
categories of such variables:
(a) Variables within a parallel block, such as me in bsort() in Section 1.3.2.6.
(b) Variables that are not in a parallel block, but which are still local to a function that contains
such a block. An example is counts in bsort().
You may find that when you try to use GDB's print command, GDB says there is no such variable.

The problem seems to arise from a combination of (i) optimization, so that a variable is placed in a
register and basically eliminated from the namespace, and (ii) some compilers implement OpenMP
by actually making special versions of the function being debugged.
In GDB, one possible workaround is to use the -gstabs+ option when compiling, instead of -g.
But here is a more general workaround. Let's consider variables of type (b) first.

The solution is to temporarily change these variables to globals, e.g.

int *counts;

void bsort(int *x, int n)
This would still be all right in terms of program correctness, because the variables in (b) are global
to the threads anyway. (Of course, make sure not to have another global of the same name!) The
switch would only be temporary, during debugging, to be switched back later so that in the end
bsort() is self-contained.
The same solution works for category (a) variables, with an added line:
int me;
#pragma omp threadprivate(me)

void bsort(int *x, int n)
What this does is make separate copies of me as global variables, one for each thread. As globals,
GCC won't engage in any shenanigans with them. :-) One does have to keep in mind that they
will retain their values upon exit from a parallel block etc., but the workaround does work.
1.3.3 Message Passing

1.3.3.1 Programmer View
Again consider the matrix-vector multiply example of Section 1.3.1. In contrast to the shared-memory
case, in the message-passing paradigm all nodes would have separate copies of A, X and
Y. Our example in Section 1.3.2.1 would now change. In order for node 2 to send this new value of
Y[3] to node 15, it would have to execute some special function, which would be something like
send(15,12,"Y[3]");
In a more refined version, X would be parceled out to the nodes, just as the rows of A are.
Each node would then multiply X by its assigned rows of A, and then send the result back to node 0.
The latter would collect those results, and store them in Y.
1.3.3.2 Example: MPI Prime Numbers Finder
Here we use the MPI system, with our hardware being a cluster.
MPI is a popular public-domain set of interface functions, callable from C/C++, to do message
passing. We are again counting primes, though in this case using a pipelining method. It is
similar to hardware pipelines, but in this case it is done in software, and each stage in the pipe
is a different computer.
The program is self-documenting, via the comments.
/* uses a pipeline approach: node 0 looks at all the odd numbers (i.e.
   has already done filtering out of multiples of 2) and filters out
   those that are multiples of 3, passing the rest to node 1; node 1
   filters out the multiples of 5, passing the rest to node 2; node 2
   then removes the multiples of 7, and so on; the last node must check
   whatever is left

   note that we should NOT have a node run through all numbers
   before passing them on to the next node, since we would then
   have no parallelism at all; on the other hand, passing on just
   one number at a time isn't efficient either, due to the high
   overhead of sending a message if it is a network (tens of
   microseconds until the first bit reaches the wire, due to
   software delay); thus efficiency would be greatly improved if
   each node saved up a chunk of numbers before passing them to
   the next node */

#include <mpi.h>   // mandatory

void Node0()
{  int I,ToCheck,Dummy,Error;
   for (I = 1; I <= N/2; I++)  {
      ToCheck = 2 * I + 1;   // latest number to check for div3
      if (ToCheck > N) break;
      if (ToCheck % 3 > 0)   // not divis by 3, so send it down the pipe
         // send the string at ToCheck, consisting of 1 MPI integer, to
         // node 1 among MPI_COMM_WORLD, with a message type PIPE_MSG
         Error = MPI_Send(&ToCheck,1,MPI_INT,1,PIPE_MSG,MPI_COMM_WORLD);
         // error not checked in this code
   }
   // sentinel
   MPI_Send(&Dummy,1,MPI_INT,1,END_MSG,MPI_COMM_WORLD);
}

void NodeBetween()
{  int ToCheck,Dummy,Divisor;
   MPI_Status Status;
   // first received item gives us our prime divisor
   // receive into Divisor 1 MPI integer from node Me-1, of any message
   // type, and put information about the message in Status
   MPI_Recv(&Divisor,1,MPI_INT,Me-1,MPI_ANY_TAG,MPI_COMM_WORLD,&Status);
   while (1)  {
      MPI_Recv(&ToCheck,1,MPI_INT,Me-1,MPI_ANY_TAG,MPI_COMM_WORLD,&Status);
      // if the message type was END_MSG, end loop
      if (Status.MPI_TAG == END_MSG) break;
      if (ToCheck % Divisor > 0)
         MPI_Send(&ToCheck,1,MPI_INT,Me+1,PIPE_MSG,MPI_COMM_WORLD);
   }
   MPI_Send(&Dummy,1,MPI_INT,Me+1,END_MSG,MPI_COMM_WORLD);
}

void NodeEnd()
{  int ToCheck,PrimeCount,I,IsComposite,StartDivisor;
   MPI_Status Status;
   MPI_Recv(&StartDivisor,1,MPI_INT,Me-1,MPI_ANY_TAG,MPI_COMM_WORLD,&Status);
   PrimeCount = Me + 2;   /* must account for the previous primes, which
The set of machines can be heterogeneous, but MPI translates for you automatically. If say
one node has a big-endian CPU and another has a little-endian CPU, MPI will do the proper
conversion.
1.3.4 Scatter/Gather
Technically, the scatter/gather programmer world view is a special case of message passing.
However, it has become so pervasive as to merit its own section here.
In this paradigm, one node, say node 0, serves as a manager, while the others serve as workers.
The manager parcels out work to the workers, who process their respective chunks of the data and return the
results to the manager. The latter receives the results and combines them into the final product.
The matrix-vector multiply example in Section 1.3.3.1 is an example of scatter/gather.
As noted, scatter/gather is very popular. Here are some examples of packages that use it:
MPI includes scatter and gather functions (Section 7.4).
Hadoop/MapReduce Computing (Chapter 9) is basically a scatter/gather operation.
The snow package (Section 1.3.4.1) for the R language also uses the scatter/gather paradigm.
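To give the flavor of the MPI approach, here is a rough sketch of the matrix-vector multiply done with MPI_Scatter() and MPI_Gather(). This is my own illustration, not code from a later chapter, and for simplicity it assumes that the number of rows N is an exact multiple of the number of MPI processes:

// scatter/gather sketch for Y = A X; compile with mpicc, run with mpiexec
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

#define N 8   // matrix is N x N; assumed divisible by the number of processes

int main(int argc, char **argv)
{  int me, nprocs, i, j;
   static double a[N][N], x[N], y[N];
   MPI_Init(&argc, &argv);
   MPI_Comm_rank(MPI_COMM_WORLD, &me);
   MPI_Comm_size(MPI_COMM_WORLD, &nprocs);
   int myrows = N / nprocs;                              // rows handled by each process
   double *mya = malloc(myrows * N * sizeof(double));    // my chunk of A
   double *myy = malloc(myrows * sizeof(double));        // my chunk of Y
   if (me == 0)                                          // manager fills in A and X
      for (i = 0; i < N; i++) {
         x[i] = 1.0;
         for (j = 0; j < N; j++) a[i][j] = i + j;
      }
   // manager scatters the rows of A to the workers, and broadcasts all of X
   MPI_Scatter(a, myrows*N, MPI_DOUBLE, mya, myrows*N, MPI_DOUBLE, 0, MPI_COMM_WORLD);
   MPI_Bcast(x, N, MPI_DOUBLE, 0, MPI_COMM_WORLD);
   for (i = 0; i < myrows; i++) {                        // my share of the multiply
      myy[i] = 0;
      for (j = 0; j < N; j++) myy[i] += mya[i*N+j] * x[j];
   }
   // manager gathers the chunks of Y back together
   MPI_Gather(myy, myrows, MPI_DOUBLE, y, myrows, MPI_DOUBLE, 0, MPI_COMM_WORLD);
   if (me == 0) for (i = 0; i < N; i++) printf("%f\n", y[i]);
   MPI_Finalize();
   return 0;
}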
1.3.4.1 R snow Package
The base R language does not include parallel processing facilities, but R ships with the parallel
library for this purpose, and a number of other parallel libraries are available as well. The parallel
package arose from the merger (and slight modification) of two former user-contributed libraries, snow and
multicore. The former (and essentially the latter) uses the scatter/gather paradigm, and so will
be introduced in this section. For convenience, I'll refer to the portion of parallel that came from
snow simply as snow.

Let's use matrix-vector multiply as an example to learn from:
[3,]   44   23
[4,]    7   29
[5,]    6   24
[6,]   28   29
[7,]   21    1
[8,]   38   30
> b <- c(5,-2)
> b
[1]  5 -2
> a %*% b   # serial multiply
     [,1]
[1,]   88
[2,]   -6
[3,]  174
[4,]  -23
[5,]  -18
[6,]   82
[7,]  103
[8,]  130
> clusterExport(c2,c("a","b"))   # send a, b to workers
> clusterEvalQ(c2,a)   # check that they have it
[[1]]
     [,1] [,2]
[1,]   34   41
[2,]   10   28
[3,]   44   23
[4,]    7   29
[5,]    6   24
[6,]   28   29
[7,]   21    1
[8,]   38   30

[[2]]
     [,1] [,2]
[1,]   34   41
[2,]   10   28
[3,]   44   23
[4,]    7   29
[5,]    6   24
[6,]   28   29
[7,]   21    1
[8,]   38   30

> mmul(c2,a,b)   # test our parallel code
[1]  88  -6 174 -23 -18  82 103 130
Note that this function assumes that a and b are global variables at the invoking node, i.e. the
manager, and it will place copies of them in the global workspace of the worker nodes.

Note that the copies are independent of the originals; if a worker changes, say, b[3], that change
won't be made at the manager or at the other worker. This is a message-passing system, indeed.

So, how does the mmul code work? Here's a handy copy:
mmul <- function(cls,u,v) {
   rowgrps <- splitIndices(nrow(u),length(cls))
   grpmul <- function(grp) u[grp,] %*% v
   mout <- clusterApply(cls,rowgrps,grpmul)
   Reduce(c,mout)
}
As discussed in Section 1.3.1, our strategy will be to partition the rows of the matrix, and then
have different workers handle different groups of rows. Our call to splitIndices() sets this up for
us.
That function does what its name implies, e.g.
> splitIndices(12,5)
[[1]]
[1] 1 2 3

[[2]]
[1] 4 5

[[3]]
[1] 6 7

[[4]]
[1] 8 9

[[5]]
[1] 10 11 12
Here we asked the function to partition the numbers 1,...,12 into 5 groups, as equal-sized as possible,
which you can see is what it did. Note that the type of the return value is an R list.
So, after executing that function in our mmul() code, rowgrps will be an R list consisting of a
partitioning of the row numbers of u, exactly what we need.
The call to clusterApply() is then where the actual work is assigned to the workers. The code
mout <- clusterApply(cls,rowgrps,grpmul)
instructs snow to have the first worker process the rows in rowgrps[[1]], the second worker to
work on rowgrps[[2]], and so on. The clusterApply() function expects its second argument to
be an R list, which is the case here.
Each worker will then multiply v by its row group, and return the product to the manager. However,
the product will again be a list, one component for each worker, so we need Reduce() to string
everything back together.
Note that R does allow functions defined within functions, with the locals and arguments of the
outer function being visible to the inner function.
Note that a here could have been huge, in which case the export action could slow down our
program. If a were not needed at the workers other than for this one-time matrix multiply, we may
wish to change the code so that we send each worker only the rows of a that it needs:
mmul1 <- function(cls,u,v) {
   rowgrps <- splitIndices(nrow(u),length(cls))
   uchunks <- Map(function(grp) u[grp,],rowgrps)
   mulchunk <- function(uc) uc %*% v
   mout <- clusterApply(cls,uchunks,mulchunk)
   Reduce(c,mout)
}
[6,]    2   30
[7,]   33   23
[8,]   44    5
> a %*% b
     [,1]
[1,]    2
[2,]   63
[3,]  185
[4,]  113
[5,]   32
[6,]  -50
[7,]  119
[8,]  210
> mmul1(c2,a,b)
[1]   2  63 185 113  32 -50 119 210
Note that we did not need to use clusterExport() to send the chunks of a to the workers, as the
call to clusterApply() does this, since it sends its arguments to the workers.
Chapter 2

Recurring Performance Issues

2.1 Communication Bottlenecks
2.2 Load Balancing
Arguably the most central performance issue is load balancing, i.e. keeping all the processors
busy as much as possible. This issue arises constantly in any discussion of parallel processing.
A nice, easily understandable example is shown in Chapter 7 of the book, Multicore Application
Programming: for Windows, Linux and Oracle Solaris, Darryl Gove, 2011, Addison-Wesley. There
the author shows code to compute the Mandelbrot set, defined as follows.
Start with any number c in the complex plane, and initialize z to 0. Then keep applying the
transformation
z ← z² + c    (2.1)
If the resulting sequence remains bounded (say after a certain number of iterations), we say that c
belongs to the Mandelbrot set.
Gove has a rectangular grid of points in the plane, and wants to determine whether each point is
in the set or not; a simple but time-consuming computation is used for this determination.1
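That per-point computation looks roughly like the following; this is my own sketch, with an arbitrary iteration limit, using C's complex type:

#include <complex.h>

#define MAXITERS 1000   // arbitrary cutoff; my choice

// returns 1 if c appears to be in the Mandelbrot set, 0 if the iteration
// z <- z^2 + c escapes (|z| > 2) within MAXITERS steps
int inset(double complex c)
{  double complex z = 0;
   for (int i = 0; i < MAXITERS; i++) {
      z = z*z + c;
      if (cabs(z) > 2.0) return 0;   // escaped, so not in the set
   }
   return 1;
}

Points outside the set typically escape after only a few iterations, while points inside the set always cost the full MAXITERS iterations; that asymmetry is what makes the load imbalance described below possible.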
Gove sets up two threads, one handling all the points in the left half of the grid and the other
handling the right half. He finds that the latter thread is very often idle, while the former thread
is usually busy: extremely poor load balance. We'll return to this issue in Section 2.4.
2.3 "Embarrassingly Parallel" Applications
The term embarrassingly parallel is heard often in talk about parallel programming.
2.3.1 What People Mean by "Embarrassingly Parallel"
Consider a matrix multiplication application, for instance, in which we compute AX for a matrix
A and a vector X. One way to parallelize this problem would be to have each processor handle a
group of rows of A, multiplying each by X in parallel with the other processors, which are handling
other groups of rows. We call the problem embarrassingly parallel, with the word embarrassing
meaning that the problem is too easy, i.e. there is no intellectual challenge involved. It is pretty
obvious that the computation Y = AX can be parallelized very easily by splitting the rows of A
into groups.
By contrast, most parallel sorting algorithms require a great deal of interaction. For instance,
consider Mergesort. It breaks the vector to be sorted into two (or more) independent parts, say
the left half and right half, which are then sorted in parallel by two processes. So far, this is
embarrassingly parallel, at least after the vector is broken in half. But then the two sorted halves
must be merged to produce the sorted version of the original vector, and that process is not
embarrassingly parallel; it can be parallelized, but in a more complex, less obvious manner.
Of course, it's no shame to have an embarrassingly parallel problem! On the contrary, except for
showoff academics, having an embarrassingly parallel application is a cause for celebration, as it is
easy to program.
In recent years, the term embarrassingly parallel has drifted to a somewhat different meaning.
Algorithms that are embarrassingly parallel in the above sense of simplicity tend to have very low
communication between processes, key to good performance. That latter trait is the center of
attention nowadays, so the term embarrassingly parallel generally refers to an algorithm with
low communication needs.
For that reason, many people would NOT consider even our prime finder example in Section
1.3.2.2 to be embarrassingly parallel. Yes, it was embarrassingly easy to write, but it has high
communication costs, as both its locks and its global array are accessed quite often.
On the other hand, the Mandelbrot computation described in Section 2.2 is truly embarrassingly
parallel, in both the old and new sense of the term. There the author Gove just assigned the
points on the left to one thread and the rest to the other thread (very simple), and there was no
communication between them.
2.3.2 Iterative Algorithms
Many parallel algorithms involve iteration, with a rendezvous of the tasks after each iteration.
Within each iteration, the nodes act entirely independently of each other, which makes the problem
seem embarrassingly parallel.
But unless the granularity of the problem is coarse, i.e. there is a large amount of work to do
in each iteration, the communication overhead will be significant, and the algorithm may not be
considered embarrassingly parallel.
2.4 Static (But Possibly Random) Task Assignment Typically Better Than Dynamic
Say an algorithm generates t independent2 tasks and we have p processors to handle them. In our
matrix-times-vector example of Section 1.3.1, say, each row of the matrix might be considered one
task. A processor's work would then be to multiply the vector by this processor's assigned rows of
the matrix.
How do we decide which tasks should be done by which processors? In static assignment, our code
would decide at the outset which processors will handle which tasks. The alternative, dynamic
assignment, would have processors determine their tasks as the computation proceeds.
In the matrix-times-vector example, say we have 10000 rows and 10 processors. In static task
assignment, we could pre-assign processor 0 rows 0-999, processor 1 rows 1000-1999 and so on. On
the other hand, we could set up a task farm, a queue consisting here of the numbers 0-9999. Each
time a processor finished handling one row, it would remove the number at the head of the queue,
and then process the row with that index.
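Such a dynamic task farm can be sketched, say, in OpenMP, with an atomic fetch-and-increment playing the role of removing the head of the queue; the names here are mine, for illustration only:

#include <omp.h>

#define NROWS 10000
int nextrow = 0;                       // shared head of the task queue

void taskfarm(void)
{
   #pragma omp parallel
   {
      while (1) {
         int myrow;
         #pragma omp atomic capture   // atomically grab and advance the queue head
         myrow = nextrow++;
         if (myrow >= NROWS) break;   // no tasks left
         // ... process row myrow here, e.g. multiply it by the vector ...
      }
   }
}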
It would at first seem that dynamic assignment is more efficient, as it is more flexible. However,
accessing the task farm, for instance, entails communication costs, which might be very heavy. In
this section, we will show that it's typically better to use the static approach, though possibly
randomized.3
2.4.1 Example: Matrix-Vector Multiply
Consider again the problem of multiplying a vector X by a large matrix A, yielding a vector Y. Say
A has 10000 rows and we have 10 threads. Let's look a little closer at the static/dynamic tradeoff
outlined above. For concreteness, assume the shared-memory setting.
There are several possibilities here:
Method A: We could simply divide the 10000 rows into chunks of 10000/10 = 1000, and
parcel them out to the threads. We would pre-assign thread 0 to work on rows 0-999 of A,
thread 1 to work on rows 1000-1999 and so on.
This is essentially OpenMP's static scheduling policy, with default chunk size.4
There would be no communication between the threads this way, but there could be a problem
of load imbalance. Say for instance that by chance thread 3 finishes well before the others.
Then it will be idle, as all the work had been pre-allocated.
Method B: We could take the same approach as in Method A, but with a chunk size of, say,
100 instead of 1000. This is OpenMPs static policy again, but with a chunk size of 100.
If we didn't use OpenMP (which would internally do the following anyway, in essence), we
would have a shared variable named, say, nextchunk similar to nextbase in our prime-finding
program in Section 1.3.2.2. Each time a thread would finish a chunk, it would obtain a new
chunk to work on, by recording the value of nextchunk and incrementing that variable by 1
(all atomically, of course).
This approach would have better load balance, because the first thread to find there is no
work left to do would be idle for at most 100 rows' worth of computation time, rather than
1000 as above. Meanwhile, though, communication would increase, as access to the locks
around nextchunk would often make one thread wait for another.5
Method C: We could go fully dynamic, with each thread grabbing a new chunk (possibly just
one row) whenever it finishes its current work; this is OpenMP's dynamic scheduling policy.

So, Method A above minimizes communication at the possible expense of load balance, while
Method B does the opposite.

OpenMP also offers the guided policy, which is like dynamic except the chunk size decreases over
time.
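In OpenMP terms (details in Chapter 4), these choices amount to different schedule() clauses. Here is a sketch, with dorow() a hypothetical stand-in for the per-row work:

#include <omp.h>

extern void dorow(int i);   // hypothetical: process one row of A

void multall(int nrows)
{  // Method A: schedule(static); Method B: schedule(static,100);
   // Method C: schedule(dynamic,100); or schedule(guided)
   #pragma omp parallel for schedule(static)
   for (int i = 0; i < nrows; i++)
      dorow(i);
}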
I will now show that in typical settings, Method A above (or a slight modification) works well.

To this end, consider a chunk consisting of m tasks, such as m rows in our matrix example above,
with times T1, T2, ..., Tm. The total time needed to process the chunk is then T1 + ... + Tm.
The Ti can be considered random variables; some tasks take a long time to perform, some take
a short time, and so on. As an idealized model, let's treat them as independent and identically
distributed random variables. Under that assumption (if you don't have the probability background,
follow as best you can), we have that the mean (expected value) and variance of total task time
are
E(T1 + ... + Tm) = m E(T1)

and

Var(T1 + ... + Tm) = m Var(T1)

Thus the ratio of the standard deviation of the total chunk time to its mean is

sqrt(m Var(T1)) / (m E(T1)) = [sqrt(Var(T1)) / E(T1)] / sqrt(m)

which goes to 0 as the chunk size m grows.
5. Why are we calling it "communication" here? Recall that in shared-memory programming, the threads communicate
through shared variables. When one thread increments nextchunk, it communicates that new value to the
other threads by placing it in shared memory where they will see it, and as noted earlier contention among threads
for shared memory is a major source of potential slowdown.
In other words:
run time for a chunk is essentially constant if m is large, and
there is essentially no load imbalance in Method A
Since load imbalance was the only drawback to Method A and we now see it's not a problem after
all, then Method A is best.
For more details and timing examples, see N. Matloff, Efficient Parallel R Loops on Long-Latency
Platforms, Proceedings of the 42nd Interface between Statistics and Computer Science, Rice University, June 2012.6
2.4.2 Load Balance, Revisited
But what about the assumptions behind that reasoning? Consider for example the Mandelbrot
problem in Section 2.2. There were two threads, thus two chunks, with the tasks for a given chunk
being computations for all the points in the chunks assigned region of the picture.
Gove noted there was fairly strong load imbalance here, and that the reason was that most of the
Mandelbrot points turned out to be in the left half of the picture! The computation for a given
point is iterative, and if a point is not in the set, it tends to take only a few iterations to discover
this. That's why the thread handling the right half of the picture was idle so often.
So Method A would not work well here, and upon reflection one can see that the problem was that
the tasks within a chunk were not independent, but were instead highly correlated, thus violating
our mathematical assumptions above. Of course, before doing the computation, Gove didnt know
that it would turn out that most of the set would be in the left half of the picture. But, one could
certainly anticipate the correlated nature of the points; if one point is not in the Mandelbrot set,
its near neighbors are probably not in it either.
But Method A can still be made to work well, via a simple modification: Simply form the chunks randomly. In the matrix-multiply example above, with 10000 rows and chunk size 1000, do NOT assign the chunks contiguously. Instead, generate a random permutation of the numbers 0,1,...,9999, naming them i0, i1, ..., i9999. Then assign thread 0 the rows i0, ..., i999, thread 1 the rows i1000, ..., i1999, etc.
6
As noted in the Preface to this book, I occasionally refer here to my research, to illustrate for students the
beneficial interaction between teaching and research.
In the Mandelbrot example, we could randomly assign rows of the picture, in the same way, and
avoid load imbalance.
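Here is one possible sketch of that random pre-assignment, in plain C; the constants and the array perm are my own illustrative choices, and in real code each thread t would of course do its block of rows in parallel.

#include <stdio.h>
#include <stdlib.h>

#define N     10000   // number of rows
#define NTH      10   // number of threads
#define CHUNK  1000   // rows per thread

int perm[N];          // random permutation of 0..N-1

// generate a random permutation via the standard Fisher-Yates shuffle
void makeperm()
{
   int i, j, tmp;
   for (i = 0; i < N; i++) perm[i] = i;
   for (i = N-1; i > 0; i--) {
      j = rand() % (i+1);
      tmp = perm[i]; perm[i] = perm[j]; perm[j] = tmp;
   }
}

int main()
{
   int t, k;
   makeperm();
   // thread t is assigned rows perm[t*CHUNK], ..., perm[(t+1)*CHUNK - 1]
   for (t = 0; t < NTH; t++)
      for (k = t*CHUNK; k < (t+1)*CHUNK; k++)
         ;   // here thread t would process row perm[k]
   printf("first row for thread 0: %d\n", perm[0]);
   return 0;
}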
So, actually, Method A, or let's call it Method A', will still typically work well.
2.4.3
sum = 0
for i = 0...n-2
   for j = i+1...n-1
      count = 0
      for k = 0...n-1 count += a[i][k] * a[j][k]
      sum += count
mean = sum / (n*(n-1)/2)
Say again n = 10000 and we have 10 threads. We should not simply assign work to the threads by dividing up the i loop, with thread 0 taking the cases i = 0,...,999, thread 1 the cases i = 1000,...,1999 and so on. This would give us a real load balance problem: the amount of work for a given i is proportional to n-1-i, so thread 8 would have much less work to do than thread 3, say.
We could randomize as discussed earlier, but there is a much better solution: Just pair the rows of A. Thread 0 would handle rows 0,...,499 and 9500,...,9999, thread 1 would handle rows 500,...,999 and 9000,...,9499, etc. This approach is taken in our OpenMP implementation, Section 4.12.
In other words, Method A still works well.
In the mutual outlinks problem, we have a good idea beforehand as to how much time each task
needs, but this may not be true in general. An alternative would be to do random pre-assignment
of tasks to processors.
On the other hand, if we know beforehand that all of the tasks should take about the same time,
we should use static scheduling, as it might yield better cache and virtual memory performance.
2.4.4 Work Stealing
There is another variation of Method A that is of interest today, called work stealing. Here a thread that finishes its assigned work, and thus has no work left to do, will raid the work queue of some other thread. This is the approach taken, for example, by the elegant Cilk language. Needless to say, accessing the other work queue is going to be expensive in terms of time and memory contention overhead.
2.4.5 Timing Example
I ran the Mandelbrot example on a shared memory machine with four cores, two threads per core,
with the following results for eight threads, on an 8000x8000 grid:
policy     time
static     47.8
dynamic    21.4
guided     29.6
random     15.7
Default values were used for chunk size in the first three cases. I did try other chunk sizes for the dynamic policy, but it didn't make much difference. See Section 4.4 for the code.
Needless to say, one shouldn't overly extrapolate from the above timings, but they do illustrate the issues.
2.5 Latency and Bandwidth
We've been speaking of communications delays so far as being monolithic, but they are actually (at least) two-dimensional. The key measures are latency and bandwidth:

• Latency is the time it takes for one bit to travel from source to destination, e.g. from a CPU to memory in a shared memory system, or from one computer to another in a cluster.

• Bandwidth is the number of bits per unit time that can be input into the communications channel. This can be affected by factors such as bus width in a shared memory system and number of parallel network paths in a message passing system, and also by the speed of the links.
It's helpful to think of a bridge, with toll booths at its entrance. Latency is the time needed for one car to get from one end of the bridge to the other. Bandwidth is the number of cars that can enter the bridge per unit time.
2.6
My own preference is shared-memory, but there are pros and cons to each paradigm.
It is generally believed in the parallel processing community that the shared-memory paradigm
produces code that is easier to write, debug and maintain than message-passing. See for instance
R. Chandra, Parallel Programming in OpenMP, MKP, 2001, pp.10ff (especially Table 1.1), and
M. Hess et al, Experiences Using OpenMP Based on Compiler Directive Software DSM on a PC
Cluster, in OpenMP Shared Memory Parallel Programming: International Workshop on OpenMP
Applications and Tools, Michael Voss (ed.), Springer, 2003, p.216.
On the other hand, in some cases message-passing can produce faster code. Consider the Odd/Even
Transposition Sort algorithm, for instance. Here pairs of processes repeatedly swap sorted arrays
with each other. In a shared-memory setting, this might produce a bottleneck at the shared memory,
slowing down the code. Of course, the obvious solution is that if you are using a shared-memory
machine, you should just choose some other sorting algorithm, one tailored to the shared-memory
setting.
There used to be a belief that message-passing was more scalable, i.e. amenable to very large systems. However, GPU computing has demonstrated that one can achieve extremely good scalability with shared-memory.
As will be seen, though, GPU is hardly a panacea. Where, then, are people to get access to large-scale parallel systems? Most people do not (currently) have access to large-scale multicore machines, while most do have access to large-scale message-passing machines, say in cloud computing venues. Thus message-passing plays a role even for those of us who prefer the shared-memory paradigm.
Also, hybrid systems are common, in which a number of shared-memory systems are tied together by a message-passing network.
2.7
Many algorithms require large amounts of memory for intermediate storage of data. It may be prohibitive to allocate this memory statically, i.e. at compile time. Yet dynamic allocation, say via malloc() or C++'s new (which probably produces a call to malloc() anyway), is very expensive in time.
Using large amounts of memory also can be a major source of overhead due to cache misses and page faults.
One way to avoid malloc(), of course, is to set up static arrays whenever possible.
There are no magic solutions here. One must simply be aware of the problem, and tweak one's code accordingly, say by adjusting calls to malloc() so that one achieves a balance between allocating too much memory and making too many calls.
2.8
This topic is covered in detail in Chapter 3, but is so important that the main points should be mentioned here.

• Memory is typically divided into banks. If more than one thread attempts to access the same bank at the same time, that effectively serializes the program.

• There is typically a cache at each processor. Keeping the contents of these caches consistent with each other, and with the memory itself, adds a lot of overhead, causing slowdown.

In both cases, awareness of these issues should impact how you write your code.
See Sections 3.2 and 3.5.
Chapter 3
3.1 What Is Shared?
The term shared memory means that the processors all share a common address space. Say this is occurring at the hardware level, and we are using Intel Pentium CPUs. Suppose processor P3 issues the instruction

movl 200, %ebx

which reads memory location 200 and places the result in the EBX register in the CPU. If processor P4 does the same, they both will be referring to the same physical memory cell. (Note, however, that each CPU has a separate register set, so each will have its own independent EBX.) In non-shared-memory machines, each processor has its own private memory, and each one will then have its own location 200, completely independent of the locations 200 at the other processors' memories.
Say a program contains a global variable X and a local variable Y on shared-memory hardware (and we use shared-memory software). If for example the compiler assigns location 200 to the variable X, i.e. &X = 200, then the point is that all of the processors will have that variable in common, because any processor which issues a memory operation on location 200 will access the same physical memory cell.
On the other hand, each processor will have its own separate run-time stack. All of the stacks are
in shared memory, but they will be accessed separately, since each CPU has a different value in its
SP (Stack Pointer) register. Thus each processor will have its own independent copy of the local
variable Y.
To make the meaning of shared memory more concrete, suppose we have a bus-based system,
with all the processors and memory attached to the bus. Let us compare the above variables X and
Y here. Suppose again that the compiler assigns X to memory location 200. Then in the machine
language code for the program, every reference to X will be there as 200. Every time an instruction
that writes to X is executed by a CPU, that CPU will put 200 into its Memory Address Register
(MAR), from which the 200 flows out on the address lines in the bus, and goes to memory. This
will happen in the same way no matter which CPU it is. Thus the same physical memory location
will end up being accessed, no matter which CPU generated the reference.
By contrast, say the compiler assigns a local variable Y to something like ESP+8, the third item
on the stack (on a 32-bit machine), 8 bytes past the word pointed to by the stack pointer, ESP.
The OS will assign a different ESP value to each thread, so the stacks of the various threads will
be separate. Each CPU has its own ESP register, containing the location of the stack for whatever
thread that CPU is currently running. So, the value of Y will be different for each thread.
3.2 Memory Modules

3.2.1 Interleaving
There is a question of how to divide up the memory into banks. There are two main ways to do
this:
(a) High-order interleaving: Here consecutive words are in the same bank (except at boundaries). For example, suppose for simplicity that our memory consists of word-addresses 0 through 1023, and that there are four banks, M0 through M3. Then M0 would contain word-addresses 0-255, M1 would have 256-511, M2 would have 512-767, and M3 would have 768-1023.

(b) Low-order interleaving: Here consecutive addresses are in consecutive banks (except when we get to the right end). In the example above, if we used low-order interleaving, then word-address 0 would be in M0, 1 would be in M1, 2 would be in M2, 3 would be in M3, 4 would be back in M0, 5 in M1, and so on.
Say we have eight banks. Then under high-order interleaving, the first three bits of a word-address
would be taken to be the bank number, with the remaining bits being address within bank. Under
low-order interleaving, the three least significant bits would be used to determine bank number.
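As a quick illustration, here is one way the bank number could be computed from a word-address under each scheme, for the eight-bank case; the helper names are mine, not from the text.

#include <stdio.h>

#define NBANKS    8    // number of banks, a power of 2 here
#define ADDRBITS 10    // word-addresses 0..1023, as in the example above

// low-order interleaving: bank number is the 3 least significant bits
int bank_low(unsigned addr)  { return addr & (NBANKS - 1); }

// high-order interleaving: bank number is the 3 most significant bits
int bank_high(unsigned addr) { return addr >> (ADDRBITS - 3); }

int main()
{
   unsigned a = 513;
   printf("word-address %u: low-order bank %d, high-order bank %d\n",
      a, bank_low(a), bank_high(a));
   return 0;
}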
Low-order interleaving has often been used for vector processors. On such a machine, we might
have both a regular add instruction, ADD, and a vector version, VADD. The latter would add two
vectors together, so it would need to read two vectors from memory. If low-order interleaving is
used, the elements of these vectors are spread across the various banks, so fast access is possible.
A more modern use of low-order interleaving, but with the same motivation as with the vector
processors, is in GPUs (Chapter 5).
High-order interleaving might work well in matrix applications, for instance, where we can partition the matrix into blocks, and have different processors work on different blocks. In image processing applications, we can have different processors work on different parts of the image. Such partitioning almost never works perfectly (e.g. computation for one part of an image may need information from another part), but if we are careful we can get good results.
3.2.2
Consider an array x of 16 million elements, whose sum we wish to compute, say using 16 threads.
Suppose we have four memory banks, with low-order interleaving.
A naive implementation of the summing code might be
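The following is a rough OpenMP-style sketch of such code; the macro NTH, the per-thread variable mysum, and the trivial main() are illustrative names and scaffolding I am assuming, not identifiers used elsewhere in this book.

#include <omp.h>

#define N   16000000
#define NTH 16

float x[N];
float grandsum = 0;

int main()
{
   #pragma omp parallel num_threads(NTH)
   {
      int me = omp_get_thread_num();
      int perthread = N / NTH;      // 1 million elements per thread
      float mysum = 0;
      // naive version: thread me sums a contiguous block of 1 million elements
      for (int i = me*perthread; i < (me+1)*perthread; i++)
         mysum += x[i];
      #pragma omp critical
      grandsum += mysum;            // add this thread's sum to the grand total
   }
   return 0;
}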
In other words, thread 0 would sum the first million elements, thread 1 would sum the second
million, and so on. After summing its portion of the array, a thread would then add its sum to a
grand total. (The threads could of course add to grandsum directly in each iteration of the loop,
but this would cause too much traffic to memory, thus causing slowdowns.)
Suppose for simplicity that there is one address per word (it is usually one address per byte).
Suppose also for simplicity that the threads run in lockstep, so that they all attempt to access
memory at once. On a multicore/multiprocessor machine, this may not occur, but it in fact
typically will occur in a GPU setting.
A problem then arises. To make matters simple, suppose that x starts at an address that is a multiple of 4, thus in bank 0. (The reader should think about how to adjust this for the other three cases.) On the very first memory access, thread 0 accesses x[0] in bank 0, thread 1 accesses x[1000000], also in bank 0, and so on, and these accesses will all be to memory bank 0! Thus there will be major conflicts, hence major slowdown.
A better approach might be to have any given thread work on every sixteenth element of x, instead of on contiguous elements. Thread 0 would work on x[0], x[16], x[32], ...; thread 1 would handle x[1], x[17], x[33], ...; and so on:
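A self-contained OpenMP-style sketch of this strided pattern, with the same illustrative names as before:

#include <omp.h>

#define N   16000000
#define NTH 16

float x[N];
float grandsum = 0;

int main()
{
   #pragma omp parallel num_threads(NTH)
   {
      int me = omp_get_thread_num();
      float mysum = 0;
      // strided version: thread me handles x[me], x[me+16], x[me+32], ...,
      // so at any instant the 16 threads touch 16 consecutive elements of x,
      // which fall into different banks under low-order interleaving
      for (int i = me; i < N; i += NTH)
         mysum += x[i];
      #pragma omp critical
      grandsum += mysum;
   }
   return 0;
}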
Here, consecutive threads work on consecutive elements in x. That puts them in separate banks, thus no conflicts, hence speedy performance.
In general, avoiding bank conflicts is an art, but there are a couple of approaches we can try.

• We can rewrite our algorithm, e.g. use the second version of the above code instead of the first.

• We can add padding to the array. For instance, in the first version of our code above, we could lengthen the array from 16 million to 16000016, placing padding in words 1000000, 2000001 and so on. We'd tweak our array indices in our code accordingly, and eliminate bank conflicts that way.
In the first approach above, the concept of stride often arises. It is defined to be the distance between array elements in consecutive accesses by a thread. In our original code to compute grandsum, the stride was 1, since each array element accessed by a thread is 1 past the last access by that thread. In our second version, the stride was 16.
Strides of greater than 1 often arise in code that deals with multidimensional arrays. Say for example we have a two-dimensional array with 16 columns. In C/C++, which uses row-major order, access of an entire column will have a stride of 16. Access down the main diagonal will have a stride of 17.
Suppose we have b banks, again with low-order interleaving. You should experiment a bit to see that an array access with a stride of s will access all b banks if and only if s and b are relatively prime, i.e. the greatest common divisor of s and b is 1. This can be proven with group theory.
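A quick numerical check of this claim (gcd() and banks_touched() are small helpers I am adding, not library functions):

#include <stdio.h>

int gcd(int a, int b) { return b == 0 ? a : gcd(b, a % b); }

// number of distinct banks touched by a stride-s access pattern over b banks
int banks_touched(int s, int b) { return b / gcd(s, b); }

int main()
{
   printf("%d\n", banks_touched(16, 4));   // stride 16, 4 banks: only 1 bank, bad
   printf("%d\n", banks_touched(17, 4));   // stride 17, 4 banks: all 4 banks, good
   return 0;
}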
Another strategy, useful for collections of complex objects, is to set up structs of arrays rather
than arrays of structs. Say for instance we are working with data on workers, storing for each
worker his name, salary and number of years with the firm. We might naturally write code like
this:
struct {
   char name[25];
   float salary;
   float yrs;
} x[100];
That gives 100 structs for 100 workers. Again, this is very natural, but it may make for poor memory access patterns. Salary values for the various workers will no longer be contiguous, for instance, even though the structs are contiguous. This could cause excessive cache misses.
One solution would be to add padding to each struct, so that the salary values are a word apart in memory. But another approach would be to replace the above array of structs by a struct of arrays:
struct {
   char name[100][25];
   float salary[100];
   float yrs[100];
} x;
3.2.3
As discussed above, array padding is used to try to get better parallel access to memory banks. The code below aims to provide utilities to assist in this. Details are explained in the comments.
#include <stdlib.h>

// routines to initialize, read and write
// padded versions of a matrix of floats;
// the matrix is nominally mxn, but its
// rows will be padded on the right ends,
// so as to enable a stride of s down each
// column; it is assumed that s >= n

// allocate space for the padded matrix,
// initially empty
float *padmalloc(int m, int n, int s) {
   return(malloc(m*s*sizeof(float)));
}

// store the value tostore in the matrix q,
// at row i, column j; m, n and
// s are as in padmalloc() above
void setter(float *q, int m, int n, int s,
      int i, int j, float tostore) {
   *(q + i*s+j) = tostore;
}

// fetch the value in the matrix q,
// at row i, column j; m, n and s are
// as in padmalloc() above
float getter(float *q, int m, int n, int s,
      int i, int j) {
   return *(q + i*s+j);
}
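As a usage sketch, assuming the routines above are in the same source file, one might write something like the following; the sizes here are arbitrary.

#include <stdio.h>

int main()
{
   int m = 4, n = 6, s = 7;                 // 4x6 matrix, padded to stride 7
   float *q = padmalloc(m, n, s);           // rows are 7 floats apart in memory
   setter(q, m, n, s, 2, 3, 8.5);           // store 8.5 at row 2, column 3 (offset 2*7+3)
   printf("%f\n", getter(q, m, n, s, 2, 3));
   return 0;
}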
3.3 Interconnection Topologies

3.3.1 SMP Systems
3.3.2 NUMA Systems
In a Nonuniform Memory Access (NUMA) architecture, each CPU has a memory module
physically next to it, and these processor/memory (P/M) pairs are connected by some kind of
network.
Here is a simple version:
Each P/M/R set here is called a processing element (PE). Note that each PE has its own local
bus, and is also connected to the global bus via R, the router.
Suppose for example that P3 needs to access location 200, and suppose that high-order interleaving is used. If location 200 is in M3, then P3's request is satisfied by the local bus.2 On the other hand, suppose location 200 is in M8. Then R3 will notice this, and put the request on the global bus, where it will be seen by R8, which will then copy the request to the local bus at PE8, where the request will be satisfied. (E.g. if it was a read request, then the response will go back from M8 to R8 to the global bus to R3 to P3.)
It should be obvious now where NUMA gets its name. P8 will have much faster access to M8 than P3 will to M8, if none of the buses is currently in use; and if, say, the global bus is currently in use, P3 will have to wait a long time to get what it wants from M8.
Today almost all high-end MIMD systems are NUMAs. One of the attractive features of NUMA is that by good programming we can exploit the nonuniformity. In matrix problems, for example, we can write our program so that P8 usually works on those rows of the matrix which are stored in M8, P3 usually works on those rows which are stored in M3, etc. In order to do this, we need to make use of the C language's & (address-of) operator, and have some knowledge of how memory is assigned to the various modules.
2
This sounds similar to the concept of a cache. However, it is very different. A cache contains a local copy of
some data stored elsewhere. Here it is the data itself, not a copy, which is being stored locally.
3.3.3
The problem with a bus connection, of course, is that there is only one pathway for communication, and thus only one processor can access memory at the same time. If more than, say, two dozen processors are on the bus, the bus becomes saturated, even if traffic-reducing methods such as adding caches are used. Thus multipathway topologies are used for all but the smallest systems. In this section we look at two alternatives to a bus topology.
3.3.3.1 Crossbar Interconnects
Consider a shared-memory system with n processors and n memory modules. Then a crossbar connection would provide n² pathways. E.g. for n = 8:
Generally serial communication is used from node to node, with a packet containing information on
both source and destination address. E.g. if P2 wants to read from M5, the source and destination
will be 3-bit strings in the packet, coded as 010 and 101, respectively. The packet will also contain
bits which specify which word within the module we wish to access, and bits which specify whether
we wish to do a read or a write. In the latter case, additional bits are used to specify the value to
be written.
Each diamond-shaped node has two inputs (bottom and right) and two outputs (left and top), with
buffers at the two inputs. If a buffer fills, there are two design options: (a) Have the node from
which the input comes block at that output. (b) Have the node from which the input comes discard
the packet, and retry later, possibly outputting some other packet for now. If the packets at the
heads of the two buffers both need to go out the same output, the one (say) from the bottom input
will be given priority.
There could also be a return network of the same type, with this one going from memory to processor, to return the result of the read requests.3
Another version of this is also possible. It is not shown here, but the difference would be that at
the bottom edge we would have the PEi and at the left edge the memory modules Mi would be
replaced by lines which wrap back around to PEi, similar to the Omega network shown below.
Crossbar switches are too expensive for large-scale systems, but are useful in some small systems.
The 16-CPU Sun Microsystems Enterprise 10000 system includes a 16x16 crossbar.
3.3.3.2
These are multistage networks similar to crossbars, but with fewer paths. Here is an example of a
NUMA 8x8 system:
Recall that each PE is a processor/memory pair. PE3, for instance, consists of P3 and M3.
Note the fact that at the third stage of the network (top of picture), the outputs are routed back
to the PEs, each of which consists of a processor and a memory module.4
At each network node (the nodes are the three rows of rectangles), the output routing is done by destination bit. Let's number the stages here 0, 1 and 2, starting from the bottom stage, number the nodes within a stage 0, 1, 2 and 3 from left to right, number the PEs from 0 to 7, left to right, and number the bit positions in a destination address 0, 1 and 2, starting from the most significant bit. Then at stage i, bit i of the destination address is used to determine routing, with a 0 meaning routing out the left output, and 1 meaning the right one.
Say P2 wishes to read from M5. It sends a read-request packet, including 5 = 101 as its destination
address, to the switch in stage 0, node 1. Since the first bit of 101 is 1, that means that this switch
will route the packet out its right-hand output, sending it to the switch in stage 1, node 3. The
latter switch will look at the next bit in 101, a 0, and thus route the packet out its left output, to
the switch in stage 2, node 2. Finally, that switch will look at the last bit, a 1, and output out
3 For safety's sake, i.e. fault tolerance, even writes are typically acknowledged in multiprocessor systems.
4 The picture may be cut off somewhat at the top and left edges. The upper-right output of the rectangle in the top row, leftmost position, should connect to the dashed line which leads down to the second PE from the left. Similarly, the upper-left output of that same rectangle is a dashed line, possibly invisible in your picture, leading down to the leftmost PE.
its right-hand output, sending it to PE5, as desired. M5 will process the read request, and send a packet back to PE2, along the same network.
Again, if two packets at a node want to go out the same output, one must get priority (let's say it is the one from the left input).
Here is how the more general case of N = 2n PEs works. Again number the rows of switches, and
switches within a row, as above. So, Sij will denote the switch in the i-th row from the bottom and
j-th column from the left (starting our numbering with 0 in both cases). Row i will have a total
of N input ports Iik and N output ports Oik , where k = 0 corresponds to the leftmost of the N in
each case. Then if row i is not the last row (i < n-1), Oik will be connected to Ijm, where j = i+1 and

m = (2k + ⌊2k/N⌋) mod N     (3.1)
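To make (3.1) concrete, here is a small program that simply evaluates the formula for N = 8 and prints which input port of the next row each output port feeds:

#include <stdio.h>

#define N 8   // number of PEs; must be a power of 2

int main()
{
   // output port k of a non-final row connects to input port m of the next row,
   // per Equation (3.1): m = (2k + floor(2k/N)) mod N
   for (int k = 0; k < N; k++) {
      int m = (2*k + (2*k)/N) % N;   // integer division gives the floor here
      printf("output %d -> input %d\n", k, m);
   }
   return 0;
}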
3.3.4 Comparative Analysis
In the world of parallel architectures, a key criterion for a proposed feature is scalability, meaning
how well the feature performs as we go to larger and larger systems. Let n be the system size, either
the number of processors and memory modules, or the number of PEs. Then we are interested in
how fast the latency, bandwidth and cost grow with n:
criterion    latency       bandwidth    cost
bus          O(1)          O(1)         O(1)
Omega        O(log2 n)     O(n)         O(n log2 n)
crossbar     O(n)          O(n)         O(n²)
Let us see where these expressions come from, beginning with a bus: No matter how large n is, the
time to get from, say, a processor to a memory module will be the same, thus O(1). Similarly, no
matter how large n is, only one communication can occur at a time, thus again O(1).5
Again, we are interested only in O( ) measures, because we are only interested in growth rates
as the system size n grows. For instance, if the system size doubles, the cost of a crossbar will
quadruple; the O(n2 ) cost measure tells us this, with any multiplicative constant being irrelevant.
For Omega networks, it is clear that log2 n network rows are needed, hence the latency value given.
Also, each row will have n/2 switches, so the number of network nodes will be O(n log2 n). This
5
Note that the 1 in O(1) does not refer to the fact that only one communication can occur at a time. If we
had, for example, a two-bus system, the bandwidth would still be O(1), since multiplicative constants do not matter.
What O(1) means, again, is that as n grows, the bandwidth stays at a multiple of 1, i.e. stays constant.
figure then gives the cost (in terms of switches, the main expense here). It also gives the bandwidth,
since the maximum number of simultaneous transmissions will occur when all switches are sending
at once.
Similar considerations hold for the crossbar case.
The crossbar's big advantage is that it is guaranteed that n packets can be sent simultaneously, providing they are to distinct destinations.
That is not true for Omega-networks. If, for example, PE0 wants to send to PE3, and at the same time PE4 wishes to send to PE2, the two packets will clash at the leftmost node of stage 1, where the packet from PE0 will get priority.
On the other hand, a crossbar is very expensive, and thus is dismissed out of hand in most modern systems. Note, though, that an equally troublesome aspect of crossbars is their high latency value; this is a big drawback when the system is not heavily loaded.
The bottom line is that Omega-networks amount to a compromise between buses and crossbars,
and for this reason have become popular.
3.3.5
In the shared-memory case, the Ms collectively form the entire shared address space, but with the
addresses being assigned to the Ms in one of two ways:
(a) High-order interleaving. Here consecutive addresses are in the same M (except at boundaries). For example, suppose for simplicity that our memory consists of addresses 0 through 1023, and that there are four Ms. Then M0 would contain addresses 0-255, M1 would have 256-511, M2 would have 512-767, and M3 would have 768-1023.
(b) Low-order interleaving. Here consecutive addresses are in consecutive Ms (except when we get to the right end). In the example above, if we used low-order interleaving, then address 0 would be in M0, 1 would be in M1, 2 would be in M2, 3 would be in M3, 4 would be back in M0, 5 in M1, and so on.
The idea is to have several modules busy at once, say in conjunction with a split-transaction bus. Here, after a processor makes a memory request, it relinquishes the bus, allowing others to use it while the memory does the requested work. Without splitting the memory into modules, this wouldn't achieve parallelism. The bus does need extra lines to identify which processor made the request.
3.4 Synchronization Hardware

Avoidance of race conditions, e.g. implementation of locks, plays such a crucial role in shared-memory parallel processing that hardware assistance is a virtual necessity. Recall, for instance, that critical sections can effectively serialize a parallel program. Thus efficient implementation is crucial.
3.4.1 Test-and-Set Instructions
Consider a bus-based system. In addition to whatever memory read and memory write instructions
the processor included, there would also be a TAS instruction.6 This instruction would control a
TAS pin on the processor chip, and the pin in turn would be connected to a TAS line on the bus.
Applied to a location L in memory and a register R, say, TAS does the following:
copy L to R
if R is 0 then write 1 to L
And most importantly, these operations are done in an atomic manner; no bus transactions by
other processors may occur between the two steps.
The TAS operation is applied to variables used as locks. Let's say that 1 means locked and 0 unlocked. Then the guarding of a critical section C by a lock variable L, using a register R, would be done by having the following code in the program being run:
TRY:   TAS  R,L
       JNZ  TRY
C:     ...        ; start of critical section
       ...
       ...        ; end of critical section
       MOV  0,L   ; unlock
where of course JNZ is a jump-if-nonzero instruction, and we are assuming that the copying from
the Memory Data Register to R results in the processor N and Z flags (condition codes) being
affected.
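At the C level, this kind of atomic instruction is normally reached through compiler intrinsics rather than raw assembly. For instance, gcc's __sync_lock_test_and_set() builtin can be used to build a TAS-style spin lock; the following sketch is mine, not code from the text.

// a simple spin lock built on gcc's atomic test-and-set builtin
static int lockvar = 0;   // 0 means unlocked, 1 means locked

void lock()
{
   // atomically write 1 to lockvar and get the old value back;
   // keep trying until the old value was 0, i.e. until we are the one who locked it
   while (__sync_lock_test_and_set(&lockvar, 1) != 0)
      ;   // spin
}

void unlock()
{
   __sync_lock_release(&lockvar);   // atomically write 0 back
}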
3.4.1.1
On Pentium machines, the LOCK prefix can be used to get atomicity for certain instructions:
ADD, ADC, AND, BTC, BTR, BTS, CMPXCHG, DEC, INC, NEG, NOT, OR, SBB, SUB, XOR,
6
This discussion is for a mythical machine, but any real system works in this manner.
XADD. The bus will be locked for the duration of the execution of the instruction, thus setting up
atomic operations. There is a special LOCK line in the control bus for this purpose. (Locking thus
only applies to these instructions in forms in which there is an operand in memory.) By the way,
XCHG asserts this LOCK# bus signal even if the LOCK prefix is not specified.
For example, consider our count-the-2s example on page ??. If we store mycount in a register, say EDX, then the single instruction

lock add %edx, overallcount

atomically adds the register's contents to overallcount, without locks!
Here is how we could implement a lock if needed. The lock would be in a variable named, say,
lockvar.
        movl  $lockvar, %ebx
        movl  $1, %ecx
top:    movl  $0, %eax
        lock cmpxchg %ecx, (%ebx)
        jnz   top    # else leave the loop and enter the critical section
The operation CMPXCHG has EAX as an unnamed operand. The instruction basically does (here
source is ECX and destination is lockvar)
if c(EAX) != c(destination)     # sorry, lock is already locked
   c(EAX) <- c(destination)
   ZF <- 0      # the Zero Flag in the EFLAGS register
else
   c(destination) <- c(source)  # lock the lock
   ZF <- 1
The LOCK prefix locks the bus for the entire duration of the instruction. Note that the add instruction here involves two memory transactions: one to read the old value of overallcount, and a second to write the new, incremented value back to overallcount. So, the bus will be locked for a rather long time, potentially compromising performance when other threads need to access memory, but the benefits can be huge.
In crossbar or Omega-network systems, some 2-bit field in the packet must be devoted to transaction type, say 00 for Read, 01 for Write and 10 for TAS. In a system with 16 CPUs and 16 memory modules, say, the packet might consist of 4 bits for the CPU number, 4 bits for the memory module number, 2 bits for the transaction type, and 32 bits for the data (for a write, this is the data to be written, while for a read, it would be the requested value, on the trip back from the memory to the CPU).
But note that the atomicity here is best done at the memory, i.e. some hardware should be added
at the memory so that TAS can be done; otherwise, an entire processor-to-memory path (e.g. the
bus in a bus-based system) would have to be locked up for a fairly long time, obstructing even the
packets which go to other memory modules.
3.4.2
Note carefully that in many settings it may not be crucial to get the most up-to-date value of
a variable. For example, a program may have a data structure showing work to be done. Some
processors occasionally add work to the queue, and others take work from the queue. Suppose the
queue is currently empty, and a processor adds a task to the queue, just as another processor is
checking the queue for work. As will be seen later, it is possible that even though the first processor
has written to the queue, the new value won't be visible to other processors for some time. But the
point is that if the second processor does not see work in the queue (even though the first processor
has put it there), the program will still work correctly, albeit with some performance loss.
3.4.3 Fetch-and-Add Instructions
Suppose our architecture's instruction set included an F&A instruction. It would add 1 to the specified location in memory, and return the old value (to Y) that had been in that location before being incremented. And all this would be an atomic operation.
We would then replace the code above by a library call, say,
FETCH_AND_ADD(X,1);
where R is the register into which the old (pre-incrementing) value of X would be returned.
There would be hardware adders placed at each memory module. That means that the whole
operation could be done in one round trip to memory. Without F&A, we would need two round
trips to memory just for the
X++;
(we would load X into a register in the CPU, increment the register, and then write it back to X
in memory), and then the LOCK() and UNLOCK() would need trips to memory too. This could
be a huge time savings, especially for long-latency interconnects.
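In C, an atomic fetch-and-add is available through compiler builtins, e.g. gcc's __sync_fetch_and_add(), which on x86 typically compiles to a LOCK-prefixed XADD; a minimal sketch, with the shared variable X assumed to be a global int:

static int X = 0;   // shared counter

// atomically add 1 to X and return the old value, with no explicit lock
int my_fetch_and_add()
{
   return __sync_fetch_and_add(&X, 1);
}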
3.5 Cache Issues
If you need a review of cache memories or don't have background in that area at all, read Section A.2.1 in the appendix of this book before continuing.
3.5.1 Cache Coherency
Consider, for example, a bus-based system. Relying purely on TAS for interprocessor synchronization would be unthinkable: As each processor contending for a lock variable spins in the loop shown
above, it is adding tremendously to bus traffic.
An answer is to have caches at each processor.7 These will store copies of the values of lock
variables. (Of course, non-lock variables are stored too. However, the discussion here will focus
on effects on lock variables.) The point is this: Why keep looking at a lock variable L again and
again, using up the bus bandwidth? L may not change value for a while, so why not keep a copy
in the cache, avoiding use of the bus?
The answer of course is that eventually L will change value, and this causes some delicate problems. Say for example that processor P5 wishes to enter a critical section guarded by L, and that processor P2 is already in there. During the time P2 is in the critical section, P5 will spin around, always getting the same value for L (1) from C5, P5's cache. When P2 leaves the critical section, P2 will
7
The reader may wish to review the basics of caches. See for example http://heather.cs.ucdavis.edu/~matloff/
50/PLN/CompOrganization.pdf.
set L to 0, and now C5's copy of L will be incorrect. This is the cache coherency problem, inconsistency between caches.
A number of solutions have been devised for this problem. For bus-based systems, snoopy protocols
of various kinds are used, with the word snoopy referring to the fact that all the caches monitor
(snoop on) the bus, watching for transactions made by other caches.
The most common protocols are the invalidate and update types. The relation between these two is somewhat analogous to the relation between write-back and write-through protocols for caches in uniprocessor systems:
• Under an invalidate protocol, when a processor writes to a variable in a cache, it first (i.e. before actually doing the write) tells each other cache to mark as invalid its cache line (if any) which contains a copy of the variable.8 Those caches will be updated only later, the next time their processors need to access this cache line.

• For an update protocol, the processor which writes to the variable tells all other caches to immediately update their cache lines containing copies of that variable with the new value.
Let's look at an outline of how one implementation (many variations exist) of an invalidate protocol would operate:
In the scenario outlined above, when P2 leaves the critical section, it will write the new value 0 to
L. Under the invalidate protocol, P2 will post an invalidation message on the bus. All the other
caches will notice, as they have been monitoring the bus. They then mark their cached copies of
the line containing L as invalid.
Now, the next time P5 executes the TAS instruction (which will be very soon, since it is in the loop shown above), P5 will find that the copy of L in C5 is invalid. It will respond to this cache miss by going to the bus, and requesting P2 to supply the real (and valid) copy of the line containing L.
But there's more. Suppose that all this time P6 had also been executing the loop shown above, along with P5. Then P5 and P6 may have to contend with each other. Say P6 manages to grab possession of the bus first.9 P6 then executes the TAS again, which finds L = 0 and changes L back to 1. P6 then relinquishes the bus, and enters the critical section. Note that in changing L to 1, P6 also sends an invalidate signal to all the other caches. So, when P5 tries its execution of the TAS again, it will have to ask P6 to send a valid copy of the block. P6 does so, but L will be 1, so P5 must resume executing the loop. P5 will then continue to use its valid local copy of L each
8
We will follow commonly-used terminology here, distinguishing between a cache line and a memory block. Memory
is divided in blocks, some of which have copies in the cache. The cells in the cache are called cache lines. So, at any
given time, a given cache line is either empty or contains a copy (valid or not) of some memory block.
9
Again, remember that ordinary bus arbitration methods would be used.
time it does the TAS, until P6 leaves the critical section, writes 0 to L, and causes another cache
miss at P5, etc.
At first the update approach seems obviously superior, and actually, if our shared, cacheable10
variables were only lock variables, this might be true.
But consider a shared, cacheable vector. Suppose the vector fits into one block, and that we write
to each vector element sequentially. Under an update policy, we would have to send a new message
on the bus/network for each component, while under an invalidate policy, only one message (for the
first component) would be needed. If during this time the other processors do not need to access
this vector, all those update messages, and the bus/network bandwidth they use, would be wasted.
Or suppose for example we have code like
Sum += X[I];
in the middle of a for loop. Under an update protocol, we would have to write the value of Sum
back many times, even though the other processors may only be interested in the final value when
the loop ends. (This would be true, for instance, if the code above were part of a critical section.)
Thus the invalidate protocol works well for some kinds of code, while update works better for
others. The CPU designers must try to anticipate which protocol will work well across a broad mix
of applications.11
Now, how is cache coherency handled in non-bus shared-memory systems, say crossbars? Here the problem is more complex. Think back to the bus case for a minute: The very feature which was the biggest negative of bus systems (the fact that there was only one path between components, which made bandwidth very limited) is a very positive feature in terms of cache coherency, because it makes broadcast very easy: Since everyone is attached to that single pathway, sending a message to all of them costs no more than sending it to just one; we get the others for free. That's no longer the case for multipath systems. In such systems, extra copies of the message must be created for each path, adding to overall traffic.
A solution is to send messages only to interested parties. In directory-based protocols, a list is
kept of all caches which currently have valid copies of all blocks. In one common implementation, for
example, while P2 is in the critical section above, it would be the owner of the block containing L.
(Whoever is the latest node to write to L would be considered its current owner.) It would maintain
a directory of all caches having valid copies of that block, say C5 and C6 in our story here. As
soon as P2 wrote to L, it would then send either invalidate or update packets (depending on which
type was being used) to C5 and C6 (and not to other caches which didn't have valid copies).
10
Many modern processors, including Pentium and MIPS, allow the programmer to mark some blocks as being
noncacheable.
11
Some protocols change between the two modes dynamically.
There would also be a directory at the memory, listing the current owners of all blocks. Say for example P0 now wishes to join the club, i.e. tries to access L, but does not have a copy of that block in its cache C0. C0 will thus not be listed in the directory for this block. So, when P0 now tries to access L, it will get a cache miss. P0 must then consult the home of L, say P14. The home might be determined by L's location in main memory according to high-order interleaving; it is the place where the main-memory version of L resides. A table at P14 will inform P0 that P2 is the current owner of that block. P0 will then send a message to P2 to add C0 to the list of caches having valid copies of that block. Similarly, a cache might resign from the club, due to that cache line being replaced, e.g. in an LRU setting, when some other cache miss occurs.
3.5.2
Many types of cache coherency protocols have been proposed and used, some of them quite complex.
A relatively simple one for snoopy bus systems which is widely used is MESI, which for example is
the protocol used in the Pentium series.
MESI is an invalidate protocol for bus-based systems. Its name stands for the four states a given cache line can be in for a given CPU:

• Modified
• Exclusive
• Shared
• Invalid
Note that each memory block has such a state at each cache. For instance, block 88 may be in state S at P5's and P12's caches but in state I at P1's cache.
Here is a summary of the meanings of the states:

state   meaning
M       written to more than once; no other copy valid
E       valid; no other cache copy valid; memory copy valid
S       valid; at least one other cache copy valid
I       invalid (block either not in the cache or present but incorrect)
Following is a summary of MESI state changes.12 When reading it, keep in mind again that there
is a separate state for each cache/memory block combination.
12
See Pentium Processor System Architecture, by D. Anderson and T. Shanley, Addison-Wesley, 1995. We have
simplified the presentation here, by eliminating certain programmable options.
In addition to the terms read hit, read miss, write hit, write miss, which you are already
familiar with, there are also read snoop and write snoop. These refer to the case in which our
CPU observes on the bus a block request by another CPU that has attempted a read or write
action but encountered a miss in its own cache; if our cache has a valid copy of that block, we must
provide it to the requesting CPU (and in some cases to memory).
So, here are various events and their corresponding state changes:

If our CPU does a read:

present state   event                                                        new state
M               read hit                                                     M
E               read hit                                                     E
S               read hit                                                     S
I               read miss; no valid cache copy at any other CPU              E
I               read miss; at least one valid cache copy in some other CPU   S

If our CPU does a write:

present state   event                                                                  new state
M               write hit; do not put invalidate signal on bus; do not update memory   M
E               same as M above                                                        M
S               write hit; put invalidate signal on bus; update memory                 E
I               write miss; update memory but do nothing else                          I

If our CPU snoops a read or write by another CPU on the bus:

present state   event                                                                           new state
M               read snoop; write line back to memory, picked up by other CPU                   S
M               write snoop; write line back to memory, signal other CPU now OK to do its write I
E               read snoop; put shared signal on bus; no memory action                          S
E               write snoop; no memory action                                                   I
S               read snoop                                                                      S
S               write snoop                                                                     I
I               any snoop                                                                       I

Note that a write miss does NOT result in the associated block being brought in from memory.

Example: Suppose a given memory block has state M at processor A but has state I at processor B, and B attempts to write to the block. B will see that its copy of the block is invalid, so it notifies the other CPUs via the bus that it intends to do this write. CPU A sees this announcement, tells B to wait, writes its own copy of the block back to memory, and then tells B to go ahead with its write. The latter action means that A's copy of the block is not correct anymore, so the block now has state I at A. B's action does not cause loading of that block from memory to its cache, so the block still has state I at B.
3.5.3
Since W and Z are declared adjacently, most compilers will assign them contiguous memory addresses. Thus, unless one of them is at a memory block boundary, when they are cached they will be stored in the same cache line. Suppose the program writes to Z, and our system uses an invalidate protocol. Then W will be considered invalid at the other processors, even though its values at those processors' caches are correct. This is the false sharing problem, alluding to the fact that the two variables are sharing a cache line even though they are not related.
This can have very adverse impacts on performance. If for instance our variable W is now written
to, then Z will suffer unfairly, as its copy in the cache will be considered invalid even though it is
perfectly valid. This can lead to a ping-pong effect, in which alternate writing to two variables
leads to a cyclic pattern of coherency transactions.
One possible solution is to add padding, e.g. declaring W and Z like this:
int W,U[1000],Z;
to separate W and Z so that they won't be in the same cache block. Of course, we must take block size into account, and check whether the compiler really has placed the two variables in widely separated locations. To do this, we could for instance run the code

printf("%x %x\n", &W, &Z);
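Rather than guessing at a padding amount, one can also ask the compiler to place each variable on its own cache line; here is a gcc-style sketch, assuming 64-byte cache lines:

// each variable is aligned to a 64-byte boundary, so W and Z cannot share a cache line
int W __attribute__((aligned(64)));
int Z __attribute__((aligned(64)));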
3.6
Though the word consistency in the title of this section may seem to simply be a synonym for
coherency from the last section, and though there actually is some relation, the issues here are
quite different. In this case, it is a timing issue: After one processor changes the value of a shared
variable, when will that value be visible to the other processors?
There are various reasons why this is an issue. For example, many processors, especially in multiprocessor systems, have write buffers, which save up writes for some time before actually sending
them to memory. (For the time being, let's suppose there are no caches.) The goal is to reduce
memory access costs. Sending data to memory in groups is generally faster than sending one at a
time, as the overhead of, for instance, acquiring the bus is amortized over many accesses. Reads
following a write may proceed, without waiting for the write to get to memory, except for reads to
the same address. So in a multiprocessor system in which the processors use write buffers, there
will often be some delay before a write actually shows up in memory.
A related issue is that operations may occur, or appear to occur, out of order. As noted above, a
read which follows a write in the program may execute before the write is sent to memory. Also, in
a multiprocessor system with multiple paths between processors and memory modules, two writes
might take different paths, one longer than the other, and arrive out of order. In order to simplify
the presentation here, we will focus on the case in which the problem is due to write buffers, though.
The designer of a multiprocessor system must adopt some consistency model regarding situations
like this. The above discussion shows that the programmer must be made aware of the model,
or risk getting incorrect results. Note also that different consistency models will give different
levels of performance. The weaker consistency models make for faster machines but require the
programmer to do more work.
The strongest consistency model is Sequential Consistency. It essentially requires that memory
operations done by one processor are observed by the other processors to occur in the same order
as executed on the first processor. Enforcement of this requirement makes a system slow, and it
has been replaced on most systems by weaker models.
One such model is release consistency. Here the processors' instruction sets include instructions ACQUIRE and RELEASE. Execution of an ACQUIRE instruction at one processor involves telling all other processors to flush their write buffers. However, the ACQUIRE won't execute until pending RELEASEs are done. Execution of a RELEASE basically means that you are saying, "I'm done writing for the moment, and wish to allow other processors to see what I've written." An ACQUIRE waits for all pending RELEASEs to complete before it executes.13
A related model is scope consistency. Say a variable, say Sum, is written to within a critical
section guarded by LOCK and UNLOCK instructions. Then under scope consistency any changes
made by one processor to Sum within this critical section would then be visible to another processor
when the latter next enters this critical section. The point is that memory update is postponed
until it is actually needed. Also, a barrier operation (again, executed at the hardware level) forces
all pending memory writes to complete.
All modern processors include instructions which implement consistency operations. For example,
13
There are many variants of all of this, especially in the software distributed shared memory realm, to be discussed
later.
Sun Microsystems' SPARC has a MEMBAR instruction. If used with a STORE operand, then all pending writes at this processor will be sent to memory. If used with the LOAD operand, all writes will be made visible to this processor.
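At the source level such fences usually appear as compiler primitives rather than explicit MEMBAR-type instructions; for example, gcc provides __sync_synchronize(), a full memory barrier. A small hedged sketch (the variables ready and data are illustrative):

volatile int ready = 0;   // flag polled by other threads
int data = 0;             // value they will then read

void publish(int v)
{
   data = v;
   __sync_synchronize();   // full memory barrier: the write to data becomes visible before the flag does
   ready = 1;
}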
Now, how does cache coherency fit into all this? There are many different setups, but for example let's consider a design in which there is a write buffer between each processor and its cache. As the processor does more and more writes, the processor saves them up in the write buffer. Eventually, some programmer-induced event, e.g. a MEMBAR instruction,14 will cause the buffer to be flushed. Then the writes will be sent to memory, actually meaning that they go to the cache, and then possibly to memory.
The point is that (in this type of setup) before that flush of the write buffer occurs, the cache coherency system is quite unaware of these writes. Thus the cache coherency operations, e.g. the various actions in the MESI protocol, won't occur until the flush happens.
To make this notion concrete, again consider the example with Sum above, and assume release or scope consistency. The CPU currently executing that code (say CPU 5) writes to Sum, which is a memory operation (it affects the cache and thus eventually the main memory), but that operation will be invisible to the cache coherency protocol for now, as it will only be reflected in this processor's write buffer. But when the unlock is finally done (or a barrier is reached), the write buffer is flushed and the writes are sent to this CPU's cache. That then triggers the cache coherency operation (depending on the state). The point is that the cache coherency operation would occur only now, not before.
What about reads? Suppose another processor, say CPU 8, does a read of Sum, and that page
is marked invalid at that processor. A cache coherency operation will then occur. Again, it will
depend on the type of coherency policy and the current state, but in typical systems this would
result in Sums cache block being shipped to CPU 8 from whichever processor the cache coherency
system thinks has a valid copy of the block. That processor may or may not be CPU 5, but even if it is, that block won't show the recent change made by CPU 5 to Sum.
The analysis above assumed that there is a write buffer between each processor and its cache. There
would be a similar analysis if there were a write buffer between each cache and memory.
Note once again the performance issues. Instructions such as ACQUIRE or MEMBAR will use
a substantial amount of interprocessor communication bandwidth. A consistency model must be
chosen carefully by the system designer, and the programmer must keep the communication costs
in mind in developing the software.
The recent Pentium models use Sequential Consistency, with any write done by a processor being
immediately sent to its cache as well.
14
We call this programmer-induced, since the programmer will include some special operation in her C/C++
code which will be translated to MEMBAR.
3.7
In addition to read and write operations being specifiable in a network packet, an F&A operation
could be specified as well (a 2-bit field in the packet would code which operation was desired).
Again, there would be adders included at the memory modules, i.e. the addition would be done at
the memory end, not at the processors. When the F&A packet arrived at a memory module, our
variable X would have 1 added to it, while the old value would be sent back in the return packet
(and put into R).
Another possibility for speedup occurs if our system uses a multistage interconnection network
such as a crossbar. In that situation, we can design some intelligence into the network nodes to do
packet combining: Say more than one CPU is executing an F&A operation at about the same
time for the same variable X. Then more than one of the corresponding packets may arrive at the
same network node at about the same time. If each one requested an incrementing of X by 1,
the node can replace the two packets by one, with an increment of 2. Of course, this is a delicate
operation, and we must make sure that different CPUs get different return values, etc.
3.8 Multicore Chips
A recent trend has been to put several CPUs on one chip, termed a multicore chip. As of March
2008, dual-core chips are common in personal computers, and quad-core machines are within reach
of the budgets of many people. Just as the invention of the integrated circuit revolutionized the
computer industry by making computers affordable for the average person, multicore chips will
undoubtedly revolutionize the world of parallel programming.
A typical dual-core setup might have the two CPUs sharing a common L2 cache, with each CPU having its own L1 cache. The chip may interface to the bus or interconnect network via an L3 cache.
Multicore is extremely important these days. However, multicore chips are just SMPs, for the most part, and thus should not be treated differently.
3.9
A common question involves the best number of threads to run in a shared-memory setting. Clearly
there is no general magic answer, but here are some considerations:15
15
As with many aspects of parallel programming, a good basic knowledge of operating systems is key. See the
reference on page 7.
• If your application does a lot of I/O, CPUs or cores may stay idle while waiting for I/O events. It thus makes sense to have many threads, so that computation threads can run when the I/O threads are tied up.

• In a purely computational application, one generally should not have more threads than cores. However, a program with a lot of virtual memory page faults may benefit from setting up extra threads, as page replacement involves (disk) I/O.

• Applications in which there is heavy interthread communication, say due to having a lot of lock variable access, may benefit from setting up fewer threads than the number of cores.

• Many Intel processors include hardware for hyperthreading. These are not full threads in the sense of having separate cores, but rather involve a limited amount of resource duplication within a core. The performance gain from this is typically quite modest. In any case, be aware of it; some software systems count these as threads, and assume for instance that there are 8 cores when the machine is actually just quad core.

• With GPUs (Chapter 5), most memory accesses have long latency and thus are I/O-like. Typically one needs very large numbers of threads for good performance.
3.10
Processor Affinity
With a timesharing OS, a given thread may run on different cores during different timeslices. If
so, the cache for a given core may need a lot of refreshing, each time a new thread runs on that
core. To avoid this slowdown, one might designate a preferred core for each thread, in the hope
of reusing cache contents. Setting this up is dependent on the chip and the OS. OpenMP 3.1 has
some facility for this.
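For instance, on Linux one can pin the calling thread to a given core with pthread_setaffinity_np(); the _np suffix marks the call as nonportable. A minimal sketch, with the helper name chosen for illustration:

#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>

// pin the calling thread to the given core (Linux-specific)
int pin_to_core(int core)
{
   cpu_set_t set;
   CPU_ZERO(&set);
   CPU_SET(core, &set);
   return pthread_setaffinity_np(pthread_self(), sizeof(set), &set);
}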
3.11 Illusion of Shared Memory through Software
3.11.0.1 Software Distributed Shared Memory
There are also various shared-memory software packages that run on message-passing hardware such
as networks of workstations (NOWs), called software distributed shared memory (SDSM) systems.
Since the platforms do not have any physically shared memory, the shared-memory view which the
programmer has is just an illusion. But that illusion is very useful, since the shared-memory paradigm
is believed to be the easier one to program in. Thus SDSM allows us to have the best of both worlds:
the convenience of the shared-memory world view with the inexpensive cost of some of the
message-passing hardware systems, particularly NOWs.
SDSM itself is divided into two main approaches, the page-based and object-based varieties.
The page-based approach is generally considered clearer and easier to program in, and provides the
programmer the look and feel of shared-memory programming better than does the object-based
type.16 We will discuss only the page-based approach here. The most popular SDSM system today
is the page-based Treadmarks (Rice University). Another excellent page-based system is JIAJIA
(Academy of Sciences, China).
To illustrate how page-based SDSMs work, consider the line of JIAJIA code
Prime = (int *) jia_alloc(N*sizeof(int));
The function jia_alloc() is part of the JIAJIA library, libjia.a, which is linked to one's application
program during compilation.
At first this looks a little like a call to the standard malloc() function, setting up an array Prime
of size N. In fact, it does indeed allocate some memory. Note that each node in our JIAJIA group
is executing this statement, so each node allocates some memory at that node. Behind the scenes,
not visible to the programmer, each node will then have its own copy of Prime.
However, JIAJIA sets things up so that when one node later accesses this memory, for instance in
the statement
Prime[I] = 1;
this action will eventually trigger a network transaction (not visible to the programmer) to the
other JIAJIA nodes.17 This transaction will then update the copies of Prime at the other nodes.18
How is all of this accomplished? It turns out that it relies on a clever usage of the nodes' virtual
memory (VM) systems. To understand this, you need a basic knowledge of how VM systems work.
If you lack this, or need review, read Section A.2.2 in the appendix of this book before continuing.
Here is how VM is exploited to develop SDSMs on Unix systems. The SDSM will call a system
function such as mprotect(). This allows the SDSM to deliberately mark a page as nonresident
(even if the page is resident). Basically, anytime the SDSM knows that a node's local copy of a
variable is invalid, it will mark the page containing that variable as nonresident. Then, the next
time the program at this node tries to access that variable, a page fault will occur.
As mentioned in the review above, normally a page fault causes a jump to the OS. However,
technically any page fault in Unix is handled as a signal, specifically SIGSEGV. Recall that Unix
allows the programmer to write his/her own signal handler for any signal type. In this case, that
means that the programmer (meaning the people who developed JIAJIA or any other page-based
SDSM) writes his/her own page fault handler, which will do the necessary network transactions
to obtain the latest valid value for X.
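Here is a minimal sketch of the mechanism (illustrative only; a real SDSM's handler would perform the network transactions before re-enabling access, and would inspect the faulting address):

#include <signal.h>
#include <sys/mman.h>
#include <unistd.h>

static char *page;        // a page we deliberately mark inaccessible
static long pagesize;

// fault handler: in a real SDSM, fetch the up-to-date copy of the page from
// another node here, then restore access so the faulting access can be retried
static void handler(int sig, siginfo_t *si, void *ctx)
{
   mprotect(page, pagesize, PROT_READ|PROT_WRITE);
}

int main(void)
{
   pagesize = sysconf(_SC_PAGESIZE);
   page = mmap(NULL, pagesize, PROT_READ|PROT_WRITE,
               MAP_PRIVATE|MAP_ANONYMOUS, -1, 0);
   struct sigaction sa = {0};
   sa.sa_sigaction = handler;
   sa.sa_flags = SA_SIGINFO;
   sigaction(SIGSEGV, &sa, NULL);
   mprotect(page, pagesize, PROT_NONE);   // "invalidate" the local copy
   page[0] = 1;   // faults; the handler restores access and the write succeeds
   return 0;
}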
Note that although SDSMs are able to create an illusion of almost all aspects of shared memory,
it really is not possible to create the illusion of shared pointer variables. For example on shared
memory hardware we might have a variable like P:
int Y,*P;
...
...
P = &Y;
...
There is no simple way to have a variable like P in an SDSM. This is because a pointer is an
address, and each node in an SDSM has its own separate address space. The problem is
that even though the underlying SDSM system will keep the various copies of Y at the different
nodes consistent with each other, Y will be at a potentially different address on each node.
All SDSM systems must deal with a software analog of the cache coherency problem. Whenever one
node modifies the value of a shared variable, that node must notify the other nodes that a change
has been made. The designer of the system must choose between update or invalidate protocols,
just as in the hardware case.19 Recall that in non-bus-based shared-memory multiprocessors, one
needs to maintain a directory which indicates at which processor a valid copy of a shared variable
exists. Again, SDSMs must take an approach similar to this.
Similarly, each SDSM system must decide between sequential consistency, release consistency etc.
More on this later.
Note that in the NOW context the internode communication at the SDSM level is typically done
by TCP/IP network actions. Treadmarks uses UDP, which is faster than TCP, but still part of the
slow TCP/IP protocol suite. TCP/IP was simply not designed for this kind of work. Accordingly,
there have been many efforts to use more efficient network hardware and software. The most
popular of these is the Virtual Interface Architecture (VIA).
Not only are coherency actions more expensive in the NOW SDSM case than in the shared-memory
hardware case due to network slowness, there is also expense due to granularity. In the hardware
case we are dealing with cache blocks, with a typical size being 512 bytes. In the SDSM case, we
are dealing with pages, with a typical size being 4096 bytes. The overhead for a cache coherency
transaction can thus be large.
19
Note, though, that we are not actually dealing with a cache here. Each node in the SDSM system will have a
cache, of course, but a node's cache simply stores parts of that node's set of pages. The coherency across nodes is
across pages, not caches. We must ensure that a change made to a given page is eventually propagated to pages
on other nodes which correspond to this one.
3.11.0.2
Programmer Interface
We will not go into detail on JIAJIA programming here. There is a short tutorial on JIAJIA at
http://heather.cs.ucdavis.edu/~matloff/jiajia.html, but here is an overview:
- One writes in C/C++ (or FORTRAN), making calls to the JIAJIA library, which is linked
in upon compilation.
- The library calls include standard shared-memory operations for lock, unlock, barrier, processor
number, etc., plus some calls aimed at improving performance.
Following is a JIAJIA example program, performing Odd/Even Transposition Sort. This is a
variant on Bubble Sort, sometimes useful in parallel processing contexts.20 The algorithm consists
of n phases, in which each processor alternates between trading with its left and right neighbors.
#include <stdio.h>
#include <stdlib.h>
#include <jia.h>   // required include; also must link via -ljia
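Only the beginning of the listing appears above. To make the algorithm itself concrete, here is a serial sketch of odd/even transposition sort (illustrative only, not the JIAJIA program; the function name is chosen for this sketch):

// serial sketch: n phases, alternating between comparing even-starting and
// odd-starting adjacent pairs, swapping any pair that is out of order
void oddeven(int *x, int n)
{  int phase,i,tmp;
   for (phase = 0; phase < n; phase++) {
      // even phases compare pairs (0,1),(2,3),...; odd phases (1,2),(3,4),...
      for (i = phase % 2; i < n-1; i += 2) {
         if (x[i] > x[i+1]) {
            tmp = x[i]; x[i] = x[i+1]; x[i+1] = tmp;
         }
      }
   }
}

In the parallel version, each processor owns a chunk of the array and trades border elements with its left and right neighbors in alternating phases.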
System Workings
JIAJIA's main characteristics as an SDSM are:

- page-based
- scope consistency
- home-based
- multiple writers
21
Writes will also be propagated at barrier operations, but two successive arrivals by a processor to a barrier can
be considered to be a lock/unlock pair, by considering a departure from a barrier to be a lock, and considering
reaching a barrier to be an unlock. So, we'll usually not mention barriers separately from locks in the remainder
of this subsection.
22
The set of changes is called a diff, reminiscent of the Unix file-compare command. A copy, called a twin, had
been made of the original page, which now will be used to produce the diff. This has substantial overhead. The
Treadmarks people found that it took 167 microseconds to make a twin, and as much as 686 microseconds to make
a diff.
23
In JIAJIA, that location is normally fixed, but JIAJIA does include advanced programmer options which allow
the location to migrate.
The general principle here is that writes performed at one node can be made visible at other nodes
on a need-to-know basis. If for instance in the above example with CPUs 5 and 8, CPU 2
does not access this page, it would be wasteful to send the writes to CPU 2, or for that matter
to even inform CPU 2 that the page had been written to. This is basically the idea of all
non-sequential-consistency protocols, even though they differ in approach and in performance for a
given application.
JIAJIA allows multiple writers of a page. Suppose CPU 4 and CPU 15 are simultaneously writing
to a particular page, and the programmer has relied on a subsequent barrier to make those writes
visible to other processors.24 When the barrier is reached, each will be informed of the writes of the
other.25 Allowing multiple writers helps to reduce the performance penalty due to false sharing.
3.12
Barrier Implementation
Recall that a barrier is program code26 which has a processor do a wait-loop action until all
processors have reached that point in the program.27
A function Barrier() is often supplied as a library function; here we will see how to implement
such a library function in a correct and efficient manner. Note that since a barrier is a serialization
point for the program, efficiency is crucial to performance.
Implementing a barrier in a fully correct manner is actually a bit tricky. We'll see here what can
go wrong, and how to make sure it doesn't.
In this section, we will approach things from a shared-memory point of view. But the methods
apply in the obvious way to message-passing systems as well, as will be discussed later.
24
The only other option would be to use lock/unlock, but then their writing would not be simultaneous.
25
If they are writing to the same variable, not just the same page, the programmer would use locks instead of a
barrier, and the situation would not arise.
26
Some hardware barriers have been proposed.
27
I use the word processor here, but it could be just a thread on the one hand, or on the other hand a processing
element in a message-passing context.
3.12.1 A Use-Once Version

struct BarrStruct {
   int NNodes,    // number of threads participating in the barrier
       Count,     // number of threads that have hit the barrier so far
       EvenOdd;   // "parity"
   pthread_mutex_t Lock = PTHREAD_MUTEX_INITIALIZER;
} ;
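The body of the corresponding Barrier() function is not shown above; a minimal sketch consistent with the discussion below (illustrative, not necessarily the exact original code) would be:

void Barrier(struct BarrStruct *PB)
{
   pthread_mutex_lock(&PB->Lock);
   PB->Count++;                        // one more thread has arrived
   pthread_mutex_unlock(&PB->Lock);
   while (PB->Count < PB->NNodes) ;    // spin until everyone has arrived
}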
This is very simple, actually overly so. This implementation will work once, so a program that
calls Barrier() only a single time would be fine, but not otherwise. If, say, there is a call
to Barrier() in a loop, we'd be in trouble.
What is the problem? Clearly, something must be done to reset Count to 0 at the end of the call,
but doing this safely is not so easy, as seen in the next section.
3.12.2 An Attempt to Write a Reusable Version
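The natural attempt under discussion here (a hedged sketch; the exact listing is not reproduced) has the last-arriving thread reset Count:

void Barrier(struct BarrStruct *PB)
{  int OldCount;
   pthread_mutex_lock(&PB->Lock);
   OldCount = PB->Count++;
   pthread_mutex_unlock(&PB->Lock);
   if (OldCount == PB->NNodes-1) PB->Count = 0;   // last arriver resets
   else while (PB->Count > 0) ;                   // others spin until the reset
}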
Unfortunately, this doesnt work either. To see why, consider a loop with a barrier call at the end:
struct BarrStruct B;   // global variable
........
while (.......) {
   .........
   Barrier(&B);
   .........
}
At the end of the first iteration of the loop, all the processors will wait at the barrier until everyone
catches up. After this happens, one processor, say 12, will reset B.Count to 0, as desired. But
if we are unlucky, some other processor, say processor 3, will then race ahead, perform the second
iteration of the loop in an extremely short period of time, and then reach the barrier and increment
the Count variable before processor 12 resets it to 0. This would result in disaster, since processor
3's increment would be canceled, leaving us one short when we try to finish the barrier the second
time.
Another disaster scenario which might occur is that one processor might reset B.Count to 0 before
another processor had a chance to notice that B.Count had reached B.NNodes.
3.12.3
A Correct Version
One way to avoid this would be to have two Count variables, and have the processors alternate
using one then the other. In the scenario described above, processor 3 would increment the other
Count variable, and thus would not conflict with processor 12's resetting. Here is a safe barrier
function based on this idea:
struct BarrStruct {
   int NNodes,    // number of threads participating in the barrier
       Count[2];  // number of threads that have hit the barrier so far
   pthread_mutex_t Lock = PTHREAD_MUTEX_INITIALIZER;
} ;
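The function body is not shown above. Here is a sketch consistent with the discussion that follows; it assumes each thread keeps its own parity variable Par (via the GCC/Clang __thread extension), which is an assumption of this sketch rather than something stated in the text:

void Barrier(struct BarrStruct *PB)
{  int OldCount;
   static __thread int Par = 0;        // this thread's parity, alternating 0 and 1
   pthread_mutex_lock(&PB->Lock);
   OldCount = PB->Count[Par]++;
   pthread_mutex_unlock(&PB->Lock);
   if (OldCount == PB->NNodes-1)
      PB->Count[Par] = 0;              // last arriver releases the others
   else while (PB->Count[Par] > 0) ;   // spin on this call's counter only
   Par = 1 - Par;                      // use the other counter next time
}

A thread racing ahead into the next barrier call increments the other counter, so it no longer conflicts with the reset of the current one.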
3.12.4 Refinements
3.12.4.1 Use of Wait Operations
The code
else while (PB->Count[Par] > 0) ;
is harming performance, since it has the processor spinning around doing no useful work. In the
Pthreads context, we can use a condition variable:
struct BarrStruct {
   int NNodes,    // number of threads participating in the barrier
       Count[2];  // number of threads that have hit the barrier so far
   pthread_mutex_t Lock = PTHREAD_MUTEX_INITIALIZER;
   pthread_cond_t CV = PTHREAD_COND_INITIALIZER;
} ;
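The function body is again not reproduced above; the following is a sketch of a condition-variable version consistent with the discussion below (illustrative only; production code would also guard the wait with a predicate loop to handle spurious wakeups):

void Barrier(struct BarrStruct *PB)
{  int i;
   pthread_mutex_lock(&PB->Lock);
   PB->Count[0]++;
   if (PB->Count[0] < PB->NNodes)
      // atomically releases the lock, sleeps, and reacquires it when signaled
      pthread_cond_wait(&PB->CV,&PB->Lock);
   else {   // last arriver wakes the others, one signal per waiting thread
      PB->Count[0] = 0;
      for (i = 0; i < PB->NNodes-1; i++)
         pthread_cond_signal(&PB->CV);
   }
   pthread_mutex_unlock(&PB->Lock);
}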
Here, if a thread finds that not everyone has reached the barrier yet, it still waits for the rest, but
does so passively, via the wait for the condition variable CV. This way the thread is not wasting
valuable time on that processor, which can run other useful work.
Note that the call to pthread_cond_wait() requires use of the lock. Your code must lock the
lock before making the call. The call itself immediately unlocks that lock after it registers the
wait with the thread manager. But the call blocks until awakened when another thread calls
pthread_cond_signal() or pthread_cond_broadcast().
It is required that your code lock the lock before calling pthread_cond_signal(), and that it
unlock the lock after the call.
By using pthread_cond_wait() and placing the unlock operation later in the code, as seen above,
we actually could get by with just a single Count variable, as before.
Even better, the for loop could be replaced by a single call
pthread_cond_broadcast(&PB->CV);
This still wakes up the waiting threads one by one, but in a much more efficient way, and it makes
for clearer code.
3.12.4.2.1 Tree Barriers It is clear from the code above that barriers can be costly to performance, since they rely so heavily on critical sections, i.e. serial parts of a program. Thus in
many settings it is worthwhile to parallelize not only the general computation, but also the barrier
operations themselves.
Consider for instance a barrier in which 16 threads are participating. We could speed things up
by breaking this barrier down into two sub-barriers, with eight threads each. We would then set
up three barrier operations: one for the first group of eight threads, another for the other group
of eight threads, and a third consisting of a competition between the two groups. The variable
NNodes above would have the value 8 for the first two barriers, and would be equal to 2 for the
third barrier.
Here thread 0 could be the representative for the first group, with thread 8 representing the second
group. After both groups' barriers were hit by all of their members, threads 0 and 8 would
participate in the third barrier.
Note that the notification phase would then be done in reverse: When the third barrier was
complete, threads 0 and 8 would notify the members of their groups.
This would parallelize things somewhat, as critical-section operations could be executing simultaneously
for the first two barriers. There would still be quite a bit of serial action, though, so we
may wish to do further splitting, by partitioning each group of eight threads into two subgroups of
four threads each.
In general, for n threads (with n, say, equal to a power of 2) we would have a tree structure, with
log_2 n levels in the tree. The ith level (starting with the root as level 0) will consist of 2^i parallel
barriers, each one representing n/2^i threads.
3.12.4.2.2 Butterfly Barriers Another method basically consists of each node shaking hands
with every other node. In the shared-memory case, handshaking could be done by having a global
array ReachedBarrier. When thread 3 and thread 7 shake hands, for instance, thread 3 would
set ReachedBarrier[3] to 1, and would then wait for ReachedBarrier[7] to become 1. The wait,
as before, could either be a while loop or a call to pthread_cond_wait(). Thread 7 would do the
opposite.
If we have n nodes, again with n being a power of 2, then the barrier process would consist of log_2 n
phases, which we'll call phase 0, phase 1, etc. Then the process works as follows.
For any node i, let i(k) be the number obtained by inverting bit k in the binary representation of
i, with bit 0 being the least significant bit. Then in the kth phase, node i would shake hands with
node i(k).
For example, say n = 8. In phase 0, node 5 = 101_2 would shake hands with node 4 = 100_2.
Actually, a butterfly exchange amounts to a number of simultaneous tree operations.
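In code, the phase-k partner of node i is simply i with bit k flipped, i.e. an exclusive-or; a small illustrative sketch:

#include <stdio.h>

int main(void)
{  int n = 8, i, k;
   for (k = 0; (1 << k) < n; k++)        // log2(n) phases
      for (i = 0; i < n; i++)
         // i ^ (1 << k) is i with bit k inverted, i.e. i's partner in phase k
         printf("phase %d: node %d shakes hands with node %d\n",
                k, i, i ^ (1 << k));
   return 0;
}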
Chapter 4
Introduction to OpenMP
OpenMP has become the de facto standard for shared-memory programming.
4.1
Overview
OpenMP has become the environment of choice for many, if not most, practitioners of shared-memory
parallel programming. It consists of a set of directives which are added to one's C/C++/FORTRAN
code to manipulate threads, without the programmer him/herself having to deal with the threads
directly. This way we get the best of both worlds: the true parallelism of (nonpreemptive)
threads and the pleasure of avoiding the annoyances of threads programming.
Most OpenMP constructs are expressed via pragmas, i.e. directives. The syntax is
#pragma omp ......
The number sign must be the first nonblank character in the line.
4.2 Example: Dijkstra Shortest-Path Algorithm

The following example, implementing Dijkstra's shortest-path graph algorithm, will be used throughout
this tutorial, with various OpenMP constructs being illustrated later by modifying this code:
// Dijkstra.c

// usage:  dijkstra nv print
// where nv is the size of the graph, and print is 1 if graph and min
// distances are to be printed out, 0 otherwise

#include <omp.h>

void updatemind(int s, int e)
{  int i;
   for (i = s; i <= e; i++)
      if (mind[mv] + ohd[mv*nv+i] < mind[i])
         mind[i] = mind[mv] + ohd[mv*nv+i];
}

void dowork()
{
#pragma omp parallel
   {  int startv,endv,  // start, end vertices for my thread
          step,  // whole procedure goes nv steps
          mymv,  // vertex which attains the min value in my chunk
          me = omp_get_thread_num();
      unsigned mymd;  // min value found by this thread
#pragma omp single
      {  nth = omp_get_num_threads();  // must call inside parallel block
         if (nv % nth != 0) {
            printf("nv must be divisible by nth\n");
            exit(1);
         }
         chunk = nv/nth;
         printf("there are %d threads\n",nth);
      }
      startv = me * chunk;
      endv = startv + chunk - 1;
      for (step = 0; step < nv; step++) {
         // find closest vertex to 0 among notdone; each thread finds
         // closest in its group, then we find overall closest
#pragma omp single
         {  md = largeint;  mv = 0;  }
         findmymin(startv,endv,&mymd,&mymv);
         // update overall min if mine is smaller
#pragma omp critical
         {  if (mymd < md)
            {  md = mymd;  mv = mymv;  }
         }
#pragma omp barrier
         // mark new vertex as done
#pragma omp single
         {  notdone[mv] = 0;  }
         // now update my section of mind
         updatemind(startv,endv);
#pragma omp barrier
      }
   }
}
The constructs will be presented in the following sections, but first the algorithm will be explained.
4.2.1
The Algorithm
The code implements the Dijkstra algorithm for finding the shortest paths from vertex 0 to the
other vertices in an N-vertex undirected graph. Pseudocode for the algorithm is shown below, with
the array G assumed to contain the one-hop distances between vertices.
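Here is a serial C sketch of that algorithm (an illustrative reconstruction rather than the original pseudocode; the array g plays the role of G, and the function and variable names are chosen for this sketch):

#include <limits.h>

// serial sketch: g[i*nv+j] holds the one-hop distance from vertex i to
// vertex j; dist[] ends up holding the minimum distances from vertex 0
void dijkstra0(unsigned *g, unsigned *dist, int *done, int nv)
{  int step,j,k,mv = 0;  unsigned md;
   for (j = 1; j < nv; j++) { dist[j] = g[j]; done[j] = 0; }
   done[0] = 1;
   for (step = 1; step < nv; step++) {
      // "find J": closest not-yet-processed vertex to vertex 0
      md = UINT_MAX;
      for (j = 1; j < nv; j++)
         if (!done[j] && dist[j] < md) { md = dist[j]; mv = j; }
      done[mv] = 1;
      // "for K": update each remaining distance via the new vertex mv
      for (k = 1; k < nv; k++)
         if (!done[k] && dist[mv] + g[mv*nv+k] < dist[k])
            dist[k] = dist[mv] + g[mv*nv+k];
   }
}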
At each iteration, the algorithm finds the closest vertex J to 0 among all those not yet processed,
and then updates the list of minimum distances to each vertex from 0 by considering paths that go
through J. Two obvious candidates for parallelization are the find J and for K portions of the
algorithm, and the above OpenMP code takes this approach.
4.2.2 The OpenMP parallel Pragma
// parallel
dowork();
// back to single thread
The function main() is run by a master thread, which will then branch off into many threads
running dowork() in parallel. The latter feat is accomplished by the directive in the lines
void dowork()
{
#pragma omp parallel
{ int startv,endv, // start, end vertices for this thread
step, // whole procedure goes nv steps
mymv, // vertex which attains that value
me = omp_get_thread_num();
That directive sets up a team of threads (which includes the master), all of which execute the
block following the directive in parallel.1 Note that, unlike the for directive which will be discussed
below, the parallel directive leaves it up to the programmer as to how to partition the work. In
our case here, we do that by setting the range of vertices which this thread will process:
startv = me * chunk;
endv = startv + chunk - 1;
Again, keep in mind that all of the threads execute this code, but weve set things up with the
variable me so that different threads will work on different vertices. This is due to the OpenMP
call
me = omp_get_thread_num();
4.2.3
Scope Issues
There is an issue here of thread startup time. The OMPi compiler sets up threads at the outset, so that the
startup time is incurred only once. When a parallel construct is encountered, they are awakened. At the end of the
construct, they are suspended again, until the next parallel construct is reached.
Note that the pragma comes before the declaration of the local variables. That means that all of them are
local to each thread, i.e. not shared by them. But if a work sharing directive comes within a
function but after declaration of local variables, those variables are actually global to the code
in the directive, i.e. they are shared in common among the threads.
This is the default, but you can change these properties, e.g. using the private keyword and its
cousins. For instance,
#pragma omp parallel private(x,y)
would make x and y nonshared even if they were declared above the directive line. You may wish
to modify that a bit, so that x and y have initial values that were shared before the directive; use
firstprivate for this.
It is crucial to keep in mind that variables which are global to the program (in the C/C++ sense) are
automatically global to all threads. This is the primary means by which the threads communicate
with each other.
4.2.4 The OpenMP single Pragma
In some cases we want just one thread to execute some code, even though that code is part of a
parallel or other work sharing block.2 We use the single directive to do this, e.g.:
#pragma omp single
{ nth = omp_get_num_threads();
if (nv % nth != 0) {
printf("nv must be divisible by nth\n");
exit(1);
}
chunk = nv/nth;
printf("there are %d threads\n",nth); }
Since the variables nth and chunk are global and thus shared, we need not have all threads set
them, hence our use of single.
4.2.5 The OpenMP barrier Pragma

As seen in the example above, the barrier implements a standard barrier, applying to all threads.
2
This is an OpenMP term. The for directive is another example of it. More on this below.
4.2.6 Implicit Barriers
Note that there is an implicit barrier at the end of each single block, which is also the case for
parallel, for, and sections blocks. This can be overridden via the nowait clause, e.g.
#pragma omp for nowait
Needless to say, the latter should be used with care, and in most cases will not be usable. On the
other hand, putting in a barrier where it is not needed would severely reduce performance.
4.2.7 The OpenMP critical Pragma
The last construct used in this example is critical, for critical sections.
#pragma omp critical
{  if (mymd < md)
   {  md = mymd;  mv = mymv;  }
}
It means what it says, allowing entry of only one thread at a time while others wait. Here we are
updating global variables md and mv, which has to be done atomically, and critical takes care of
that for us. This is much more convenient than setting up lock variables, etc., which we would do
if we were programming threads code directly.
4.3 The OpenMP for Pragma
This one breaks up a C/C++ for loop, assigning various iterations to various threads. (The threads,
of course, must have already been set up via the omp parallel pragma.) This way the iterations
are done in parallel. Of course, that means that they need to be independent iterations, i.e. one
iteration cannot depend on the result of another.
4.3.1 Example: Dijkstra with Parallel for Loops
// Dijkstra.c

// usage:  dijkstra nv print
// where nv is the size of the graph, and print is 1 if graph and min
// distances are to be printed out, 0 otherwise

#include <omp.h>

unsigned *ohd;   // one-hop distances between vertices

void dowork()
{
#pragma omp parallel
   {  int step,  // whole procedure goes nv steps
          mymv,  // vertex which attains that value
          me = omp_get_thread_num(),
          i;
      unsigned mymd;  // min value found by this thread
#pragma omp single
      {  nth = omp_get_num_threads();
         printf("there are %d threads\n",nth);  }
      for (step = 0; step < nv; step++) {
         // find closest vertex to 0 among notdone; each thread finds
The work which used to be done in the function findmymin() is now done here:
#pragma omp for
for (i = 1; i < nv; i++) {
   if (notdone[i] && mind[i] < mymd) {
      mymd = mind[i];
      mymv = i;
   }
}
Each thread executes one or more of the iterations, i.e. takes responsibility for one or more values
of i. This occurs in parallel, so as mentioned earlier, the programmer must make sure that the
iterations are independent; there is no predicting which threads will do which values of i, in which
order. By the way, for obvious reasons OpenMP treats the loop index, i here, as private even if by
context it would be shared.
4.3.2
Nested Loops
If we apply the for pragma to nested loops, by default the pragma applies only to the outer loop. We
can of course insert another for pragma inside, to parallelize the inner loop.
Or, starting with OpenMP version 3.0, one can use the collapse clause, e.g.
#pragma omp parallel for collapse(2)
4.3.3 The schedule Clause
In this default version of the for construct, iterations are executed by threads in unpredictable
order; the OpenMP standard does not specify which threads will execute which iterations in which
order. But this can be controlled by the programmer, using the schedule clause. OpenMP provides
three choices for this:
- static: The iterations are grouped into chunks, and assigned to threads in round-robin
fashion. Default chunk size is approximately the number of iterations divided by the number
of threads.
- dynamic: Again the iterations are grouped into chunks, but here the assignment of chunks
to threads is done dynamically. When a thread finishes working on a chunk, it asks the
OpenMP runtime system to assign it the next chunk in the queue. Default chunk size is 1.
- guided: Similar to dynamic, but with the chunk size decreasing as execution proceeds.
For instance, our original version of our program in Section 4.2 broke the work into chunks, with
chunk size being the number of vertices divided by the number of threads.
For the Dijkstra algorithm, for instance, we could get the same operation with less code by asking
OpenMP to do the chunking for us, say with a chunk size of 8:
...
#pragma omp for schedule(static)
for (i = 1; i < nv; i++) {
   if (notdone[i] && mind[i] < mymd) {
      mymd = mind[i];
      mymv = i;
   }
}
...
#pragma omp for schedule(static)
for (i = 1; i < nv; i++)
   if (mind[mv] + ohd[mv*nv+i] < mind[i])
      mind[i] = mind[mv] + ohd[mv*nv+i];
...
Note again that this would have the same effect as our original code, with each thread handling
one chunk of contiguous iterations within a loop. So it's just a programming convenience for us in
this case. (If the number of threads doesn't evenly divide the number of iterations, OpenMP will
fix that up for us too.)
The more general form is
#pragma omp for schedule(static,chunk)
Here static is still a keyword but chunk is an actual argument. However, setting the chunk size
in the schedule() clause is a compile-time operation. If you wish to have the chunk size set at run
time, call omp_set_schedule() in conjunction with the runtime clause. Example:
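(A hedged sketch of the pattern: omp_set_schedule() and the enum value omp_sched_static are standard OpenMP 3.0, but the surrounding variables and loop body here are illustrative only.)

int chunksize = 8;   // could instead be computed at run time, e.g. from argv
omp_set_schedule(omp_sched_static, chunksize);
...
#pragma omp for schedule(runtime)
for (i = 1; i < nv; i++) {
   ...
}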
...
#pragma omp for schedule(guided)
for (i = 1; i < nv; i++)
   if (mind[mv] + ohd[mv*nv+i] < mind[i])
      mind[i] = mind[mv] + ohd[mv*nv+i];
...
There are other variations of this available in OpenMP. However, in Section 2.4, I showed that
these would seldom be necessary or desirable; having each thread handle a single chunk would be
best.
See Section 2.4 for a timing example.
With the runtime clause, the schedule can also be chosen via the OMP_SCHEDULE environment
variable, e.g. (csh syntax)

setenv OMP_SCHEDULE "static,20"
4.3.4 Example: In-Place Matrix Transpose
This method works in-place, a virtue if we are short on memory. Its cache performance is probably
poor, though. It may be better to look at horizontal slabs above the diagonal, say, and trade them
with vertical ones below the diagonal.
#include <omp.h>

// translate from 2-D to 1-D indices
int onedim(int n, int i, int j) {  return n * i + j;  }

void transp(int *m, int n)
{
#pragma omp parallel
   {  int i,j,tmp;
      // walk through all the above-diagonal elements, swapping them
      // with their below-diagonal counterparts
#pragma omp for
      for (i = 0; i < n; i++) {
         for (j = i+1; j < n; j++) {
            tmp = m[onedim(n,i,j)];
            m[onedim(n,i,j)] = m[onedim(n,j,i)];
            m[onedim(n,j,i)] = tmp;
         }
      }
   }
}
4.3.5 The reduction Clause
The name of this OpenMP clause alludes to the term reduction in functional programming.
Many parallel programming languages include such operations, to enable the programmer to more
conveniently (and often more efficiently) have threads/processors cooperate in computing sums,
products, etc. OpenMP does this via the reduction clause.
For example, consider
int z;
...
#pragma omp for reduction(+:z)
for (i = 0; i < n; i++) z += x[i];
The pragma says that the threads will share the work as in our previous discussion of the for
pragma. In addition, though, there will be independent copies of z maintained for each thread,
each initialized to 0 before the loop begins. When the loop is entirely done, the values of z from
the various threads will be summed, of course in an atomic manner.
Note that the + operator not only indicates that the values of z are to be summed, but also that
their initial values are to be 0. If the operator were *, say, then the product of the values would
be computed, and their initial values would be 1.
One can specify several reduction variables to the right of the colon, separated by commas.
Our use of the reduction clause here makes our programming much easier. Indeed, if we had old
serial code that we wanted to parallelize, we would have to make no change to it! OpenMP is taking
care of both the work splitting across values of i, and the atomic operations. Moreover, and note this
carefully, it is efficient, because by maintaining separate copies of z until the loop is done, we are
reducing the number of serializing atomic actions, and are avoiding time-costly cache coherency
transactions and the like.
Without this construct, we would have to do
int z,myz=0;
...
#pragma omp for firstprivate(myz)
for (i = 0; i < n; i++) myz += x[i];
#pragma omp critical
{ z += myz; }
Here are the eligible operators and the corresponding initial values:
In C/C++, you can use reduction with +, -, *, &, |, && and || (and the exclusive-or operator).

operator    initial value
   +             0
   -             0
   *             1
   &             bit string of 1s
   |             bit string of 0s
   ^             0
   &&            1
   ||            0
The lack of other operations typically found in other parallel programming languages, such as min
and max, is due to the lack of these operators in C/C++. The FORTRAN version of OpenMP
does have min and max.3
Note that the reduction variables must be shared by the threads, and apparently the only acceptable
way to do so in this case is to declare them as global variables.
A reduction variable must be scalar, in C/C++. It can be an array in FORTRAN.
4.4 Example: Mandelbrot Set
Note, though, that plain min and max would not help in our Dijkstra example above, as we not only need to find
the minimum value, but also need the vertex which attains that value.
// compile with -D, e.g.
//
//    gcc -fopenmp -o manddyn Gove.c -DDYNAMIC
//
// to get the version that uses dynamic scheduling

#include <omp.h>
#include <complex.h>
#include <time.h>

float timediff(struct timespec t1, struct timespec t2)
{  if (t1.tv_nsec > t2.tv_nsec) {
      t2.tv_sec -= 1;
      t2.tv_nsec += 1000000000;
   }
   return t2.tv_sec-t1.tv_sec + 0.000000001 * (t2.tv_nsec-t1.tv_nsec);
}

#ifdef RC
// finds chunk among 0,...,n-1 to assign to thread number me among nth
// threads
void findmyrange(int n, int nth, int me, int *myrange)
{  int chunksize = n / nth;
   myrange[0] = me * chunksize;
   if (me < nth-1) myrange[1] = (me+1) * chunksize - 1;
   else myrange[1] = n - 1;
}

#include <stdlib.h>
#include <stdio.h>
// from http://www.cis.temple.edu/ingargio/cis71/code/randompermute.c
// It returns a random permutation of 0..n-1
int *rpermute(int n) {
   int *a = (int *) malloc(n*sizeof(int));
   // int *a = malloc(n*sizeof(int));
   int k;
   for (k = 0; k < n; k++)
      a[k] = k;
   for (k = n-1; k > 0; k--) {
      int j = rand() % (k+1);
      int temp = a[j];
      a[j] = a[k];
      a[k] = temp;
   }
   return a;
}
#endif

#define MAXITERS 1000

// globals
int count = 0;
int nptsside;
float side2;
float side4;

int inset(double complex c) {
   int iters;
   float rl,im;
   double complex z = c;
   for (iters = 0; iters < MAXITERS; iters++) {
      z = z*z + c;
      rl = creal(z);
      im = cimag(z);
      if (rl*rl + im*im > 4) return 0;
   }
   return 1;
}

int *scram;

void dowork()
{
#ifdef RC
#pragma omp parallel reduction(+:count)
#else
#pragma omp parallel
#endif
   {
      int x,y;  float xv,yv;
      double complex z;
#ifdef STATIC
#pragma omp for reduction(+:count) schedule(static)
#elif defined DYNAMIC
#pragma omp for reduction(+:count) schedule(dynamic)
#elif defined GUIDED
#pragma omp for reduction(+:count) schedule(guided)
#endif
#ifdef RC
      int myrange[2];
      int me = omp_get_thread_num();
      int nth = omp_get_num_threads();
      int i;
      findmyrange(nptsside,nth,me,myrange);
      for (i = myrange[0]; i <= myrange[1]; i++) {
         x = scram[i];
#else
      for (x = 0; x < nptsside; x++) {
#endif
         for (y = 0; y < nptsside; y++) {
            xv = (x - side2) / side4;
            yv = (y - side2) / side4;
            z = xv + yv*I;
            if (inset(z)) {
               count++;
            }
         }
      }
   }
}

int main(int argc, char **argv)
{
   nptsside = atoi(argv[1]);
   side2 = nptsside / 2.0;
   side4 = nptsside / 4.0;
   struct timespec bgn,nd;
   clock_gettime(CLOCK_REALTIME, &bgn);
#ifdef RC
   scram = rpermute(nptsside);
#endif
   dowork();
   // implied barrier
   printf("%d\n",count);
   clock_gettime(CLOCK_REALTIME, &nd);
   printf("%f\n",timediff(bgn,nd));
}
The code is similar to that of a number of books and Web sites, such as the Gove book cited in
Section 2.2. Here RC is the random chunk method discussed in Section 2.4.
4.5 The Task Directive
This is new to OpenMP 3.0. The basic idea is to set up a task queue: When a thread encounters
a task directive, it arranges for some thread to execute the associated block, at some future time. The
first thread can continue. Note that the task might not execute right away; it may have to wait
for some thread to become free after finishing another task. Also, there may be more tasks than
threads, also causing some threads to wait.
Note that we could arrange for all this ourselves, without task. We'd set up our own work queue,
as a shared variable, and write our code so that whenever a thread finished a unit of work, it would
delete the head of the queue. Whenever a thread generated a unit of work, it would add it to the
queue. Of course, the deletion and addition would have to be done atomically. All this would amount
to a lot of coding on our part, so task really simplifies the programming.
4.5.1 Example: Quicksort
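The full listing is not reproduced above. The following is a hedged sketch of a task-based quicksort, built around the names used in the discussion below (qs(), separate(), firstcall, z, zstart, zend); the structure and the partition routine are illustrative, not necessarily the original code.

// hypothetical helper: partitions z[zstart..zend] around a pivot and returns
// the pivot's final position (a standard quicksort partition step)
int separate(int *z, int zstart, int zend)
{  int i, pivot = z[zend], last = zstart - 1, tmp;
   for (i = zstart; i < zend; i++)
      if (z[i] <= pivot) {
         last++;
         tmp = z[last]; z[last] = z[i]; z[i] = tmp;
      }
   tmp = z[last+1]; z[last+1] = z[zend]; z[zend] = tmp;
   return last + 1;
}

void qs(int *z, int zstart, int zend, int firstcall)
{  int part;
   if (firstcall == 1) {
#pragma omp parallel
#pragma omp single nowait
      qs(z,0,zend,0);          // one thread starts the recursion; tasks spread the work
   } else if (zstart < zend) {
      part = separate(z,zstart,zend);
#pragma omp task
      qs(z,zstart,part-1,0);   // hand one half to some thread, eventually
      qs(z,part+1,zend,0);     // handle the other half ourselves
#pragma omp taskwait
   }
}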
The code
if (firstcall == 1) {
#pragma omp single nowait
qs(z,0,zend,0);
gets things going. We want only one thread to execute the root of the recursion tree, hence the
need for the single clause. After that, the code
part = separate(z,zstart,zend);
#pragma omp task
qs(z,zstart,part-1,0);
sets up a call to a subtree, with the task directive stating, in effect, "OMP system, please make sure that
this subtree is handled by some thread eventually."
There are various refinements, such as the barrier-like taskwait clause.
4.6 Other Synchronization Issues
Earlier we saw the critical and barrier constructs. There is more to discuss, which we do here.
4.6.1 The atomic Clause
The critical construct not only serializes your program, but it also adds a lot of overhead. If your
critical section involves just a one-statement update to a shared variable, e.g.
x += y;
etc., then the OpenMP compiler can take advantage of an atomic hardware instruction, e.g. the
LOCK prefix on Intel, to set up an extremely efficient critical section. For example:
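(The pragma below is standard OpenMP syntax; x and y are the variables from the statement above.)

#pragma omp atomic
x += y;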
4.6.2 Memory Consistency and the flush Pragma
Consider a shared-memory multiprocessor system with coherent caches, and a shared, i.e. global,
variable x. If one thread writes to x, you might think that the cache coherency system will ensure
that the new value is visible to other threads. But as discussed in Section 3.6, it is not quite so
simple as this.
For example, the compiler may store x in a register, and update x itself at certain points. In
between such updates, since the memory location for x is not written to, the cache will be unaware
of the new value, which thus will not be visible to other threads. If the processors have write buffers
etc., the same problem occurs.
In other words, we must account for the fact that our program could be run on different kinds of
hardware with different memory consistency models. Thus OpenMP must have its own memory
consistency model, which is then translated by the compiler to mesh with the hardware.
OpenMP takes a relaxed consistency approach, meaning that it forces updates to memory
(flushes) at all synchronization points, i.e. at:
- barrier
- entry/exit to/from critical
- entry/exit to/from ordered
- entry/exit to/from parallel
- exit from parallel for
- exit from parallel sections
- exit from single
In between synchronization points, one can force an update to x via the flush pragma:
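(Standard OpenMP syntax; x is the shared variable from the discussion above.)

#pragma omp flush (x)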
The flush operation is obviously architecture-dependent. OpenMP compilers will typically have the
proper machine instructions available for some common architectures. For the rest, it can force a
flush at the hardware level by doing lock/unlock operations, though this may be costly in terms of
time.
4.7 Combining the parallel and for Pragmas
In our examples of the for pragma above, that pragma would come within a block headed by
a parallel pragma. The latter specifies that a team of threads is to be created, with each one
executing the given block, while the former specifies that the various iterations of the loop are to
be distributed among the threads. As a shortcut, we can combine the two pragmas:
#pragma omp parallel for
4.8 The Rest of OpenMP
There is much, much more to OpenMP than what we have seen here. To see the details, there
are many Web pages you can check, and there is also the excellent book, Using OpenMP: Portable
Shared Memory Parallel Programming, by Barbara Chapman, Gabriele Jost and Ruud Van Der
Pas, MIT Press, 2008. The book by Gove cited in Section 2.2 also includes coverage of OpenMP.
4.9 Compiling, Running and Debugging OpenMP Code
4.9.1
Compiling
There are a number of open source compilers available for OpenMP, including:
Omni: This is available at (http://phase.hpcc.jp/Omni/). To compile an OpenMP program in x.c and create an executable file x, run
omcc -g -o x x.c
4.9.2
Running
4.9.3
Debugging
Since OpenMP is essentially just an interface to threads, your debugging tool's threads facilities
should serve you well. See Section 1.3.2.4 for the GDB case.
A possible problem, though, is that OpenMP's use of pragmas makes it difficult for the compilers
to maintain your original source code line numbers, and your function and variable names. But
with a little care, a symbolic debugger such as GDB can still be used. Here are some tips for the
compilers mentioned above, using GDB as our example debugging tool:
- GCC: GCC maintains line numbers and names well. In earlier versions, it had a problem in
that it did not retain names of local variables within blocks controlled by omp parallel
at all. That problem was fixed in version 4.4 of the GCC suite, but seems to have slipped back
in with some later versions! This may be due to compiler optimizations that place variables
in registers.
4
You may find certain subversions of GCC 4.1 can be used too.
- Omni: The function main() in your executable is actually in the OpenMP library, and your
function main() is renamed _ompc_main(). So, when you enter GDB, first set a breakpoint
at your own code:
(gdb) b _ompc_main
Then run your program to this breakpoint, and set whatever other breakpoints you want.
You should find that your other variable and function names are unchanged.
- Ompi: Older versions also changed your function names, but the current version (1.2.0)
doesn't. Works fine in GDB.
4.10
Performance
As is usually the case with parallel programming, merely parallelizing a program won't necessarily
make it faster, even on shared-memory hardware. Operations such as critical sections, barriers and
so on serialize an otherwise-parallel program, sapping much of its speed. In addition, there are
issues of cache coherency transactions, false sharing etc.
4.10.1 The Effect of Problem Size
To illustrate this, I ran our original Dijkstra example (Section 4.2) on various graph sizes, on a quad
core machine. Here are the timings:
nv       nth     time
1000      1      0.005472
1000      2      0.011143
1000      4      0.029574
The more parallelism we had, the slower the program ran! The synchronization overhead was just
too much to be compensated by the parallel computation.
However, parallelization did bring benefits on larger problems:
nv       nth     time
25000     1      2.861814
25000     2      1.710665
25000     4      1.453052
4.10.2 Some Fine Tuning
How could we make our Dijkstra code faster? One idea would be to eliminate the critical section.
Recall that in each iteration, the threads compute their local minimum distance values mymd and
mymv, and then update the global values md and mv. Since the update must be atomic, this causes
some serialization of the program. Instead, we could have the threads store their values mymd
and mymv in a global array mymins, with each thread using a separate pair of locations within
that array, and then at the end of the iteration we could have just one task scan through mymins
and update md and mv.
Here is the resulting code:
// Dijkstra.c

// usage:  dijkstra nv print
// where nv is the size of the graph, and print is 1 if graph and min
// distances are to be printed out, 0 otherwise

#include <omp.h>

int *mymins;   // holds each thread's (mymd,mymv) pair

void dowork()
{
#pragma omp parallel
   {  int startv,endv,  // start, end vertices for my thread
          step,  // whole procedure goes nv steps
          me,
          mymv;  // vertex which attains the min value in my chunk
      unsigned mymd;  // min value found by this thread
      int i;
      me = omp_get_thread_num();
#pragma omp single
      {  nth = omp_get_num_threads();
         if (nv % nth != 0) {
            printf("nv must be divisible by nth\n");
            exit(1);
         }
         chunk = nv/nth;
         mymins = malloc(2*nth*sizeof(int));
      }
      startv = me * chunk;
      endv = startv + chunk - 1;
      for (step = 0; step < nv; step++) {
         // find closest vertex to 0 among notdone; each thread finds
         // closest in its group, then we find overall closest
         findmymin(startv,endv,&mymd,&mymv);
         mymins[2*me] = mymd;
         mymins[2*me+1] = mymv;
#pragma omp barrier
         // mark new vertex as done
#pragma omp single
         {  md = largeint;  mv = 0;
            for (i = 1; i < nth; i++)
               if (mymins[2*i] < md) {
                  md = mymins[2*i];
                  mv = mymins[2*i+1];
               }
            notdone[mv] = 0;
         }
         // now update my section of mind
         updatemind(startv,endv);
#pragma omp barrier
      }
   }
}
Let's take a look at the latter part of the code for one iteration:
findmymin(startv,endv,&mymd,&mymv);
mymins[2*me] = mymd;
mymins[2*me+1] = mymv;
#pragma omp barrier
// mark new vertex as done
#pragma omp single
{  notdone[mv] = 0;
   for (i = 1; i < nth; i++)
      if (mymins[2*i] < md) {
         md = mymins[2*i];
         mv = mymins[2*i+1];
      }
}
// now update my section of mind
updatemind(startv,endv);
#pragma omp barrier
The call to findmymin() is as before; this thread finds the closest vertex to 0 among this thread's
range of vertices. But instead of comparing the result to md and possibly updating it and mv, the
thread simply stores its mymd and mymv in the global array mymins. After all threads have
done this and then waited at the barrier, we have just one thread update md and mv.
Let's see how well this tack worked:

nv       nth     time
25000     1      2.546335
25000     2      1.449387
25000     4      1.411387
This brought us about a 15% speedup in the two-thread case, though less for four threads.
What else could we do? Here are a few ideas:
- False sharing could be a problem here. To address it, we could make mymins much longer,
changing the places at which the threads write their data, leaving most of the array as padding.
- We could try the modification of our program in Section 4.3.1, in which we use the OpenMP
for pragma, as well as the refinements stated there, such as schedule.
- We could try combining all of the ideas here.
4.10.3
OpenMP Internals
We may be able to write faster code if we know a bit about how OpenMP works inside.
You can get some idea of this from your compiler. For example, if you use the -t option with the
Omni compiler, or -k with Ompi, you can inspect the result of the preprocessing of the OpenMP
pragmas.
Here for instance is the code produced by Omni from the call to findmymin() in our Dijkstra
program:
# 93 "Dijkstra.c"
findmymin(startv,endv,&(mymd),&(mymv));{
_ompc_enter_critical(&__ompc_lock_critical);
# 96 "Dijkstra.c"
if((mymd)<(((unsigned )(md)))){
# 97 "Dijkstra.c"
(md)=(((int )(mymd)));
# 97 "Dijkstra.c"
(mv)=(mymv);
}_ompc_exit_critical(&__ompc_lock_critical);
Fortunately Omni saves the line numbers from our original source file, but the pragmas have been
replaced by calls to OpenMP library functions.
With Ompi, while preprocessing your file x.c, the compiler produces an intermediate file x_ompi.c,
and the latter is what is actually compiled. Your function main is renamed to ompi_originalMain().
Your other functions and variables are renamed. For example in our Dijkstra code, the function
dowork() is renamed to dowork_parallel_0. And by the way, all indenting is lost! So it's a bit
hard to read, but can be very instructive.
The document The GNU OpenMP Implementation, http://pl.postech.ac.kr/~gla/cs700-07f/
ref/openMp/libgomp.pdf, includes a good outline of how the pragmas are translated.
4.11 Example: Root Finding
The application is described in the comments, but here are a couple of things to look for in
particular:
The variables curra and currb are shared by all the threads, but due to the nature of the
application, no critical sections are needed.
On the other hand, the barrier is essential. The reader should ponder what calamities would
occur without it.
Note the disclaimer in the comments, to the effect that parallelizing this application will be fruitful
only if the function f() is very time-consuming to evaluate. It might be the output of some complex
simulation, for instance, with the argument to f() being some simulation parameter.
#include <omp.h>
#include <math.h>

// OpenMP example:  root finding

// the function f() is known to be negative
// at a, positive at b, and thus has at
// least one root in (a,b); if there are
// multiple roots, only one is found;
// the procedure runs for niters iterations
//
// strategy:  in each iteration, the current
// interval is split into nth equal parts,
// and each thread checks its subinterval
// for a sign change of f(); if one is
// found, this subinterval becomes the
// new current interval; the current guess
// for the root is the left endpoint of the
// current interval
//
// of course, this approach is useful in
// parallel only if f() is very expensive
// to evaluate
//
// for simplicity, assumes that no endpoint
// of a subinterval will ever exactly
// coincide with a root

float root(float (*f)(float), float inita, float initb, int niters) {
   float curra = inita;
   float currb = initb;
#pragma omp parallel
   {
      int nth = omp_get_num_threads();
      int me = omp_get_thread_num();
      int iter;
      for (iter = 0; iter < niters; iter++) {
#pragma omp barrier
         float subintwidth = (currb - curra) / nth;
         float myleft = curra + me * subintwidth;
         float myright = myleft + subintwidth;
         if ((*f)(myleft) < 0 && (*f)(myright) > 0) {
            curra = myleft;
            currb = myright;
         }
      }
   }
   return curra;
}

float testf(float x) {
   return pow(x-2.1,3);
}
4.12 Example: Mutual Outlinks
Consider the example of Section 2.4.3. We have a network graph of some kind, such as Web
links. For any two vertices, say any two Web sites, we might be interested in mutual outlinks, i.e.
outbound links that are common to two Web sites.
The OpenMP code below finds the mean number of mutual outlinks, among all pairs of sites in a
set of Web sites. Note that it uses the method for load balancing presented in Section 2.4.3.
#include <omp.h>
#include <stdio.h>

float dowork()
{
#pragma omp parallel
   {  int pn1,pn2,i,mysum=0;
      int me = omp_get_thread_num();
      nth = omp_get_num_threads();
      // in checking all (i,j) pairs, partition the work according to i;
      // to get good load balance, this thread me will handle all i that equal
      // me mod nth
      for (i = me; i < n; i += nth) {
         mysum += procpairs(i);
      }
#pragma omp atomic
      tot += mysum;
#pragma omp barrier
   }
   int divisor = n * (n-1) / 2;
4.13 Example: Transforming an Adjacency Matrix
Consider a graph adjacency matrix for a directed graph, say

0 1 0 0
1 0 0 1
0 1 0 1
1 1 1 0
                                                              (4.1)
with row and column numbering starting at 0, not 1. We'd like to transform this to a two-column
matrix that displays the links, in this case

0 1
1 0
1 3
2 1
2 3
3 0
3 1
3 2
                                                              (4.2)
For instance, there is a 1 on the far right, second row of the above matrix, meaning that in the
graph there is an edge from vertex 1 to vertex 3. This results in the row (1,3) in the transformed
matrix seen above.
Suppose further that we require this listing to be in lexicographical order, sorted on source vertex
and then on destination vertex. Here is code to do this computation in OpenMP:
// takes a graph adjacency matrix for a directed graph, and converts it
// to a 2-column matrix of pairs (i,j), meaning an edge from vertex i to
// vertex j; the output matrix must be in lexicographical order

// not claimed efficient, either in speed or in memory usage

#include <omp.h>

// needs -lrt link flag for C++
#include <time.h>
float timediff(struct timespec t1, struct timespec t2)
{  if (t1.tv_nsec > t2.tv_nsec) {
      t2.tv_sec -= 1;
      t2.tv_nsec += 1000000000;
   }
   return t2.tv_sec-t1.tv_sec + 0.000000001 * (t2.tv_nsec-t1.tv_nsec);
}

// transgraph() does this work
// arguments:
//    adjm:  the adjacency matrix (NOT assumed symmetric), 1 for edge, 0
//           otherwise; note: matrix is overwritten by the function
//    n:  number of rows and columns of adjm
//    nout:  output, number of rows in returned matrix
// return value:  pointer to the converted matrix
int *transgraph(int *adjm, int n, int *nout)
{
   int *outm,    // to become the output matrix
       *num1s,   // i-th element will be the number of 1s in row i of adjm
       *cumul1s; // cumulative sums in num1s
#pragma omp parallel
   {  int i,j,m;
      int me = omp_get_thread_num(),
          nth = omp_get_num_threads();
      int myrows[2];
      int tot1s;
      int outrow,num1si;
#pragma omp single
      {
         num1s = malloc(n*sizeof(int));
         cumul1s = malloc((n+1)*sizeof(int));
      }
      // determine the rows in adjm to be handled by this thread
4.14 Locks with OpenMP
Though one of OpenMP's best virtues is that you can avoid working with those pesky lock variables
needed for straight threads programming, there are still some instances in which lock variables may
be useful. OpenMP does provide for locks (a small usage sketch follows this list):

- declare your locks to be of type omp_lock_t
- call omp_set_lock() to lock the lock
- call omp_unset_lock() to unlock the lock
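A minimal illustrative sketch (omp_init_lock() and omp_destroy_lock() are also part of the standard API; the variable names are chosen for this sketch):

#include <omp.h>

omp_lock_t lck;
int total = 0;   // shared

int main(void)
{
   omp_init_lock(&lck);
#pragma omp parallel
   {
      omp_set_lock(&lck);     // only one thread at a time past this point
      total += 1;
      omp_unset_lock(&lck);
   }
   omp_destroy_lock(&lck);
   return 0;
}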
4.15 Other Examples of OpenMP Code in This Book
There are additional OpenMP examples in later sections of this book, such as:5
sampling bucket sort, Section 1.3.2.6
5
If you are reading this presentation on OpenMP separately from the book, the book is at http://heather.cs.
ucdavis.edu/~matloff/158/PLN/ParProcBook.pdf
Chapter 5
Introduction to GPU Programming with CUDA
5.1
Overview
The video game market is so lucrative that the industry has developed ever-faster GPUs, in order
to handle ever-faster and ever-more visually detailed video games. These actually are parallel
processing hardware devices, so around 2003 some people began to wonder if one might use them
for parallel processing of nongraphics applications.
Originally this was cumbersome. One needed to figure out clever ways of mapping one's application
to some kind of graphics problem, i.e. ways of disguising one's problem so that it appeared to be
doing graphics computations. Though some high-level interfaces were developed to automate this
transformation, effective coding required some understanding of graphics principles.
But current-generation GPUs separate out the graphics operations, and now consist of multiprocessor elements that run under the familiar shared-memory threads model. Thus they are easily
programmable. Granted, effective coding still requires an intimate knowledge of the hardware, but
at least it's (more or less) familiar hardware, not requiring knowledge of graphics.
Moreover, unlike a multicore machine, with the ability to run just a few threads at one time, e.g.
four threads on a quad core machine, GPUs can run hundreds or thousands of threads at once.
There are various restrictions that come with this, but you can see that there is fantastic potential
for speed here.
NVIDIA has developed the CUDA language as a vehicle for programming on their GPUs. It's
basically just a slight extension of C, and has become very popular. More recently, the OpenCL
language has been developed by Apple, AMD and others (including NVIDIA). It too is a slight
extension of C, and it aims to provide a uniform interface that works with multicore machines in
addition to GPUs. OpenCL is not yet in as broad use as CUDA, so our discussion here focuses on
CUDA and NVIDIA GPUs.
Also, the discussion will focus on NVIDIA's original Tesla architecture, which led to the second
generation, Fermi, and then Kepler. Unless otherwise stated, all statements here refer to Tesla.
Some terminology:
A CUDA program consists of code to be run on the host, i.e. the CPU, and code to run on
the device, i.e. the GPU.
A function that is called by the host to execute on the device is called a kernel.
Threads in an application are grouped into blocks. The entirety of blocks is called the grid
of that application.
5.2  Sample Program
Here's a sample program. And I've kept the sample simple: It just finds the sums of all the rows
of a matrix.
#include <stdio.h>
#include <stdlib.h>
#include <cuda.h>

// CUDA example:  finds row sums of an integer matrix m

// find1elt() finds the rowsum of one row of the nxn matrix m, storing the
// result in the corresponding position in the rowsum array rs; matrix
// stored as 1-dimensional, row-major order

__global__ void find1elt(int *m, int *rs, int n)
{
   int rownum = blockIdx.x;  // this thread will handle row # rownum
   int sum = 0;
   for (int k = 0; k < n; k++)
      sum += m[rownum*n+k];
   rs[rownum] = sum;
}

int main(int argc, char **argv)
{
   int n = atoi(argv[1]);  // number of matrix rows/cols
   int *hm,   // host matrix
       *dm,   // device matrix
       *hrs,  // host rowsums
       *drs;  // device rowsums
   int msize = n * n * sizeof(int);  // size of matrix in bytes
   // allocate space for host matrix
   hm = (int *) malloc(msize);
   // as a test, fill matrix with consecutive integers
   int t = 0,i,j;
   for (i = 0; i < n; i++) {
      for (j = 0; j < n; j++) {
         hm[i*n+j] = t++;
      }
   }
   // allocate space for device matrix
   cudaMalloc((void **)&dm,msize);
   // copy host matrix to device matrix
   cudaMemcpy(dm,hm,msize,cudaMemcpyHostToDevice);
   // allocate host, device rowsum arrays
   int rssize = n * sizeof(int);
   hrs = (int *) malloc(rssize);
   cudaMalloc((void **)&drs,rssize);
   // set up parameters for threads structure
   dim3 dimGrid(n,1);      // n blocks
   dim3 dimBlock(1,1,1);   // 1 thread per block
   // invoke the kernel
   find1elt<<<dimGrid,dimBlock>>>(dm,drs,n);
   // wait for kernel to finish
   cudaThreadSynchronize();
   // copy row vector from device to host
   cudaMemcpy(hrs,drs,rssize,cudaMemcpyDeviceToHost);
   // check results
   if (n < 10) for(int i=0; i<n; i++) printf("%d\n",hrs[i]);
   // clean up
   free(hm);
   cudaFree(dm);
   free(hrs);
   cudaFree(drs);
}
This is mostly C, with a bit of CUDA added here and there. Here's how the program works:
Note that unlike kernel functions, device functions can have return values, e.g. int above.
When a kernel is called, each thread runs it. Each thread receives the same arguments.
Each block and thread has an ID, stored in programmer-accessible structs blockIdx and
threadIdx. We'll discuss the details later, but for now, we'll just note that here the statement
int rownum = blockIdx.x;
picks up the block number, which our code in this example uses to determine which row to
sum.
One calls cudaMalloc() on the host to dynamically allocate space on the device's memory.1
Execution of the statement
cudaMalloc((void **)&drs,rssize);
allocates space on the device, pointed to by drs, a variable in the host's address space.
The space allocated by a cudaMalloc() call on the device is global to all kernels, and resides
in the global memory of the device (details on memory types later).
One can also allocate device memory statically. For example, the statement
__device__ int z[100];
appearing outside any function definition would allocate space on device global memory, with
scope global to all kernels. However, it is not accessible to the host.
Data is transferred to and from the host and device memories via cudaMemcpy(). The
fourth argument specifies the direction, e.g. cudaMemcpyHostToDevice, cudaMemcpyDeviceToHost or cudaMemcpyDeviceToDevice.
Kernels return void values, so values are returned via a kernels arguments.
1
This function cannot be called from the device itself. However, malloc() is available from the device, and device
memory allocated by it can be copied to the host. See the NVIDIA programming guide for details.
Device functions (which we don't have here) can return values. They are called only by kernel
functions or other device functions.
Note carefully that a call to the kernel doesn't block; it returns immediately. For that reason,
the code above has a host barrier call, to avoid copying the results back to the host from the
device before they're ready:
cudaThreadSynchronize();
On the other hand, if our code were to have another kernel call, say on the next line after
find1elt<<<dimGrid,dimBlock>>>(dm,drs,n);
and if some of the second call's input arguments were the outputs of the first call, there would
be an implied barrier between the two calls; the second would not start execution before the
first finished.
Calls like cudaMemcpy() do block until the operation completes.
There is also a thread barrier available for the threads themselves, at the block level. The
call is
__syncthreads();
This can only be invoked by threads within a block, not across blocks. In other words, this
is barrier synchronization within blocks.
I've written the program so that each thread will handle one row of the matrix. I've chosen
to store the matrix in one-dimensional form in row-major order, and the matrix is of size n x
n, so the loop
for (int k = 0; k < n; k++)
sum += m[rownum*n+k];
will indeed traverse the n elements of row number rownum, and compute their sum. That
sum is then placed in the proper element of the output array:
rs[rownum] = sum;
After the kernel returns, the host must copy the result back from the device memory to the
host memory, in order to access the results of the call.
5.3  Understanding the Hardware
Scorecards, get your scorecards here! You can't tell the players without a scorecard (classic cry of
vendors at baseball games).
Know thy enemy (Sun Tzu, The Art of War).
The enormous computational potential of GPUs cannot be unlocked without an intimate understanding of the hardware. This of course is a fundamental truism in the parallel processing world,
but it is acutely important for GPU programming. This section presents an overview of the hardware.
5.3.1
Processing Units
A GPU consists of a large set of streaming multiprocessors (SMs). Since each SM is essentially
a multicore machine in its own right, you might say the GPU is a multi-multiprocessor machine.
Each SM consists of a number of streaming processors (SPs), individual cores. The cores run
threads, as with ordinary cores, but threads in an SM run in lockstep, to be explained below.
It is important to understand the motivation for this SM/SP hierarchy: Two threads located in
different SMs cannot synchronize with each other in the barrier sense. Though this sounds like
a negative at first, it is actually a great advantage, as the independence of threads in separate
SMs means that the hardware can run faster. So, if the CUDA application programmer can write
his/her algorithm so as to have certain independent chunks, and those chunks can be assigned to
different SMs (well see how, shortly), then thats a win.
Note that at present, word size is 32 bits. Thus for instance floating-point operations in hardware
were originally in single precision only, though newer devices are capable of double precision.
5.3.2
Thread Operation
GPU operation is highly threaded, and again, understanding of the details of thread operation is
key to good performance.
5.3.2.1
SIMT Architecture
When you write a CUDA application program, you partition the threads into groups called blocks.
The hardware will assign an entire block to a single SM, though several blocks can run in the same
SM. The hardware will then divide a block into warps, 32 threads to a warp. Knowing that the
hardware works this way, the programmer controls the block size and the number of blocks, and in
general writes the code to take advantage of how the hardware works.
The central point is that all the threads in a warp run the code in lockstep. During the machine
instruction fetch cycle, the same instruction will be fetched for all of the threads in the warp.
Then in the execution cycle, each thread will either execute that particular instruction or execute
nothing. The execute-nothing case occurs in the case of branches; see below. This is the classical
single instruction, multiple data (SIMD) pattern used in some early special-purpose computers
such as the ILLIAC; here it is called single instruction, multiple thread (SIMT).
The syntactic details of grid and block configuration will be presented in Section 5.3.4.
5.3.2.2  Thread Divergence
The SIMT nature of thread execution has major implications for performance. Consider what
happens with if/then/else code. If some threads in a warp take the then branch and others go
in the else direction, they cannot operate in lockstep. That means that some threads must wait
while others execute. This renders the code at that point serial rather than parallel, a situation
called thread divergence. As one CUDA Web tutorial points out, this can be a performance
killer. (On the other hand, threads in the same block but in different warps can diverge with no
problem.)
5.3.2.3
OS in Hardware
Each SM runs the threads on a timesharing basis, just like an operating system (OS). This timesharing is implemented in the hardware, though, not in software as in the OS case.
The hardware OS runs largely in analogy with an ordinary OS:
A process in an ordinary OS is given a fixed-length timeslice, so that processes take turns
running. In a GPU's hardware OS, warps take turns running, with fixed-length timeslices.
With an ordinary OS, if a process reaches an input/output operation, the OS suspends the
process while I/O is pending, even if its turn is not up. The OS then runs some other process
instead, so as to avoid wasting CPU cycles during the long period of time needed for the I/O.
With an SM, though, the analogous situation occurs when there is a long memory operation,
to global memory; if a warp of threads needs to access global memory (including local memory;
see below), the SM will schedule some other warp while the memory access is pending.
The hardware support for threads is extremely good; a context switch takes very little time, quite
a contrast to the OS case. Moreover, as noted above, the long latency of global memory may be
solvable by having a lot of threads that the hardware can timeshare to hide that latency; while
one warp is fetching data from memory, another warp can be executing, thus not losing time due
to the long fetch delay. For these reasons, CUDA programmers typically employ a large number
of threads, each of which does only a small amount of work; again, quite a contrast to something
like OpenMP.
5.3.3
Memory Structure
The GPU memory hierarchy plays a key role in performance. Let's discuss the two most important
types of memory first: shared and global.
5.3.3.1  Shared and Global Memory
Here is a summary:

type          shared            global
scope         glbl. to block    glbl. to app.
size          small             large
location      on-chip           off-chip
speed         blinding          molasses
lifetime      kernel            application
host access?  no                yes
cached?       no                no
In prose form:
Shared memory: All the threads in an SM share this memory, and use it to communicate
among themselves, just as is the case with threads in CPUs. Access is very fast, as this
memory is on-chip. It is declared inside the kernel, or in the kernel call (details below).
On the other hand, shared memory is small, currently 16K bytes per SM, and the data stored
in it are valid only for the life of the currently-executing kernel. Also, shared memory cannot
be accessed by the host.
Global memory: This is shared by all the threads in an entire application, and is persistent
across kernel calls, throughout the life of the application, i.e. until the program running on
the host exits. It is usually much larger than shared memory. It is accessible from the host.
Pointers to global memory can (but do not have to) be declared outside the kernel.
On the other hand, global memory is off-chip and very slow, taking hundreds of clock cycles
per access instead of just a few. As noted earlier, this can be ameliorated by exploiting latency
hiding; we will elaborate on this in Section 5.3.3.2.
The reader should pause here and reread the above comparison between shared and global memories.
The key implication is that shared memory is used essentially as a programmer-managed cache.
Data will start out in global memory, but if a variable is to be accessed multiple times by the GPU
code, it's probably better for the programmer to write code that copies it to shared memory, and
then access the copy instead of the original. If the variable is changed and is to be eventually
transmitted back to the host, the programmer must include code to copy it back to global memory.
Accesses to global and shared memory are done via half-warps, i.e. an attempt is made to do all
memory accesses in a half-warp simultaneously. In that sense, only threads in a half-warp run
simultaneously, but the full warp is scheduled to run contemporaneously by the hardware OS, first
one half-warp and then the other.
The host can access global memory via cudaMemcpy(), as seen earlier. It cannot access shared
memory. Here is a typical pattern:
__global__ void abckernel(int *abcglobalmem)
{
__shared__ int abcsharedmem[100];
// ... code to copy some of abcglobalmem to some of abcsharedmem
// ... code for computation
// ... code to copy some of abcsharedmem to some of abcglobalmem
}
Typically you would write the code so that each thread deals with its own portion of the shared
data, e.g. its own portion of abcsharedmem and abcglobalmem above. However, all the threads
in that block can read/write any element in abcsharedmem.
Shared memory consistency (recall Section 3.6) is sequential within a thread, but relaxed among
threads in a block: A write by one thread is not guaranteed to be visible to the others in a block
until __syncthreads() is called. On the other hand, writes by a thread will be visible to that same
thread in subsequent reads without calling __syncthreads(). Among the implications of this is
that if each thread writes only to portions of shared memory that are not read by other threads in
the block, then __syncthreads() need not be called.
In the code fragment above, we allocated the shared memory through a C-style declaration:
__shared__ int abcsharedmem[100];
It is also possible to allocate shared memory in the kernel call, along with the block and thread
configuration. Here is an example:
#include <stdio.h>
#include <stdlib.h>
#include <cuda.h>

// (the kernel doubleit() appears here; it declares an extern __shared__
// int array sv[], whose size is supplied as the third argument of the
// kernel call below)

int main(int argc, char **argv)
{
   int n = atoi(argv[1]);  // number of matrix rows/cols
   int *hv,  // host array
       *dv;  // device array
   int vsize = n * sizeof(int);  // size of array in bytes
   // allocate space for host array
   hv = (int *) malloc(vsize);
   // fill test array with consecutive integers
   int t = 0,i;
   for (i = 0; i < n; i++)
      hv[i] = t++;
   // allocate space for device array
   cudaMalloc((void **)&dv,vsize);
   // copy host array to device array
   cudaMemcpy(dv,hv,vsize,cudaMemcpyHostToDevice);
   // set up parameters for threads structure
   dim3 dimGrid(1,1);
   dim3 dimBlock(n,1,1);  // all n threads in the same block
   // invoke the kernel; third argument is amount of shared memory
   doubleit<<<dimGrid,dimBlock,vsize>>>(dv,n);
   // wait for kernel to finish
   cudaThreadSynchronize();
   // copy array from device to host
   cudaMemcpy(hv,dv,vsize,cudaMemcpyDeviceToHost);
   // check results
   if (n < 10) for (int i=0; i<n; i++) printf("%d\n",hv[i]);
   // clean up
   free(hv);
   cudaFree(dv);
}
Suppose within one device function we wish to have two extern __shared__ arrays. We cannot do
that literally, but we can share the space of a single declared array, say sv, via subarrays, e.g.

int *x = &sv[120];

which sets up x as an array beginning at element 120 of sv.
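To make that concrete, here is a minimal sketch (not the book's doubleit() kernel) of a kernel
that carves two logical arrays out of one kernel-call-allocated shared block; the names and the
split point are hypothetical:

// launched as somekernel<<<dimGrid,dimBlock,(120+80)*sizeof(int)>>>(...)
__global__ void somekernel(int *dglobal, int n)
{  extern __shared__ int sv[];   // sized by the 3rd kernel-call argument
   int *first = sv;              // first 120 ints
   int *second = &sv[120];       // remaining ints
   int me = threadIdx.x;
   if (me < 120) first[me] = dglobal[me];        // fill first subarray
   if (me < 80)  second[me] = dglobal[120+me];   // fill second subarray
   __syncthreads();
   // ... computation using first[] and second[] ...
}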
As noted, the latency (Section 2.5) for global memory is quite high, on the order of hundreds of
clock cycles. However, the hardware attempts to ameliorate this problem in a couple of ways.
First, as mentioned earlier, if a warp has requested a global memory access that will take a long
time, the hardware will schedule another warp to run while the first is waiting for the memory
access to complete. This is an example of a common parallel processing technique called latency
hiding.
Second, the bandwidth (Section 2.5) to global memory can be high, due to hardware actions called
coalescing. This simply means that if the hardware sees that the threads in this half-warp (or at
least the ones currently accessing global memory) are accessing consecutive words, the hardware
can execute the memory requests in groups of up to 32 words at a time. This works because the
memory is low-order interleaved (Section 3.2.1), and is true for both reads and writes.
The newer GPUs go even further, coalescing much more general access patterns, not just to consecutive words.
The programmer may be able to take advantage of coalescing, by a judicious choice of algorithms
and/or by inserting padding into arrays (Section 3.2.2).
5.3.3.3  Shared-Memory Banks
Shared memory is divided into banks, in a low-order interleaved manner (recall Section 3.2): Words
with consecutive addresses are stored in consecutive banks, mod the number of banks, i.e. wrapping
back to 0 when hitting the last bank. If for instance there are 8 banks, addresses 0, 8, 16,... will
be in bank 0, addresses 1, 9, 17,... will be in bank 1 and so on. (Actually, older devices have 16
banks, while newer ones have 32.) The fact that all memory accesses in a half-warp are attempted
simultaneously implies that the best access to shared memory arises when the accesses are to
different banks, just as for the case of global memory.
An exception occurs in broadcast. If all threads in the block wish to read from the same word in
the same bank, the word will be sent to all the requestors simultaneously without conflict. However,
if only some threads try to read the same word, there may or may not be a conflict, as the hardware
chooses a bank for broadcast in some unspecified way.
As in the discussion of global memory above, we should write our code to take advantage of these
structures.
The biggest performance issue with shared memory is its size, as little as 16K per SM in many
GPU cards. And remember, this is divvied up among the blocks on a given SM. If we have 4 blocks
running on an SM, each one can only use 16K/4 = 4K bytes of shared memory.
5.3.3.4  Host/Device Memory Transfer Performance
Copying data between host and device can be a major bottleneck. One way to ameliorate this is to
use cudaMallocHost() instead of malloc() when allocating memory on the host. This sets up
page-locked memory, meaning that it cannot be swapped out by the OS virtual memory system.
This allows the use of DMA hardware to do the memory copy, said to make cudaMemcpy() twice
as fast.
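For instance, a host array destined for the device might be set up as in the following sketch
(the array name, size and device pointer dx are hypothetical; dx is assumed already allocated
with cudaMalloc()):

int *hx;
// allocate page-locked ("pinned") host memory instead of using malloc()
cudaMallocHost((void **)&hx, 1000000*sizeof(int));
// ... fill hx ...
cudaMemcpy(dx, hx, 1000000*sizeof(int), cudaMemcpyHostToDevice);  // faster copy
// ...
cudaFreeHost(hx);   // pinned memory is freed with cudaFreeHost()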
5.3.3.5  Other Types of Device Memory
There are also other types of memory. Again, let's start with a summary; for texture memory, for
instance:

type          texture
scope         glbl. to app.
location      host+device cache
speed         fast if cache hit
lifetime      application
host access?  yes
cached?       read
Registers:
Each SM has a set of registers, much more numerous than in a CPU. Access to them is very
fast, said to be slightly faster than to shared memory.
The compiler normally stores the local variables for a device function in registers, but there
are exceptions. An array wont be placed in registers if the array is too large, or if the array
has variable index values, such as
int z[20],i;
...
y = z[i];
Since registers are not indexable by the hardware, the compiler cannot allocate z to registers
in this case. If on the other hand, the only code accessing z has constant indices, e.g. z[8],
the compiler may put z in registers.
Local memory:
This is physically part of global memory, but is an area within that memory that is allocated
by the compiler for a given thread. As such, it is slow, and accessible only by that thread.
The compiler allocates this memory for local variables in a device function if the compiler
cannot store them in registers. This is called register spill.
Constant memory:
As the name implies, its read-only from the device (read/write by the host), for storing values
that will not be changed by device code. It is off-chip, thus potentially slow, but has a cache
on the chip. At present, the size is 64K.
One designates this memory with __constant__, as a global variable in the source file. One
sets its contents from the host via cudaMemcpyToSymbol(), whose (simple form for the)
call is
cudaMemcpyToSymbol(var_name,pointer_to_source,number_bytes_copy,cudaMemcpyHostToDevice)
For example:
__constant__ int x;
// host code
int y = 3;
cudaMemcpyToSymbol("x",&y,sizeof(int));
...
// device code
int z;
z = x;
Note again that the name Constant refers to the fact that device code cannot change it.
But host code certainly can change it between kernel calls. This might be useful in iterative
algorithms like this:
// host code
for 1 to number of iterations
set Constant array x
call kernel (do scatter op)
cudaThreadSynchronize()
do gather op, using kernel results to form new x
// device code
use x together with thread-specific data
return results to host
Texture:
This is similar to constant memory, in the sense that it is read-only and cached. The difference
is that the caching is two-dimensional. The elements a[i][j] and a[i+1][j] are far from each
other in the global memory, but since they are close in a two-dimensional sense, they may
reside in the same cache line.
5.3.4
Threads Hierarchy
The programmer specifies the grid size (the numbers of rows and columns of blocks within a
grid) and the block size (numbers of rows, columns and layers of threads within a block). In
the first example above, this was done by the code
dim3 dimGrid(n,1);
dim3 dimBlock(1,1,1);
find1elt<<<dimGrid,dimBlock>>>(dm,drs,n);
Here the grid is specified to consist of n (n x 1) blocks, and each block consists of just one
(1 x 1 x 1) thread.
That last line is of course the call to the kernel. As you can see, CUDA extends C syntax to
allow specifying the grid and block sizes. CUDA will store this information in structs of type
dim3, in this case our variables gridDim and blockDim, accessible to the programmer,
again with member variables for the various dimensions, e.g. blockDim.x for the size of the
X dimension for the number of threads per block.
All threads in a block run in the same SM, though more than one block might be on the same
SM.
The coordinates of a block within the grid, and of a thread within a block, are merely
abstractions. If for instance one is programming computation of heat flow across a two-dimensional
slab, the programmer may find it clearer to use two-dimensional IDs for the
threads. But this does not correspond to any physical arrangement in the hardware.
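For instance, inside a kernel launched with a two-dimensional block, a thread might compute
which matrix element it owns roughly as follows (a sketch with hypothetical names, not code
from the book):

__global__ void work(float *a, int nrows, int ncols)
{  // global row and column indices of the element this thread handles
   int row = blockIdx.y * blockDim.y + threadIdx.y;
   int col = blockIdx.x * blockDim.x + threadIdx.x;
   if (row < nrows && col < ncols)
      a[row*ncols+col] += 1.0f;   // placeholder computation
}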
As noted, the motivation for the two-dimensional block arrangement is to make coding conceptually
simpler for the programmer if he/she is working on an application that is two-dimensional in nature.
For example, in a matrix application one's parallel algorithm might be based on partitioning the
matrix into rectangular submatrices (tiles), as we'll do in Section 11.2. In a small example there,
the matrix
\[ A = \begin{pmatrix} 1 & 5 & 12 \\ 0 & 3 & 6 \\ 4 & 8 & 2 \end{pmatrix} \qquad (5.1) \]

is partitioned as

\[ A = \begin{pmatrix} A_{00} & A_{01} \\ A_{10} & A_{11} \end{pmatrix}, \qquad (5.2) \]

where

\[ A_{00} = \begin{pmatrix} 1 & 5 \\ 0 & 3 \end{pmatrix} \qquad (5.3) \]
\[ A_{01} = \begin{pmatrix} 12 \\ 6 \end{pmatrix} \qquad (5.4) \]
\[ A_{10} = \begin{pmatrix} 4 & 8 \end{pmatrix} \qquad (5.5) \]

and

\[ A_{11} = \begin{pmatrix} 2 \end{pmatrix} \qquad (5.6) \]
We might then have one block of threads handle A00, another block handle A01 and so on. CUDA's
two-dimensional ID system for blocks makes life easier for programmers in such situations.
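A launch for such a tiled computation might be configured roughly as follows (a sketch assuming
an n x n matrix split evenly into TILE x TILE tiles; the kernel name matmultile() and the device
pointer da are hypothetical):

#define TILE 16
// one block per tile; block (blockIdx.y, blockIdx.x) handles tile A_yx
dim3 dimGrid(n/TILE, n/TILE);
dim3 dimBlock(TILE, TILE, 1);   // one thread per element of the tile
matmultile<<<dimGrid,dimBlock>>>(da, n);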
5.3.5
5.4  Synchronization, Within and Between Blocks
As mentioned earlier, a barrier for the threads in the same block is available by calling __syncthreads().
Note carefully that if one thread writes a variable to shared memory and another then reads that
variable, one must call this function (from both threads) in order to get the latest value. Keep in
mind that within a block, different warps will run at different times, making synchronization vital.
Remember too that threads across blocks cannot sync with each other in this manner. There
are, though, several atomic operations, i.e. read/modify/write actions that a thread can execute
without pre-emption (without interruption), available on both global and shared memory.
For example, atomicAdd() performs a fetch-and-add operation, as described in Section 3.4.3 of
this book. The call is

atomicAdd(address of integer variable, inc);

where address of integer variable is the address of the (device) variable to add to, and inc is
the amount to be added. The return value of the function is the value originally at that address
before the operation.
There are also atomicExch() (exchange the two operands), atomicCAS() (if the first operand
equals the second, replace the first by the third), atomicMin(), atomicMax(), atomicAnd(),
atomicOr(), and so on.
Use the -arch=sm_11 flag when compiling code that uses these atomic operations.
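As an illustration of the call form, here is a minimal sketch (not from the book, compiled with
-arch=sm_11 as just noted) of a kernel that builds a histogram in device global memory with
atomicAdd():

// each thread classifies one data item and atomically bumps the
// corresponding bin; hist[] must be zeroed before the kernel is called
__global__ void histker(int *data, int *hist, int n, int nbins)
{  int me = blockIdx.x * blockDim.x + threadIdx.x;
   if (me < n)
      atomicAdd(&hist[data[me] % nbins], 1);
}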
Though a barrier could in principle be constructed from the atomic operations, its overhead would
be quite high. In earlier models that was near a microsecond, and though that problem has
been ameliorated in more recent models, implementing a barrier in this manner would not
be much faster than attaining interblock synchronization by returning to the host and calling
cudaThreadSynchronize() there. Recall that the latter is a possible way to implement a barrier,
since global memory stays intact in between kernel calls, but again, it would be slow.
So, what if synchronization is really needed? This is the case, for instance, for iterative algorithms,
where all threads must wait at the end of each iteration.
If you have a small problem, maybe you can get satisfactory performance by using just one block.
You'll have to use a larger granularity, i.e. more work assigned to each thread. But using just one
block means you're using only one SM, thus only a fraction of the potential power of the machine.
If you use multiple blocks, though, your only feasible option for synchronization is to rely on returns
to the host, where synchronization occurs via cudaThreadSynchronize(). You would then have
the situation outlined in the discussion of Constant memory in Section 5.3.3.5.
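Concretely, the host-side pattern for such an iterative algorithm might look like this sketch
(kernel name and arguments hypothetical):

// interblock synchronization by returning to the host each iteration
for (int iter = 0; iter < niters; iter++) {
   oneiter<<<dimGrid,dimBlock>>>(dx, n);   // all blocks do one iteration
   cudaThreadSynchronize();                // barrier across all blocks
}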
5.5  Grid/Block Configuration Choices
Resource size considerations must be kept in mind when you design your code and your grid
configuration. In particular, note the following:
Each block in your code is assigned to some SM. It will be tied to that SM during the entire
execution of your kernel, though of course it will not constantly be running during that time.
If there are more blocks than can be accommodated by all the SMs, then some blocks will
need to wait for assignment; when a block finishes, that block's resources, e.g. shared memory,
can now be assigned to a waiting block.
The programmer has no control over which block is assigned to which SM.
Within a block, threads execute by the warp, 32 threads. At any given time, the SM is running
one warp, chosen by the GPU OS.
The GPU has a limit on the number of threads that can run on a single block, typically 512,
and on the total number of threads running on an SM, 768.
If a block contains fewer than 32 threads, only part of the processing power of the SM its
running on will be used. So block size should normally be at least 32. Moreover, for the same
reason, block size should ideally be a multiple of 32.
If your code makes use of shared memory, a larger block size may be better. On the other
hand, the larger the block size, the longer the time it will take for barrier synchronization.
We want to use the full power of the GPU, with its many SMs, thus implying a need to use
at least as many blocks as there are SMs (which may require smaller blocks).
Moreover, due to the need for latency hiding in memory access, we want to have lots of warps,
so that some will run while others are doing memory access.
Two threads doing unrelated work, or the same work but with many if/elses, would cause a
lot of thread divergence if they were in the same block.
A commonly-cited rule of thumb is to have between 128 and 256 threads per block.
Though there is a limit on the number of blocks, this limit will be much larger than the number of
SMs. So, you may have multiple blocks running on the same SM. Since execution is scheduled by
the warp anyway, there appears to be no particular drawback to having more than one block on
the same SM.
5.6  Compiling, Running and Debugging CUDA Code
The -g -G options are for setting up debugging, the first for host code, the second for device code.
You may also need to specify
-I/your_CUDA_include_path
to pick up the file cuda.h. Run the code as you normally would.
You may need to take special action to set your library path properly. For example, on Linux
machines, set the environment variable LD LIBRARY PATH to include the CUDA library.
To determine the limits, e.g. maximum number of threads, for your device, use code like this:
cudaDeviceProp Props;
cudaGetDeviceProperties(&Props,0);
The 0 is for device 0, assuming you only have one device. The return value of cudaGetDeviceProperties() is a complex C struct whose components are listed at http://developer.download.nvidia.com/compute/cuda/2_3/toolkit/docs/online/group__CUDART__DEVICE_g5aa4f47938af8276f08074d0.html.

Here's a simple program to check some of the properties of device 0:
#include <cuda.h>
#include <stdio.h>

int main()
{
   cudaDeviceProp Props;
   cudaGetDeviceProperties(&Props,0);
   printf("shared mem: %d\n",Props.sharedMemPerBlock);
   printf("max threads/block: %d\n",Props.maxThreadsPerBlock);
   printf("max blocks: %d\n",Props.maxGridSize[0]);
   printf("total Const mem: %d\n",Props.totalConstMem);
}
Under older versions of CUDA, such as 2.3, one can debug using GDB as usual. You must compile
your program in emulation mode, using the -deviceemu command-line option. This is no longer
available as of version 3.2. CUDA also includes a special version of GDB, CUDA-GDB (invoked as
cuda-gdb) for real-time debugging. However, on Unix-family platforms it runs only if X11 is not
running. Short of dedicating a machine for debugging, you may find it useful to install a version
2.3 in addition to the most recent one to use for debugging.
5.7  Improving the Row Sums Program
The issues involving coalescing in Section 5.3.3.2 would suggest that our rowsum code might run
faster with column sums, to take advantage of the memory banking. (So the user would either need
to take the transpose first, or have his code set up so that the matrix is in transpose form to begin
with.) As two threads in the same half-warp march down adjoining columns in lockstep, they will
always be accessing adjoining words in memory.
So, I modified the program accordingly (not shown), and compiled the two versions, as rs and cs,
the row- and column-sum versions of the code, respectively.
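The modified kernel itself is not shown in the book; a minimal sketch of what such a column-sums
kernel might look like, assuming one thread per column, is:

// this thread handles column # colnum; at step k, threads with
// consecutive colnum values access consecutive words m[k*n+colnum],
// which the hardware can coalesce into fewer memory transactions
__global__ void find1eltcol(int *m, int *cs, int n)
{  int colnum = blockIdx.x * blockDim.x + threadIdx.x;
   if (colnum >= n) return;
   int sum = 0;
   for (int k = 0; k < n; k++)
      sum += m[k*n+colnum];
   cs[colnum] = sum;
}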
This did produce a small improvement (confirmed in subsequent runs, needed in any timing experiment):
pc5:~/CUDA% time rs 20000
2.585u 1.753s 0:04.54 95.3%
pc5:~/CUDA% time cs 20000
2.518u 1.814s 0:04.40 98.1%
For comparison, here is a CPU-only version of the column-sums computation:

// find1elt() finds the column sums of the nxn matrix m, storing the
// result in the corresponding position in the colsum array cs; matrix
// stored as 1-dimensional, row-major order

#include <stdio.h>
#include <stdlib.h>

void find1elt(int *m, int *cs, int n)
{
   int sum=0;
   int topofcol;
   int col,k;
   for (col = 0; col < n; col++) {
      topofcol = col;
      sum = 0;
      for (k = 0; k < n; k++)
         sum += m[topofcol+k*n];
      cs[col] = sum;
   }
}

int main(int argc, char **argv)
{
   int n = atoi(argv[1]);  // number of matrix rows/cols
   int *hm,   // host matrix
       *hcs;  // host column sums
   int msize = n * n * sizeof(int);  // size of matrix in bytes
   // allocate space for host matrix
   hm = (int *) malloc(msize);
   // as a test, fill matrix with consecutive integers
   int t = 0,i,j;
   for (i = 0; i < n; i++) {
      for (j = 0; j < n; j++) {
         hm[i*n+j] = t++;
      }
   }
   int cssize = n * sizeof(int);
   hcs = (int *) malloc(cssize);
   find1elt(hm,hcs,n);
   if (n < 10) for (i=0; i<n; i++) printf("%d\n",hcs[i]);
   // clean up
   free(hm);
   free(hcs);
}
Very impressive! No wonder people talk of CUDA in terms like "a supercomputer on our desktop."
And remember, this includes the time to copy the matrix from the host to the device (and to
copy the output array back). And we didn't even try to optimize thread configuration, memory
coalescing and bank usage, making good use of the memory hierarchy, etc.3
On the other hand, remember that this is an embarrassingly parallel application, and in many
applications we may have to settle for a much more modest increase, and work harder to get it.

3 Neither has the CPU-only version of the program been optimized. As pointed out by Bill Hsu, the row-major
version of that program should run faster than the column-major one, due to cache considerations.
5.8  Example: Finding the Mean Number of Mutual Outlinks
As in Sections 2.4.3 and 4.12, consider a network graph of some kind, such as Web links. For any
two vertices, say any two Web sites, we might be interested in mutual outlinks, i.e. outbound links
that are common to two Web sites. The CUDA code below finds the mean number of mutual
outlinks, among all pairs of sites in a set of Web sites.
#include <cuda.h>
#include <stdio.h>

// CUDA example:  finds mean number of mutual outlinks, among all pairs
// of Web sites in our set; in checking all (i,j) pairs, thread k will
// handle all i such that i mod totth = k, where totth is the number of
// threads

// procpairs() processes all pairs for a given thread
__global__ void procpairs(int *m, int *tot, int n)
{  int totth = gridDim.x * blockDim.x,  // total number of threads
       me = blockIdx.x * blockDim.x + threadIdx.x;  // my thread number
   int i,j,k,sum = 0;
   for (i = me; i < n; i += totth) {  // do various rows i
      for (j = i+1; j < n; j++) {     // do all rows j > i
         for (k = 0; k < n; k++)
            sum += m[n*i+k] * m[n*j+k];
      }
   }
   atomicAdd(tot,sum);
}

int main(int argc, char **argv)
{  int n = atoi(argv[1]),     // number of vertices
       nblk = atoi(argv[2]);  // number of blocks
   int *hm,    // host matrix
       *dm,    // device matrix
       htot,   // host grand total
       *dtot;  // device grand total
   int msize = n * n * sizeof(int);  // size of matrix in bytes
   // allocate space for host matrix
   hm = (int *) malloc(msize);
   // as a test, fill matrix with random 1s and 0s
   int i,j;
   for (i = 0; i < n; i++) {
      hm[n*i+i] = 0;
      for (j = 0; j < n; j++) {
         if (j != i) hm[i*n+j] = rand() % 2;
      }
   }
   // allocate space for device matrix
   cudaMalloc((void **)&dm,msize);
   // copy host matrix to device matrix
   cudaMemcpy(dm,hm,msize,cudaMemcpyHostToDevice);
   htot = 0;
   // set up device total and initialize it
   cudaMalloc((void **)&dtot,sizeof(int));
   cudaMemcpy(dtot,&htot,sizeof(int),cudaMemcpyHostToDevice);
   // set up parameters for threads structure
   dim3 dimGrid(nblk,1);
   dim3 dimBlock(192,1,1);
   // invoke the kernel
   procpairs<<<dimGrid,dimBlock>>>(dm,dtot,n);
   // wait for kernel to finish
   cudaThreadSynchronize();
   // copy total from device to host
   cudaMemcpy(&htot,dtot,sizeof(int),cudaMemcpyDeviceToHost);
   // check results
   if (n <= 15) {
      for (i = 0; i < n; i++) {
         for (j = 0; j < n; j++)
            printf("%d ",hm[n*i+j]);
         printf("\n");
      }
   }
   printf("mean = %f\n",htot/float((n*(n-1))/2));
   // clean up
   free(hm);
   cudaFree(dm);
   cudaFree(dtot);
}
Again we've used the method in Section 2.4.3 to partition the various pairs (i,j) among the different
threads. Note the use of atomicAdd().
The above code is hardly optimal. The reader is encouraged to find improvements.
5.9  Example: Finding Prime Numbers
// (the first part of this listing, including the helper functions
// initsp() and cpytoglb() that initialize the sprimes array and copy it
// back to device global memory, is not reproduced here)

// finds primes from 2 to n, storing the information in dprimes, with
// dprimes[i] being 1 if i is prime, 0 if composite; nth is the number
// of threads (threadDim somehow not recognized)
__global__ void sieve(int *dprimes, int n, int nth)
{
   extern __shared__ int sprimes[];
   int me = threadIdx.x;
   int nth1 = nth - 1;
   // initialize sprimes array, 1s for odds, 0 for evens
   initsp(sprimes,n,nth,me);
   // cross out multiples of various numbers m, with each thread doing
   // a chunk of m's; always check first to determine whether m has
   // already been found to be composite; finish when m*m > n
   int maxmult,m,startmult,endmult,chunk,i;
   for (m = 3; m*m <= n; m++) {
      if (sprimes[m] != 0) {
         // find largest multiple of m that is <= n
         maxmult = n / m;
         // now partition 2,3,...,maxmult among the threads
         chunk = (maxmult - 1) / nth;
         startmult = 2 + me*chunk;
         if (me < nth1) endmult = startmult + chunk - 1;
         else endmult = maxmult;
      }
      // OK, cross out my chunk
      for (i = startmult; i <= endmult; i++) sprimes[i*m] = 0;
   }
   __syncthreads();
   // copy back to device global memory for return to host
   cpytoglb(dprimes,sprimes,n,nth,me);
}

int main(int argc, char **argv)
{
   int n = atoi(argv[1]),    // will find primes among 1,...,n
       nth = atoi(argv[2]);  // number of threads
   int *hprimes,   // host primes list
       *dprimes;   // device primes list
   int psize = (n+1) * sizeof(int);  // size of primes lists in bytes
   // allocate space for host list
   hprimes = (int *) malloc(psize);
   // allocate space for device list
   cudaMalloc((void **)&dprimes,psize);
   dim3 dimGrid(1,1);
   dim3 dimBlock(nth,1,1);
   // invoke the kernel, including a request to allocate shared memory
   sieve<<<dimGrid,dimBlock,psize>>>(dprimes,n,nth);
   // check whether we asked for too much shared memory
   cudaError_t err = cudaGetLastError();
   if (err != cudaSuccess) printf("%s\n",cudaGetErrorString(err));
   // wait for kernel to finish
   cudaThreadSynchronize();
   // copy list from device to host
   cudaMemcpy(hprimes,dprimes,psize,cudaMemcpyDeviceToHost);
   // check results
   if (n <= 1000) for (int i=2; i<=n; i++)
      if (hprimes[i] == 1) printf("%d\n",i);
   // clean up
   free(hprimes);
   cudaFree(dprimes);
}
This code has been designed with some thought as to memory speed and thread divergence. Ideally,
we would like to use device shared memory if possible, and to exploit the lockstep, SIMD nature
of the hardware.
The code uses the classical Sieve of Eratosthenes, "crossing out" multiples of 2, 3, 5, 7 and so on
to get rid of all the composite numbers. However, the code here differs from that in Section 1.3.2.1,
even though both programs use the Sieve of Eratosthenes.
Say we have just two threads, A and B. In the earlier version, thread A might cross out all multiples
of 19 while B handles multiples of 23. In this new version, thread A deals with only some multiples
of 19 and B handles the others for 19. Then they both handle their own portions of multiples of
23, and so on. The thinking here is that the second version will be more amenable to lockstep
execution, thus causing less thread divergence.
Thus in this new version, each thread handles a chunk of multiples of the given prime. Note the
contrast of this with many CUDA examples, in which each thread does only a small amount of
work, such as computing a single element in the product of two matrices.
In order to enhance memory performance, this code uses device shared memory. All the crossing
out is done in the shared memory array sprimes, and then when we are all done, that is copied
to the device global memory array dprimes, which is in turn copied to host memory. By the way,
note that the amount of shared memory here is determined dynamically.
However, device shared memory consists only of 16K bytes, which would limit us here to values of
n up to about 4000. Moreover, by using just one block, we are only using a small part of the GPU.
Extending the program to work for larger values of n would require some careful planning if we
still wish to use shared memory.
5.10  Example: Finding Cumulative Sums
Here we wish to compute cumulative sums. For instance, if the original array is (3,1,2,0,3,0,1,2),
then it is changed to (3,4,6,6,9,9,10,12).
(Note: This is a special case of the prefix scan problem, covered in Chapter 10.)
The general plan is for each thread to operate on one chunk of the array. A thread will find
cumulative sums for its chunk, and then adjust them based on the high values of the chunks that
precede it. In the above example, for instance, say we have 4 threads. The threads will first produce
(3,4), (2,2), (3,3) and (1,3). Since thread 0 found a cumulative sum of 4 in the end, we must add
4 to each element of (2,2), yielding (6,6). Thread 1 had found a cumulative sum of 2 in the end,
which together with the 4 found by thread 0 makes 6. Thus thread 2 must add 6 to each of its
elements, i.e. add 6 to (3,3), yielding (9,9). The case of thread 3 is similar.
Below is code for the special case of a single block:
// for this simple illustration, it is assumed that the code runs in
// just one block, and that the number of threads evenly divides n

// improvements that could be made:
//   1. change to multiple blocks, to try to use all SMs
//   2. possibly use shared memory
//   3. have each thread work on staggered elements of dx, rather than
//      on contiguous ones, to get more efficient bank access

#include <cuda.h>
#include <stdio.h>

__global__ void cumulker(int *dx, int n)
{
   int me = threadIdx.x;
   int csize = n / blockDim.x;
   int start = me * csize;
   int i,j,base;
   for (i = 1; i < csize; i++) {
      j = start + i;
      dx[j] = dx[j-1] + dx[j];
   }
   __syncthreads();
   if (me > 0) {
      base = 0;
      for (j = 0; j < me; j++)
         base += dx[(j+1)*csize-1];
   }
   __syncthreads();
   if (me > 0) {
      for (i = start; i < start + csize; i++)
         dx[i] += base;
   }
}
5.11  When Is It Advantageous to Use Shared Memory?
Shared memory only helps if we are doing multiple accesses to the data. If for instance our code
does a single read and a single write to an element of an array, then transferring it back and forth
between global and shared memory isn't worthwhile.
Would the cumulative-sums program in Section 5.10 benefit from the use of shared memory? (Put
aside the fact that the code runs in just one block, making use of just a sliver of the machine.) The
answer appears to be that a modest improvement might be obtained. Each thread (except the
first) reads many elements of dx twice, some of them three times. There are also writes.
The case of the prime-finder program in Section 5.9 is less clear, and probably quite dependent on
whether we are using the more advanced GPUs, which feature at least some L1 cache space.
5.12  Example: Transforming an Adjacency Matrix

Here is a CUDA approach to the problem treated earlier with OpenMP, transforming a graph
adjacency matrix into a two-column list of edges:
// kernel transgraph() does this work
// arguments:
//    adjm:  the adjacency matrix (NOT assumed symmetric), 1 for edge, 0
//           otherwise; note: matrix is overwritten by the function
//    n:  number of rows and columns of adjm
//    adjmout:  output matrix
//    nout:  number of rows in adjmout

__global__ void tgkernel1(int *dadjm, int n, int *dcounts)
{  int tot1s,j;
   int me = blockDim.x * blockIdx.x + threadIdx.x;
   tot1s = 0;
   for (j = 0; j < n; j++) {
      if (dadjm[n*me+j] == 1) {
         dadjm[n*me+tot1s++] = j;
      }
      dcounts[me] = tot1s;
   }
}

__global__ void tgkernel2(int *dadjm, int n,
      int *dcounts, int *dstarts, int *doutm)
{  int outrow,num1si,j;
   // int me = threadIdx.x;
   int me = blockDim.x * blockIdx.x + threadIdx.x;
   // fill in this thread's portion of doutm
   outrow = dstarts[me];
   num1si = dcounts[me];
   if (num1si > 0) {
      for (j = 0; j < num1si; j++) {
         doutm[2*outrow+2*j] = me;
         doutm[2*outrow+2*j+1] = dadjm[n*me+j];
      }
   }
}

// replaces counts by cumulative counts
void cumulcounts(int *c, int *s, int n)
{  int i;
   s[0] = 0;
   for (i = 1; i < n; i++) {
      s[i] = s[i-1] + c[i-1];
   }
}

int *transgraph(int *hadjm, int n, int *nout, int gsize, int bsize)
{  int *dadjm;  // device adjacency matrix
   int *houtm;  // host output matrix
   int *doutm;  // device output matrix
   ...
}

int main(int argc, char **argv)
{  int i,j;
   int *adjm;  // host adjacency matrix
   int *outm;  // host output matrix
   int n = atoi(argv[1]);
   int gsize = atoi(argv[2]);
   int bsize = atoi(argv[3]);
   int nout;
   adjm = (int *) malloc(n*n*sizeof(int));
   for (i = 0; i < n; i++)
      for (j = 0; j < n; j++)
         if (i == j) adjm[n*i+j] = 0;
         else adjm[n*i+j] = rand() % 2;
   if (n < 10) {
      printf("adjacency matrix: \n");
      for (i = 0; i < n; i++) {
         for (j = 0; j < n; j++) printf("%d ",adjm[n*i+j]);
         printf("\n");
      }
   }
   struct timespec bgn,nd;
   clock_gettime(CLOCK_REALTIME, &bgn);
   outm = transgraph(adjm,n,&nout,gsize,bsize);
   printf("num rows in out matrix = %d\n",nout);
   if (nout < 50) {
      printf("out matrix: \n");
      for (i = 0; i < nout; i++)
         printf("%d %d\n",outm[2*i],outm[2*i+1]);
   }
   clock_gettime(CLOCK_REALTIME, &nd);
   printf("%f\n",timediff(bgn,nd));
}
5.13
Error Checking
Every CUDA call (except for kernel invocations) returns an error code of type cudaError_t. One
can view the nature of the error by calling cudaGetErrorString() and printing its output.
For kernel invocations, one can call cudaGetLastError(), which does what its name implies. A
call would typically have the form

cudaError_t err = cudaGetLastError();
if (err != cudaSuccess) printf("%s\n",cudaGetErrorString(err));

You may also wish to use cutilSafeCall(), which is used by wrapping your regular CUDA call. It
automatically prints out error messages as above.
Each CUBLAS call returns a potential error code, of type cublasStatus, not checked here.
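Since writing the check after every call gets tedious, one common pattern (a sketch, not from the
book) is to wrap runtime calls in a macro, placed in a .cu file compiled with nvcc:

#include <stdio.h>
#include <stdlib.h>

// wrap any CUDA runtime call, e.g. CHK(cudaMalloc((void **)&dx,size));
#define CHK(call) { \
   cudaError_t e = (call); \
   if (e != cudaSuccess) { \
      printf("CUDA error %s, line %d\n", cudaGetErrorString(e), __LINE__); \
      exit(1); \
   } \
}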
5.14
Loop Unrolling
Loop unrolling is an old technique, used on uniprocessor machines to achieve speedup through
branch elimination and the like. Branches make it difficult to do instruction or data prefetching,
so eliminating them may speed things up.
The CUDA compiler provides the programmer with the #pragma unroll directive to request loop
unrolling. Here an n-iteration for loop is changed to k copies of the body of the loop, each working
on about n/k iterations. If n and k are known constants, GPU registers can be used to implement
the unrolled loop.
For example, the loop

for (i = 0; i < 2; i++) {
   sum += x[i];
   sum2 += x[i]*x[i];
}

could be unrolled to

sum += x[0];
sum2 += x[0]*x[0];
sum += x[1];
sum2 += x[1]*x[1];

Here n = k = 2. If x is local to this function, then unrolling will allow the compiler to store it in
a register, which could be a great performance enhancer.
The compiler will try to do loop unrolling even if the programmer doesn't request it, but the
programmer can try to control things by using the pragma:

#pragma unroll k

which suggests to the compiler a k-fold unrolling. Setting k = 1 will instruct the compiler not to unroll.
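In context, the pragma is simply placed directly before the loop, e.g. (a sketch with hypothetical
variables):

// ask the compiler to unroll this loop 4-fold
#pragma unroll 4
for (int k = 0; k < n; k++)
   sum += x[k];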
5.15
Short Vectors
In CUDA, there are types such as int4, char2 and so on, with up to four elements each. So, a uint4
type is a set of four unsigned ints. These are called short vectors.
The key point is that a short vector can be treated as a single word in terms of memory access
and GPU instructions. It may be possible to reduce time by a factor of 4 by dividing arrays into
chunks of four contiguous words and making short vectors from them.
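For instance, a kernel might read an int array four words at a time through int4 (a minimal sketch,
assuming the array length is a multiple of 4 and the data is suitably aligned; the names here are
hypothetical):

__global__ void sum4(int4 *x, int *tot, int n4)  // n4 = n/4 int4 elements
{  int me = blockIdx.x * blockDim.x + threadIdx.x;
   if (me < n4) {
      int4 v = x[me];   // one wide memory transaction fetches 4 ints
      atomicAdd(tot, v.x + v.y + v.z + v.w);
   }
}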
5.16  The Kepler Architecture
The latest GPU architecture from NVIDIA is called Kepler. Many of the advances are of the
"bigger and faster than before" type. These are important, but be sure to note the significant
architectural changes, including:

Host memory, device global memory and device shared memory share a unified address space.

On-chip memory can be apportioned to both shared memory and cache memory. Since shared
memory is in essence a programmer-managed cache, this gives the programmer access to a
real cache, a great convenience to the programmer though with a possible sacrifice in speed.
Note by the way that this cache is aimed at spatial locality, not temporal locality.
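On devices that allow it, the split between shared memory and L1 cache can be requested from
host code; a sketch, assuming a CUDA version that provides cudaDeviceSetCacheConfig():

// ask for a larger L1 cache at the expense of shared memory
cudaDeviceSetCacheConfig(cudaFuncCachePreferL1);
// or favor shared memory instead:
// cudaDeviceSetCacheConfig(cudaFuncCachePreferShared);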
5.17  CUDA Libraries
CUDA programming can involve a lot of work, and one is never sure that one's code is fully efficient.
Fortunately, a number of libraries of tight code have been developed for operations that arise often
in parallel programming.
You are of course using CUDA code at the bottom, but without explicit kernel calls. And again,
remember, the contents of device global memory are persistent across kernel calls in the same
application. Therefore you can mix explicit CUDA code and calls to these libraries. Your program
might have multiple kernel invocations, some CUDA and others to the libraries, with each using
data in device global memory that was written by earlier kernels. In some cases, you may need to
do a conversion to get the proper type.
These packages can be deceptively simple. Remember, each call to a function in these
packages involves a CUDA kernel call, with the associated overhead.
Programming in these libraries is typically much more convenient than in direct CUDA. Note,
though, that even though these libraries have been highly optimized for what they are intended to
do, they will not generally give you the fastest possible code for any given CUDA application.
We'll discuss a few such libraries in this section.
5.17.1
CUBLAS
CUDA includes some parallel linear algebra routines callable from straight C code. In other words,
you can get the benefit of GPU in linear algebra contexts without directly programming in CUDA.
5.17.1.1  Example: Row Sums Once Again
Below is an example, RowSumsCB.c, the matrix row sums example again, this time using
CUBLAS. We can find the vector of row sums of the matrix A by post-multiplying A by a column
vector of all 1s.
I compiled the code with an ordinary C compiler, linking in the CUBLAS library; you should modify
the include and library paths for your own CUDA locations accordingly. Users who merely wish to
use CUBLAS will find that approach more convenient, but if you are mixing CUDA and CUBLAS,
you would use nvcc:

nvcc -g -G RowSumsCB.c -lcublas
...
   cublasFree(dm);
   cublasFree(drs);
   cublasShutdown();
}
As noted in the comments, CUBLAS assumes FORTRAN-style, i.e. column-major order, for
matrices.
Now that you know the basic format of CUDA calls, the CUBLAS versions will look similar. In
the call
cublasAlloc(n*n,sizeof(float),(void**)&dm);
for instance, we are allocating space on the device for an n x n matrix of floats.
The call
cublasSetMatrix(n,n,sizeof(float),hm,n,dm,n);
is slightly more complicated. Here we are saying that we are copying hm, an n x n matrix of floats
on the host, to dm on the device. The n arguments in the last and third-to-last positions again say
that the two matrices each have n dimensioned rows. This seems redundant, but this is needed in
cases of matrix tiling, where the number of rows of a tile would be less than the number of rows of
the matrix as a whole.
The 1s in the call
cublasSetVector(n,sizeof(float),ones,1,drs,1);
are needed for similar reasons. We are saying that in our source vector ones, for example, the
elements of interest are spaced 1 element apart, i.e. they are contiguous. But if we wanted our
vector to be some row in a matrix with, say, 500 rows, the elements of any particular row of interest
would be spaced 500 elements apart, again keeping in mind that column-major order is assumed.
The actual matrix multiplication is done here:
cublasSgemv('n',n,n,1.0,dm,n,drs,1,0.0,drs,1);

The "mv" in cublasSgemv stands for "matrix times vector." Here the call says: no ('n'), we do
not want the matrix to be transposed; the matrix has n rows and n columns; we wish the matrix to
be multiplied by 1.0 (if 0, the multiplication is not actually performed, which we could have here);
the matrix is at dm; the number of dimensioned rows of the matrix is n; the vector is at drs; the
elements of the vector are spaced 1 word apart; we wish the vector to not be multiplied by a scalar
(see note above); the resulting vector will be stored at drs, 1 word apart.
Further information is available in the CUBLAS manual.
5.17.2
Thrust
The Thrust library is usable not only with CUDA but also with OpenMP! So I've put
my coverage of Thrust in a separate chapter, Chapter 6.
5.17.3
CUDPP
CUDPP is similar to Thrust (though CUDPP was developed earlier) in terms of operations offered.
It is perhaps less flexible than Thrust, but is easier to learn and is said to be faster.
(No examples yet, as the author did not have access to a CUDPP system yet.)
5.17.4
CUFFT
CUFFT does for the Fast Fourier Transform what CUBLAS does for linear algebra, i.e. it provides
CUDA-optimized FFT routines.
5.18 Other CUDA Examples

There are additional CUDA examples in later sections of this book. These include:4
- Prof. Richard Edgar's matrix-multiply code, optimized for use of shared memory, Section 11.3.2.2.
- Odd/even transposition sort, Section 12.3.3, showing a typical CUDA pattern for iterative algorithms.
- Gaussian elimination for linear systems, Section 11.5.1.
4  If you are reading this presentation on CUDA separately from the book, the book is at http://heather.cs.ucdavis.edu/~matloff/158/PLN/ParProcBook.pdf
Chapter 6
Introduction to Thrust Programming
6.1 Compiling Thrust Code
Thrust allows the programmer a choice of back ends, i.e. platforms on which the executable code
will run. In addition to the CUDA back end, for running on the GPU, one can also choose OpenMP
as the back end. The latter choice allows the high-level expressive power of Thrust to be used on
multicore machines. A third choice is Intel's TBB language, which often produces faster code than
OpenMP.
6.1.1
Compiling to CUDA
If your CUDA version is at least 4.0, then Thrust is included, which will be assumed here. In that
case, you compile Thrust code with nvcc, no special link commands needed.
6.1.2
Compiling to OpenMP
You can use Thrust to generate OpenMP code. The Thrust include files work without having a
GPU. Here for instance is how you would compile the first example program below:1
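A command along these lines works; the program and source file names here are placeholders, and on older Thrust versions the backend macro is spelled THRUST_DEVICE_BACKEND=THRUST_DEVICE_BACKEND_OMP instead:

g++ -O2 -o unqcount unq.cpp -fopenmp \
    -DTHRUST_DEVICE_SYSTEM=THRUST_DEVICE_SYSTEM_OMP \
    -I/usr/home/matloff/Tmp/tmp1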
I had no CUDA-capable GPU on this machine, but put the Thrust include directory tree in /usr/home/matloff/Tmp/tmp1, and then compiled as with any other include file.
The result is real OpenMP code.2 Everywhere you set up a Thrust vector, you'll be using OpenMP,
i.e. the threads set up by Thrust will be OpenMP threads on the CPU rather than CUDA threads
on the GPU.3 You set the number of threads as you do with any OpenMP program, e.g. with the
environment variable OMP_NUM_THREADS.
6.2 Example: Counting Distinct Values in an Array
As our first example, suppose we wish to determine the number of distinct values in an integer
array. The following code may not be too efficient, but as an introduction to Thrust's fundamental
building blocks, we'll take the following approach:
(a) sort the array
(b) compare the array to a shifted version of itself, so that changes from one distinct element to
another can be detected, producing an array of 1s (change) and 0s (no change)
(c) count the number of 1s
Here's the code:
// various Thrust includes
#include <thrust/host_vector.h>
#include <thrust/device_vector.h>
#include <thrust/generate.h>
#include <thrust/sort.h>
#include <thrust/copy.h>
#include <thrust/count.h>
#include <cstdlib>
#include <cstdio>   // for printf()

int rand16()  // generate random integers mod 16
{  return rand() % 16;  }

// C++ functor, to be called from thrust::transform(); compares
// corresponding elements of the arrays x and y, yielding 0 when they
// match, 1 when they don't
struct finddiff
{
   __device__ int operator()(const int& x, const int& y)
   {  return x == y ? 0 : 1;  }
};

int main(void)
{
   // generate test data, 1000 random numbers, on the host, int type
   thrust::host_vector<int> hv(1000);
   thrust::generate(hv.begin(), hv.end(), rand16);

   // copy data to the device, creating a vector there
   thrust::device_vector<int> dv = hv;

   // sort data on the device
   thrust::sort(dv.begin(), dv.end());

   // create device vector to hold differences, with length 1 less than
   // dv's length
   thrust::device_vector<int> diffs(dv.size() - 1);

   // find the diffs; note that the syntax is finddiff(), not finddiff
   thrust::transform(dv.begin(), dv.end() - 1,
      dv.begin() + 1, diffs.begin(), finddiff());

   // count the 1s by adding them up
   // (or could use thrust::count())
   int ndiffs = thrust::reduce(diffs.begin(), diffs.end(), (int) 0,
      thrust::plus<int>());
   printf("# distinct: %d\n", ndiffs + 1);

   // we've achieved our goal, but let's do a little more
   // transfer data back to host
   thrust::copy(dv.begin(), dv.end(), hv.begin());

   printf("the sorted array:\n");
   for (int i = 0; i < 1000; i++) printf("%d\n", hv[i]);

   return 0;
}

1  Note that we used a .cpp suffix for the source file name, instead of .cu. Or, we can use the -x cu option if compiling with nvcc.
2  If you search through the Thrust source code, you'll find omp pragmas.
3  Threads will not be set up if you use host arrays/vectors.
After generating some random data on a host array hv, we copy it to the device, creating a vector
dv there. This code is certainly much simpler to write than slogging through calls to cudaMalloc()
and cudaMemcpy()!
The heart of the code is the call to thrust::transform(), which is used to implement step (b) in
our outline above. It performs a map operation as in functional programming, taking one or two
arrays (the latter is the case here) as input, and outputting an array of the same size.
This example, as is typical in Thrust code, defines a functor. Well, what is a functor? It is
a C++ mechanism for producing a callable function, largely similar in goal to using a pointer to a
function. In the context above, we are turning a C++ struct into a callable function, and we can
do so with classes too. Since structs and classes can have member variables, we can store needed
data in them, and that is what distinguishes functors from function pointers.
The transform function does an elementwise operation: it calls the functor on each corresponding pair
of elements from the two input arguments (0th element with 0th element, 1st with 1st, etc.), placing
the results in the output array. We must thus design our functor to do the map operation. In
this case, we want to compare successive elements of our array (after sorting it), so we must find
a way to do this through some element-by-element operation. The solution is to do elementwise
comparison of the array and its shifted version. The call is
thrust::transform(dv.begin(), dv.end() - 1,
   dv.begin() + 1, diffs.begin(), finddiff());
By contrast, rand16() is an ordinary function, not a functor, so in the thrust::generate() call we just
write its name, thus passing a pointer to the function.
In the code

__device__ int operator()(const int& x, const int& y)
{  return x == y ? 0 : 1;  }
the C++ keyword operator says we are defining a function, which in this case has two int inputs
and an int output. We stated earlier that functors are callable structs, and this is what gets called.
Thrust vectors have built-in member functions begin() and end(), which return the start of the
array and the place 1 element past its end. Note that we didn't actually create our shifted array in

thrust::transform(dv.begin(), dv.end() - 1,
   dv.begin() + 1, diffs.begin(), finddiff());

Instead, we specified the array beginning 1 element past the start of dv.
The places returned by calling begin() and end() above are formally called iterators, and work
in a manner similar to pointers. Note again that end() returns a pointer to the location just after
the last element of the array. The iterators here are of type thrust::device_vector<int>::iterator,
with similar expressions for cases other than int type.
The transform operation, in this case the comparison, will be done in parallel, on the
GPU or other backend, as was the sorting. All that's left is to count the 1s. We want to do that
in parallel too, and Thrust provides another functional programming operation, reduction (as in
OpenMP). We specify Thrust's built-in addition function (we could have defined our own if it were
a more complex situation) as the operation, and 0 as the initial value:

int ndiffs = thrust::reduce(diffs.begin(), diffs.end(), (int) 0, thrust::plus<int>());

We also could have used Thrust's thrust::count() function for further convenience.
Below is a shorter version of our unique-values-counter program, using thrust::unique(). Note
that that function only removes consecutive duplicates, so the preliminary sort is still needed.
// various Thrust includes
#include <thrust/host_vector.h>
#include <thrust/device_vector.h>
#include <thrust/generate.h>
#include <thrust/sort.h>
#include <thrust/copy.h>
#include <thrust/unique.h>
#include <cstdlib>
#include <cstdio>   // for printf()

int rand16()
{  return rand() % 16;  }

int main(void)
{
   thrust::host_vector<int> hv(1000);
   thrust::generate(hv.begin(), hv.end(), rand16);
   thrust::device_vector<int> dv = hv;
   thrust::sort(dv.begin(), dv.end());
   thrust::device_vector<int>::iterator newend =
      thrust::unique(dv.begin(), dv.end());
   printf("# distinct: %d\n", (int) (newend - dv.begin()));
   return 0;
}
The unique() function returns an iterator pointing to (one element past) the end of the result of
applying the unique-ifying operation. We can then subtract iterator values to get our desired
count:

printf("# distinct: %d\n", (int) (newend - dv.begin()));
6.3 Wrapping Thrust Code in a C-Callable Function
We may wish to wrap utility Thrust code in a function callable from a purely C/C++ program.
The code below does that for the Thrust sort function.
// definitely needed
extern "C" void tsort(int *x, int *nx);

#include <thrust/device_vector.h>
#include <thrust/sort.h>
#include <thrust/copy.h>

// nx set up as a pointer so we can call this from R
void tsort(int *x, int *nx)
{  int n = *nx;
   // set up device vector and copy x to it
   thrust::device_vector<int> dx(x, x+n);
   // sort, then copy back to x
   thrust::sort(dx.begin(), dx.end());
   thrust::copy(dx.begin(), dx.end(), x);
}
To compile in the CUDA case, run nvcc -c and then run gcc (or whatever) as usual, making sure
to link with the CUDA library. For instance,

nvcc -c SortForC.cu
gcc Main.c SortForC.o -L/usr/local/cuda/lib -lcudart
Here's an example:
// TestSort.cpp:  interface to Thrust sort from non-CUDA callers

#include <stdio.h>

extern "C" void tsort(int *x, int *nx);

// test
int main()
{  int x[5] = {12,13,5,8,88};
   int n = 5, *nx;  nx = &n;
   int i;
   tsort(x, nx);
   for (i = 0; i < 5; i++) printf("%d\n", x[i]);
}
6.4 Conditional Functions
One of the most useful types of Thrust operations is that provided by conditional functions. For
instance, copy_if() acts as a filter, copying from an array only those elements that satisfy a predicate. In the example below, we can copy every third element of an array, or every eighth, etc.
// illustration of copy_if()
// find every kth element in given array, going from smallest to
// largest; k obtained from command line and fed into ismultk() functor
// these are the i*k/n * 100 percentiles, i = 1, 2, ...

#include <stdio.h>
#include <thrust/device_vector.h>
#include <thrust/sort.h>
#include <thrust/sequence.h>
#include <thrust/remove.h>   // for copy_if() (there is no copy_if.h)

// functor
struct ismultk {
   ...
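The rest of the listing is discussed piece by piece below. As a minimal self-contained sketch along the same lines (the test data, the exact handling of seq and out, and the output loop are assumptions, not the original code):

// sketch: keep every incr-th element of a sorted array, via copy_if()
#include <stdio.h>
#include <stdlib.h>
#include <thrust/device_vector.h>
#include <thrust/sort.h>
#include <thrust/sequence.h>
#include <thrust/copy.h>      // copy_if()

// functor: true when i is a nonzero multiple of increm
struct ismultk {
   const int increm;
   ismultk(int _increm) : increm(_increm) {}
   __host__ __device__ bool operator()(const int i)
   {  return i != 0 && (i % increm) == 0;  }
};

int main(int argc, char **argv)
{  int x[8] = {12,13,5,8,88,1,2,3};
   int n = 8;
   int incr = atoi(argv[1]);                  // the k in "every kth element"
   thrust::device_vector<int> hx(x, x+n);     // the data (name follows the text)
   thrust::sort(hx.begin(), hx.end());
   thrust::device_vector<int> seq(n), out(n);
   thrust::sequence(seq.begin(), seq.end());  // indices 0,1,...,n-1, used as stencil
   // copy hx[i] to out whenever i is a nonzero multiple of incr
   thrust::device_vector<int>::iterator newend =
      thrust::copy_if(hx.begin(), hx.end(), seq.begin(), out.begin(),
         ismultk(incr));
   for (int i = 0; i < newend - out.begin(); i++)
      printf("%d\n", (int) out[i]);
   return 0;
}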
The functor's member variable increm is in a sense a second, though nonexplicit, argument to our calls to
ismultk(). For example, in our call

thrust::copy_if(hx.begin(), hx.end(), seq, out, ismultk(incr));

the function designated by operator() within the ismultk struct will be called individually on each
element in hx, each one playing the role of i in

bool operator()(const int i)
{  return i != 0 && (i % increm) == 0;  }
Since this code references increm, the value incr in our call above is used as well. The variable
increm acts as a global variable to all the actions of the operator.
6.5 Mixing Thrust and CUDA Code

In order to mix Thrust and CUDA code, Thrust has the function thrust::raw_pointer_cast() to
convert from a Thrust device pointer type to a CUDA device pointer type, and thrust::device_ptr
to convert in the other direction.
In our example in Section 6.6, we convert from Thrust to an ordinary address on the device:
int *wd;
...
wd = thrust::raw_pointer_cast(&w[0]);
...
{  if (i != 0 && (i % increm) == 0) wd[i] = 2 * wd[i];
In the other direction, say we start with a CUDA pointer, and want to use it in Thrust. We might
have something like
int *dz;
...
cudaMalloc(&dz, 100*sizeof(int));
...
thrust::device_ptr<int> tz(dz);
...
int k = thrust::reduce(tz, tz+100, (int) 0, thrust::plus<int>());
6.6 Example: Doubling Every kth Element of an Array

Let's adapt the code from the last section in order to illustrate another technique.

Suppose instead of copying every kth element of an array (after the first one), we wish to merely
double each such element. There are various ways we could do this, but here we'll use an approach
that shows another way we can use functors.
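Here is a minimal sketch of this approach; the functor and driver details are assumptions, following the fragments quoted in Section 6.5 and below. The functor stores a raw device pointer to the data, doubles the appropriate elements as a side effect, and always returns false, so that copy_if() copies nothing.

// sketch: double every kth element, exploiting the predicate of copy_if()
#include <stdio.h>
#include <stdlib.h>
#include <thrust/device_vector.h>
#include <thrust/sequence.h>
#include <thrust/copy.h>      // copy_if()

// functor: as a side effect, double element i of the data if i is a
// nonzero multiple of increm; always return false, so nothing is copied
struct dblmultk {
   int *wd;          // raw device pointer to the data
   const int increm;
   dblmultk(thrust::device_vector<int>::iterator w, int _increm) :
      wd(thrust::raw_pointer_cast(&w[0])), increm(_increm) {}
   __device__ bool operator()(const int i)
   {  if (i != 0 && (i % increm) == 0) wd[i] = 2 * wd[i];
      return false;
   }
};

int main(int argc, char **argv)
{  int x[8] = {12,13,5,8,88,1,2,3};
   int n = 8, k = atoi(argv[1]);
   thrust::device_vector<int> dx(x, x+n);
   thrust::device_vector<int> seq(n), out(n);
   thrust::sequence(seq.begin(), seq.end());
   // the "copied" output would go to out, but nothing is ever copied
   thrust::copy_if(seq.begin(), seq.end(), out.begin(), dblmultk(dx.begin(), k));
   for (int i = 0; i < n; i++) printf("%d\n", (int) dx[i]);
   return 0;
}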
Note how the functor here holds an iterator for our device vector, and converts it to a raw pointer for use in its operator() code:

const thrust::device_vector<int>::iterator w;
int *wd;
...
wd = thrust::raw_pointer_cast(&w[0]);   // pointer to our array

Our call to copy_if() doesn't actually do any copying. We are exploiting the "if" in copy_if, not
the "copy."
6.7 Scatter and Gather Operations

These basically act as permuters; see the comments in the following small examples.

scatter():
// illustration of thrust::scatter(); permutes an array according to a
// map array

#include <stdio.h>
#include <iostream>   // for std::cout
#include <iterator>   // for std::ostream_iterator
#include <thrust/device_vector.h>
#include <thrust/scatter.h>
#include <thrust/copy.h>

int main()
{  int x[5] = {12,13,5,8,88};
   int n = 5;
   thrust::device_vector<int> dx(x, x+n);
   // allocate map vector
   thrust::device_vector<int> dm(n);
   // allocate vector for output of scatter
   thrust::device_vector<int> ddst(n);
   // example map
   int m[5] = {3,2,4,1,0};
   thrust::copy(m, m+n, dm.begin());
   thrust::scatter(dx.begin(), dx.end(), dm.begin(), ddst.begin());
   // the original x[0] should now be at position 3, the original x[1]
   // now at position 2, etc., i.e. 88,8,13,12,5; check it:
   thrust::copy(ddst.begin(), ddst.end(),
      std::ostream_iterator<int>(std::cout, " "));
   std::cout << "\n";
}
gather():
// illustration of thrust::gather(); permutes an array according to a
// map array

#include <stdio.h>
#include <iostream>   // for std::cout
#include <iterator>   // for std::ostream_iterator
#include <thrust/device_vector.h>
#include <thrust/gather.h>
#include <thrust/copy.h>

int main()
{  int x[5] = {12,13,5,8,88};
   int n = 5;
   thrust::device_vector<int> dx(x, x+n);
   // allocate map vector
   thrust::device_vector<int> dm(n);
   // allocate vector for output of gather
   thrust::device_vector<int> ddst(n);
   // example map
   int m[5] = {3,2,4,1,0};
   thrust::copy(m, m+n, dm.begin());
   thrust::gather(dm.begin(), dm.end(), dx.begin(), ddst.begin());
   // the original x[3] should now be at position 0, the original x[2]
   // now at position 1, etc., i.e. 8,5,88,13,12; check it:
   thrust::copy(ddst.begin(), ddst.end(),
      std::ostream_iterator<int>(std::cout, " "));
   std::cout << "\n";
}
6.7.1 Example: Matrix Transpose
// matrix transpose, using scatter()
// similar to (though less efficient than) the transpose code
// in the Thrust package

#include <stdio.h>
#include <thrust/device_vector.h>
#include <thrust/scatter.h>
#include <thrust/sequence.h>

struct transidx {
   const int nr;  // number of rows in input
   const int nc;  // number of columns in input
   // set nr, nc
   __host__ __device__ transidx(int _nr, int _nc) : nr(_nr), nc(_nc) {};
   // element i in input should map to which element in output?
   __host__ __device__
   ...
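A driver along these lines would complete the example. This is only a sketch: it assumes the functor above is finished with the operator() body shown in Section 6.8.1, uses the small test matrix from that section, and the explicit map construction and output code are assumptions (it also needs <thrust/transform.h>, <iostream> and <iterator> among the includes).

// sketch of a driver for transidx: build the index map explicitly, then scatter
int main()
{  int mat[6] = {
      5, 12, 13,
      3,  4,  5};
   int nrow = 2, ncol = 3, n = nrow*ncol;
   thrust::device_vector<int> dmat(mat, mat+n);
   thrust::device_vector<int> dmap(n);   // the index map
   thrust::device_vector<int> ddst(n);   // output of scatter()
   // fill dmap:  element i of the input goes to element dmap[i] of the output
   thrust::device_vector<int> seq(n);
   thrust::sequence(seq.begin(), seq.end());
   thrust::transform(seq.begin(), seq.end(), dmap.begin(), transidx(nrow, ncol));
   thrust::scatter(dmat.begin(), dmat.end(), dmap.begin(), ddst.begin());
   thrust::copy(ddst.begin(), ddst.end(),
      std::ostream_iterator<int>(std::cout, " "));
   std::cout << "\n";
}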
The idea is to determine, for each index in the original matrix, the index for that element in the
transposed matrix. Not much new here in terms of Thrust, just more complexity.
It should be mentioned that the performance of this algorithm with a GPU backend would likely
be better if matrix tiling were used (Section 11.2).
6.8 Special Iterators
Since each Thrust call involves considerable overhead, Thrust offers some special iterators to reduce
memory access time and memory space requirements. Here are a few:

Counting iterators: These play the same role as thrust::sequence(), but without actually setting up an array, thus avoiding the memory issues.

Transform iterators: If your code first calls thrust::transform() and then makes another
Thrust call on the result, you can combine them, which the Thrust people call fusion.
6.8.1 Example: Matrix Transpose, Using Fusion

Let's redo the example of Section 6.7.1, this time using fusion.
#include <stdio.h>
#include <iostream>   // for std::cout
#include <iterator>   // for std::ostream_iterator
#include <thrust/device_vector.h>
#include <thrust/scatter.h>
#include <thrust/sequence.h>
#include <thrust/iterator/transform_iterator.h>

struct transidx : public thrust::unary_function<int, int>
{
   const int nr;  // number of rows in input
   const int nc;  // number of columns in input
   // set nr, nc
   __host__ __device__ transidx(int _nr, int _nc) : nr(_nr), nc(_nc) {};
   // element i in input should map to which element in output?
   __host__ __device__ int operator()(int i)
   {  int r = i / nc;  int c = i % nc;  // row r, col c in input
      // that will be row c and col r in output, which has nr cols
      return c * nr + r;
   }
};

int main()
{  int mat[6] = {
      5, 12, 13,
      3,  4,  5};
   int nrow = 2, ncol = 3, n = nrow*ncol;
   thrust::device_vector<int> dmat(mat, mat+n);
   // allocate map vector
   thrust::device_vector<int> dmap(n);
   // allocate vector for output of scatter
   thrust::device_vector<int> ddst(n);
   // construct map; element r of input matrix goes to s of output
   thrust::device_vector<int> seq(n);
   thrust::sequence(seq.begin(), seq.end());
   thrust::scatter(
      dmat.begin(), dmat.end(),
      thrust::make_transform_iterator(seq.begin(), transidx(nrow, ncol)),
      ddst.begin());
   thrust::copy(ddst.begin(), ddst.end(),
      std::ostream_iterator<int>(std::cout, " "));
   std::cout << "\n";
}
The heart of the code is the call

thrust::scatter(
   dmat.begin(), dmat.end(),
   thrust::make_transform_iterator(seq.begin(), transidx(nrow, ncol)),
   ddst.begin());
Fusion requires a special type of iterator, whose type is horrendous to write. So, Thrust provides
the make_transform_iterator() function, which we call to produce the special iterator needed,
and we then put the result directly into the second phase of our fusion, in this case into scatter().
Essentially our use of make_transform_iterator() is telling Thrust, "Don't apply transidx()
to seq yet. Instead, perform that operation as you go along, and feed each result of transidx()
directly into scatter()." That word directly is the salient one here; it means we save n memory reads
and n memory writes.4 Moreover, we save the overhead of the kernel call, if our backend is CUDA.
Note that we also had to be a little more elaborate with data typing issues, writing the first
line of our struct declaration as

struct transidx : public thrust::unary_function<int, int>
6.9
A Timing Comparison
Let's look at matrix transpose one more time. First, we'll use the method, shown in earlier sections,
of passing a device vector iterator to a functor. For variety, let's use Thrust's for_each() function.
4  We are still writing to temporary storage, but that will probably be in registers (since we don't create the entire map at once), thus fast to access.
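As a rough sketch of this kind of approach (not the author's Code 1; the functor design, the use of raw pointers obtained with raw_pointer_cast() in place of iterators, and the test data are all assumptions):

// sketch: transpose driven by thrust::for_each() over an index vector
#include <stdio.h>
#include <thrust/device_vector.h>
#include <thrust/sequence.h>
#include <thrust/for_each.h>

// functor: copy element i of the input matrix to its transposed position
struct copytransposed {
   int *in, *out;
   int nr, nc;    // rows and columns of the input matrix
   copytransposed(int *_in, int *_out, int _nr, int _nc) :
      in(_in), out(_out), nr(_nr), nc(_nc) {}
   __device__ void operator()(int i)
   {  int r = i / nc, c = i % nc;      // element (r,c) of the input
      out[c*nr + r] = in[i];           // goes to (c,r) of the output
   }
};

int main()
{  int mat[6] = {5,12,13, 3,4,5};
   int nrow = 2, ncol = 3, n = nrow*ncol;
   thrust::device_vector<int> dmat(mat, mat+n), dout(n);
   thrust::device_vector<int> seq(n);
   thrust::sequence(seq.begin(), seq.end());
   thrust::for_each(seq.begin(), seq.end(),
      copytransposed(thrust::raw_pointer_cast(&dmat[0]),
                     thrust::raw_pointer_cast(&dout[0]), nrow, ncol));
   for (int i = 0; i < n; i++) printf("%d ", (int) dout[i]);
   printf("\n");
   return 0;
}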
The for_each() function does what the name implies: it calls a function/functor for each element
in a sequence, doing so in a parallel manner. Note that this also obviates our earlier need to use a
discard iterator.

For comparison, we'll use the matrix transpose code that is included in Thrust's examples/ directory,
to be referred to as Code 2:
#include <thrust/host_vector.h>
#include <thrust/device_vector.h>
#include <thrust/functional.h>
#include <thrust/gather.h>
#include <thrust/scan.h>
#include <thrust/iterator/counting_iterator.h>
#include <thrust/iterator/transform_iterator.h>
#include <iostream>
#include <iomanip>
#include <stdio.h>

// convert a linear index to a linear index in the transpose
struct transpose_index : public thrust::unary_function<size_t, size_t>
{
   size_t m, n;

   __host__ __device__
   transpose_index(size_t _m, size_t _n) : m(_m), n(_n) {}

   __host__ __device__
   size_t operator()(size_t linear_index)
   {
      size_t i = linear_index / n;
      ...
   ...
   int *mat = (int *) malloc(nr*nc*sizeof(int));
   int *matxp = (int *) malloc(nr*nc*sizeof(int));
   thrust::generate(mat, mat+nr*nc, rand16);
   int checkrow = atoi(argv[2]);
   int checkcol = atoi(argv[3]);
   printf("%d\n", mat[checkrow*nc+checkcol]);
   transp(mat, matxp, nr, nc);
   printf("%d\n", matxp[checkcol*nc+checkrow]);
}
This approach is more efficient than ours in Section 6.7.1, making use of gather() instead of
scatter(). It also takes advantage of fusion, etc.

Code 1 is a lot easier to program than Code 2, but is it efficient? It turns out (good
news!) that the simpler code, i.e. Code 1, is actually a little faster than Code 2 in the case of a CUDA
backend, and a lot faster in the OpenMP case.
Here we ran on CUDA backends, on a 10000x10000 matrix:

device              Code 1   Code 2
GeForce 9800 GTX     3.67     3.75
Tesla C2050          3.43     3.50
What about OpenMP? Here are some timing runs on a multicore machine (many more cores than
the 16 we tried), using an input matrix of size 6000x6000:

# threads   Code 1   Code 2
    2        9.57    23.01
    4        5.17    10.62
    8        3.01     7.42
   16        1.99     3.35

6.10 Example: Transforming an Adjacency Matrix
Here is a Thrust approach to the example of Sections 4.13 and 5.12. To review, here is the problem:
Say we have a graph with adjacency matrix

    0 1 0 0
    1 0 0 1
    0 1 0 1
    1 1 1 0                                                    (6.1)
with row and column numbering starting at 0, not 1. We'd like to transform this to a two-column
matrix that displays the links, in this case

    0 1
    1 0
    1 3
    2 1
    2 3
    3 0
    3 1
    3 2                                                        (6.2)
For instance, there is a 1 on the far right, second row of the above matrix, meaning that in the
graph there is an edge from vertex 1 to vertex 3. This results in the row (1,3) in the transformed
matrix seen above.
Here's Thrust code to do this:
// transgraph problem, using Thrust

#include <stdio.h>
#include <thrust/device_vector.h>
#include <thrust/transform.h>
#include <thrust/remove.h>
#include <thrust/iterator/discard_iterator.h>
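The code relies on a functor makerow(), used in the transform() call discussed below. A sketch of what such a functor might look like (the member names and construction details are assumptions, following the raw_pointer_cast() pattern of Section 6.5):

// sketch of makerow(): given the 1-D index of a 1 in the adjacency matrix
// and the output row number, write that row (from-vertex, to-vertex) into
// the 2-column output matrix; the return value is discarded by the caller
struct makerow {
   int *newmat;   // raw device pointer to the output matrix
   int nc;        // number of columns in the adjacency matrix
   makerow(thrust::device_vector<int>::iterator m, int _nc) :
      newmat(thrust::raw_pointer_cast(&m[0])), nc(_nc) {}
   __device__ int operator()(int onedimindex, int rownum)
   {  newmat[2*rownum] = onedimindex / nc;     // from-vertex
      newmat[2*rownum+1] = onedimindex % nc;   // to-vertex
      return 0;
   }
};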
...
   int nr = 3, nc = 4, nrc = nr*nc, i;
   thrust::device_vector<int> dx(x, x+nrc);
   thrust::device_vector<int> ones(x, x+nrc);
   thrust::counting_iterator<int> seqb(0);
   thrust::counting_iterator<int> seqe = seqb + nrc;
   // get 1D indices of the 1s
   thrust::device_vector<int>::iterator newend =
      thrust::copy_if(seqb, seqe, dx.begin(), ones.begin(),
         thrust::identity<int>());
   int n1s = newend - ones.begin();
   thrust::device_vector<int> newmat(2*n1s);
   thrust::device_vector<int> out(n1s);
   thrust::counting_iterator<int> seq2b(0);
   thrust::transform(ones.begin(), newend, seq2b,
      thrust::make_discard_iterator(), makerow(newmat.begin(), nc));
   thrust::copy(newmat.begin(), newmat.end(),
      std::ostream_iterator<int>(std::cout, " "));
   std::cout << "\n";
}
One new feature here is the use of counting iterators. First, we create two of them in the code

thrust::counting_iterator<int> seqb(0);
thrust::counting_iterator<int> seqe = seqb + nrc;

Here seqb (virtually) points to the 0 in 0,1,2,... Actually no array is set up, but references to seqb
will act as if there is an array there. The counting iterator seqe starts at nrc, but its role here is
simply to demarcate the end of the (virtual) array.
Now, how does the code work? The call to copy_if() has the goal of identifying where in dx the
1s are located. This is accomplished via Thrust's identity() functor, which just computes f(x) = x;
that is enough, as it will return either 1 or 0, with 1 interpreted as true and 0 as false. In other words,
the values between seqb and seqe will be copied whenever the corresponding values in dx are 1s.
The copied values are then placed into our array ones, which will now tell us where in dx the 1s
are. Each such value, recall, will correspond to one row of our output matrix. The construction of
the latter is done by calling transform():
thrust::transform(ones.begin(), newend, seq2b,
   thrust::make_discard_iterator(), makerow(newmat.begin(), nc));
The construction of the output matrix, newmat, is actually done as a side effect of calling
makerow(). For this reason, we've set the output argument of transform() to thrust::make_discard_iterator().
We never use the output from transform() itself, and it would thus be wasteful, of both
memory space and memory bandwidth, to store that output in a real array. Hence we use a discard
iterator instead.
Our algorithm consists of two stages: first finding the locations of the 1s, and then calculating the
output matrix. Could we combine the two stages? Possibly, but there are difficulties to deal with.
The biggest problem is that we don't know the size of the output matrix in advance; counting the
1s separately gives us that information. Without it, we'd either have to make the output matrix
too large initially and then shrink it, or continually expand it as we go through the computation.
The latter would probably result in a major slowdown, as memory allocation takes time.
6.11
Prefix Scan
6.12
6.12.1
Synchronicity
Thrust calls are in fact CUDA kernel calls, and thus entail some latency. Other than the transform()-family functions, the calls are all synchronous.
6.13
Error Messages
A message like

terminate called after throwing an instance of 'std::bad_alloc'
  what():  std::bad_alloc

may mean that Thrust wasn't able to allocate your large array on the GPU.
Also, beware of the following. Consider the code

thrust::device_vector<int> seq(n);
thrust::copy_if(hx.begin(), hx.end(), seq, out, ismultk(hx.begin(), incr));

We forgot the .begin() for seq! If seq had been a non-Thrust array, declared as

int seq[n];

then writing just seq would have been fine, but for a Thrust vector we must write seq.begin().
6.14
Chapter 7
Message Passing Systems
7.1
Overview
Traditionally, shared-memory hardware has been extremely expensive, with a typical system costing
hundreds of thousands of dollars. Accordingly, the main users were very large corporations or
government agencies, with the machines being used for heavy-duty server applications, such as for
large databases and World Wide Web sites. The conventional wisdom is that these applications
require the efficiency that good shared-memory hardware can provide.
But the huge expense of shared-memory machines led to a quest for high-performance message-passing alternatives, first in hypercubes and then in networks of workstations (NOWs).
The situation changed radically around 2005, when shared-memory hardware for the masses
became available in dual-core commodity PCs. Chips of higher core multiplicity are commercially
available, with a decline of price being inevitable. Ordinary users will soon be able to afford
shared-memory machines featuring dozens of processors.
Yet the message-passing paradigm continues to thrive. Many people believe it is more amenable to
writing really fast code, and the advent of cloud computing has given message-passing a big
boost. In addition, many of the world's very fastest systems (see www.top500.org for the latest
list) are in fact of the message-passing type.
In this chapter, we take a closer look at this approach to parallel processing.
7.2 Hypercubes
A popular class of parallel machines in the 1980s and early 90s was that of hypercubes. Intel sold
them, for example, as did a subsidiary of Oracle, nCube. A hypercube would consist of some number
of ordinary Intel processors, with each processor having some memory and serial I/O hardware for
connection to its neighbor processors.
Hypercubes proved to be too expensive for the type of performance they could achieve, and the
market was small anyway. Thus they are not common today, but they are still important, both
for historical reasons (in the computer field, old techniques are often recycled decades later), and
because the algorithms developed for them have become quite popular for use on general machines.
In this section we will discuss architecture, algorithms and software for such machines.
7.2.1
Definitions
A hypercube of dimension d consists of D = 2^d processing elements (PEs), i.e. processor-memory pairs, with fast serial I/O connections between neighboring PEs. We refer to such a cube
as a d-cube.

The PEs in a d-cube will have numbers 0 through D-1. Let (c_{d-1}, ..., c_0) be the base-2 representation
of a PE's number. The PE has fast point-to-point links to d other PEs, which we will call its
neighbors. Its ith neighbor has number (c_{d-1}, ..., 1 - c_{i-1}, ..., c_0).1

1  Note that we number the digits from right to left, with the rightmost digit being digit 0.
For example, consider a hypercube having D = 16, i.e. d = 4. The PE numbered 1011, for instance,
would have four neighbors, 0011, 1111, 1001 and 1010.
It is sometimes helpful to build up a cube from the lower-dimensional cases. To build a (d+1)-dimensional cube from two d-dimensional cubes, just follow this recipe:
(a) Take a d-dimensional cube and duplicate it. Call these two cubes subcube 0 and subcube 1.
(b) For each pair of same-numbered PEs in the two subcubes, add a binary digit 0 to the front
of the number for the PE in subcube 0, and add a 1 in the case of subcube 1. Add a link
between them.
The following figure shows how a 4-cube can be constructed in this way from two 3-cubes:
Given a PE of number (c_{d-1}, ..., c_0) in a d-cube, we will discuss the i-cube to which this PE belongs,
meaning all PEs whose first d-i digits match this PE's.2 Of all these PEs, the one whose last i
digits are all 0s is called the root of this i-cube.

For the 4-cube and PE 1011 mentioned above, for instance, the 2-cube to which that PE belongs
consists of 1000, 1001, 1010 and 1011, i.e. all PEs whose first two digits are 10, and the root is
1000.
Given a PE, we can split the i-cube to which it belongs into two (i-1)-subcubes, one consisting of
those PEs whose digit i-1 is 0 (to be called subcube 0), and the other consisting of those PEs whose
digit i-1 is 1 (to be called subcube 1). Each given PE in subcube 0 has as its partner the PE in
subcube 1 whose digits match those of the given PE, except for digit i-1.
2  Note that this is indeed an i-dimensional cube, because the last i digits are free to vary.
To illustrate this, again consider the 4-cube and the PE 1011. As an example, let us look at how
the 3-cube it belongs to will split into two 2-cubes. The 3-cube to which 1011 belongs consists of
1000, 1001, 1010, 1011, 1100, 1101, 1110 and 1111. This 3-cube can be split into two 2-cubes, one
being 1000, 1001, 1010 and 1011, and the other being 1100, 1101, 1110 and 1111. Then PE 1000 is
partners with PE 1100, PE 1001 is partners with PE 1101, and so on.
Each link between two PEs is a dedicated connection, much preferable to the shared link we have
when we run, say, MPI, on a collection of workstations on an Ethernet. On the other hand, if one
PE needs to communicate with a non-neighbor PE, multiple links (as many as d of them) will need
to be traversed. Thus the nature of the communications costs here is much different than for a
network of workstations, and this must be borne in mind when developing programs.
7.3 Networks of Workstations (NOWs)
The idea here is simple: Take a bunch of commodity PCs and network them for use as parallel
processing systems. They are of course individual machines, capable of the usual uniprocessor,
nonparallel applications, but by networking them together and using message-passing software
environments such as MPI, we can form very powerful parallel systems.
The networking does result in a significant loss of performance, but the price/performance ratio of a
NOW can be much superior in many applications to that of shared-memory or hypercube hardware
with a comparable number of CPUs.
7.3.1
Still, one factor which can be key to the success of a NOW is to use a fast network, both in terms
of hardware and network protocol. Ordinary Ethernet and TCP/IP are fine for the applications
envisioned by the original designers of the Internet, e.g. e-mail and file transfer, but they are slow
in the NOW context.

A popular network for a NOW today is Infiniband (IB) (www.infinibandta.org). It features low
latency (about 1.0-3.0 microseconds), high bandwidth (about 1.0-2.0 gigabytes per second), and uses
few CPU cycles (around 5-10%).
The basic building block of IB is a switch, with many inputs and outputs, similar in concept to
an Ω-net (omega network). You can build arbitrarily large and complex topologies from these switches.
A central point is that IB, as with other high-performance networks designed for NOWs, uses
RDMA (Remote Direct Memory Access) read/write, which eliminates the extra copying of data
between the application program's address space and that of the operating system.
IB has high-performance and scalable3 implementations of distributed locks, semaphores, and collective
communication operations. An atomic operation takes about 3-5 microseconds.
IB implements true multicast, i.e. the simultaneous sending of messages to many nodes. Note
carefully that even though MPI has its MPI Bcast() function, it will send things out one at a
time unless your network hardware is capable of multicast, and the MPI implementation you use
is configured specifically for that hardware.
For information on network protocols, see for example www.rdmaconsortium.org. A research
paper evaluating a tuned implementation of MPI on IB is available at nowlab.cse.ohio-state.edu/publications/journal-papers/2004/liuj-ijpp04.pdf.
7.3.2
Other Issues
Increasingly today, the workstations themselves are multiprocessor machines, so a NOW really is
a hybrid arrangement. They can be programmed either purely in a message-passing manner, e.g.
running eight MPI processes on four dual-core machines, or in a mixed way, with a shared-memory
approach being used within a workstation but message-passing used between them.
NOWs have become so popular that there are now recipes on how to build them for the specific purpose of parallel processing. The term Beowulf came to mean a NOW, usually with a
fast network connecting the machines, used for parallel processing. The term NOW itself is no longer in
use, having been replaced by cluster. Software packages such as ROCKS (http://www.rocksclusters.org/
wordpress/) have been developed to make it easy to set up and administer such systems.
7.4
Scatter/Gather Operations
Writing message-passing code is a lot of work, as the programmer must explicitly arrange for
transfer of data. Contrast that, for instance, to shared-memory machines, in which cache coherency
transactions will cause data transfers, but which are not arranged by the programmer and not even
seen by him/her.
In order to make coding on message-passing machines easier, higher-level systems have been devised.
These basically operate in the scatter/gather paradigm, in which a manager node sends out
chunks of work to the other nodes, which serve as workers, and then collects and assembles the results
sent back by the workers.
MPI includes scatter/gather operations in its wide offering of functions, and they are used in many
MPI applications. R's snow package, which will be discussed in Section ??, is based entirely on
scatter/gather, as is MapReduce, to be discussed below.

3  The term scalable arises frequently in conversations on parallel processing. It means that this particular method
of dealing with some aspect of parallel processing continues to work well as the system size increases. We say that
the method scales.
Chapter 8
Introduction to MPI
MPI is the de facto standard for message-passing software.
8.1 Overview

8.1.1 History
Though (small) shared-memory machines have come down radically in price, to the point at which a
dual-core PC is now commonplace in the home, historically shared-memory machines were available
only to the very rich: large banks, national research labs and so on. This led to interest in
message-passing machines.
The first affordable message-passing machine type was the Hypercube, developed by a physics professor
at Cal Tech. It consisted of a number of processing elements (PEs) connected by fast serial I/O
cards, and was in the price range of university departmental research labs. It was later commercialized
by Intel and nCube.
Later, the notion of networks of workstations (NOWs) became popular. Here the PEs were
entirely independent PCs, connected via a standard network. This was refined a bit, by the use of
more suitable network hardware and protocols, with the new term being clusters.
All of this necessitated the development of standardized software tools based on a message-passing
paradigm. The first popular such tool was Parallel Virtual Machine (PVM). It still has its adherents
today, but has largely been supplanted by the Message Passing Interface (MPI).
MPI itself later became MPI 2. Our document here is intended mainly for the original.
8.1.2
MPI is merely a set of Application Programmer Interfaces (APIs), called from user programs written
in C, C++ and other languages. It has many implementations, with some being open source and
generic, while others are proprietary and fine-tuned for specific commercial hardware.
Suppose we have written an MPI program x, and will run it on four machines in a cluster. Each
machine will be running its own copy of x. Official MPI terminology refers to this as four processes.
Now that multicore machines are commonplace, one might indeed run two or more cooperating
MPI processes (where now we use the term processes in the real OS sense) on the same multicore
machine. In this document, we will tend to refer to the various MPI processes as nodes, with an
eye to the cluster setting.

Though the nodes are all running the same program, they will likely be working on different parts
of the program's data. This is called the Single Program Multiple Data (SPMD) model. This is
the typical approach, but there could be different programs running on different nodes. Most of
the APIs involve a node sending information to, or receiving information from, other nodes.
8.1.3
Implementations
Two of the most popular implementations of MPI are MPICH and LAM. MPICH offers more
tailoring to various networks and other platforms, while LAM runs on networks. Introductions to
MPICH and LAM can be found, for example, at http://heather.cs.ucdavis.edu/~matloff/MPI/NotesMPICH.NM.html and http://heather.cs.ucdavis.edu/~matloff/MPI/NotesLAM.NM.html, respectively.
LAM is no longer being developed, and has been replaced by Open MPI (not to be confused with
OpenMP). Personally, I still prefer the simplicity of LAM. It is still being maintained.
Note carefully: If your machine has more than one MPI implementation, make absolutely sure
one is not interfering with the other. Make sure all execution and library paths include one and
only one implementation at a time.
8.1.4
Performance Issues
Mere usage of a parallel language on a parallel platform does not guarantee a performance improvement over a serial version of your program. The central issue here is the overhead involved in
internode communication.
Infiniband, one of the fastest cluster networks commercially available, has a latency of about 1.0-3.0 microseconds, meaning that it takes the first bit of a packet that long to get from one node on
an Infiniband switch to another. Comparing that to the nanosecond time scale of CPU speeds, one
can see that the communications overhead can destroy a program's performance. And Ethernet is
quite a bit slower than Infiniband.
Latency is quite different from bandwidth, which is the number of bits sent per second. Say the
latency is 1.0 microsecond and the bandwidth is 1 gigabit per second, i.e. 1000000000 bits per second or 1000
bits per microsecond. Say the message is 2000 bits long. Then the first bit of the message arrives
after 1 microsecond, and the last bit arrives after an additional 2 microseconds. In other words,
the message does not arrive fully at the destination until 3 microseconds after it is sent.

In the same setting, say the bandwidth is instead 10 gigabits per second. Now the message would need only 1.2 microseconds to
arrive fully, in spite of a 10-fold increase in bandwidth. So latency is a major problem even if the
bandwidth is high.
For this reason, the MPI applications that run well on networks tend to be of the embarrassingly
parallel type, with very little communication between the processes.
Of course, if your platform is a shared-memory multiprocessor (especially a multicore one, where
communication between cores is particularly fast) and you are running all your MPI processes
on that machine, the problem is less severe. In fact, some implementations of MPI communicate
directly through shared memory in that case, rather than using the TCP/IP or other network
protocol.
8.2
Though the presentation in this chapter is self-contained, you may wish to look first at the somewhat
simpler example in Section 1.3.3.2, a pipelined prime number finder.
8.3 Example: Dijkstra Algorithm

8.3.1 The Algorithm
The code implements the Dijkstra algorithm for finding the shortest paths in an undirected graph.
Pseudocode for the algorithm is
Done = {0}
NonDone = {1,2,...,N-1}
for J = 1 to N-1 Dist[J] = infinity
Dist[0] = 0
for Step = 1 to N-1
   find J such that Dist[J] is min among all J in NonDone
   transfer J from NonDone to Done
   for K in NonDone
      Dist[K] = min(Dist[K], Dist[J] + G[J,K])
At each iteration, the algorithm finds the closest vertex J to 0 among all those not yet processed,
and then updates the list of minimum distances to each vertex from 0 by considering paths that go
through J. Two obvious candidate parts of the algorithm for parallelization are the find
J and for K lines, and the MPI code below takes this approach.
8.3.2
// Dijkstra.c

// ...
//   nv print dbg
// ...

#include <stdio.h>
#include <mpi.h>

#define MYMIN_MSG 0
#define OVRLMIN_MSG 1
#define COLLECT_MSG 2

// ...
      // ohd[i*nv+j]
   *mind;            // min distances found so far
// ...

double T1,T2;

// ...

void findoverallmin()
{  int i;
   MPI_Status status;  // describes result of MPI_Recv() call
   // nodes other than 0 report their mins to node 0, which receives
   // them and updates its value for the global min
   if (me > 0)
      MPI_Send(mymin,2,MPI_INT,0,MYMIN_MSG,MPI_COMM_WORLD);
   else {
      // check my own first
      overallmin[0] = mymin[0];
      overallmin[1] = mymin[1];
      // check the others
      for (i = 1; i < nnodes; i++) {
         MPI_Recv(othermin,2,MPI_INT,i,MYMIN_MSG,MPI_COMM_WORLD,&status);
         if (othermin[0] < overallmin[0]) {
            overallmin[0] = othermin[0];
            overallmin[1] = othermin[1];
         }
      }
   }
}

void disseminateoverallmin()
{  int i;
   MPI_Status status;
   if (me == 0)
      for (i = 1; i < nnodes; i++)
         MPI_Send(overallmin,2,MPI_INT,i,OVRLMIN_MSG,MPI_COMM_WORLD);
   else
      MPI_Recv(overallmin,2,MPI_INT,0,OVRLMIN_MSG,MPI_COMM_WORLD,&status);
}

void dowork()
{  int step,  // index for loop of nv steps
       i;
   if (me == 0) T1 = MPI_Wtime();
   for (step = 0; step < nv; step++) {
      findmymin();
      findoverallmin();
      disseminateoverallmin();
      // mark new vertex as done
      notdone[overallmin[1]] = 0;
      updatemymind(startv,endv);
   }
   updateallmind();
   T2 = MPI_Wtime();
}

// ...
8.3.3 The MPI Calls

8.3.3.1 MPI_Init() and MPI_Finalize()
These are required for starting and ending execution of an MPI program. Their actions may be
implementation-dependent. For instance, if our platform is an Ethernet-based cluster, MPI_Init()
will probably set up the TCP/IP sockets via which the various nodes communicate with each
other. On an Infiniband-based cluster, connections in the special Infiniband network protocol will
be established. On a shared-memory multiprocessor, an implementation of MPI that is tailored to
that platform would take very different actions.
8.3.3.2 MPI_Comm_size() and MPI_Comm_rank()
MPI_Comm_size(MPI_COMM_WORLD,&nnodes);
MPI_Comm_rank(MPI_COMM_WORLD,&me);
The first call determines how many nodes are participating in our computation, placing the result in
our variable nnodes. Here MPI_COMM_WORLD is our node group, termed a communicator
in MPI parlance. MPI allows the programmer to subdivide the nodes into groups, to facilitate
performance and clarity of code. Note that for some operations, such as barriers, the only way to
apply the operation to a proper subset of all nodes is to form a group. The totality of all groups is
denoted by MPI_COMM_WORLD. In our program here, we are not subdividing into groups.
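As an illustration (not part of the Dijkstra program), a subgroup communicator could be formed with MPI_Comm_split(); here, for instance, the even-ranked and odd-ranked nodes each get their own communicator:

// illustration only: split MPI_COMM_WORLD by even/odd rank
MPI_Comm halfcomm;
MPI_Comm_split(MPI_COMM_WORLD, me % 2, me, &halfcomm);
// a collective operation on halfcomm, e.g. a barrier, now involves only
// the nodes of the same parity as this one
MPI_Barrier(halfcomm);
MPI_Comm_free(&halfcomm);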
The second call determines this node's ID number, called its rank, within its group. As mentioned
earlier, even though the nodes are all running the same program, they are typically working on
different parts of the program's data. So, the program needs to be able to sense which node it is
running on, so as to access the appropriate data. Here we record that information in our variable
me.
8.3.3.3 MPI_Send()
To see how MPI's basic send function works, consider our line above,
MPI_Send(mymin,2,MPI_INT,0,MYMIN_MSG,MPI_COMM_WORLD);
Receive calls, described in the next section, can ask to receive only messages of a certain type.

MPI_COMM_WORLD: This is the node group to which the message is to be sent. Above, where we said we are
sending to node 0, we technically should say we are sending to node 0 within the group
MPI_COMM_WORLD.
8.3.3.4 MPI_Recv()
The status argument is a pointer to an MPI_Status struct containing information about the received message. Its primary
fields of interest are MPI_SOURCE, which contains the identity of the sending node, and
MPI_TAG, which contains the message type. These would be useful if the receive had been
done with MPI_ANY_SOURCE or MPI_ANY_TAG; the status argument would then
tell us which node sent the message and what type the message was.
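For instance, a receive that accepts any sender and any message type might look like this (a sketch, not taken from the program above):

// sketch: accept a message from any sender, with any tag, then use the
// status argument to find out who sent it and what type it was
int othermin[2];
MPI_Status status;
MPI_Recv(othermin, 2, MPI_INT, MPI_ANY_SOURCE, MPI_ANY_TAG,
         MPI_COMM_WORLD, &status);
printf("message of type %d received from node %d\n",
       status.MPI_TAG, status.MPI_SOURCE);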
8.4
#include <mpi.h>
#include <stdlib.h>

#define MAX_N 100000
#define MAX_NPROCS 100
#define DATA_MSG 0
#define NEWDATA_MSG 1

// ...
8.5 Debugging MPI Code
If you are using GDB, either directly or via an IDE such as Eclipse or Netbeans, the trick with
MPI is to attach GDB to your running MPI processes.

Set up code like that we've seen in our examples here:
while (dbg) ;

This deliberately sets up an infinite loop if dbg is nonzero, for reasons to be discussed below.
For instance, suppose I'm running an MPI program a.out on machines A, B and C. I would start
the processes as usual, and have three terminal windows open. I'd log in to machine A, find the
process number for a.out, using for example a command like ps ax on Unix-family systems, then
attach GDB to that process. Say the process number is 88888. I'd attach by running the command

% gdb a.out 88888

That would start GDB, in the midst of my already-running process, thus stuck in the infinite loop
seen above. I hit ctrl-c to interrupt it, which gives me the GDB prompt, (gdb). I then type

(gdb) set var dbg = 0

which means that when I next hit the c command in GDB, the program will proceed, no longer stuck in the
loop. But first I set my breakpoints.
8.6
Collective Communications
MPI features a number of collective communication capabilities, a number of which are used in
the following refinement of our Dijkstra program:
8.6.1
// Dijkstra.coll1.c

// ...
//   nv print dbg
// ...

#include <stdio.h>
#include <mpi.h>

int nv,    // number of vertices
// ...

double T1,T2;

// ...
      mymin[0] = mind[i];
      mymin[1] = i;
// ...

void dowork()
{  int step,  // index for loop of nv steps
       i;
   if (me == 0) T1 = MPI_Wtime();
   for (step = 0; step < nv; step++) {
      findmymin();
      MPI_Reduce(mymin,overallmin,1,MPI_2INT,MPI_MINLOC,0,MPI_COMM_WORLD);
      MPI_Bcast(overallmin,1,MPI_2INT,0,MPI_COMM_WORLD);
      // mark new vertex as done
      notdone[overallmin[1]] = 0;
      updatemymind(startv,endv);
   }
   // now need to collect all the mind values from other nodes to node 0
   MPI_Gather(mind+startv,chunk,MPI_INT,mind,chunk,MPI_INT,0,MPI_COMM_WORLD);
   T2 = MPI_Wtime();
}

// ...
8.6.2 MPI_Bcast()
(N. Matloff, Network-Specific Performance Enhancements for PVM, Proceedings of the Fourth IEEE
International Symposium on High-Performance Distributed Computing, 1995, 205-210; N. Matloff,
Analysis of a Programmed Backoff Method for Parallel Processing on Ethernets, in Network-Based
Parallel Computing.)
8.6.3 MPI_Reduce()
At this point all nodes in this group participate in a reduce operation. The type
of reduce operation is MPI_MINLOC, which means that the minimum value among
the nodes will be computed, and the index attaining that minimum will be recorded
as well. Each node contributes a value to be checked, and an associated index, from
a location mymin in their programs; the type of the pair is MPI_2INT. The overall
min value/index will be computed by combining all of these values at node 0, where
they will be placed at a location overallmin.

MPI also includes a function MPI_Allreduce(), which does the same operation, except that
instead of just depositing the result at one node, it does so at all nodes. So for instance our code
above,
MPI_Reduce(mymin,overallmin,1,MPI_2INT,MPI_MINLOC,0,MPI_COMM_WORLD);
MPI_Bcast(overallmin,1,MPI_2INT,0,MPI_COMM_WORLD);
could be replaced by
MPI_Allreduce(mymin,overallmin,1,MPI_2INT,MPI_MINLOC,MPI_COMM_WORLD);
8.6.4 MPI_Gather()

A classical approach to parallel computation is to first break the data for the application into
chunks, then have each node work on its chunk, and then gather all the processed chunks together
at some node. The MPI function MPI_Gather() does this.
In our program above, look at the line
MPI_Gather(mind+startv,chunk,MPI_INT,mind,chunk,MPI_INT,0,MPI_COMM_WORLD);
8.6.5 MPI_Scatter()

This is the opposite of MPI_Gather(), i.e. it breaks long data into chunks which it parcels out
to individual nodes. For example, in the code in the next section, the call

MPI_Scatter(oh, lenchunk, MPI_INT, ohchunk, lenchunk, MPI_INT, 0, MPI_COMM_WORLD);

means

Node 0 will break up the array oh of type MPI_INT into chunks of length lenchunk,
sending the ith chunk to node i, where lenchunk items will be deposited at ohchunk.
8.6.6 Example: Counting the Number of Edges

Below is MPI code to count the number of edges in a directed graph. (Directed means that a
link from i to j does not necessarily imply one from j to i.)

In the context here, me is the node's rank; nv is the number of vertices; oh is the one-hop distance
matrix; and nnodes is the number of MPI processes. At the beginning only the process of rank 0
has a copy of oh, but it sends that matrix out in chunks to the other nodes, each of which stores
its chunk in an array ohchunk.
lenchunk = nv / nnodes;
MPI_Scatter(oh,lenchunk,MPI_INT,ohchunk,lenchunk,MPI_INT,0,
   MPI_COMM_WORLD);
mycount = 0;
for (i = 0; i < nv*nv/nnodes; i++)
   if (ohchunk[i] != 0) mycount++;
MPI_Reduce(&mycount,&numedge,1,MPI_INT,MPI_SUM,0,MPI_COMM_WORLD);
if (me == 0) printf("there are %d edges\n",numedge);
8.6.7
Example: Cumulative Sums
Here we find cumulative sums. For instance, if the original array is (3,1,2,0,3,0,1,2), then it is
changed to (3,4,6,6,9,9,10,12). (This topic is pursued in depth in Chapter 10.)
// finds cumulative sums in the array x

#include <mpi.h>
#include <stdlib.h>

#define MAX_N 10000000
#define MAX_NODES 10

int nnodes,  // number of MPI processes
    n,  // size of x
    me,  // MPI rank of this node
    // full data for node 0, part for the rest
    x[MAX_N],
    csums[MAX_N],  // cumulative sums for this node
    maxvals[MAX_NODES];  // the max values at the various nodes

int debug;

init(int argc, char **argv)
{
   int i;
   MPI_Init(&argc,&argv);
   MPI_Comm_size(MPI_COMM_WORLD,&nnodes);
   MPI_Comm_rank(MPI_COMM_WORLD,&me);
   n = atoi(argv[1]);
   // test data
   if (me == 0) {
      for (i = 0; i < n; i++)
         x[i] = rand() % 32;
   }
   debug = atoi(argv[2]);
   while (debug) ;
}

void cumulsums()
{
   MPI_Status status;
   int i,lenchunk,sum,node;
   lenchunk = n / nnodes;  // assumed to divide evenly
   // note that node 0 will participate in the computation too
   MPI_Scatter(x,lenchunk,MPI_INT,x,lenchunk,MPI_INT,
      0,MPI_COMM_WORLD);
   sum = 0;
   for (i = 0; i < lenchunk; i++) {
      csums[i] = sum + x[i];
      sum += x[i];
   }
   MPI_Gather(&csums[lenchunk-1],1,MPI_INT,
      maxvals,1,MPI_INT,0,MPI_COMM_WORLD);
   MPI_Bcast(maxvals,nnodes,MPI_INT,0,MPI_COMM_WORLD);
   if (me > 0) {
      sum = 0;
      for (node = 0; node < me; node++) {
         sum += maxvals[node];
      }
      for (i = 0; i < lenchunk; i++)
         csums[i] += sum;
   }
   MPI_Gather(csums,lenchunk,MPI_INT,csums,lenchunk,MPI_INT,
      0,MPI_COMM_WORLD);
}

...
8.6.8
Example: an MPI Solution to the Mutual Outlinks Problem
Consider the example of Section 2.4.3. We have a network graph of some kind, such as Web
links. For any two vertices, say any two Web sites, we might be interested in mutual outlinks, i.e.
outbound links that are common to two Web sites.
The MPI code below finds the mean number of mutual outlinks, among all pairs of vertices in a
graph.
// MPI solution to the mutual outlinks problem

// adjacency matrix m is global at each node, broadcast from node 0

// assumes m is nxn, and number of nodes is < n

// for each node i, check all possible pairing nodes j > i; the various
// nodes work on values of i in a Round Robin fashion, with node k
// handling all i for which i mod nnodes = k

#include <mpi.h>
#include <stdlib.h>

#define MAXLENGTH 10000000

int nnodes,  // number of MPI processes
    n,  // size of x
    me,  // MPI rank of this node
    m[MAXLENGTH],  // adjacency matrix
    grandtot;  // grand total of all counts of mutuality

// get adjacency matrix, in this case just by simulation
void getm()
{
   int i;
   for (i = 0; i < n*n; i++)
      m[i] = rand() % 2;
}

init(int argc, char **argv)
{
   int i;
   MPI_Init(&argc,&argv);
   MPI_Comm_size(MPI_COMM_WORLD,&nnodes);
   MPI_Comm_rank(MPI_COMM_WORLD,&me);
   n = atoi(argv[1]);
   if (me == 0) {
      getm();  // get the data (app-specific)
   }
}

void mutlinks()
{
   int i,j,k,tot;
   MPI_Bcast(m,n*n,MPI_INT,0,MPI_COMM_WORLD);
   tot = 0;
   for (i = me; i < n-1; i += nnodes) {
      for (j = i+1; j < n; j++) {
         for (k = 0; k < n; k++)
            tot += m[twod2oned(n,i,k)] * m[twod2oned(n,j,k)];
      }
   }
   MPI_Reduce(&tot,&grandtot,1,MPI_INT,MPI_SUM,0,MPI_COMM_WORLD);
}

// convert 2-D subscript to 1-D
int twod2oned(n,i,j)
{  return n*i + j;  }

int main(int argc, char **argv)
{  int i,j;
   init(argc,argv);
   if (me == 0 && n < 5) {  // check test input
      for (i = 0; i < n; i++) {
         for (j = 0; j < n; j++) printf("%d ",m[twod2oned(n,i,j)]);
         printf("\n");
      }
   }
   mutlinks();
   if (me == 0) printf("%f\n",((float) grandtot)/(n*(n-1)/2));
   MPI_Finalize();
}
8.6.9
MPI_Barrier()
This implements a barrier for a given communicator. The name of the communicator is the sole
argument for the function.
Explicit barriers are less common in message-passing programs than in the shared-memory world.
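For instance, a call synchronizing all the processes in the default communicator would look like this (a minimal illustration, not taken from any of the programs above):

MPI_Barrier(MPI_COMM_WORLD);  // no process continues past this call
                              // until all processes have reached it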
8.6.10
Creating Communicators
Again, a communicator is a subset (either proper or improper) of all of our nodes. MPI includes a
number of functions for use in creating communicators. Some set up a virtual topology among
the nodes.
For instance, many physics problems consist of solving differential equations in two- or three-dimensional
space, via approximation on a grid of points. In two dimensions, groups may consist
of rows in the grid.
Here's how we might divide an MPI run into two groups (assumes an even number of MPI processes
to begin with):
MPI_Comm_size(MPI_COMM_WORLD,&nnodes);
MPI_Comm_rank(MPI_COMM_WORLD,&me);
...
// declare variables to bind to groups
MPI_Group worldgroup, subgroup;
// declare variable to bind to a communicator
MPI_Comm subcomm;
...
int i,start,subme,nn2 = nnodes/2;
int *subranks = malloc(nn2*sizeof(int));
if (me < nn2) start = 0;
else start = nn2;
for (i = 0; i < nn2; i++)
   subranks[i] = i + start;
// bind the world to a group variable
MPI_Comm_group(MPI_COMM_WORLD, &worldgroup);
// take the nn2 ranks in "subranks" from worldgroup and form group
// "subgroup" from them
MPI_Group_incl(worldgroup, nn2, subranks, &subgroup);
// create a communicator for that new group
MPI_Comm_create(MPI_COMM_WORLD, subgroup, &subcomm);
// get my rank in this new group
MPI_Group_rank(subgroup, &subme);
You would then use subcomm instead of MPI_COMM_WORLD whenever you wish to, say, broadcast only to that group.
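For example, a broadcast restricted to the new group might look like this (a minimal sketch; the buffer and count are hypothetical):

int buf[100];
...
// only the processes in subcomm participate; the root rank 0 here is
// rank 0 within subcomm, not within MPI_COMM_WORLD
MPI_Bcast(buf, 100, MPI_INT, 0, subcomm);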
8.7
As noted several times so far, interprocess communication in parallel systems can be quite expensive
in terms of time delay. In this section we will consider some issues which can be extremely important
in this regard.
8.7.1
Buffering, Etc.
To understand this point, first consider situations in which MPI is running on some network, under
the TCP/IP protocol. Say an MPI program at node A is sending to one at node B.
It is extremely important to keep in mind the levels of abstraction here. The OS's TCP/IP stack is
running at the Session, Transport and Network layers of the network. MPI, meaning the MPI
internals, is running above the TCP/IP stack, in the Application layers at A and B. And the MPI
user-written application could be considered to be running at a Super-application layer, since it
calls the MPI internals. (From here on, we will refer to the MPI internals as simply MPI.)
MPI at node A will have set up a TCP/IP socket to B during the user program's call to MPI_Init().
The other end of the socket will be a corresponding one at B. We can regard this setting up of the socket pair as
establishing a connection between A and B. When node A calls MPI_Send(), MPI will write to
the socket, and the TCP/IP stack will transmit that data to the TCP/IP socket at B. The TCP/IP
stack at B will then send whatever bytes come in to MPI at B.
Now, it is important to keep in mind that in TCP/IP the totality of bytes sent by A to B during the
lifetime of the connection is considered one long message. So for instance if the MPI program at A
calls MPI_Send() five times, the MPI internals will write to the socket five times, but the bytes
from those five messages will not be perceived by the TCP/IP stack at B as five messages, but
rather as just one long message (in fact, only part of one long message, since more may be yet to
come).
MPI at B continually reads that long message and breaks it back into MPI messages, keeping
them ready for calls to MPI_Recv() from the MPI application program at B. Note carefully that
phrase, keeping them ready; it refers to the fact that the order in which the MPI application program
requests those messages may be different from the order in which they arrive.
On the other hand, looking again at the TCP/IP level, even though all the bytes sent are considered
one long message, it will physically be sent out in pieces. These pieces don't correspond to the
pieces written to the socket, i.e. the MPI messages. Rather, the breaking into pieces is done for
the purpose of flow control, meaning that the TCP/IP stack at A will not send data to the one
at B if the OS at B has no room for it. The buffer space the OS at B has set up for receiving
data is limited. As A is sending to B, the TCP layer at B is telling its counterpart at A when A is
allowed to send more data.
Think of what happens when the MPI application at B calls MPI_Recv(), requesting to receive from
A, with a certain tag T. Say the first argument is named x, i.e. the data to be received is to be
deposited at x. If MPI sees that it already has a message of tag T, it will have its MPI_Recv()
function return the message to the caller, i.e. to the MPI application at B. If no such message
has arrived yet, MPI won't return to the caller yet, and thus the caller blocks.
MPI_Send() can block too. If the platform and MPI implementation is that of the TCP/IP
network context described above, then the send call will return when its call to the OS write() (or
equivalent, depending on OS) returns, but that could be delayed if the OS buffer space is full. On
the other hand, another implementation could require a positive response from B before allowing
the send call to return.
Note that buffering slows everything down. In our TCP scenario above, MPI_Recv() at B must
copy messages from the OS buffer space to the MPI application program's variables, e.g.
x above. This is definitely a blow to performance. That in fact is why networks developed specially
for parallel processing typically include mechanisms to avoid the copying. Infiniband, for example,
has a Remote Direct Memory Access capability, meaning that A can write directly to x at B. Of
course, if our implementation uses synchronous communication, with A's send call not returning
until A gets a response from B, we must wait even longer.
Technically, the MPI standard states that MPI_Send(x,...) will return only when it is safe for
the application program to write over the array which it is using to store its message, i.e. x. As
we have seen, there are various ways to implement this, with performance implications. Similarly,
MPI_Recv(y,...) will return only when it is safe to read y.
8.7.2
Safety
With synchronous communication, deadlock is a real risk. Say A wants to send two messages to
B, of types U and V, but that B wants to receive V first. Then A won't even get to send V, because
in preparing to send U it must wait for a notice from B that B wants to read U, a notice which will
never come, because B sends such a notice for V first. This would not occur if the communication
were asynchronous.
But beyond formal deadlock, programs can fail in other ways, even with buffering, as buffer space
is always by nature finite. A program can fail if it runs out of buffer space, either at the sender
or the receiver. See www.llnl.gov/computing/tutorials/mpi_performance/samples/unsafe.c
for an example of a test program which demonstrates this on a certain platform, by deliberately
overwhelming the buffers at the receiver.
In MPI terminology, asynchronous communication is considered unsafe. The program may run
fine on most systems, as most systems are buffered, but fail on some systems. Of course, as long as
you know your program won't be run in nonbuffered settings, it's fine, and since there is potentially
such a performance penalty for doing things synchronously, most people are willing to go ahead
with their unsafe code.
8.7.3
Living Dangerously
If one is sure that there will be no problems of buffer overflow and so on, one can use variant send
and receive calls provided by MPI, such as MPI_Isend() and MPI_Irecv(). The key difference
between them and MPI_Send() and MPI_Recv() is that they return immediately, and thus are
termed nonblocking. Your code can go on and do other things, not having to wait.
This does mean that at A you cannot touch the data you are sending until you determine that it
has either been buffered somewhere or has reached x at B. Similarly, at B you can't use the data at
x until you determine that it has arrived. Such determinations can be made via MPI_Wait(). In
other words, you can do your send or receive, then perform some other computations for a while,
and then call MPI_Wait() to determine whether you can go on. Or you can call MPI_Probe()
to ask whether the operation has completed yet.
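Here is a minimal sketch of the pattern at the sending side (the buffer x, count n, destination and tag are hypothetical, not from any program above):

MPI_Request req;
MPI_Status status;
// start the send, but do not wait for it to complete
MPI_Isend(x, n, MPI_INT, dest, tag, MPI_COMM_WORLD, &req);
// ... do some unrelated computation here ...
// now make sure the send has completed before reusing x
MPI_Wait(&req, &status);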
8.7.4
In many applications A and B are swapping data, so both are sending and both are receiving. This
too can lead to deadlock. An obvious solution would be, for instance, to have the lower-rank node
send first and the higher-rank node receive first.
But a more convenient, safer and possibly faster alternative would be to use MPI's MPI_Sendrecv()
function. Its prototype is

int MPI_Sendrecv(void *sendbuf, int sendcount, MPI_Datatype sendtype,
   int dest, int sendtag, void *recvbuf, int recvcount,
   MPI_Datatype recvtype, int source, int recvtag,
   MPI_Comm comm, MPI_Status *status);

Note that the sent and received messages can be of different lengths and can use different tags.
8.8
MPI is a vehicle for parallelizing C/C++, but some clever people have extended the concept to
other languages, such as the cases of Python and R that we treat in Chapters ?? and ??.
8.9
Chapter 9
Cloud Computing
In cloud computing, the idea is that a large corporation that has many computers could sell time
on them, for example to make profitable use of excess capacity. The typical customer would have
occasional need for large-scale computing, and often large-scale data storage. The customer would
submit a program to the cloud computing vendor, who would run it in parallel on the vendor's
many machines (unseen, thus forming the cloud), then return the output to the customer.
Google, Yahoo! and Amazon, among others, have recently gotten into the cloud computing business. Moreover, universities, businesses, research labs and so on are setting up their own small
clouds, typically on clusters (a bunch of computers on the same local network, possibly with central controlling software for job management).
The paradigm that has become standard in the cloud today is MapReduce, developed by Google.
In rough form, the approach is as follows. Various nodes serve as mappers, and others serve as
reducers.
The terms map and reduce are in the functional programming sense. In the case of reduce, the
idea is similar to reduction operations we've seen earlier in this book, such as the reduction clause
in OpenMP and MPI_Reduce() for MPI. So, reducers in Hadoop perform operations such as
summation, finding minima or maxima, and so on.
In this chapter we give a very brief introduction to Hadoop, today's open-source application of
choice for MapReduce.
9.1
In terms of platforms, Hadoop is basically a Linux product. Quoting from the Hadoop Quick Start,
http://hadoop.apache.org/common/docs/r0.20.2/quickstart.html#Supported+Platforms:
Supported Platforms:
GNU/Linux is supported as a development and production platform. Hadoop has
been demonstrated on GNU/Linux clusters with 2000 nodes.
Win32 is supported as a development platform. Distributed operation has not been
well tested on Win32, so it is not supported as a production platform.
Hadoop runs in one of three modes, of varying degrees of parallelism:
standalone mode: Single mapper, single reducer, mainly useful for testing.
pseudo-distributed mode: Single node, but multiple mapper and reducer threads.
fully-distributed mode: Multiple nodes, multiple mappers and reducers.
9.2
Overview of Operations
9.3
Role of Keys
The sorting operation, called the shuffle phase, is based on a key defined by the programmer. The
key defines groups. If for instance we wish to find the total number of men and women in a certain
debate, the key would be gender. The reducer would do addition, in this case adding 1s, one 1 for
each person, but keeping a separate sum for each gender.
During the shuffle stage, Hadoop sends all records for a given key, e.g. all men, to one reducer. In
other words, records for a given key will never be split across multiple reducers. (Keep in mind,
though, that typically a reducer will have the records for many keys.)
9.4
Hadoop Streaming
Actually Hadoop is really written for Java or C++ applications. However, Hadoop can work with
programs in any language under Hadoop's Streaming option, by reading from STDIN and writing
to STDOUT, in text, line-oriented form in both cases. In other words, any executable program, be
it Java, C/C++, Python, R, shell scripts or whatever, can run in Hadoop in streaming mode.
Everything is text-file based. Mappers input lines of text, and output lines of text. Reducers input
lines of text, and output lines of text. The final output is lines of text.
Streaming mode may be less efficient, but it is simple to develop programs in it, and efficient enough
in many applications. Here we present streaming mode.
So, STDIN and STDOUT are key in this mode, and as mentioned earlier, input and output are
done in terms of lines of text. One additional requirement, though, is that the line format for both
mappers and reducers must be
key \t value
9.5
The typical introductory example is word count in a group of text files. One wishes to determine
what words are in the files, and how many times each word appears. Let's simplify that a bit, so
that we simply want a count of the number of words in the files, not an individual count for each
word.
The initial input is the lines of the files (combined internally by Hadoop into one superfile). The
mapper program breaks a line into words, and emits (key,value) pairs in the form of (0,1). Our key
here, 0, is arbitrary and meaningless, but we need to have one.
In the reducer stage, all those (key,value) pairs get sorted by the Hadoop internals (which has no
effect in this case), and then fed into the reducers. Since there is only one key, 0, only one reducer
will actually be involved. The latter adds up all its input values, i.e. all the 1s, yielding a grand
total number of words in all the files.
Here's the pseudocode:
mapper:
for each line in STDIN
   break line into words, placed in wordarray
   for each word in wordarray
      # we have found 1 word
      print 0, 1 to STDOUT
reducer:
count = 0
for each line in STDIN
   split line into (key,value)  # i.e. (0,1) here
   count += value  # i.e. add 1 to count
print count
In terms of the key 0, the final output tells us how many words there were of type 0. Since we
arbitrarily considered all words to be of type 0, the final output is simply an overall word count.
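Here is one way that pseudocode might be rendered as actual streaming code, in the same Python 2 style as the examples later in this chapter (a sketch; the key 0 and tab separator follow the format described in Section 9.4):

mapper:

#!/usr/bin/env python
# emits the pair (0,1) for each word seen
import sys
for line in sys.stdin:
   for word in line.split():
      print '0\t1'

reducer:

#!/usr/bin/env python
# adds up all the 1s, giving the overall word count
import sys
count = 0
for line in sys.stdin:
   key, value = line.split('\t')
   count += int(value)
print count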
9.6
A common Hadoop example on the Web involves data with the format for each line
year month day hightemperature airquality
mapper:

for each line in STDIN
   extract year and temperature
   print year, temperature to STDOUT
We have to be a bit more careful in the case of the reducer. Remember, though no year will be
split across reducers, each reducer will likely receive the data for more than one year. It needs to
find and output the maximum temperature for each of those years. (The results from the various
reducers will then in turn be reduced, yielding the max temps for all years.)
Since Hadoop internals sort the output of the mappers by key, our reducer code can expect a bunch
of records for one year, then a bunch for another year and so on. So, as the reducer goes through
its input line by line, it needs to detect when one bunch ends and the next begins. When such an
event occurs, it outputs the max temp for the bunch that just ended.
Here is the pseudocode:
reducer:
currentyear = NULL
currentmax = -infinity
for each line in STDIN
   split line into year, temperature
   if year == currentyear:  # still in the current bunch
      currentmax = max(currentmax, temperature)
   else:  # encountered a new bunch
      # print summary for previous bunch
      if currentyear not NULL:
         print currentyear, currentmax
      # start our bookkeeping for the new bunch
      currentyear = year
      currentmax = temperature
print currentyear, currentmax
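A direct Python 2 rendering of that reducer might look as follows (a sketch; it assumes the mapper emitted tab-separated year and temperature fields, which is an assumption about the exact mapper output format):

#!/usr/bin/env python
import sys
currentyear = None
currentmax = None
for line in sys.stdin:
   year, temperature = line.strip().split('\t')
   temperature = int(temperature)
   if year == currentyear:   # still in the current bunch
      currentmax = max(currentmax, temperature)
   else:                     # encountered a new bunch
      if currentyear is not None:
         print '%s\t%d' % (currentyear, currentmax)
      currentyear = year
      currentmax = temperature
if currentyear is not None:
   print '%s\t%d' % (currentyear, currentmax)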
9.7
Hadoop has its own file system, HDFS, which is built on top of the native OS file system of the
machines.
Very large files are possible, in some cases spanning more than one disk/machine. Indeed, this
is the typical goal of Hadoop: to easily parallelize operations on a very large database. Files are
typically gigabytes or terabytes in size. Moreover, there may be thousands of clusters, and millions
of files.
This raises serious reliability issues. Thus HDFS is replicated, with each HDFS block existing in
at least 3 copies, i.e. on at least 3 separate disks.
Disk files play a major role in Hadoop programs:
Input is from a file in the HDFS system.
The output of the mappers goes to temporary files in the native OS file system.
Final output is to a file in the HDFS system. As noted earlier, that file may be distributed
across several disks/machines.
Note that by having the input and output files in HDFS, we minimize communications costs in
shipping the data. The slogan used is "Moving computation is cheaper than moving data."
9.8
The HDFS can be accessed via a set of Unix-like commands. For example,
hadoop fs -mkdir somedir
hadoop fs -put gy somedir
hadoop fs -ls somedir
9.9
Running Hadoop
You run the above word count example something like this, say on the UCD CSIF machines.
Say my input data is in the directory indata on my HDFS, and I want to write the output to a
new directory outdata. Say I've placed the mapper and reducer programs in my home directory
(non-HDFS). I could then run
$ hadoop jar \
   /usr/local/hadoop-0.20.2/contrib/streaming/hadoop-0.20.2-streaming.jar \
   -input indata -output outdata \
   -mapper mapper.py -reducer reducer.py \
   -file /home/matloff/mapper.py \
   -file /home/matloff/reducer.py
This tells Hadoop to run a Java .jar file, which in our case here contains the code to run streaming-mode Hadoop, with the specified input and output data locations, and with the specified mapper
and reducer functions. The -file flag indicates the locations of those functions (not needed if they
are in my shell search path).
I could then run
hadoop fs -ls outdata

to see what files were produced, say part-00000, and then type

hadoop fs -cat outdata/part-00000
9.10
Yet another rendition of the app in Section 4.13, but this time with a bit of a problem, which will
illustrate a limitation of Hadoop.
To review:
Say we have a graph with adjacency matrix

\begin{pmatrix}
0 & 1 & 0 & 0 \\
1 & 0 & 0 & 1 \\
0 & 1 & 0 & 1 \\
1 & 1 & 1 & 0
\end{pmatrix}
\qquad (9.1)

with row and column numbering starting at 0, not 1. We'd like to transform this to a two-column
list of links,
\begin{pmatrix}
0 & 1 \\
1 & 0 \\
1 & 3 \\
2 & 1 \\
2 & 3 \\
3 & 0 \\
3 & 1 \\
3 & 2
\end{pmatrix}
\qquad (9.2)
Suppose further that we require this listing to be in lexicographical order, sorted on source vertex
and then on destination vertex.
At first, this seems right up Hadoop's alley. After all, Hadoop does sorting for us within groups
automatically, and we could set up one group per row of the matrix, in other words make the row
number the key.
We will actually do this below, but there is a fundamental problem: Hadoop's simple elegance
hinges on there being an independence between the lines in the input file. We should be able to
process them one line at a time, independently of other lines.
The problem with this is that we will have many mappers, each reading only some rows of the
adjacency matrix. Then for any given row, the mapper handling that row doesn't know what row
number this row had in the original matrix. So we have no key!
The solution is to add a column to the matrix, containing the original row numbers. The matrix
above, for instance, would become
\begin{pmatrix}
0 & 0 & 1 & 0 & 0 \\
1 & 1 & 0 & 0 & 1 \\
2 & 0 & 1 & 0 & 1 \\
3 & 1 & 1 & 1 & 0
\end{pmatrix}
\qquad (9.3)
Adding this column may be difficult to do, if the matrix is very large and already distributed over
many machines. Assuming we do this, though, here is the mapper code (real code this time, not
pseudocode):
#!/usr/bin/env python

# map/reduce pair inputs a graph adjacency matrix and outputs a list of
# links; if say row 3, column 8 of the input is 1, then there will be a
# row (3,8) in the final output

import sys

for line in sys.stdin:
   tks = line.split()  # get tokens
   srcnode = tks[0]
   links = tks[1:]
   for dstnode in range(len(links)):
      if links[dstnode] == '1':
         toprint = '%s\t%s' % (srcnode,dstnode)
         print toprint
Note that the row number, needed for other reasons, is also serving as our Hadoop key variable.
Recall that in the word count and yearly temperature examples above, the reducer did the main
work, with the mappers playing only a supporting role. In this case here, it's the opposite, with the
reducers doing little more than printing what they're given. However, keep in mind that Hadoop
itself did a lot of the work, with its shuffle phase, which produced the sorting that we required in
the output.
Here's the same code in R:
mapper:
#!/usr/bin/env Rscript

# map/reduce pair inputs a graph adjacency matrix and outputs a list of
# links; if say row 3, column 8 of the input is 1, then there will be a
# row (3,8) in the final output

con <- file("stdin", open = "r")
mapin <- readLines(con)  # better not to read all at once, but keep simple
for (line in mapin) {
   tks <- strsplit(line, split=" ")
   tks <- tks[[1]]
   srcnode <- tks[1]
   links <- tks[-1]
   for (dstnode in 1:length(links)) {
      if (links[dstnode] == 1)
         cat(srcnode, "\t", dstnode, "\n")
   }
}
reducer:
#!/usr/bin/env Rscript

con <- file("stdin", open = "r")
mapin <- readLines(con)
for (line in mapin) {
   line <- strsplit(line, split="\t")  # remove \t
   line <- line[[1]]
   cat(line, "\n")
}
9.11
In any large data set, there are various errors, say 3-year-olds who are listed as 7 feet tall. One
way to try to track these down is to comb the data for outliers, which are data points (rows in the
data set) that are far from the others. These may not be erroneous, but they are suspicious, and
we want to flag them for closer inspection.
In this simple version, we will define an outlier point to be one for which at least one of its variables
is in the upper p proportion in its group. Say for example p is 0.02, and our groups are male adults
and female adults, with our data variables being height and weight. Then if the height for some
man were in the upper 2% of all men in the data set, we'd flag him as an outlier; we'd do the same
for weight. Note that he'd be selected if either his height or his weight were in the top 2% for that
variable among men in the data set. Of course, we could also look at the bottom 2%, or those
whose Euclidean distance as vectors are in the most distant 2% from the centroid of the data in a
group, etc.
We'll define groups in terms of combinations of variables. These might be, say, Asian male lawyers,
female Kentucky natives registered as Democrats, etc. We'll use lexicographical order.
Say Variable 1 takes on the values 0-5, and Variable 2 has the values 0-12. The lex order would
then be (0,0), (0,1),..., (0,11), (1,0), (1,1),...,(1,11),...,(5,0), (5,1),..., (5,11).
Note that the reducer reads in the entire data set first to determine where the upper percentiles
are. This itself could be done with a separate MapReduce operation.
mapper:
#!/usr/bin/env Rscript

# p (given as a decimal number, e.g. 0.02); d, the number of data variables; g,
# the number of grouping variables; and finally g numbers which are the
# upper bounds for the last g-1 group variables (the lower bounds are
# always assumed to be 0)

init <- function() {
   ca <- commandArgs(trailingOnly=T)
   pars <- ca[1]
   pars <- strsplit(pars, split=" ")[[1]]
   pars <- pars[-1]
   ndv <<- as.integer(pars[1])
   ngv <<- as.integer(pars[2])
   # a few position variables, used in findgrp() below
   grpstart <<- 2
   grpend <<- grpstart + ngv - 1
   datastart <<- 2 + ngv
   dataend <<- 1 + ngv + ndv
   # get upper bounds, and their reverse cumulative products
   ubds <<- as.integer(pars[3:(1+ngv)])
   ubdsprod <<- vector(length=ngv-1)
   for (i in 1:(ngv-1))
      ubdsprod[i] <<- prod(ubds[i:(ngv-1)])
}

# converts vector of group variables to group number
findgrp <- function(grpvars) {
   sum <- 0
   for (i in 1:(ngv-1)) {
      m <- grpvars[i]
      sum <- sum + m * ubdsprod[i]
   }
   return(sum + grpvars[ngv])
}

# test
init()
con <- file("stdin", open = "r")
mapin <- readLines(con)  # better not to read all at once
for (line in mapin) {
   tks <- strsplit(line, split=" ")
   tks <- tks[[1]]
   rownum <- tks[1]
   grpvars <- tks[grpstart:grpend]
   grpvars <- as.integer(grpvars)
   grpnum <- findgrp(grpvars)
   cat(grpnum, "\t", rownum, " ", tks[datastart:dataend], "\n")
}
reducer:
#!/usr/bin/env Rscript

# see comments in olmap.R

# in Hadoop command line, use -reducer "olred.R 0.4 2 3 6 16" or
# similar

# R's quantile() too complicated
quantl <- function(x,q) {
   return(sort(x)[ceiling(length(x)*q)])
}

init <- function() {
   ca <- commandArgs(trailingOnly=T)
   pars <- ca[1]
   pars <- strsplit(pars, split=" ")[[1]]
   # pars <- c(0.4,2,3,6,16)  # for little test
   # pars <- c(0.1,2,2,3)  # for big test
   p <<- as.double(pars[1])
   ndv <<- as.integer(pars[2])
}

emitoutliers <- function(datamat) {
   # find the upper-p quantile for each variable (skip row number)
   toohigh <- apply(datamat[,-(1:2),drop=F], 2, quantl, 1-p)
   for (i in 1:nrow(datamat)) {
      if (any(datamat[i,-(1:2)] >= toohigh))
         cat(datamat[i,], "\n")
   }
}

# test
init()
con <- file("stdin", open = "r")
...
9.12
One advantage of the streaming approach is that mapper and reducer programs can be debugged
via normal tools, since those programs can be run on their own, without Hadoop, simply by using
the Unix/Linux pipe capability.
This all depends on the fact that Hadoop essentially does the following Unix shell computation:
cat inputfile | mapperprog | sort -n | reducerprog
You thus can use whatever debugging tool you favor, to debug the mapper and reducer code
separately.
Note, though, that the above pipe is not quite the same as Hadoop: the pipe doesn't break up
the data, and there may be subtle problems arising as a result. But overall, the above approach
provides a quick and easy first attempt at debugging.
The userlogs subdirectory of your Hadoop logs directory contains files that may be helpful, such
as stderr.
9.13
The real challenge in Hadoop is often not the programming, but rather the minimization of overhead. This involves things like tuning the file system, the number of mappers and reducers, and so
on. These topics are beyond the scope of this book.
Chapter 10
s_0 = x_0, \quad s_1 = x_0 \otimes x_1, \quad \ldots, \quad s_{n-1} = x_0 \otimes x_1 \otimes \ldots \otimes x_{n-1}
\qquad (10.1)
10.1
Example: Permutations
Say we have the vector (12,5,13,8,88). Applying the permutation (2,0) would say the old element 0
becomes element 2, the old element 2 becomes element 0, and all the rest stay the same. The result
would be (13,5,12,8,88). If we then applied the permutation (1,2,4), it would mean that element 1
goes to position 2, 2 goes to 4, and 4 goes to 1, with everything else staying put. Our new vector
would then be (13,88,5,8,12).
This too can be cast in matrix terms, by representing any permutation as a matrix multiplication.
We just apply the permutation to the identity matrix I, and then postmultiply the (row) vector by
the matrix. For instance, the matrix corresponding to the permutation (2,0) is

\begin{pmatrix}
0 & 0 & 1 & 0 & 0 \\
0 & 1 & 0 & 0 & 0 \\
1 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 1 & 0 \\
0 & 0 & 0 & 0 & 1
\end{pmatrix}
\qquad (10.2)

and the matrix corresponding to the permutation (1,2,4) is

\begin{pmatrix}
1 & 0 & 0 & 0 & 0 \\
0 & 0 & 1 & 0 & 0 \\
0 & 0 & 0 & 0 & 1 \\
0 & 0 & 0 & 1 & 0 \\
0 & 1 & 0 & 0 & 0
\end{pmatrix}
\qquad (10.3)
So in terms of (10.1), x0 would be the identity matrix, xi for i > 0 would be the ith permutation
matrix, and ⊗ would be matrix multiplication.
Note, however, that although we've couched the problem in terms of matrix multiplication, these
are sparse matrices, i.e. have many 0s. Thus a general parallel matrix-multiply routine may not
be efficient, and special parallel methods for sparse matrices should be used (Section 11.7).
Note that the above example shows that in finding a scan,
the elements might be nonscalars
the associative operator need not be commutative
10.2
For the time being, we'll assume we have n threads, i.e. one for each datum. Clearly this condition
will often not hold, so we'll extend things later.
We'll describe what is known as a data parallel solution to the prefix problem.
Step 1:

x1 ← x0 + x1   (10.4)
x2 ← x1 + x2   (10.5)
x3 ← x2 + x3   (10.6)
x4 ← x3 + x4   (10.7)
x5 ← x4 + x5   (10.8)
x6 ← x5 + x6   (10.9)
x7 ← x6 + x7   (10.10)

Step 2:

x2 ← x0 + x2   (10.11)
x3 ← x1 + x3   (10.12)
x4 ← x2 + x4   (10.13)
x5 ← x3 + x5   (10.14)
x6 ← x4 + x6   (10.15)
x7 ← x5 + x7   (10.16)

Step 3:

x4 ← x0 + x4   (10.17)
x5 ← x1 + x5   (10.18)
x6 ← x2 + x6   (10.19)
x7 ← x3 + x7   (10.20)
In Step 1, we look at elements that are 1 apart, then Step 2 considers the ones that are 2 apart,
then 4 for Step 3.
Why does this work? Well, consider how the contents of x7 evolve over time. Let ai be the original
xi, i = 0,1,...,n-1. Then here is x7 after the various steps:

step 1:  a6 + a7
step 2:  a4 + a5 + a6 + a7
step 3:  a0 + a1 + a2 + a3 + a4 + a5 + a6 + a7
To extend this to the case of fewer threads than data items, say p threads T0,...,Tp-1, we can work in blocks:

break the array into p blocks
parallel for i = 0,...,p-1
   Ti does scan of block i, resulting in Si
form new array G of rightmost elements of each Si
do parallel scan of G
parallel for i = 1,...,p-1
   Ti adds Gi-1 to each element of block i

For instance, suppose the array is (2,25,26,8,50,3,1,11,7,9,29,10) and p = 3. The scans of the
individual blocks are

2 27 53 61     50 53 54 65     7 16 45 55

But we still don't have the scan of the array overall. That 50, for instance, should be 61+50 =
111 and the 53 should be 61+53 = 114. In other words, 61 must be added to that second section,
(50,53,54,65), and 61+65 = 126 must be added to the third section, (7,16,45,55). This then is the
last step, yielding

2 27 53 61 111 114 115 126 133 142 171 181
Another possible approach would be to make n fake threads FTj. Each Ti plays the role of n/p
of the FTj. The FTj then do the parallel scan as at the beginning of this section. Key point:
Whenever a Ti becomes idle, it is assigned to help other Tk.
10.3
Implementations
The MPI standard actually includes built-in parallel prefix functions, MPI_Scan(). A number of
choices are offered for ⊗, such as maximum, minimum, sum, product, etc.
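For instance, a cumulative sum across ranks, with each process contributing one integer, might look like this (a minimal sketch):

int myval, cumval;
// after the call, cumval at rank i holds the sum of myval over ranks 0..i
MPI_Scan(&myval, &cumval, 1, MPI_INT, MPI_SUM, MPI_COMM_WORLD);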
The Thrust library for CUDA or OpenMP includes the functions thrust::inclusive_scan() and thrust::exclusive_scan().
The CUDPP (CUDA Data Parallel Primitives Library) package contains CUDA functions for
sorting and other operations, many of which are based on parallel scan. See http://gpgpu.
org/developer/cudpp for the library code, and a detailed analysis of optimizing parallel prefix in a GPU context in the book GPU Gems 3, available either in bookstores or free online at
http://developer.nvidia.com/object/gpu_gems_home.html.
10.4
Here is an OpenMP implementation of the approach described at the end of Section 10.2, for
addition:
#include <omp.h>

// calculates prefix sums sequentially on u, in-place, where u is an
// m-element array
void seqprfsum(int *u, int m)
{  int i, s = u[0];
   for (i = 1; i < m; i++) {
      u[i] += s;
      s = u[i];
   }
}

...
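The parallel version called in the listing below, parprfsum(), follows the blocked strategy at the end of Section 10.2. Here is a minimal sketch of how it might be written; the signature parprfsum(u,m,z), with z as scratch space holding one partial sum per thread, is inferred from the call in the next listing and is an assumption, as is a constant such as MAXTHREADS that bounds the number of threads.

// a sketch, not the original listing: blocked parallel prefix sum
void parprfsum(int *u, int m, int *z)
{
   #pragma omp parallel
   {
      int me = omp_get_thread_num();
      int p = omp_get_num_threads();
      // block boundaries for this thread; last thread takes any remainder
      int chunk = m / p, start = me * chunk;
      int len = (me == p-1) ? m - start : chunk;
      // step 1: each thread scans its own block
      if (len > 0) seqprfsum(u+start, len);
      // record the rightmost element of this thread's scanned block
      z[me] = (len > 0) ? u[start+len-1] : 0;
      #pragma omp barrier
      // step 2: one thread scans the per-block totals
      // (implicit barrier at the end of the single construct)
      #pragma omp single
      seqprfsum(z, p);
      // step 3: add the total of all earlier blocks to this block
      if (me > 0) {
         int i, add = z[me-1];
         for (i = 0; i < len; i++) u[start+i] += add;
      }
   }
}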
Here is an example of use: A method for compressing data is to store only repeat counts in
runs, where the latter means a set of consecutive, identical values. For instance, the sequence
2,2,2,0,0,5,0,0 would be compressed to 3,2,2,0,1,5,2,0, meaning that the data consist of first three
2s, then two 0s, then one 5, and finally two 0s. Note that the compressed version consists of
alternating run counts and run values, respectively 2 and 0 at the end of the above example.
To solve this in OpenMP, we'll first call the above functions to decide where to place the runs in
our overall output.
void uncomprle(int *x, int nx, int *tmp, int *y, int *ny)
{
   int i, nx2 = nx / 2;
   int z[MAXTHREADS];
   for (i = 0; i < nx2; i++) tmp[i+1] = x[2*i];
   parprfsum(tmp+1, nx2+1, z);
   tmp[0] = 0;
   #pragma omp parallel
   {  int j,k;
      int me = omp_get_thread_num();
      #pragma omp for
      for (j = 0; j < nx2; j++) {
         // where to start the j-th run?
         int start = tmp[j];
         // what value is in the run?
         int val = x[2*j+1];
         // how long is the run?
         int nrun = x[2*j];
         for (k = 0; k < nrun; k++)
            y[start+k] = val;
      }
   }
   *ny = tmp[nx2];
}
10.5
Here's how we could do the first part of the operation above, i.e. determining where to place the
runs in our overall output, in Thrust:
#include <stdio.h>
#include <thrust/device_vector.h>
#include <thrust/scan.h>
#include <thrust/sequence.h>
#include <thrust/remove.h>

struct iseven {
   bool operator()(const int i)
   {  return (i % 2) == 0;
   }
};

int main()
{  int i;
   int x[12] = {2,3,1,9,3,5,2,6,2,88,1,12};
   int nx = 12;
   thrust::device_vector<int> out(nx);
   thrust::device_vector<int> seq(nx);
   thrust::sequence(seq.begin(), seq.end(), 0);
   thrust::device_vector<int> dx(x, x+nx);
   thrust::device_vector<int>::iterator newend =
      thrust::copy_if(dx.begin(), dx.end(), seq.begin(), out.begin(), iseven());
   thrust::inclusive_scan(out.begin(), out.end(), out.begin());
   // out should be 2, 2+1 = 3, 2+1+3 = 6, ...
   thrust::copy(out.begin(), newend,
      std::ostream_iterator<int>(std::cout, " "));
   std::cout << "\n";
}
Chapter 11
In the early days parallel processing was mostly used in physics problems. Typical problems
of interest would be grid computations such as the heat equation, matrix multiplication, matrix
inversion (or equivalent operations) and so on. These matrices are not those little 3x3 toys you
worked with in your linear algebra class. In parallel processing applications of matrix algebra, our
matrices can have thousands of rows and columns, or even larger.
The range of applications of parallel processing is of course far broader today, such as image
processing, social networks and data mining. Google employs a number of linear algebra experts,
and they deal with matrices with literally millions of rows or columns.
We assume for now that the matrices are dense, meaning that most of their entries are nonzero.
This is in contrast to sparse matrices, with many zeros. Clearly we would use different types of
algorithms for sparse matrices than for dense ones. We'll cover sparse matrices a bit in Section
11.7.
11.2
Partitioned Matrices
Parallel processing of course relies on finding a way to partition the work to be done. In the matrix
algorithm case, this is often done by dividing a matrix into blocks (often called tiles these days).
For instance, suppose

A = \begin{pmatrix} 1 & 5 & 12 \\ 0 & 3 & 6 \\ 4 & 8 & 2 \end{pmatrix}   (11.1)

and

B = \begin{pmatrix} 0 & 2 & 5 \\ 0 & 9 & 10 \\ 1 & 1 & 2 \end{pmatrix},   (11.2)

so that

C = AB = \begin{pmatrix} 12 & 59 & 79 \\ 6 & 33 & 42 \\ 2 & 82 & 104 \end{pmatrix}.   (11.3)

We could partition A as

A = \begin{pmatrix} A_{00} & A_{01} \\ A_{10} & A_{11} \end{pmatrix},   (11.4)

where

A_{00} = \begin{pmatrix} 1 & 5 \\ 0 & 3 \end{pmatrix},   (11.5)

A_{01} = \begin{pmatrix} 12 \\ 6 \end{pmatrix},   (11.6)

A_{10} = \begin{pmatrix} 4 & 8 \end{pmatrix}   (11.7)

and

A_{11} = \begin{pmatrix} 2 \end{pmatrix}.   (11.8)

Partitioning B and C the same way, we write

B = \begin{pmatrix} B_{00} & B_{01} \\ B_{10} & B_{11} \end{pmatrix}   (11.9)

and

C = \begin{pmatrix} C_{00} & C_{01} \\ C_{10} & C_{11} \end{pmatrix},   (11.10)

where, for instance,

B_{10} = \begin{pmatrix} 1 & 1 \end{pmatrix}.   (11.11)
The key point is that multiplication still works if we pretend that those submatrices are numbers!
For example, pretending like that would give the relation
C00 = A00 B00 + A01 B10 ,
(11.12)
which the reader should verify really is correct as matrices, i.e. the computation on the right side
really does yield a matrix equal to C00 .
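For instance, with the blocks above (and partitioning B as in (11.9), so that B00 is its upper-left 2x2 block and B10 = (1 1) its lower-left block), the verification works out as

C_{00} = A_{00}B_{00} + A_{01}B_{10}
 = \begin{pmatrix} 1 & 5 \\ 0 & 3 \end{pmatrix}\begin{pmatrix} 0 & 2 \\ 0 & 9 \end{pmatrix}
 + \begin{pmatrix} 12 \\ 6 \end{pmatrix}\begin{pmatrix} 1 & 1 \end{pmatrix}
 = \begin{pmatrix} 0 & 47 \\ 0 & 27 \end{pmatrix} + \begin{pmatrix} 12 & 12 \\ 6 & 6 \end{pmatrix}
 = \begin{pmatrix} 12 & 59 \\ 6 & 33 \end{pmatrix},

which is indeed the upper-left 2x2 block of C in (11.3).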
11.3
Since so many parallel matrix algorithms rely on matrix multiplication, a core issue is how to
parallelize that operation.
Let's suppose for the sake of simplicity that each of the matrices to be multiplied is of dimensions
n x n. Let p denote the number of processes, such as shared-memory threads or message-passing
nodes.
11.3.1
Message-Passing Case
For concreteness here and in other sections below on message passing, assume we are using MPI.
The obvious plan of attack here is to break the matrices into blocks, and then assign different
blocks to different MPI nodes. Assume that √p evenly divides n, and partition each matrix into
submatrices of size n/√p x n/√p. In other words, each matrix will be divided into m rows and m
columns of blocks, where m = √p.
11.3.1.1
Fox's Algorithm
Consider the node that has the responsibility of calculating block (i,j) of the product C, which it
calculates as

A_{i0}B_{0j} + A_{i1}B_{1j} + \ldots + A_{ii}B_{ij} + \ldots + A_{i,m-1}B_{m-1,j}   (11.13)

Rearrange this sum so that it begins with the A_{ii} term:

\sum_{k=0}^{m-1} A_{i,(i+k) \bmod m} \, B_{(i+k) \bmod m,\, j}   (11.15)

In other words, start with the Aii term, then go across row i of A, wrapping back up to the left end
when you reach the right end. The order of summation in this rearrangement will be the actual
order of computation. It's similar for B, in column j.
The algorithm is then as follows. The node which is handling the computation of Cij does this (in
parallel with the other nodes which are working with their own values of i and j):
iup = i+1 mod m;
idown = i-1 mod m;
for (k = 0; k < m; k++) {
   km = (i+k) mod m;
   broadcast(A[i,km]) to all nodes handling row i of C;
   C[i,j] = C[i,j] + A[i,km]*B[km,j]
   send B[km,j] to the node handling C[idown,j]
   receive new B[km+1 mod m,j] from the node handling C[iup,j]
}
The main idea is to have the various computational nodes repeatedly exchange submatrices with
each other, timed so that a node receives the submatrix it needs for its computation just in time.
This is Fox's algorithm. Cannon's algorithm is similar, except that it does cyclical rotation in both
rows and columns, compared to Fox's rotation only in columns but broadcast within rows.
The algorithm can be adapted in the obvious way to nonsquare matrices, etc.
11.3.1.2
Performance Issues
Note that in MPI we would probably want to implement this algorithm using communicators. For
example, this would make broadcasting within a block row more convenient and efficient.
Note too that there is a lot of opportunity here to overlap computation and communication, which
is the best way to solve the communication problem. For instance, we can do the broadcast above
at the same time as we do the computation.
Obviously this algorithm is best suited to settings in which we have PEs in a mesh topology. This
includes hypercubes, though one needs to be a little more careful about communications costs there.
11.3.2
Shared-Memory Case
11.3.2.1
Example: Matrix Multiply in OpenMP
Since a matrix multiplication in serial form consists of nested loops, a natural way to parallelize
the operation in OpenMP is through the for pragma, e.g.
#pragma omp parallel for
for (i = 0; i < nrowsa; i++)        // rows of a
   for (j = 0; j < ncolsb; j++) {   // columns of b
      sum = 0;
      for (k = 0; k < ncolsa; k++)  // ncolsa = nrowsb
         sum += a[i][k] * b[k][j];
      c[i][j] = sum;
   }
This would parallelize the outer loop, and we could do so at deeper nesting levels if profitable.
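For instance, with a compiler supporting OpenMP 3.0 or later, one way to parallelize across the two outer loops jointly is the collapse clause (a sketch, using the same hypothetical dimension names as in the listing above):

#pragma omp parallel for collapse(2) private(sum,k)
for (i = 0; i < nrowsa; i++)
   for (j = 0; j < ncolsb; j++) {
      sum = 0;
      for (k = 0; k < ncolsa; k++)
         sum += a[i][k] * b[k][j];
      c[i][j] = sum;
   }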
11.3.2.2
Given that CUDA tends to work better if we use a large number of threads, a natural choice is for
each thread to compute one element of the product, like this:
__global__ void matmul(float *ma, float *mb, float *mc, int nrowsa,
   int ncolsa, int ncolsb)
{  int k,i,j;  float sum;
   // find i,j according to thread and block ID
   sum = 0;
   for (k = 0; k < ncolsa; k++)
      sum += ma[i*ncolsa+k] * mb[k*ncolsb+j];
   mc[i*ncolsb+j] = sum;
}
This should produce a good speedup. But we can do even better, much much better.
The CUBLAS package includes very finely-tuned algorithms for matrix multiplication. The CUBLAS
source code is not public, though, so in order to get an idea of how such tuning might be done,
let's look at Prof. Richard Edgar's algorithm, which makes use of shared memory. (Actually, this
may be what CUBLAS uses.)
__global__ void MultiplyOptimise(const float *A, const float *B, float *C) {
   // Extract block and thread numbers
   int bx = blockIdx.x;  int by = blockIdx.y;
   int tx = threadIdx.x; int ty = threadIdx.y;
   // Index of first A submatrix processed by this block
   int aBegin = dc_wA * BLOCK_SIZE * by;
   // Index of last A submatrix
   int aEnd = aBegin + dc_wA - 1;
   // Step size of A submatrices
   int aStep = BLOCK_SIZE;
   // Index of first B submatrix
   // processed by this block
   int bBegin = BLOCK_SIZE * bx;
   // Step size for B submatrices
   int bStep = BLOCK_SIZE * dc_wB;
   // Accumulator for this thread
   float Csub = 0;
   for (int a = aBegin, b = bBegin; a <= aEnd; a += aStep, b += bStep) {
      // Shared memory for submatrices
      __shared__ float As[BLOCK_SIZE][BLOCK_SIZE];
      __shared__ float Bs[BLOCK_SIZE][BLOCK_SIZE];
      // Load matrices from global memory into shared memory
      // Each thread loads one element of each submatrix
      As[ty][tx] = A[a + (dc_wA * ty) + tx];
      Bs[ty][tx] = B[b + (dc_wB * ty) + tx];
      // Synchronise to make sure load is complete
      __syncthreads();
      // Perform multiplication on submatrices
      // Each thread computes one element of the C submatrix
      for (int k = 0; k < BLOCK_SIZE; k++) {
         Csub += As[ty][k] * Bs[k][tx];
      }
      // Synchronise again
      __syncthreads();
   }
   // Write the C submatrix back to global memory
   // Each thread writes one element
   int c = (dc_wB * BLOCK_SIZE * by) + (BLOCK_SIZE * bx);
   C[c + (dc_wB * ty) + tx] = Csub;
}
Here are the relevant portions of the calling code, including defined constants giving the number of
columns (width) of the multiplier matrix and the number of rows (height) of the multiplicand:
#define BLOCK_SIZE 16
...
__constant__ int dc_wA;
__constant__ int dc_wB;
...
// Sizes must be multiples of BLOCK_SIZE
dim3 threads(BLOCK_SIZE, BLOCK_SIZE);
dim3 grid(wB/BLOCK_SIZE, hA/BLOCK_SIZE);
MultiplySimple<<<grid, threads>>>(d_A, d_B, d_C);
...
(Note the alternative way to configure threads, using the functions threads() and grid().)
Here the term block in the defined value BLOCK_SIZE refers both to blocks of threads and
the partitioning of matrices. In other words, a thread block consists of 256 threads, to be thought
of as a 16x16 array of threads, and each matrix is partitioned into submatrices of size 16x16.
In addition, in terms of grid configuration, there is again a one-to-one correspondence between
thread blocks and submatrices. Each submatrix of the product matrix C will correspond to, and
will be computed by, one block in the grid.
We are computing the matrix product C = AB. Denote the elements of A by aij for the element
in row i, column j, and do the same for B and C. Row-major storage is used.
Each thread will compute one element of C, i.e. one cij . It will do so in the usual way, by multiplying
column j of B by row i of A. However, the key issue is how this is done in concert with the other
threads, and the timing of what portions of A and B are in shared memory at various times.
Here we loop across a row of submatrices of A, and a column of submatrices of B, calculating one
submatrix of C. In each iteration of the loop, we bring into shared memory a new submatrix of
A and a new one of B. Note how even this copying from device global memory to device shared
memory is shared among the threads.
As an example, suppose
A = \begin{pmatrix}
1 & 2 & 3 & 4 & 5 & 6 \\
7 & 8 & 9 & 10 & 11 & 12
\end{pmatrix}
\qquad (11.16)

and

B = \begin{pmatrix}
1 & 2 & 3 & 4 \\
5 & 6 & 7 & 8 \\
9 & 10 & 11 & 12 \\
13 & 14 & 15 & 16 \\
17 & 18 & 19 & 20 \\
21 & 22 & 23 & 24
\end{pmatrix}
\qquad (11.17)
Further suppose that BLOCK_SIZE is 2. That's too small for good efficiency, giving only four
threads per block rather than 256, but it's good for the purposes of illustration.
Let's see what happens when we compute C00, the 2x2 submatrix of C's upper-left corner. Due to
the fact that partitioned matrices multiply just like numbers, we have
C_{00} = A_{00}B_{00} + A_{01}B_{10} + A_{02}B_{20}   (11.18)

 = \begin{pmatrix} 1 & 2 \\ 7 & 8 \end{pmatrix}\begin{pmatrix} 1 & 2 \\ 5 & 6 \end{pmatrix}
 + \begin{pmatrix} 3 & 4 \\ 9 & 10 \end{pmatrix}\begin{pmatrix} 9 & 10 \\ 13 & 14 \end{pmatrix}
 + \begin{pmatrix} 5 & 6 \\ 11 & 12 \end{pmatrix}\begin{pmatrix} 17 & 18 \\ 21 & 22 \end{pmatrix}   (11.19)
Now, all this will be handled by thread block number (0,0), i.e. the block whose X and Y coordinates are both 0. In the first iteration of the loop, A11 and B11 are copied to shared memory for
that block, then in the next iteration, A12 and B21 are brought in, and so on.
Consider what is happening with thread number (1,0) within that block. Remember, its ultimate
goal is to compute c21 (adjusting for the fact that in math, matrix subscripts start at 1). In the
first iteration, this thread is computing
\begin{pmatrix} 1 & 2 \end{pmatrix} \begin{pmatrix} 1 \\ 5 \end{pmatrix} = 11   (11.20)
It saves that 11 in its running total Csub, eventually writing it to the corresponding element of C:
int c = (dc_wB * BLOCK_SIZE * by) + (BLOCK_SIZE * bx);
C[c + (dc_wB * ty) + tx] = Csub;
Professor Edgar found that use of shared device memory resulted in a huge improvement, extending
the original speedup of 20X to 500X!
11.3.3
R Snow
Section 1.3.4.1 showed how to parallelize a matrix-vector product computation in snow, by breaking
the matrix rows into chunks, and then exploiting the tiling properties of matrices. Computation of
matrix-matrix products can be done in the same way.
11.3.4
R Interfaces to GPUs
The most widely used of these is probably the gputools library. It includes various matrix
routines, including gpuMatMult() for matrix multiplication.
11.4
In some applications, we are interested not just in multiplying two matrices, but rather in multiplying a matrix by itself, many times.
11.4.1
Let n denote the number of vertices in the graph. As before, define the graph's adjacency matrix
A to be the n x n matrix whose element (i,j) is equal to 1 if there is an edge connecting vertices i
and j (i.e. i and j are adjacent), and 0 otherwise.
Our ultimate goal here will be to compute the corresponding reachability matrix R(k), which has its
(i,j) element equal to 1 if there is some path from i to j taking k or fewer steps, and 0 otherwise.
(Note that the notation (k) here is a superscript, not an exponent.) We would especially like to
compute R, whose elements indicate whether one can ever reach one vertex starting at another. In
particular, we may be interested in determining whether the graph is connected, meaning that
every vertex eventually leads to every other vertex.
Toward that end, consider the matrix

\begin{pmatrix}
0 & 1 & 0 & 0 \\
1 & 0 & 0 & 1 \\
0 & 1 & 0 & 1 \\
1 & 1 & 1 & 0
\end{pmatrix}
\qquad (11.21)
(11.22)
341
(11.23)
If we were to answer this kind of question systematically, say for the number of two-step paths from
i to j, we would evaluate the following boolean expression:

p(i \rightarrow 1 \rightarrow j) + p(i \rightarrow 2 \rightarrow j) + p(i \rightarrow 3 \rightarrow j) + p(i \rightarrow 4 \rightarrow j)   (11.24)

Thus the number of two-step paths from vertex i to vertex j, for a general n x n matrix A, is

\sum_{k=1}^{n} a_{ik} a_{kj}   (11.26)
But this is the (i,j) element of A2 ! Moreover, this says that R(2) = b(A2 ), where b() changes
nonzero elements of a matrix to 1s, and retains the original 0s.
In general:
Theorem 1 Suppose A is the adjacency matrix A for a graph. Then
(a) The number of r-step paths from i to j is the (i,j) element of Ar .
(b) Let B = A + I, with I being the n x n identity matrix. B is the adjacency matrix of the
original graph, augmented with an edge from each vertex to itself. Then

R^{(k)} = b(B^k)   (11.27)
The purpose of the augmentation of A was to allow for the possibility that there may be a
path from i to j in s < k steps but no other such paths. The augmentation allows us to run
in place at j for the remaining k-s steps, so that (11.27) still shows that we can be at j at k
steps.
(c) Since the longest possible distinct path has length n-1, we have that the graph is connected if
and only if each of the matrices R(1), ..., R(n-1) has all of its off-diagonal elements equal to 1.
(d) Suppose the graph is undirected. Then cycles are possible, so we can keep coming back to a
vertex. Thus the graph is connected if and only if some matrix among R(1), ..., R(n-1) has all
of its off-diagonal elements equal to 1.
So, the original graph connectivity problem reduces to a matrix problem. And (d) is especially
interesting, as it means that if we do manage to find some R(k) that consists of all 1s (off the
diagonal), our computation is done.
11.4.2
The basic problem is well known: Find the Fibonacci numbers f_n, where

f_0 = f_1 = 1   (11.28)

and

f_{n+2} = f_{n+1} + f_n, \quad n = 0, 1, 2, \ldots   (11.29)

In matrix terms,

\begin{pmatrix} f_{n+1} \\ f_n \end{pmatrix} = A \begin{pmatrix} f_n \\ f_{n-1} \end{pmatrix},   (11.30)

where

A = \begin{pmatrix} 1 & 1 \\ 1 & 0 \end{pmatrix}.   (11.31)

Iterating (11.30), we have

\begin{pmatrix} f_{n+1} \\ f_n \end{pmatrix} = A^{n-1} \begin{pmatrix} 1 \\ 1 \end{pmatrix}.   (11.32)

In other words, our problem reduces to one of finding the powers A, A^2, ..., A^{n-1}.
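As a quick check of (11.30)-(11.32) for small n, with f0 = f1 = 1, f2 = 2 and f3 = 3:

A \begin{pmatrix} 1 \\ 1 \end{pmatrix}
 = \begin{pmatrix} 1 & 1 \\ 1 & 0 \end{pmatrix}\begin{pmatrix} 1 \\ 1 \end{pmatrix}
 = \begin{pmatrix} 2 \\ 1 \end{pmatrix}
 = \begin{pmatrix} f_2 \\ f_1 \end{pmatrix},
\qquad
A^2 \begin{pmatrix} 1 \\ 1 \end{pmatrix}
 = \begin{pmatrix} 3 \\ 2 \end{pmatrix}
 = \begin{pmatrix} f_3 \\ f_2 \end{pmatrix}.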
11.4.3
Many applications make use of A^{-1} for an n x n square matrix A. In many cases, it is not computed
directly, but here we address methods for direct computation.
We could use the methods of Section 11.5 to find matrix inverses, but there is also a power series
method.
Recall that for numbers x that are smaller than 1 in absolute value,

\frac{1}{1-x} = 1 + x + x^2 + \ldots   (11.33)

There is a matrix analog of this expansion, valid under a convergence condition on the matrix.
To meet the convergence condition, we could set Ã = dA, where d is small enough so that (11.35)
holds for I − Ã. This will be possible if all the elements of A are nonnegative. We then find the
inverse of dA, and in the end multiply by d to get the inverse of A.
11.4.4
Parallel Computation
11.5
(11.37)
(11.38)
11.5.1
Gaussian Elimination
Form the n x (n+1) matrix C = (A | b) by appending the column vector b to the right of A. (It
may be advantageous to add padding on the right of b.)
Then we work on the rows of C, with the pseudocode for the sequential case in the most basic
version being
for ii = 0 to n-1
   divide row ii by c[ii][ii]
   for r = 0 to n-1, r != ii
      replace row r by row r - c[r][ii] times row ii
In the divide operation in the above pseudocode, $c_{ii}$ might be 0, or close to 0. In that case, a pivoting operation is performed (not shown in the pseudocode): that row is first swapped with another one further down.

This transforms C to reduced row echelon form, in which A is now the identity matrix I and b is now our solution vector x.

A variation is to transform only to row echelon form. This means that C ends up in upper triangular form, with all the elements $c_{ij}$ with i > j being 0, and with all diagonal elements being equal to 1. Here is the pseudocode:
for ii = 0 to n-1
   divide row ii by c[ii][ii]
   for r = ii+1 to n-1   // vacuous if ii = n-1
      replace row r by row r - c[r][ii] times row ii
11.5.2 Example: Gaussian Elimination in CUDA

Here's CUDA code for the reduced row echelon form version, suitable for a not-extremely-large matrix:
// multiply the vector u of length m by the constant c (not changing u)
// and add the result to v
__device__ void vplscu(float *u, float *v, int m, float c)
{  for (int i = 0; i < m; i++) v[i] += c * u[i];
}

// copy the vector u of length m to v
__device__ void cpuv(float *u, float *v, int m)
{  for (int i = 0; i < m; i++) v[i] = u[i];
}

// solve matrix equation Ax = b; straight Gaussian elimination, no
// pivoting etc.; the matrix ab is (A|b), n rows; ab is destroyed, with
// x placed in the last column; one block, with thread i handling row i;
// onedim() converts a 2-D subscript to a 1-D one, and cvec() multiplies
// a vector in place by a constant (both defined elsewhere)
__global__ void gauss(float *ab, int n)
{  int i, n1 = n+1, abii, abme;
   extern __shared__ float iirow[];
   int me = threadIdx.x;
   for (i = 0; i < n; i++) {
      if (i == me) {  // this thread owns the pivot row
         abii = onedim(i,i,n1);
         cvec(&ab[abii], n1-i, 1/ab[abii]);  // divide row i by its pivot
         cpuv(&ab[abii], iirow, n1-i);       // copy pivot row to shared memory
      }
      __syncthreads();
      if (i != me) {  // subtract the appropriate multiple of the pivot row
         abme = onedim(me,i,n1);
         vplscu(iirow, &ab[abme], n1-i, -ab[abme]);
      }
      __syncthreads();
   }
}
Here we have one thread for each row, and are using just one block, so as to avoid interblock
synchronization problems and to easily use shared memory. Concerning the latter, note that since
the pivot row, iirow, is read many times, it makes sense to put it in shared memory.
Needless to say, the restriction to one block is quite significant. With a 512-thread limit per block, this would limit us to 512x512 matrices. But it's even worse than that: if shared memory is only 4K in size, in single precision that would mean something like 30x30 matrices! We could go to multiple blocks, at the cost of incurring synchronization delays coming from repeated kernel calls.
In a row echelon version of the code, we could have dynamic assignment of rows to threads, but
still would eventually have load balancing issues.
11.5.3 The Jacobi Algorithm

One can rewrite the ith equation of the system Ax = b as

$$x_i = \frac{1}{a_{ii}} [b_i - (a_{i0}x_0 + ... + a_{i,i-1}x_{i-1} + a_{i,i+1}x_{i+1} + ... + a_{i,n-1}x_{n-1})], \quad i = 0, 1, ..., n-1. \qquad (11.39)$$
This suggests a natural iterative algorithm for solving the equations. We start with our guess
being, say, xi = bi for all i. At our kth iteration, we find our (k+1)st guess by plugging in our kth
guess into the right-hand side of (11.39). We keep iterating until the difference between successive
guesses is small enough to indicate convergence.
This algorithm is guaranteed to converge if each diagonal element of A is larger in absolute value
than the sum of the absolute values of the other elements in its row.
Parallelization of this algorithm is easy: Just assign each process to handle a section of $x = (x_0, x_1, ..., x_{n-1})$. Note that this means that each process must make sure that all other processes get the new value of its section after every iteration.
Note too that in matrix terms (11.39) can be expressed as

$$x^{(k+1)} = D^{-1} (b - O x^{(k)}) \qquad (11.40)$$

where D is the diagonal matrix consisting of the diagonal elements of A (so its inverse is just the diagonal matrix consisting of the reciprocals of those elements), O is the square matrix obtained by replacing A's diagonal elements by 0s, and $x^{(i)}$ is our guess for x in the ith iteration. This reduces the problem to one of matrix multiplication, and thus we can parallelize the Jacobi algorithm by utilizing a method for doing parallel matrix multiplication.
11.5.4 Example: OpenMP Implementation of the Jacobi Algorithm
#include <omp.h>
#include <stdlib.h>  // for malloc()
#include <math.h>    // for fabsf()

// partitions s..e into nc chunks, placing the ith in first and last (i
// = 0,...,nc-1)
void chunker(int s, int e, int nc, int i, int *first, int *last)
{  int chunksize = (e-s+1) / nc;
   *first = s + i * chunksize;
   if (i < nc-1) *last = *first + chunksize - 1;
   else *last = e;
}

// returns the dot product of vectors u and v
float innerprod(float *u, float *v, int n)
{  float sum = 0.0; int i;
   for (i = 0; i < n; i++)
      sum += u[i] * v[i];
   return sum;
}

// solves AX = Y, A nxn; stops iteration when total change is < n*eps
void jacobi(float *a, float *x, float *y, int n, float eps)
{
   float *oldx = malloc(n*sizeof(float));
   float se;  // total change, shared among the threads
   #pragma omp parallel
   {  int i;
      int thn = omp_get_thread_num();
      int nth = omp_get_num_threads();
      int first, last;
      chunker(0, n-1, nth, thn, &first, &last);
      for (i = first; i <= last; i++) oldx[i] = x[i] = 1.0;
      float tmp;
      while (1) {
         #pragma omp barrier  // make sure all of oldx has been updated
         for (i = first; i <= last; i++) {
            tmp = innerprod(&a[n*i], oldx, n);
            tmp -= a[n*i+i] * oldx[i];
            x[i] = (y[i] - tmp) / a[n*i+i];
         }
         #pragma omp barrier
         #pragma omp single  // one thread resets the total change;
         se = 0;             // the single construct ends with a barrier
         #pragma omp for reduction(+:se)
         for (i = 0; i < n; i++)
            se += fabsf(x[i] - oldx[i]);  // fabsf(), not abs(), for floats
         if (se < n*eps) break;  // all threads see the same se here
         for (i = first; i <= last; i++)
            oldx[i] = x[i];
      }
   }
   free(oldx);
}
11.5.5 Example: R/gputools Implementation of Jacobi

library(gputools)

jcb <- function(a,b,eps) {
   n <- length(b)
   d <- diag(a)  # a vector, not a matrix
   tmp <- diag(d)  # a matrix, not a vector
   o <- a - diag(d)
   di <- 1/d
   x <- b  # initial guess, could be better
   repeat {
      oldx <- x
      tmp <- gpuMatMult(o,x)
      tmp <- b - tmp
      x <- di * tmp  # elementwise multiplication
      if (sum(abs(x-oldx)) < n*eps) return(x)
   }
}
11.6 Eigenvalues and Eigenvectors
With the popularity of document search (Web search, text mining etc.), eigenanalysis has become
much more broadly used. Given the size of the problems, again parallel computation is needed.
This can become quite involved, with many complicated methods having been developed.
11.6.1 The Power Method

One of the simplest methods is the power method. Consider an nxn matrix A, with eigenvalues $\lambda_1, ..., \lambda_n$, where the labeling is such that $|\lambda_1| \ge |\lambda_2| \ge ... \ge |\lambda_n|$. We'll assume here that A is a symmetric matrix, which it is for instance in statistical applications (Section 14.4). That implies that the eigenvalues of A are real, and that the eigenvectors are orthogonal to each other.
Start with some nonzero vector x, and define the kth iterate by

$$x^{(k)} = \frac{A^k x}{\| A^k x \|} \qquad (11.41)$$

For large k, $x^{(k)}$ converges (up to sign) to the eigenvector $v_1$ corresponding to the dominant eigenvalue $\lambda_1$. Having found $\lambda_1$ and $v_1$ (with $v_1$ scaled to length 1), we can deflate the matrix, defining

$$B = A - \lambda_1 v_1 v_1' \qquad (11.42)$$

Then, using the orthonormality of the eigenvectors,

$$B v_1 = A v_1 - \lambda_1 v_1 (v_1' v_1) \qquad (11.43)$$

$$= \lambda_1 v_1 - \lambda_1 v_1 (1) \qquad (11.44)$$

$$= 0 \qquad (11.45)$$

while for i > 1,

$$B v_i = A v_i - \lambda_1 v_1 (v_1' v_i) \qquad (11.46)$$

$$= \lambda_i v_i - \lambda_1 v_1 (0) \qquad (11.47)$$

$$= \lambda_i v_i \qquad (11.48)$$
In other words, the eigenvalues of B are $\lambda_2, ..., \lambda_n, 0$. So we can now apply the same procedure to B to get $\lambda_2$ and $v_2$, and iterate for the rest.
11.6.2 Parallel Computation
To use the power method in parallel, note that this is again a situation in which we wish to compute
powers of matrices. However, there is also scaling involved, as seen in (11.41). We may wish to try
the log method of Section 11.4, with scaling done occasionally.
The CULA library for CUDA, mentioned earlier, includes routines for finding the singular value decomposition of a matrix, thus providing the eigenvectors. The R package gputools has an interface to the SVD routine in CULA.
11.7 Sparse Matrices
As mentioned earlier, in many parallel processing applications of linear algebra, the matrices can
be huge, even having millions of rows or columns. However, in many such cases, most of the matrix
consists of 0s. In an effort to save memory, one can store such matrices in compressed form, storing
only the nonzero elements.
Sparse matrices roughly fall into two categories. In the first category, the matrices all have 0s at
the same known positions. For instance, in tridiagonal matrices, the only nonzero elements are
either on the diagonal or on subdiagonals just below or above the diagonal, and all other elements
are guaranteed to be 0, such as
$$\begin{pmatrix}
2 & 1 & 0 & 0 & 0 \\
0 & 1 & 8 & 0 & 0 \\
0 & 1 & 5 & 0 & 0 \\
0 & 0 & 8 & 8 & 8 \\
0 & 0 & 0 & 3 & 5
\end{pmatrix} \qquad (11.49)$$
Code to deal with such matrices can then access the nonzero elements based on this knowledge.
In the second category, each matrix that our code handles will typically have its nonzero elements in different, random positions. A number of methods have been developed for storing such "amorphous" sparse matrices, such as the Compressed Sparse Row (CSR) format, which we'll code in this C struct, representing an mxn matrix A, with k nonzero entries:
struct {
   int m,n;          // numbers of rows and columns of A
   float *avals;     // the nonzero values of A, in row-major order; length k
   int *cols;        // avals[i] is in column cols[i] in A; length k
   int *rowplaces;   // rowplaces[i] is the index in avals for the 1st
                     // nonzero element of row i in A (but last element
                     // is k); length m+1
};
For the matrix in (11.49) (if we were not to exploit its tridiagonal nature, and just treat it as
amorphous):
m,n: 5,5
avals: 2,1,1,8,1,5,8,8,8,3,5
cols: 0,1,1,2,1,2,2,3,4,3,4
rowplaces: 0,2,4,6,9,11
For instance, look at the 4 in rowplaces. It's at position 2 in that array, so it says that element 4 in avals (the third 1) is the first nonzero element in row 2 of A. Look at the matrix, and you'll see this is true.

Parallelizing operations for sparse matrices can be done in the usual manner, e.g. breaking the rows of A into chunks. Note, though, that there could be a load-balance issue, again addressable in ways we've used before.
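For instance, here is a minimal OpenMP sketch of matrix-vector multiply y = Ax in the CSR format above, with the rows chunked across threads (the function name csrmv() is mine, not from the text):

#include <omp.h>

// y = A x for an m-row matrix A stored in CSR form as above
void csrmv(int m, float *avals, int *cols, int *rowplaces,
           float *x, float *y)
{
   #pragma omp parallel for
   for (int i = 0; i < m; i++) {    // each thread handles a chunk of rows
      float sum = 0.0;
      for (int j = rowplaces[i]; j < rowplaces[i+1]; j++)
         sum += avals[j] * x[cols[j]];
      y[i] = sum;
   }
}

Rows with many nonzeros take longer than nearly empty ones, which is exactly the load-balance issue mentioned above; OpenMP's dynamic scheduling clause is one way to address it.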
11.8 Libraries
Of course, remember that CUDA provides some excellent matrix-operation routines, in CUBLAS.
There is also the CUSP library for sparse matrices (i.e. those with a lot of 0s). Note too the CULA
library (not developed by NVIDIA, but using CUDA).
More general (i.e. non-CUDA) parallel libraries for linear algebra include ScaLAPACK and PLAPACK.
Chapter 12

Introduction to Parallel Sorting

12.1 Quicksort
You are probably familiar with the idea of quicksort: First break the original array into a small-element pile and a large-element pile, by comparing to a pivot element. In a naive implementation, the first element of the array serves as the pivot, but better performance can be obtained
by taking, say, the median of the first three elements. Then recurse on each of the two piles, and
then string the results back together again.
This is an example of the divide and conquer approach seen in so many serial algorithms. It
is easily parallelized (though load-balancing issues may arise). Here, for instance, we might assign
one pile to one thread and the other pile to another thread.
Suppose the array to be sorted is named x, and consists of n elements.
12.1.1 The Separation Process

A straightforward implementation places the two piles back into the original array x. The following C code does that.

The function separate() is intended to be used in a recursive quicksort operation. It operates on x[l] through x[h], a subarray of x that itself may have been formed at an earlier stage of the recursion. It forms two piles from those elements, and places the piles back in the same region, x[l] through x[h]. It also has a return value, showing where the first pile ends.
int separate(int l, int h)
{  int ref,i,j,tmp;
   ref = x[h]; i = l-1; j = h;  // the pivot is the last element, x[h]
   do {
      do i++; while (x[i] < ref && i < h);
      do j--; while (x[j] > ref && j > l);
      tmp = x[i]; x[i] = x[j]; x[j] = tmp;
   } while (j > i);
   // undo the final (unneeded) swap, and place the pivot at position i
   x[j] = x[i]; x[i] = x[h]; x[h] = tmp;
   return i;
}
For example, consider the array

$$(28, 35, 12, 5, 13, 6, 48, 10, 168) \qquad (12.1)$$
We'll take the first element, 28, as the pivot, and form a new array of 1s and 0s, where 1 means "less than the pivot":

28  35  12   5  13   6  48  10  168
 0   0   1   1   1   1   0   1    0
Now form the prefix scan (Chapter 10) of that second array, with respect to addition. It will be an exclusive scan (Section 10.3). This gives us

28  35  12   5  13   6  48  10  168
 0   0   1   1   1   1   0   1    0
 0   0   0   1   2   3   4   4    5
Now, the key point is that for every element 1 in that second row, the corresponding element in the third row shows where the first-row element should be placed under the separation operation! Here's why:

The elements 12, 5, 13, 6 and 10 should go in the first pile, which in an in-place separation would mean indices 0, 1, 2, 3, and 4. Well, as you can see above, these are precisely the values shown in the third row for 12, 5, 13, 6 and 10, all of which have 1s in the second row.

The pivot, 28, then should immediately follow that low pile, i.e. it should be placed at index 5. We can simply place the high pile at the remaining indices, 6 through 8 (though we'll do it more systematically below).
In general for an array of length k, we:

• form the second row of 1s and 0s indicating < pivot
• form the third row, the exclusive prefix scan of the second row
• for each 1 in the second row, place the corresponding element in row 1 into the spot indicated by row 3
• place the pivot in the spot indicated by m, the total number of 1s in the second row
• form row 4, equal to (0,1,...,k-1) minus row 3 plus m
• for each 0 in the second row, place the corresponding element in row 1 into the spot indicated by row 4
Note that this operation, using scan, could be used as an alternative to the separate() function above. But it could be done in parallel; more on this below.
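Here is a minimal serial sketch of that scan-based separation (the function name scansep() is mine; for clarity it writes the separated array to a second array y rather than working in place). In parallel code, the prefix-scan loop would be replaced by the parallel scan methods of Chapter 10.

// separated version of x (length k) placed in y
void scansep(int *x, int *y, int k)
{  int lt[k], scan[k];                 // rows 2 and 3 (C99 VLAs)
   int pivot = x[0];
   for (int i = 0; i < k; i++) lt[i] = (x[i] < pivot);
   scan[0] = 0;                        // exclusive prefix sum of lt
   for (int i = 1; i < k; i++) scan[i] = scan[i-1] + lt[i-1];
   int m = scan[k-1] + lt[k-1];        // total number of 1s in row 2
   for (int i = 0; i < k; i++)
      if (lt[i]) y[scan[i]] = x[i];    // low pile, via row 3
      else y[i - scan[i] + m] = x[i];  // pivot and high pile, via row 4
}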
12.1.2 Shared-Memory Quicksort

Quicksort is easily implemented in the shared-memory paradigm with OpenMP, building on the function separate() above; one well-known implementation, from which the approach here is adapted, is in the OpenMP Source Code Repository, http://www.pcg.ull.es/ompscr/.
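A minimal sketch of such code (the details here are illustrative, not the repository's listing; nested parallelism must be enabled, e.g. via omp_set_nested(1), for the recursive calls to actually gain threads):

void qs(int l, int h)   // sorts the global array x[l] through x[h]
{  int pivotpos;
   if (l >= h) return;
   pivotpos = separate(l,h);
   #pragma omp parallel num_threads(2)
   {
      #pragma omp sections nowait
      {
         #pragma omp section
         qs(l, pivotpos-1);     // one thread recurses on the low pile
         #pragma omp section
         qs(pivotpos+1, h);     // the other on the high pile
      }
   }
}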
Note the nowait clause. Since different threads are operating on different portions of the array,
they need not be synchronized.
Recall that another implementation, using the task directive, was given earlier in Section 4.5.
In both of these implementations, we used the function separate() defined above. So, different
threads apply different separation operations to different subarrays. An alternative would be to
place the parallelism in the separation operation itself, using the parallel algorithms for prefix scan
in Chapter 10.
12.1.3 Hyperquicksort
This algorithm was originally developed for hypercubes, but can be used on any message-passing system having a power of 2 for the number of nodes.
It is assumed that at the beginning each PE contains some chunk of the array to be sorted. After
sorting, each PE will contain some chunk of the sorted array, meaning that:
each chunk is itself in sorted form
for all cases of i < j, the elements at PE i are less than the elements at PE j
If the sorted array itself were our end, rather than our means to something else, we could now
collect it at some node, say node 0. If, as is more likely, the sorting is merely an intermediate step
in a larger distributed computation, we may just leave the chunks at the nodes and go to the next
phase of work.
Say we are on a d-cube. The intuition behind the algorithm is quite simple:

for i = d downto 1
   for each i-cube:
      root of the i-cube broadcasts its median to all in the i-cube,
         to serve as pivot
      consider the two (i-1)-subcubes of this i-cube
      each pair of partners, one from each (i-1)-subcube, exchanges data:
         the lower-numbered PE sends its elements larger than the pivot,
         and receives its partner's elements smaller than the pivot
To avoid deadlock, have the lower-numbered partner send then receive, and vice versa for the higher-numbered one. Better, in MPI, use MPI_Sendrecv().

After the first iteration, all elements in the lower (d-1)-cube are less than all elements in the higher (d-1)-cube. After d such steps, the array will be sorted.
12.2 Mergesorts

12.2.1 Sequential Form
The function merge() should be done in-place, i.e. without using an auxiliary array. It basically
codes the operation shown in pseudocode for the message-passing case in Section 12.2.3.
12.2.2 Shared-Memory Mergesort
This is similar to the patterns for shared-memory quicksort in Section 12.1.2 above.
12.2.3 Message Passing Mergesort
First, we organize the processing nodes into a binary tree. This is simply from the point of view
of the software, rather than a physical grouping of the nodes. We will assume, though, that the
number of nodes is one less than a power of 2.
To illustrate the plan, say we have seven nodes in all. We could label node 0 as the root of the tree, label nodes 1 and 2 to be its two children, label nodes 3 and 4 to be node 1's children, and finally label nodes 5 and 6 to be node 2's children.
It is assumed that the array to be sorted is initially distributed in the leaf nodes (recall a similar
situation for hyperquicksort), i.e. nodes 3-6 in the above example. The algorithm works best if
there are approximately the same number of array elements in the various leaves.
In the first stage of the algorithm, each leaf node applies a regular sequential sort to its current
holdings. Then each node begins sending its now-sorted array elements to its parent, one at a time,
in ascending numerical order.
Each nonleaf node then will merge the lists handed to it by its two children. Eventually the root
node will have the entire sorted array. Specifically, each nonleaf node does the following:
do
if my left-child datum < my right-child datum
pass my left-child datum to my parent
else
pass my right-child datum to my parent
until receive the "no more data" signal from both children
There is quite a load balancing issue here. On the one hand, due to network latency and the like,
one may get better performance if each node accumulates a chunk of data before sending to the
parent, rather than sending just one datum at a time. Otherwise, upstream nodes will frequently
have no work to do.
On the other hand, the larger the chunk size, the earlier the leaf nodes will have no work to do.
So for any particular platform, there will be some optimal chunk size, which would need to be
determined by experimentation.
12.2.4 Compare-Exchange Operations

In a compare-exchange operation between two PEs, the pair pools their data, with one PE keeping the smaller half and the other keeping the larger half. In the case of single elements, this reduces to comparing the two values and swapping them if they are out of order.

12.2.5 Bitonic Mergesort
Definition: A sequence $(a_0, a_1, ..., a_{k-1})$ is called bitonic if either of the following conditions holds:

(a) The sequence is first nondecreasing then nonincreasing, meaning that for some r,

$$a_0 \le a_1 \le ... \le a_r \ge a_{r+1} \ge ... \ge a_{k-1}$$

(b) The sequence can be converted to the form in (a) by rotation, i.e. by moving some number of elements from the right end to the left end.

As an example of (b), the sequence (3,8,12,15,14,5,1,2) can be rotated rightward by two element positions to form (1,2,3,8,12,15,14,5). Or we could just rotate by one element, moving the 2 to the front, forming (2,3,8,12,15,14,5,1).

Note that the definition includes the cases in which the sequence is purely nondecreasing (r = k-1) or purely nonincreasing (r = 0).

Also included are "V-shape" sequences, in which the numbers first decrease then increase, such as (12,5,2,8,20). By (b), these can be rotated to form (a), with (12,5,2,8,20) being rotated to form (2,8,20,12,5), an "A-shape" sequence.

(For convenience, from here on I will use the terms increasing and decreasing instead of nondecreasing and nonincreasing.)

Suppose we have a bitonic sequence $(a_0, a_1, ..., a_{k-1})$, where k is a power of 2. Rearrange the sequence by doing compare-exchange operations between $a_i$ and $a_{k/2+i}$, i = 0,1,...,k/2-1. Then it is not hard to prove that the new $(a_0, a_1, ..., a_{k/2-1})$ and $(a_{k/2}, a_{k/2+1}, ..., a_{k-1})$ are each bitonic, and every element of the first subarray is less than or equal to every element in the second one.
So, we have set things up for yet another divide-and-conquer attack:
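A minimal serial sketch of a function sortbitonic() implementing that recursion (my reconstruction, not the original listing; the parallel versions are discussed below):

// x (of length n, a power of 2) is assumed bitonic; sorts it ascending
void sortbitonic(int *x, int n)
{  int half = n/2;
   if (n == 1) return;
   for (int i = 0; i < half; i++)
      if (x[i] > x[half+i]) {          // compare-exchange x[i], x[half+i]
         int tmp = x[i]; x[i] = x[half+i]; x[half+i] = tmp;
      }
   sortbitonic(x, half);               // each half is again bitonic, and
   sortbitonic(x + half, half);        // max(first half) <= min(second half)
}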
This can be parallelized in the same ways we saw for Quicksort earlier.
So much for sorting bitonic sequences. But what about general sequences?
We can proceed as follows, using our function sortbitonic() above:

1. For each i = 0,2,4,...,n-2: sort the pair (x[i], x[i+1]), alternating between ascending and descending order. Each set of four consecutive elements is then bitonic.

2. Sort each of those 4-element bitonic sequences with sortbitonic(), again alternating between ascending and descending order, so that each set of eight consecutive elements becomes bitonic. Continue doubling in this manner until the full array is one bitonic sequence, then apply sortbitonic() to it one last time.
12.3 The Bubble Sort and Its Cousins
12.3.1
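A minimal sketch of the serial algorithm (the code is mine, phrased via a compare_exchange() helper that will also be used in the next section):

// compare-exchange of x[i] and x[j], guaranteeing x[i] <= x[j] afterward;
// does nothing if i or j is outside 0..n-1
void compare_exchange(int *x, int i, int j, int n)
{  if (i < 0 || j > n-1) return;
   if (x[i] > x[j]) {
      int tmp = x[i]; x[i] = x[j]; x[j] = tmp;
   }
}

// serial bubble sort; after the pass for a given i, x[i] holds the
// largest element of x[0..i]
void bubblesort(int *x, int n)
{  for (int i = n-1; i > 0; i--)
      for (int j = 0; j < i; j++)
         compare_exchange(x, j, j+1, n);
}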
Here the function compare-exchange() is as in Section 12.2.4 above. In the context here, it boils
down to
if x[i] > x[j]
swap x[i] and x[j]
In the first i iteration, the largest element "bubbles" all the way to the right end of the array. In the second iteration, the second-largest element bubbles to the next-to-right-end position, and so on.

You learned in your algorithms class that this is a very inefficient algorithm, when used serially. But it's actually rather usable in parallel systems.
For example, in the shared-memory setting, suppose we have one thread for each value of i. Then those threads can work in parallel, as long as a thread with a larger value of i does not overtake a thread with a smaller i, where "overtake" means working on a larger j value.

Once again, it probably pays to chunk the data. In this case, compare-exchange() fully takes on the meaning it had in Section 12.2.4.
12.3.2 Odd-Even Transposition Sort

A popular variant of this is the odd-even transposition sort. The pseudocode for a shared-memory version is:
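A sketch of such code (mine, with OpenMP supplying the parallelism across the pairs within a phase; compare_exchange() is the bounds-guarded helper shown in Section 12.3.1):

#include <omp.h>

void oddeven(int *x, int n)
{  for (int ph = 1; ph <= n; ph++) {
      #pragma omp parallel for
      for (int i = 0; i < n; i += 2)         // even-numbered elements
         if (ph % 2 == 1)
            compare_exchange(x, i, i+1, n);  // odd phase: trade with right neighbor
         else
            compare_exchange(x, i-1, i, n);  // even phase: trade with left neighbor
   }
}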
If the second or third argument of compare-exchange() is less than 0 or greater than n-1, the function has no action.

This looks a bit complicated, but all it's saying is that, from the point of view of an even-numbered element of x, it trades with its right neighbor during odd phases of the procedure and with its left neighbor during even phases.

Again, this is usually much more effective if done in chunks.
12.3.3 Example: CUDA Implementation of Odd-Even Transposition Sort
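A minimal CUDA sketch (mine, not the original listing): one kernel call per phase, with thread number me handling one pair per phase.

#include <stdio.h>
#include <stdlib.h>
#include <cuda.h>

// one phase of odd-even transposition sort on dx, length n; in
// even-numbered phases pairs (0,1),(2,3),... are compared, in
// odd-numbered phases (1,2),(3,4),...
__global__ void oekern(int *dx, int n, int ph)
{  int me = blockIdx.x * blockDim.x + threadIdx.x;
   int left = 2*me + ph % 2;   // left index of this thread's pair
   if (left+1 < n && dx[left] > dx[left+1]) {
      int tmp = dx[left];
      dx[left] = dx[left+1];
      dx[left+1] = tmp;
   }
}

// host wrapper: n phases, one kernel call per phase, so that the
// grid-wide synchronization happens between kernel calls
void oddevensort(int *hx, int n)
{  int *dx;
   cudaMalloc((void **)&dx, n*sizeof(int));
   cudaMemcpy(dx, hx, n*sizeof(int), cudaMemcpyHostToDevice);
   int nth = 192;                      // threads per block, an arbitrary choice
   int nblk = (n/2 + nth - 1) / nth;   // enough threads for all the pairs
   for (int ph = 0; ph < n; ph++)
      oekern<<<nblk,nth>>>(dx, n, ph);
   cudaMemcpy(hx, dx, n*sizeof(int), cudaMemcpyDeviceToHost);
   cudaFree(dx);
}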
Recall that in CUDA code, separate blocks of threads cannot synchronize with each other. Unless we deal with just a single block, this necessitates limiting the kernel to a single iteration of the algorithm, so that as iterations progress, execution alternates between the device and the host.

Moreover, we do not take advantage of shared memory. One possible solution would be to use __syncthreads() within each block for most of the compare-and-exchange operations, and then have the host take care of the operations on the boundaries between blocks.
12.4 Shearsort
In some contexts, our hardware consists of a two-dimensional mesh of PEs. A number of methods
have been developed for such settings, one of the most well known being Shearsort, developed by
Sen, Shamir and the eponymous Isaac Scherson of UC Irvine. Again, the data is assumed to be
initially distributed among the PEs. Here is the pseudocode:
for i = 1 to ceiling(log2(n)) + 1
   if i is odd
      sort each even row in descending order
      sort each odd row in ascending order
   else
      sort each column in ascending order
No matter what kind of system we have, a natural domain decomposition for this problem would be for each process to be responsible for a group of rows. There then is the question about what to do during the even-numbered iterations, in which column operations are done. This can be handled via a parallel matrix transpose operation. In MPI, the function MPI_Alltoall() may be useful.
12.5 Bucket Sort with Sampling
For concreteness, suppose we are using MPI on message-passing hardware, say with 10 PEs. As
usual in such a setting, suppose our data is initially distributed among the PEs.
Suppose we knew that our array to be sorted is a random sample from the uniform distribution on
(0,1). In other words, about 20% of our array will be in (0,0.2), 38% will be in (0.45,0.83) and so
on.
What we could do is assign PE0 to the interval (0,0.1), PE1 to (0.1,0.2) etc. Each PE would look
at its local data, and distribute it to the other PEs according to this interval scheme. Then each
PE would do a local sort.
In general, we don't know what distribution our data comes from. We solve this problem by doing sampling. In our example here, each PE would sample some of its local data, and send the sample to PE0. From all of these samples, PE0 would find the decile values, i.e. 10th percentile, 20th percentile,..., 90th percentile. These values, called splitters, would then be broadcast to all the PEs, and they would then distribute their local data to the other PEs according to these intervals.
OpenMP code for this was given in Section 1.3.2.6. Here is similar MPI code below (various
improvements could be made, e.g. with broadcast):
// bucket sort, bin boundaries known in advance

int nnodes,              // number of nodes
    n,                   // size of full array
    me,                  // my node number
    fulldata[MAX_N],
    tmp[MAX_N],
    nbdries,             // number of bin boundaries
    counts[MAX_NPROCS];
float bdries[MAX_NPROCS-2];  // bin boundaries
int debug, debugme;

void init(int argc, char **argv)
{
   int i;
   debug = atoi(argv[3]);
   debugme = atoi(argv[4]);
   MPI_Init(&argc,&argv);
   MPI_Comm_size(MPI_COMM_WORLD,&nnodes);
12.6 Radix Sort
The radix sort is essentially a special case of a bucket sort. If we have 16 threads, say, we could determine a datum's bucket by its lower 4 bits. As long as our data is uniformly distributed under the mod 16 operation, we would not need to do any sampling.
The CUDPP GPU library uses a radix sort. The buckets are formed one bit at a time, using
segmented scan as above.
12.7 Enumeration Sort
This one is really simple. Take for instance the array (12,5,13,18,6). There are 2 elements less than
12, so in the end, it should go in position 2 of the sorted array, (5,6,12,13,18).
Say we wish to sort x, which for convenience we assume contains no tied values. Then the pseudocode for this algorithm, placing the results in y, is
for all i in 0...n-1:
   count = 0
   elt = x[i]
   for all j in 0...n-1:
      if x[j] < elt then count++
   y[count] = elt
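The outer loop is embarrassingly parallel. A minimal OpenMP sketch (the function name enumsort() is mine); since the values are assumed distinct, the counts are distinct too, so the writes to y do not collide:

#include <omp.h>

// sorted version of x placed in y; assumes no tied values, as in the text
void enumsort(int *x, int *y, int n)
{
   #pragma omp parallel for
   for (int i = 0; i < n; i++) {
      int count = 0;
      for (int j = 0; j < n; j++)
         if (x[j] < x[i]) count++;
      y[count] = x[i];   // distinct counts, so no write collisions
   }
}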
Chapter 13

Parallel Computation for Audio and Image Processing

13.1 General Principles

13.1.1 One-Dimensional Fourier Series

A sound wave form graphs volume of the sound against time. Here, for instance, is the wave form for a vibrating reed:¹

(figure: wave form of a vibrating reed)

¹Reproduced here by permission of Prof. Peter Hamburger, Indiana-Purdue University, Fort Wayne. See http://www.ipfw.edu/math/Workshop/PBC.html
Recall that we say a function of time g(t) is periodic ("repeating," in our casual wording above) with period T if g(u+T) = g(u) for all u. The fundamental frequency of g() is then defined to be the number of periods per unit time,

$$f_0 = \frac{1}{T} \qquad (13.1)$$
Recall also from calculus that we can write a function g(t) (not necessarily periodic) as a Taylor series, which is an infinite polynomial:

$$g(t) = \sum_{n=0}^{\infty} c_n t^n. \qquad (13.2)$$

The specific values of the $c_n$ may be derived by differentiating both sides of (13.2) and evaluating at t = 0, yielding

$$c_n = \frac{g^{(n)}(0)}{n!}. \qquad (13.3)$$

For example,

$$e^t = \sum_{n=0}^{\infty} \frac{1}{n!} t^n \qquad (13.4)$$
In the case of a repeating function, it is more convenient to use another kind of series representation, an infinite "trig polynomial," called a Fourier series. This is just a fancy name for a weighted sum of sines and cosines of different frequencies. More precisely, we can write any repeating function g(t) with period T and fundamental frequency $f_0$ as

$$g(t) = \sum_{n=0}^{\infty} a_n \cos(2\pi n f_0 t) + \sum_{n=1}^{\infty} b_n \sin(2\pi n f_0 t) \qquad (13.5)$$

for some set of weights $a_n$ and $b_n$. Here, instead of having a weighted sum of terms

$$1, t, t^2, t^3, ... \qquad (13.6)$$

as in a Taylor series, we have a weighted sum of terms

$$1, \cos(2\pi f_0 t), \cos(4\pi f_0 t), \cos(6\pi f_0 t), ... \qquad (13.7)$$

and of similar sine terms. Note that the frequencies $nf_0$ in those sines and cosines are integer multiples of the fundamental frequency of g(), $f_0$, called harmonics.
The weights $a_n$ and $b_n$, n = 0, 1, 2, ... are called the frequency spectrum of g(). The coefficients are calculated as follows:²

$$a_0 = \frac{1}{T} \int_0^T g(t)\, dt \qquad (13.8)$$

$$a_n = \frac{2}{T} \int_0^T g(t) \cos(2\pi n f_0 t)\, dt \qquad (13.9)$$

$$b_n = \frac{2}{T} \int_0^T g(t) \sin(2\pi n f_0 t)\, dt \qquad (13.10)$$
By analyzing these weights, we can do things like machine-based voice recognition (distinguishing one person's voice from another) and speech recognition (determining what a person is saying). If for example one person's voice is higher-pitched than that of another, the first person's weights will be concentrated more on the higher-frequency sines and cosines than will the weights of the second.

Since g(t) is a graph of loudness against time, this representation of the sound is called the time domain. When we find the Fourier series of the sound, the set of weights $a_n$ and $b_n$ is said to be a representation of the sound in the frequency domain. One can recover the original time-domain representation from that of the frequency domain, and vice versa, as seen in Equations (13.8), (13.9), (13.10) and (13.5).

In other words, the transformations between the two domains are inverses of each other, and there is a one-to-one correspondence between them. Every g() corresponds to a unique set of weights and vice versa.

²To get an idea as to how these formulas arise, see Section 13.9. But for now, if you integrate both sides of (13.5), you will at least verify that the formulas do work.
Now here is the frequency-domain version of the reed sound:

(figure: frequency spectrum of the reed sound)

Note that this graph is very spiky. In other words, even though the reed's waveform includes all frequencies, most of the power of the signal is at a few frequencies which arise from the physical properties of the reed.
Fourier series are often expressed in terms of complex numbers, making use of the relation

$$e^{i\theta} = \cos(\theta) + i \sin(\theta), \qquad (13.11)$$

where $i = \sqrt{-1}$.³

The series (13.5) can then be written more compactly as

$$g(t) = \sum_{j=-\infty}^{\infty} c_j e^{2\pi i j \frac{t}{T}}. \qquad (13.12)$$

The $c_j$ are now generally complex numbers. They are functions of the $a_j$ and $b_j$, and thus form the frequency spectrum.

Equation (13.12) has a simpler, more compact form than (13.5). Do you now see why I referred to Fourier series as trig polynomials? The series (13.12) involves the jth powers of $e^{2\pi i \frac{t}{T}}$.

³There is basically no physical interpretation of complex numbers. Instead, they are just mathematical abstractions. However, they are highly useful abstractions, with the complex form of Fourier series, beginning with (13.12), being a case in point. It is not assumed that you know complex variables well. All that is required is knowledge of how to add, subtract, multiply and divide, and the definition of |c| for complex c.
13.1.2 Two-Dimensional Fourier Series

Let's now move from sounds to images. Just as we were taking time to be a continuous variable above, for the time being we are taking the position within an image to be continuous too; this is equivalent to having infinitely many pixels. Here g() is a function of two variables, g(u,v), where u and v are the horizontal and vertical coordinates of a point in the image, with g(u,v) being the intensity of the image at that point. If it is a gray-scale image, the intensity is the whiteness of the image at that point, typically with 0 being pure black and 255 being pure white. If it is a color image, a typical graphics format is to store three intensity values at a point, one for each of red, green and blue. The various colors come from combining three colors at various intensities.

The terminology changes a bit. Our original data is now referred to as being in the spatial domain, rather than the time domain. But the Fourier series coefficients are still said to be in the frequency domain.
13.2 Discrete Fourier Transforms

In sound and image applications, we seldom if ever know the exact form of the repeating function g(). All we have is a sampling from g(), i.e. we only have values of g(t) for a set of discrete values of t.

In the sound example above, a typical sampling rate is 8000 samples per second.⁴ So, we may have g(0), g(0.000125), g(0.000250), g(0.000375), and so on. In the image case, we sample the image pixel by pixel.

⁴See Section 13.10 for the reasons behind this.
Integrals like (13.8) now change to sums.
13.2.1 One-Dimensional Data
Let $X = (x_0, ..., x_{n-1})$ denote the sampled values, i.e. the time-domain representation of g() based on our sample data. These are interpreted as data from one period of g(), with the period being n and the fundamental frequency being 1/n. The frequency-domain representation will also consist of n numbers, $c_0, ..., c_{n-1}$, defined as follows:

$$c_k = \frac{1}{n} \sum_{j=0}^{n-1} x_j e^{-2\pi i jk/n} = \frac{1}{n} \sum_{j=0}^{n-1} x_j q^{jk} \qquad (13.13)$$

where

$$q = e^{-2\pi i/n} \qquad (13.14)$$

again with $i = \sqrt{-1}$. The array C of complex numbers $c_k$ is called the discrete Fourier transform (DFT) of X. Note that (13.13) is basically a discrete analog of (13.9) and (13.10).

Note that instead of having infinitely many frequencies, we only have n of them, i.e. the n original data points $x_j$ map to n frequency weights $c_k$.⁵
The quantity q is an nth root of 1:

$$q^n = e^{-2\pi i} = 1 \qquad (13.15)$$

Equation (13.13) can be rewritten in matrix form:

$$C = \frac{1}{n} A X, \qquad (13.16)$$

where

⁵Actually, in the case of $x_j$ real, which occurs with sound data, we really get only n/2 frequencies. The weights of the frequencies after k = n/2 turn out to be the conjugates of those before n/2, where the conjugate of a+bi is defined to be a-bi.
$$A = \begin{pmatrix}
1 & 1 & 1 & ... & 1 \\
1 & q & q^2 & ... & q^{n-1} \\
... & ... & ... & ... & ... \\
1 & q^{n-1} & q^{2(n-1)} & ... & q^{(n-1)(n-1)}
\end{pmatrix} \qquad (13.17)$$
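As a concrete serial illustration, here is a direct computation of (13.13) in C (the function name dft() is mine, not from the text); each $c_k$ could of course be computed by a different thread or node.

#include <complex.h>
#include <math.h>

// c[k] per (13.13); x is the real time-domain data of length n
void dft(const double *x, double complex *c, int n)
{  for (int k = 0; k < n; k++) {
      double complex sum = 0;
      for (int j = 0; j < n; j++)
         sum += x[j] * cexp(-2*M_PI*I*j*k/n);  // x_j q^{jk}
      c[k] = sum / n;
   }
}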
13.2.2 Inversion

As in the continuous case, the DFT is a one-to-one transformation, so we can recover each domain from the other. The details are important:

The matrix A in (13.17) is a special case of Vandermonde matrices, known to be invertible. In fact, if we think of that matrix as a function of q, A(q), then it turns out that

$$[A(q)]^{-1} = \frac{1}{n} A\left(\frac{1}{q}\right) \qquad (13.18)$$

Thus, inverting (13.16), we have

$$X = A\left(\frac{1}{q}\right) C \qquad (13.19)$$
In nonmatrix terms:

$$x_j = \sum_{k=0}^{n-1} c_k e^{2\pi i jk/n} = \sum_{k=0}^{n-1} c_k q^{-jk} \qquad (13.20)$$
13.2.2.1 Alternate Formulation

Equation (13.16) has a factor 1/n while (13.19) doesn't. In order to achieve symmetry, some authors of material on DFT opt to define the DFT and its inverse with $1/\sqrt{n}$ in (13.13) instead of 1/n, and by adding a factor $1/\sqrt{n}$ in (13.20). They then include a factor $1/\sqrt{n}$ in (13.17), with the result that $[A(q)]^{-1} = A(1/q)$. Thus everything simplifies.

Other formulations are possible. For instance, the R fft() routine's documentation says it's "unnormalized," meaning that there is neither a 1/n nor a $1/\sqrt{n}$ in (13.20). When using a DFT routine, be sure to determine what it assumes about these constant factors.
13.2.3 Two-Dimensional Data

The spectrum numbers $c_{rs}$ are double-subscripted, like the original data $x_{uv}$, the latter being the pixel intensity in row u, column v of the image, u = 0,1,...,n-1, v = 0,1,...,m-1. Equation (13.13) becomes

$$c_{rs} = \frac{1}{nm} \sum_{j=0}^{n-1} \sum_{k=0}^{m-1} x_{jk} e^{-2\pi i \left(\frac{jr}{n} + \frac{ks}{m}\right)} \qquad (13.21)$$

and (13.20) becomes

$$x_{rs} = \sum_{j=0}^{n-1} \sum_{k=0}^{m-1} c_{jk} e^{2\pi i \left(\frac{jr}{n} + \frac{ks}{m}\right)} \qquad (13.22)$$
13.3 Parallel Computation of Discrete Fourier Transforms

13.3.1 The Fast Fourier Transform

Speedy computation of a discrete Fourier transform was developed by Cooley and Tukey in their famous Fast Fourier Transform (FFT), which takes a divide and conquer approach:
Equation (13.13) can be rewritten as

$$c_k = \frac{1}{n} \left[ \sum_{j=0}^{m-1} x_{2j} q^{2jk} + \sum_{j=0}^{m-1} x_{2j+1} q^{(2j+1)k} \right], \qquad (13.23)$$

where m = n/2.

After some algebraic manipulation, this becomes

$$c_k = \frac{1}{2} \left[ \frac{1}{m} \sum_{j=0}^{m-1} x_{2j} z^{jk} + q^k \frac{1}{m} \sum_{j=0}^{m-1} x_{2j+1} z^{jk} \right] \qquad (13.24)$$

where $z = e^{-2\pi i/m}$.
A look at Equation (13.24) shows that the two sums within the brackets have the same form as Equation (13.13). In other words, Equation (13.24) shows how we can compute an n-point FFT from two n/2-point FFTs. That means that a DFT can be computed recursively, cutting the sample size in half at each recursive step.
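A minimal serial sketch of the recursion (mine, not from the text; it computes the unnormalized sums, so the result should be divided by n to match (13.13)):

#include <complex.h>
#include <math.h>
#include <stdlib.h>

// FFT of x[0], x[stride], x[2*stride], ..., n elements, n a power of 2;
// result placed in c; initial call: fft(x, c, n, 1)
void fft(double complex *x, double complex *c, int n, int stride)
{  if (n == 1) { c[0] = x[0]; return; }
   int m = n/2;
   double complex *evens = malloc(m*sizeof(double complex)),
                  *odds = malloc(m*sizeof(double complex));
   fft(x, evens, m, 2*stride);          // FFT of x0, x2, x4, ...
   fft(x+stride, odds, m, 2*stride);    // FFT of x1, x3, x5, ...
   for (int k = 0; k < m; k++) {
      double complex qk = cexp(-2*M_PI*I*k/n);  // q^k
      c[k] = evens[k] + qk*odds[k];     // per (13.24), unnormalized
      c[k+m] = evens[k] - qk*odds[k];
   }
   free(evens); free(odds);
}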
In a shared-memory setting such as OpenMP, we could implement this recursive algorithm in the manner of Quicksort in Chapter 12.
In a message-passing setting, again because this is a divide-and-conquer algorithm, we can use the
pattern of Hyperquicksort, also in Chapter 12.
Some digital signal processing chips implement this in hardware, with a special interconnection
network to implement this algorithm.
13.3.2 A Matrix Approach

Alternatively, recall (13.16): the transform is simply the matrix product

$$C = \frac{1}{n} A X \qquad (13.25)$$

so any of the parallel matrix-multiplication methods of Chapter 11 can be applied directly.
13.3.3 Parallelizing Computation of the Inverse Transform

The form of the DFT (13.13) and its inverse (13.20) are very similar. For example, the inverse transform is again of a matrix form as in (13.25); even the new matrix looks a lot like the old one.⁶

Thus the methods mentioned above, e.g. FFT and the matrix approach, apply to calculation of the inverse transforms too.
13.3.4 Parallelizing Computation of the Two-Dimensional Transform

Regroup (13.21) as

$$c_{rs} = \frac{1}{n} \sum_{j=0}^{n-1} \left( \frac{1}{m} \sum_{k=0}^{m-1} x_{jk} e^{-2\pi i \left(\frac{ks}{m}\right)} \right) e^{-2\pi i \left(\frac{jr}{n}\right)} \qquad (13.26)$$

$$= \frac{1}{n} \sum_{j=0}^{n-1} y_{js} \, e^{-2\pi i \left(\frac{jr}{n}\right)} \qquad (13.27)$$
Note that yjs , i.e. the expression between the large parentheses, is the sth component of the DFT
of the jth row of our data. And hey, the last expression (13.27) above is in the same form as (13.13)!
Of course, this means we are taking the DFT of the spectral coefficients rather than observed data,
but numbers are numbers.
⁶In fact, one can obtain the new matrix easily from the old, as explained in Section 13.9.
In other words: To get the two-dimensional DFT of our data, we first get the one-dimensional
DFTs of each row of the data, place these in rows, and then find the DFTs of each column. This
property is called separability.
This certainly opens possibilities for parallelization. Each thread (shared memory case) or node
(message passing case) could handle groups of rows of the original data, and in the second stage
each thread could handle columns.
Or, we could interchange rows and columns in this process, i.e. put the j sum inside and k sum
outside in the above derivation.
13.4 Available FFT Software

13.4.1 R

As of now, R only offers serial computation, through its function fft(). It works on both one- and two-dimensional (or more) data. If its argument inverse is set to TRUE, it will find the inverse.

Parallel computation of a two-dimensional transform can be easily accomplished by using fft() together with the approach in Section 13.3.4 and one of the packages for parallel R in Chapter ??. Here's how to do it in snow:
parfft2 <- function(cls,m) {
   tmp <- parApply(cls,m,1,fft)
   parApply(cls,tmp,1,fft)
}
Recall that when parApply() is called with a vector-valued function argument, the output from
row i of the input matrix is placed in column i of the output matrix. Thus in the second call above,
we used rows (argument 1) instead of columns.
13.4.2 CUFFT

CUFFT is NVIDIA's CUDA library for computing FFTs on the GPU.

13.4.3 FFTW
FFTW (Fastest Fourier Transform in the West) is available for free download at http://www.
fftw.org. It includes versions callable from OpenMP and MPI.
13.5 Applications to Image Processing

In image processing, there are a number of different operations which we wish to perform. We will consider two of them here.
13.5.1 Smoothing
An image may be too "rough." There may be some pixels which are noise, accidental values that don't fit smoothly with the neighboring points in the image.

One way to smooth things out would be to replace each pixel intensity value⁷ by the mean or median among the pixel's neighbors. These could be the four immediate neighbors if just a little smoothing is needed, or we could go further out for a higher amount of smoothing. There are many variants of this.
But another way would be to apply a low-pass filter to the DFT of our image. This means that after we compute the DFT, we simply delete the higher harmonics, i.e. set $c_{rs}$ to 0 for the larger values of r and s. We then take the inverse transform back to the spatial domain. Remember, the sine and cosine functions of higher harmonics are "wigglier," so you can see that all this will have the effect of removing some of the wiggliness in our image, exactly what we wanted.

We can control the amount of smoothing by the number of harmonics we remove.

The term low-pass filter obviously alludes to the fact that the low frequencies "pass" through the filter but the high frequencies are blocked. Since we've removed the high-oscillatory components, the effect is a smoother image.⁸

To do smoothing in parallel, if we just average neighbors, this is easily parallelized. If we try a low-pass filter, then we use the parallelization methods shown here earlier.
13.5.2 Example: Audio Smoothing in R

Below is code to do smoothing on sound. It inputs a sound sequence snd, and performs low-pass filtering, setting to 0 all DFT terms having k greater than maxidx in (13.13).
p <- function(snd,maxidx) {
four <- fft(snd)
n <- length(four)
newfour <- c(four[1:maxidx],rep(0,n-maxidx))
return(Re(fft(newfour,inverse=T)/n))
}
⁷Remember, there may be three intensity values per pixel, for red, green and blue.
⁸Note that we may do more smoothing in some parts of the image than in others.
Here the Re() function extracts the real part of a complex number.
13.5.3 Edge Detection
In computer vision applications, we need to have a machine-automated way to deduce which pixels
in an image form an edge of an object.
Again, edge-detection can be done in primitive ways. Since an edge is a place in the image in which
there is a sharp change in the intensities at the pixels, we can calculate slopes of the intensities,
in the horizontal and vertical directions. (This is really calculating the approximate values of the
partial derivatives in those directions.)
But the Fourier approach would be to apply a high-pass filter. Since an edge is a set of pixels which
are abruptly different from their neighbors, we want to keep the high-frequency components and
block out the low ones.
Again, this means first taking the Fourier transform of the original, then deleting the low-frequency
terms, then taking the inverse transform to go back to the spatial domain.
Below we have "before and after" pictures, first of original data and then the picture after an edge-detection process has been applied.⁹

(figures: original image, and the image after edge detection)

⁹These pictures are courtesy of Bill Green of the Robotics Laboratory at Drexel University. In this case he is using a Sobel process instead of Fourier analysis, but the result would have been similar for the latter. See his Web tutorial at www.pages.drexel.edu/~weg22/edge.html, including the original pictures, which may not show up well in our printed book here.
274 CHAPTER 13. PARALLEL COMPUTATION FOR AUDIO AND IMAGE PROCESSING
The second picture looks like a charcoal sketch! But it was derived mathematically from the original
picture, using edge-detection methods.
Note that edge detection methods also may be used to determine where sounds ("ah," "ee") begin and end in speech-recognition applications. In the image case, edge detection is useful for face recognition, etc.

Parallelization here is similar to that of the smoothing case.
13.6 R Access to Sound and Image Files

In order to apply these transformations to sound and image files, you need to extract the actual data from the files. The formats are usually pretty complex. You can do this easily using the R tuneR and pixmap libraries.

After extracting the data, you can apply the transformations, then transform back to the time/spatial domain, and replace the data component of the original class.
13.7 Keeping the Pixel Intensities in the Proper Range
Normally pixel intensities are stored as integers between 0 and 255, inclusive. With many of the
operations mentioned above, both Fourier-based and otherwise, we can get negative intensity values,
or values higher than 255. We may wish to discard the negative values and scale down the positive
ones so that most or all are smaller than 256.
Furthermore, even if most or all of our values are in the range 0 to 255, they may be near 0, i.e.
too faint. If so, we may wish to multiply them by a constant.
13.8 Does the Function g() Really Have to Be Repeating?

It is clear that in the case of a vibrating reed, our loudness function g(t) really is periodic. What about other cases?
A graph of your voice would look locally periodic. One difference would be that the graph would exhibit more change through time as you make various sounds in speaking, compared to the one repeating sound for the reed. Even in this case, though, your voice is repeating within short time intervals, each interval corresponding to a different sound. If you say the word eye, for instance, you make an "ah" sound and then an "ee" sound. The graph of your voice would show one repeating pattern during the time you are saying "ah," and another repeating pattern during the time you are saying "ee." So, even for voices, we do have repeating patterns over short time intervals.

On the other hand, in the image case, the function may be nearly constant for long distances (horizontally or vertically), so a local periodicity argument doesn't seem to work there.

The fact is, though, that it really doesn't matter in the applications we are considering here. Even though mathematically our work here has tacitly assumed that our image is duplicated infinitely times (horizontally and vertically),¹⁰ we don't care about this. We just want to get a measure of "wiggliness," and fitting linear combinations of trig functions does this for us.
13.9 Vector Space Issues (optional section)

The theory of Fourier series (and of other similar transforms) relies on vector spaces. It actually is helpful to look at some of that here. Let's first discuss the derivation of (13.13).

Define X and C as in Section 13.2. X's components are real, but it is also a member of the vector space V of all n-component arrays of complex numbers.

For any complex number a+bi, define its conjugate, $\overline{a+bi} = a - bi$. Note that

$$\overline{e^{i\theta}} = \cos\theta - i\sin\theta = \cos(-\theta) + i\sin(-\theta) = e^{-i\theta} \qquad (13.28)$$
Define an inner product (dot product) on V:

$$[u,w] = \frac{1}{n} \sum_{j=0}^{n-1} u_j \overline{w_j}. \qquad (13.29)$$
¹⁰And in the case of the cosine transform, implicitly we are assuming that the image flips itself on every adjacent copy of the image, first right-side up, then upside-down, then right-side up again, etc.
Define

$$v_h = (1, q^h, q^{2h}, ..., q^{(n-1)h}), \quad h = 0, 1, ..., n-1. \qquad (13.30)$$
Then it turns out that the $v_h$ form an orthonormal basis for V.¹¹ For example, to show orthogonality, observe that for r ≠ s,

$$[v_r, v_s] = \frac{1}{n} \sum_{j=0}^{n-1} v_{rj} \overline{v_{sj}} \qquad (13.31)$$

$$= \frac{1}{n} \sum_{j=0}^{n-1} q^{j(r-s)} \qquad (13.32)$$

$$= \frac{1}{n} \cdot \frac{1 - q^{(r-s)n}}{1 - q^{r-s}} \qquad (13.33)$$

$$= 0, \qquad (13.34)$$

the third step using the geometric series sum $\sum_{j=0}^{k} y^j = \frac{1 - y^{k+1}}{1 - y}$, and the last using the fact that $q^n = 1$.
The DFT of X, which we called C, can be considered the coordinates of X in V, relative to this orthonormal basis. The kth coordinate is then $[X, v_k]$, which by definition is (13.13).

The fact that we have an orthonormal basis for V here means that the matrix A/n in (13.25) is an orthogonal matrix. For real numbers, this means that this matrix's inverse is its transpose. In the complex case, instead of a straight transpose, we do a conjugate transpose, $B = \overline{(A/n)^t}$, where t means transpose. So, B is the inverse of A/n. In other words, in (13.25), we can easily get back to X from C, via

$$X = BC = \frac{1}{n} \overline{A^t}\, C. \qquad (13.35)$$
It's really the same for the nondiscrete case. Here the vector space consists of all the possible periodic functions g() (with reasonable conditions placed regarding continuity etc.), and the sine and cosine functions form an orthonormal basis. The $a_n$ and $b_n$ are then the coordinates of g() when the latter is viewed as an element of that space.
¹¹Recall that this means that these vectors are orthogonal to each other, and have length 1, and that they span V.

13.10 Bandwidth: How to Read the San Francisco Chronicle Business Page (optional section)
The popular press, especially business or technical sections, often uses the term bandwidth. What
does this mean?
Any transmission medium has a natural range $[f_{min}, f_{max}]$ of frequencies that it can handle well. For example, an ordinary voice-grade telephone line can do a good job of transmitting signals of frequencies in the range 0 Hz to 4000 Hz, where "Hz" means cycles per second. Signals of frequencies outside this range suffer fade in strength, i.e. are attenuated, as they pass through the phone line.

We call the frequency interval [0,4000] the effective bandwidth (or just the bandwidth) of the phone line.
In addition to the bandwidth of a medium, we also speak of the bandwidth of a signal. For instance, although your voice is a mixture of many different frequencies, represented in the Fourier series for your voice's waveform, the really low and really high frequency components, outside the range [340,3400], have very low power, i.e. their $a_n$ and $b_n$ coefficients are small. Most of the power of your voice signal is in that range of frequencies, which we would call the effective bandwidth of your voice waveform. This is also the reason why digitized speech is sampled at the rate of 8,000 samples per second. A famous theorem, due to Nyquist, shows that the sampling rate should be double the maximum frequency. Here the number 3,400 is rounded up to 4,000, and after doubling we get 8,000.
Obviously, in order for your voice to be heard well on the other end of your phone connection, the
bandwidth of the phone line must be at least as broad as that of your voice signal, and that is the
case.
However, the phone line's bandwidth is not much broader than that of your voice signal. So, some of the frequencies in your voice will fade out before they reach the other person, and thus some degree of distortion will occur. It is common, for example, for the letter "f" spoken on one end to be mis-heard as "s" on the other end. This also explains why your voice sounds a little different on the phone than in person. Still, most frequencies are reproduced well and phone conversations work well.
We often use the term "bandwidth" to literally refer to width, i.e. the width of the interval $[f_{min}, f_{max}]$.

There is huge variation in bandwidth among transmission media. As we have seen, phone lines have bandwidth intervals covering values on the order of $10^3$. For optical fibers, these numbers are more on the order of $10^{15}$.
The radio and TV frequency ranges are large also, which is why, for example, we can have many AM radio stations in a given city. The AM frequency range is divided into subranges, called channels. The width of these channels is on the order of the 4000 we need for a voice conversation. That means that the transmitter at a station needs to shift its content, which is something like in the [0,4000] range, to its channel range. It does that by multiplying its content times a sine wave of frequency equal to the center of the channel. If one applies a few trig identities, one finds that the product signal falls into the proper channel!

Accordingly, an optical fiber could also carry many simultaneous phone conversations.

Bandwidth also determines how fast we can send digital bits. Think of sending the sequence 10101010... If we graph this over time, we get a "squarewave" shape. Since it is repeating, it has a Fourier series. What happens if we double the bit rate? We get the same graph, only horizontally compressed by a factor of two. The effect of this on the graph's Fourier series is that, for example, our former $a_3$ will now be our new $a_6$, i.e. the cosine component that used to be at frequency $3f_0$ now appears at double that frequency, $6f_0$. That in turn means that the effective bandwidth of our 10101010... signal has doubled too.

In other words: To send high bit rates, we need media with large bandwidths.
Chapter 14
Parallel Computation in
Statistics/Data Mining
How did the word "statistics" get supplanted by "data mining"? In a word, it is a matter of scale.
In the old days of statistics, a data set of 300 observations on 3 or 4 variables was considered large.
Today, the widespread use of computers and the Web yield data sets with numbers of observations
that are easily in the tens of thousands range, and in a number of cases even tens of millions. The
numbers of variables can also be in the thousands or more.
In addition, the methods have become much more combinatorial in nature. In a classification
problem, for instance, the old discriminant analysis involved only matrix computation, whereas a
nearest-neighbor analysis requires far more computer cycles to complete.
In short, this calls for parallel methods of computation.
14.1 Itemset Analysis

14.1.1 What Is It?
The term data mining is a buzzword, but all it means is the process of finding relationships among a set of variables. In other words, it would seem to simply be a good old-fashioned statistics problem.

Well, in fact it is simply a statistics problem, but "writ large," as mentioned earlier.
Major, Major Warning: With so many variables, the chances of picking up spurious relations between variables are large. And although many books and tutorials on data mining will at least pay lip service to this issue (referring to it as overfitting), they don't emphasize it enough.¹

Putting the overfitting problem aside, though, by now the reader's reaction should be, "This calls for parallel processing," and he/she is correct. Here we'll look at parallelizing a particular problem, called itemset analysis, the most famous example of which is the market basket problem:
14.1.2 The Market Basket Problem
Consider an online bookstore that has records of every sale on the store's site. Those sales may be represented as a matrix S, whose (i,j)th element $S_{ij}$ is equal to either 1 or 0, depending on whether the ith sale included book j, i = 0,1,...,s-1, j = 0,1,...,b-1. So each row of S represents one sale, with the 1s in that row showing which titles were bought. Each column of S represents one book title, with the 1s showing which sales transactions included that book.

Let's denote the entire line of book titles by $T_0, ..., T_{b-1}$. An itemset is just a subset of this. A frequent itemset is one which appears in many of the sales transactions. But there is more to it than that. The store wants to choose some books for special ads, of the form "We see you bought books X and Y. We think you may be interested in Z."

Though we are using marketing as a running example here (which is the typical way that this subject is introduced), we will usually just refer to "items" instead of books, and to "database records" rather than sales transactions.

We have the following terminology:

• An association rule I → J is simply an ordered pair of disjoint itemsets I and J.
• The support of an association rule I → J is the proportion of records which include both I and J.
• The confidence of an association rule I → J is the proportion of records which include J, among those records which include I.

Note that in probability terms, the support is basically P(I and J) while the confidence is P(J|I). If the confidence is high in the book example, it means that buyers of the books in set I also tend to buy those in J. But this information is not very useful if the support is low, because it means that the combination occurs so rarely that it may not be worth our time to deal with it.
¹Some writers recommend splitting one's data into a training set, which is used to discover relationships, and a validation set, which is used to confirm those relationships. It's a good idea, but overfitting can still occur even with this precaution.
So, the user (let's call him/her "the data miner") will first set thresholds for support and confidence, and then set out to find all association rules for which support and confidence exceed their respective thresholds.
14.1.3 Serial Algorithms
Various algorithms have been developed to find frequent itemsets and association rules. The most
famous one for the former task is the Apriori algorithm. Even it has many forms. We will discuss
one of the simplest forms here.
The algorithm is basically a breadth-first tree search. At the root we find the frequent 1-item
itemsets. In the online bookstore, for instance, this would mean finding all individual books that
appear in at least r of our sales transaction records, where r is our threshold.
At the second level, we find the frequent 2-item itemsets, e.g. all pairs of books that appear in
at least r sales records, and so on. After we finish with level i, we then generate new candidate
itemsets of size i+1 from the frequent itemsets we found of size i.
The key point in the latter operation is that if an itemset is not frequent, i.e. has support less than
the threshold, then adding further items to it will make it even less frequent. That itemset is then
pruned from the tree, and the branch ends.
Here is the pseudocode:

set F1 to the set of 1-item itemsets whose support exceeds the threshold
for i = 2 to b
   Fi = ∅
   for each I in F(i-1)
      for each K in F1
         Q = I ∪ K
         if support(Q) exceeds support threshold
            add Q to Fi
   if Fi is empty break
return ∪i Fi
In other words, we are building up the itemsets of size i from those of size i-1, adding all possible
choices of one element to each of the latter.
Again, there are many refinements of this, which shave off work to be done and thus increase speed.
For example, we should avoid checking the same itemsets twice, e.g. first {1,2} then {2,1}. This can
be accomplished by keeping itemsets in lexicographical order. We will not pursue any refinements
here.
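As a concrete, unrefined serial illustration of one level of this tree search, here is a small R sketch; it is not the book's code, and the names supportcount(), nextlevel(), Fi, F1 and r are simply my own assumptions (S is a 0/1 data matrix, itemsets are vectors of column indices, and r is the support threshold expressed as a count):

supportcount <- function(S, itemset)
   sum(rowSums(S[, itemset, drop=FALSE]) == length(itemset))

# build the frequent (i+1)-item itemsets from the frequent i-item itemsets Fi
# and the frequent 1-item itemsets F1
nextlevel <- function(S, Fi, F1, r) {
   Fnext <- list()
   for (I in Fi)
      for (K in F1) {
         Q <- sort(union(I, K))
         if (length(Q) == length(I) + 1 && supportcount(S, Q) >= r)
            Fnext[[length(Fnext) + 1]] <- Q
      }
   unique(Fnext)
}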
14.1.4 Parallelizing the Apriori Algorithm
Clearly there is lots of opportunity for parallelizing the serial algorithm above. Both of the inner
for loops can be parallelized in straightforward ways; they are embarrassingly parallel. There are
of course critical sections to worry about in the shared-memory setting, and in the message-passing
setting one must designate a manager node in which to store the Fi .
However, as more and more refinements are made in the serial algorithm, the parallelism in
this algorithm becomes less and less embarrassing. And things become more challenging if the
storage needs of the Fi , and of their associated accounting materials such as a directory showing
the current tree structure (done via hash trees), become greater than what can be stored in the
memory of one node, say in the message-passing case.
In other words, parallelizing the market basket problem can be very challenging. The interested
reader is referred to the considerable literature which has developed on this topic.
14.2 Probability Density Estimation

Let X denote some quantity of interest in a given population, say people's heights. Technically, the
probability density function of X, typically denoted by f, is a function on the real line with the
following properties:

• f(t) ≥ 0 for all t
• for any r < s,

$$P(r < X < s) = \int_r^s f(t)\, dt \qquad (14.1)$$
This seems abstract, but it's really very simple: Say we have data on X, n sample values X1, ..., Xn,
and we plot a histogram from this data. Then f is what the histogram is estimating. If we have
more and more data, the histogram gets closer and closer to the true f.2
So, how do we estimate f, and how do we use parallel computing to reduce the time needed?
2
The histogram must be scaled to have total area 1. Most statistical programs have options for this.
14.2.1 Kernel-Based Density Estimation

Histogram computation breaks the real line down into intervals, and then counts how many Xi fall into
each interval. This is fine as a crude method, but one can do better.

No matter what the interval width is, the histogram will consist of a bunch of rectangles, rather
than a smooth curve. This problem basically stems from a lack of weighting on the data.
For example, suppose we are estimating f(25.8), and suppose our histogram interval is [24.0,26.0],
with 54 points falling into that interval. Intuitively, we can do better if we give the points closer
to 25.8 more weight.
One way to do this is called kernel-based density estimation, which for instance in R is handled
by the function density().
We need a set of weights, more precisely a weight function k, called the kernel. Any nonnegative
function which integrates to 1, i.e. a density function in its own right, will work. Typically k is
taken to be the Gaussian or normal density function,

$$k(u) = \frac{1}{\sqrt{2\pi}}\, e^{-0.5 u^2} \qquad (14.2)$$

Our estimate of f is then

$$\hat{f}(t) = \frac{1}{nh} \sum_{i=1}^{n} k\left( \frac{t - X_i}{h} \right) \qquad (14.3)$$
In statistics, it is customary to use the ˆ symbol (pronounced "hat") to mean "estimate of." Here
f̂ means the estimate of f.

Note carefully that we are estimating an entire function! There are infinitely many possible values
of t, thus infinitely many values of f(t) to be estimated. This is reflected in (14.3), as f̂(t) does
indeed give a (potentially) different value for each t.
Here h, called the bandwidth, is playing a role analogous to the interval width in the case of
histograms. We must choose the value of h, just like for a histogram we must choose the bin
width.3
Again, this looks very abstract, but all it is doing is assigning weights to the data. Consider our
example above in which we wish to estimate f(25.8), i.e. t = 25.8, and suppose we choose h to be
6.0. If, say, X_88 is 1209.1, very far away from 25.8, we don't want this data point to have much
weight in our estimation of f(25.8). Well, it won't have much weight at all, because the quantity

$$u = \frac{25.8 - X_{88}}{6.0} \qquad (14.4)$$

will be very large in absolute value, and (14.2) will be tiny, as u will be way, way out in the left tail.
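For concreteness, here is a direct, unoptimized R rendering of (14.2) and (14.3), my own illustration; in practice you would just call density(), and the data x and bandwidth h below are made up:

k <- function(u) exp(-0.5*u^2) / sqrt(2*pi)        # the Gaussian kernel (14.2)
fhat <- function(t, x, h) mean(k((t - x)/h)) / h   # the estimate (14.3) at one point t
x <- rnorm(500)       # artificial data
fhat(0.5, x, h=0.3)   # estimated density height at t = 0.5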
Now, keep all this in perspective. In the end, we will be plotting a curve, just like we do with a
histogram. We simply have a more sophisticated way to do this than plotting a histogram. Following
are the graphs generated first by the histogram method, then by the kernel method, on the same
data:
[Figure: "Histogram of x" (Frequency versus x) followed by the kernel estimate "density.default(x = x)" (Density versus x), both plotted over x from about 10 to 20.]
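The two plots can be generated with just a couple of R calls; the data vector below is artificial, purely for illustration:

x <- rnorm(1000, mean=15, sd=2)   # stand-in data
hist(x)                           # histogram estimate of f
plot(density(x))                  # kernel estimate of f, with a default bandwidth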
4 If you've seen the term before and are curious as to how this is a convolution, read on:

Write (14.3) as

$$\hat{f}(t) = \sum_{i=1}^{n} \frac{1}{n} \cdot \frac{1}{h}\, k\left( \frac{t - X_i}{h} \right) \qquad (14.5)$$
Now consider two artificial random variables U and V, created just for the purpose of facilitating computation,
defined as follows.

The random variable U takes on the values ih with probability g · (1/h) · k(i), i = -c,-c+1,...,0,1,...,c, for some value of
c that we choose to cover most of the area under k, with g chosen so that the probabilities sum to 1. The random
variable V takes on the values X_1, ..., X_n (considered fixed here), with probability 1/n each. U and V are set to be
independent.

Then (g times) (14.5) becomes P(U+V=t), exactly what convolution is about: the probability mass function (or
density, in the continuous case) of a random variable arising as the sum of two independent nonnegative random
variables.
5 Again, if you have some background in probability and have seen characteristic functions, this fact comes from
the fact that the characteristic function of the sum of two independent random variables is equal to the product of
the characteristic functions of the two variables.
14.2.2 Histogram Computation for Images
In image processing, histograms are used to find tallies of how many pixels there are of each
intensity. (Note that there is thus no interval width issue, as there is a separate interval value
for each possible intensity level.) The serial pseudocode is:
for i = 1,...,numintenslevels:
   count = 0
   for row = 1,...,numrows:
      for col = 1,...,numcols:
         if image[row][col] == i: count++
   hist[i] = count
On the surface, this is certainly an embarrassingly parallel problem. In OpenMP, for instance,
we might have each thread handle a block of rows of the image, i.e. parallelize the for row loop.
In CUDA, we might have each thread handle an individual pixel, thus parallelizing the nested for
row/col loops.
However, to make this go fast is a challenge, say in CUDA, due to issues of what to store in shared
memory, when to swap it out, etc. A very nice account of fine-tuning this computation in CUDA
is given in Histogram Calculation in CUDA, by Victor Podlozhnyuk of NVIDIA, 2007, http://
developer.download.nvidia.com/compute/cuda/1_1/Website/projects/histogram256/doc/histogram.
pdf. The actual code is at http://developer.download.nvidia.com/compute/cuda/sdk/website/
Data-Parallel_Algorithms.html#histogram. A summary follows:
(Much of the research into understanding Podlozhnyuk's algorithm was done by UC Davis graduate
student Spencer Mathews.)

Podlozhnyuk's overall plan is to have the threads compute subhistograms for various chunks of the
image, then merge the subhistograms to create the histogram for the entire data set. Each thread
will handle 1/k of the image's pixels, where k is the total number of threads in the grid, i.e. across
all blocks.

In Podlozhnyuk's first cut at the problem, he maintains a separate subhistogram for each thread. He
calls this version of the code histogram64. The name stems from the fact that only 64 intensity
levels are used, i.e. the more-significant 6 bits of each pixel's data byte. The reason for this
restriction will be discussed later.
Each thread will store its subhistogram as an array of bytes; the count of pixels that a thread finds
to have intensity i will be stored in the ith byte of this array. Considering the content of a byte as
an unsigned number, that means that each thread can process only 255 pixels.
The subhistograms will be stored together in a two-dimensional array, the jth being the subhistogram
for thread j. Since the subhistograms are accessed repeatedly, we want to store this two-dimensional
array in shared memory. (Since each pixel will be read only once, there would be no value in storing
the image itself in shared memory, so it is kept in global memory.)
The main concern is bank conflicts. As the various threads in a block write to the two-dimensional
array, they may collide with each other, i.e. try to write to different locations within the same
bank. But Podlozhnyuk devised a clever way to stagger the accesses, so that in fact there are no
bank conflicts at all.
In the end, the many subhistograms within a block must be merged, and those merged counts must
in turn be merged across all blocks. The former operation is done again by careful ordering to
avoid any bank conflicts, and the latter is done via atomicAdd().

Now, why does histogram64 tabulate image intensities at only 6-bit granularity? It's simply a
matter of resource limitations. Podlozhnyuk notes that NVIDIA says that, for best efficiency, there
should be between 128 and 256 threads per block. He takes the middle ground, 192. With 16K
of shared memory per block, 16K/192 works out to about 85 bytes per thread. That eliminates
computing a histogram for the full 8-bit image data, with 256 intensity levels, which would require
256 bytes for each thread.

Accordingly, Podlozhnyuk offers histogram256, which refines the process by having one subhistogram
per warp, instead of per thread. This allows the full 8-bit data, 256 levels, to be tabulated,
with one word devoted to each count, rather than just one byte. A subhistogram is now a table, 256
rows by 32 columns (one column for each thread in the warp), with each table entry being 4 bytes
(1 byte is not sufficient, as 32 threads are tabulating into it).
14.3 Clustering
Suppose you have data consisting of (X,Y) pairs, which when plotted look like this:
[Figure: scatter plot of the (X,Y) data, xy[,2] versus xy[,1].]
It looks like there may be two or three groups here. What clustering algorithms do is to form
groups, determining both their number and their membership, i.e. which data points belong to which groups.
(Note carefully that there is no "correct" answer here. This is merely an exploratory data analysis
tool.)

Clustering is used in many diverse fields. For instance, it is used in image processing for segmentation
and edge detection.

Here we have two variables, say people's heights and weights. In general we have many variables,
say p of them, so whatever clustering we find will be in p-dimensional space. No, we can't picture
it very easily if p is larger than (or even equal to) 3, but we can at least identify membership, i.e.
John and Mary are in group 1, Jenny is in group 2, etc. We may derive some insight from this.

There are many, many types of clustering algorithms. Here we will discuss the famous k-means
algorithm, developed by Prof. Jim MacQueen of the UCLA business school.

The method couldn't be simpler. Choose k, the number of groups you want to form, and then run
this:
# form initial groups from the first k data points (or choose randomly)
for i = 1,...,k:
   group[i] = (x[i],y[i])
   center[i] = (x[i],y[i])
do:
   for j = 1,...,n:
      find the closest center[i] to (x[j],y[j])
      cl[j] = the i you got in the previous line
   for i = 1,...,k:
      group[i] = all (x[j],y[j]) such that cl[j] = i
      center[i] = average of all (x,y) in group[i]
until group memberships do not change from one iteration to the next
Definitions of terms:
"Closest" means in p-dimensional space, with the usual Euclidean distance: the distance from
(a_1, ..., a_p) to (b_1, ..., b_p) is

$$\sqrt{(b_1 - a_1)^2 + ... + (b_p - a_p)^2} \qquad (14.6)$$
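In R, (14.6) is a one-liner; for instance (my own illustration):

eucdist <- function(a, b) sqrt(sum((b - a)^2))   # the distance (14.6)
eucdist(c(1,2,3), c(4,6,3))                      # 5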
14.3.1
In terms of parallelization, again we have an embarrassingly parallel problem. Here's snow code
for it:
findnewgrps <- function(currctrs) {
   # for this worker's chunk mchunk, compute the sums and counts of the
   # data points closest to each current center
   ngrps <- nrow(currctrs)
   spacedim <- ncol(currctrs)  # what dimension space are we in?
   # set up the return matrix
   sumcounts <- matrix(rep(0,ngrps*(spacedim+1)),nrow=ngrps)
   for (i in 1:nrow(mchunk)) {
      dsts <- dst(mchunk[i,],t(currctrs))
      j <- which.min(dsts)
      sumcounts[j,] <- sumcounts[j,] + c(mchunk[i,],1)
   }
   sumcounts
}

parkm <- function(cls,m,niters,initcenters) {
   n <- nrow(m)
   spacedim <- ncol(m)  # what dimension space are we in?
   # determine which worker gets which chunk of rows of m
   options(warn=-1)
   ichunks <- split(1:n,1:length(cls))
   options(warn=0)
   # form row chunks
   mchunks <- lapply(ichunks,function(ichunk) m[ichunk,])
   mcf <- function(mchunk) mchunk <<- mchunk
   # send row chunks to workers; each chunk will be a global variable at
   # the worker, named mchunk
   invisible(clusterApply(cls,mchunks,mcf))
   # send dst() to workers
   clusterExport(cls,"dst")
   # start iterations
   centers <- initcenters
   for (i in 1:niters) {
      sumcounts <- clusterCall(cls,findnewgrps,centers)
      tmp <- Reduce("+",sumcounts)
      centers <- tmp[,1:spacedim] / tmp[,spacedim+1]
      # if a group is empty, let's set its center to 0s
      centers[is.nan(centers)] <- 0
   }
   centers
}
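Here is a minimal sketch of how the above might be invoked; it is not from the text, and the dst() helper shown (whose definition is not reproduced above) is just one plausible choice, computing squared distances from a point to each current center. The snow functions used, such as makeCluster() and clusterApply(), are also available in R's parallel package.

library(parallel)                                  # provides makeCluster(), clusterApply(), etc.
dst <- function(pt, ctrs) colSums((ctrs - pt)^2)   # hypothetical distance helper
cls <- makeCluster(2)                              # 2 workers
m <- matrix(rnorm(200), ncol=2)                    # 100 artificial points in 2-D
initc <- m[1:3, ]                                  # first 3 points as initial centers
parkm(cls, m, niters=10, initcenters=initc)
stopCluster(cls)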
14.4 Principal Component Analysis (PCA)
Consider data consisting of (X,Y) pairs as we saw in Section 14.3. Suppose X and Y are highly
correlated with each other. Then for some constants c and d,

$$Y \approx c + dX \qquad (14.7)$$
Then in a sense there is really just one random variable here, as the second is nearly equal to some
linear combination of the first. The second provides us with almost no new information, once we
have the first. In other words, even though the vector (X,Y) roams in two-dimensional space, it
usually sticks close to a one-dimensional object, namely the line (14.7).
Now think again of p variables. It may be the case that there exist r < p variables, consisting
of linear combinations of the p variables, that carry most of the information of the full set of p
variables. If r is much less than p, we would prefer to work with those r variables. In data mining,
this is called dimension reduction.
It can be shown that we can find these r variables by finding the r eigenvectors corresponding to
the r largest eigenvalues of a certain matrix (essentially the sample covariance matrix of the data).
So again we have a matrix formulation, and thus parallelizing the problem can be done easily by
using methods for parallel matrix operations. We
discussed parallel eigenvector algorithms in Section 11.6.
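As a quick serial illustration (not from the text), R's prcomp() performs this eigenvector computation; with highly correlated X and Y as in (14.7), one principal component carries nearly all the variation:

x <- rnorm(1000)
y <- 2 + 1.5*x + rnorm(1000, sd=0.1)   # Y roughly c + dX
pca <- prcomp(cbind(x, y))
pca$sdev       # one large standard deviation, one tiny one: here r = 1
pca$rotation   # the directions (eigenvectors of the covariance matrix)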
14.5 Monte Carlo Simulation

Monte Carlo simulation is typically (though not always) used to find probabilistic quantities such
as probabilities and expected values. Consider a simple example problem:

An urn contains blue, yellow and green marbles, in numbers 5, 12 and 13, respectively.
We choose 6 marbles at random. What is the probability that we get more yellow
marbles than green and more green than blue?
We could find the approximate answer by simulation:
count = 0
for i = 1,...,n
   simulate drawing 6 marbles
   if yellows > greens > blues then count = count + 1
calculate approximate probability as count/n
The larger n is, the more accurate will be our approximate probability.
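A serial R version of this simulation might look as follows (my own illustration):

sim <- function(n) {
   urn <- c(rep("b",5), rep("y",12), rep("g",13))
   count <- 0
   for (i in 1:n) {
      draw <- sample(urn, 6)   # draw 6 marbles without replacement
      ny <- sum(draw == "y"); ng <- sum(draw == "g"); nb <- sum(draw == "b")
      if (ny > ng && ng > nb) count <- count + 1
   }
   count / n   # approximate probability
}
sim(100000)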
At first glance, this problem seems quite embarrassingly parallel. Say we are on a shared memory
machine running 10 threads and wish to have n = 100000. Then we simply have each of our threads
run the above code with n = 10000, and then average our 10 results.
The trouble with this, though, is that it assumes that the random numbers used by each thread
are independent of the others. A naive approach, say by calling random() in the C library, will
not achieve such independence. With some random number libraries, in fact, you'll get the same
stream for each thread, certainly not what you want.
A number of techniques have been developed for generating parallel independent random number
streams. We will not pursue the technical details here, but will give links to code for them; a small
example using R's parallel package follows the list.

• The NVIDIA CUDA SDK includes a parallel random number generator, the Mersenne Twister. The CURAND library has more.
• RngStream can be used with, for example, OpenMP and MPI.
• SPRNG is aimed at MPI, but apparently usable in shared memory settings as well. Rsprng is an R interface to SPRNG.
• OpenMP: An OpenMP version of the Mersenne Twister is available at http://www.pgroup.com/lit/articles/insider/v2n2a4.htm. Other parallel random number generators for OpenMP are available via a Web search.

There are many, many more.
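For R users, one convenient option (my addition, not in the list above) is clusterSetRNGStream() in the parallel package, which gives each worker its own L'Ecuyer RNG stream:

library(parallel)
cls <- makeCluster(4)
clusterSetRNGStream(cls, iseed=123)            # independent streams on the 4 workers
unlist(clusterCall(cls, function() runif(1)))  # 4 draws, one per worker
stopCluster(cls)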
Appendix A
A.1 Timesharing

A.1.1 Many Processes, Taking Turns

Suppose you and someone else are both using the computer pc12 in our lab, one of you at the
console and the other logged in remotely. Suppose further that the other person's program will run
for five hours! You don't want to wait five hours for the other person's program to end. So, the
OS arranges things so that the two programs will take turns running, neither of them running to
completion all at once. It won't be visible to you, but that is what happens.
Timesharing involves having several programs running in what appears to be a simultaneous
manner. (These programs could be from different users or the same user; in our case with threaded
code, several processes actually come from a single invocation of a program.) If the system has
only one CPU, which we'll assume temporarily, this simultaneity is of course only an illusion, since
only one program can run at any given time, but it is a worthwhile illusion, as we will see.

First of all, how is this illusion attained? The answer is that we have the programs all take
turns running, with each turn, called a quantum or timeslice, being of very short duration, for
example 50 milliseconds. (We'll continue to assume 50 ms quanta below.)
Say we have four programs, u, v, x and y, running currently. What will happen is that first u
runs for 50 milliseconds, then u is suspended and v runs for 50 milliseconds, then v is suspended
and x runs for 50 milliseconds, and so on. After y gets its turn, then u gets a second turn,
etc. Since the turn-switching, formally known as context-switching,1 is happening so fast (every
50 milliseconds), it appears to us humans that each program is running continuously (though at
one-fourth speed), rather than on and off, on and off, etc.2
But how can the OS enforce these quanta? For example, how can the OS force the program u
above to stop after 50 milliseconds? The answer is, "It can't!" The OS is dead while u is running.
Instead, the turns are implemented via a timing device, which emits a hardware interrupt at the
proper time. For example, we could set the timer to emit an interrupt every 50 milliseconds. When
the timer goes off, it sends a pulse of current (the interrupt) to the CPU, which is wired up to
suspend its current process and jump to the driver of the interrupting device (here, the timer).
Since the driver is in the OS, the OS is now running!
We will make such an assumption here. However, what is more common is to have the timer
interrupt more frequently than the desired quantum size. On a PC, the 8253 timer interrupts
100 times per second. Every sixth interrupt, the OS will perform a context switch. That results
in a quantum size of 60 milliseconds. But this can be changed, simply by changing the count of
interrupts needed to trigger a context switch.
The timer device driver saves all of u's current register values, including its Program Counter value
(the address of the current instruction) and the value in its EFLAGS register (flags that record, for
instance, whether the last instruction produced a 0 result). Later, when u's next turn comes, those
values will be restored, and u will resume execution as if nothing
ever happened. For now, though, the OS routine will restore v's previously-saved register values,
making sure to restore the PC value last of all. That last action forces a jump from the OS to v,
right at the spot in v where v was suspended at the end of its last quantum. (Again, the CPU
just minds its own business, and does not know that one program, the OS, has handed over
control to another, v; the CPU just keeps performing its fetch/execute cycle, fetching whatever the
PC points to, oblivious to which process is running.)
A process' turn can end early, if the current process voluntarily gives up control of the CPU. Say
the process reaches a point at which it is supposed to read from the keyboard, with the source code
calling, say, scanf() or cin. The process will make a system call to do this (the compiler placed
that call there), which means the OS will now be running! The OS will mark this process as being in
Sleep state, meaning that it's waiting for some action. Later, when the user for that process hits a
key, it will cause an interrupt, and since the OS contains the keyboard device driver, this means the
OS will then start running. The OS will change the process' entry in the process table from Sleep
to Run, meaning only that it is ready to be given a turn. Eventually, after some other process'
turn ends, the OS will give this process its next turn.
On a multicore machine, several processes can be physically running at the same time, but the
operation is the same as above. On a Linux system, to see all the currently running threads, type
ps -eLf

A.2 Memory Hierarchies

A.2.1 Cache Memory
Memory (RAM) is usually not on the processor chip, which makes it far away. Signals must go
through thicker wires than the tiny ones inside the chip, which slows things down. And of course
the signal does have to travel further. All this still occurs quite quickly by human standards, but
not relative to the blinding speeds of modern CPUs.
Accordingly, a section of the CPU chip is reserved for a cache, which at any given time contains
a copy of part of memory. If the requested item (say x above) is found in the cache, the CPU is
in luck, and access is quick; this is called a cache hit. If the CPU is not lucky (a cache miss), it
must bring in the requested item from memory.
Caches organize memory by chunks called blocks. When a cache miss occurs, the entire block
containing the requested item is brought into the cache. Typically a block currently in the cache
must be evicted to make room for the new one.
A.2.2 Virtual Memory

Most modern processor chips have virtual memory (VM) capability, and most general-purpose OSs
make use of it.

A.2.2.1

A.2.2.2 How It Works
Suppose a variable x has the virtual address 1288, i.e. &x = 1288 in a C/C++ program. But,
when the OS loads the program into memory for execution, it rearranges everything, and the actual
physical address of x may be, say, 5088.
The high-order bits of an address are considered to be the page number of that address, with the
lower bits being the offset within the page. For any given item such as x, the offset is the same in
both its virtual and physical addresses, but the page number differs.
To illustrate this simply, suppose that our machine uses base-10 numbering instead of base-2, and
that page size is 100 bytes. Then x above would be in offset 88 of virtual page 12. Its physical
page would be 50, with the same offset. In other words, x is stored 88 bytes past the beginning of
page 50 in memory.
The correspondences between virtual and physical page numbers is given in the page table, which
is simply an array in the OS. The OS will set up this array at the time it loads the program into
memory, so that the virtual-to-physical address translations can be done.
Those translations are done by the hardware. When the CPU executes a machine instruction that
specifies access to 1288, the CPU will do a lookup on the page table, in the entry for virtual page
12, and find that the actual page is 50. The CPU will then know that the true location is 5088,
and will place 5088 on the address lines of the system bus in order to access that location.
On the other hand, x may not currently be resident in memory at all, in which case the page table
will mark it as such. If the CPU finds that page 12 is nonresident, we say a page fault occurs, and
this will cause an internal interrupt, which in turn will cause a jump to the operating system (OS).
The OS will then read the page containing x in from disk, place it somewhere in memory, and
then update the page table to show that virtual page 12 is now in some physical page in memory.
The OS will then execute an interrupt return instruction, and the CPU will restart the instruction
which triggered the page fault.
A.2.3 Performance Issues
Upon a cache miss, the hardware will need to read an entire block from memory, and if an eviction
is involved, an entire block will be written as well, assuming a write-back policy. (See the reference
at the beginning of this appendix.) All this is obviously slow.
The cache3 is quite small relative to memory, so you might guess that cache misses are very frequent.
Actually, though, they aren't, due to something called locality of reference. This term refers to
the fact that most programs tend to access the same memory item repeatedly within short
time periods (temporal locality), and/or access items within the same block often during short
periods (spatial locality). Hit rates are typically well above 90%. Part of this depends on having
a good block replacement policy, which decides which block to evict (hopefully one that won't
be needed again soon!).
A page fault is pretty catastrophic in performance terms. Remember, the disk speed is on a
mechanical scale, not an electronic one, so it will take quite a while for the OS to service a page
fault, much worse than for a cache miss. So the page replacement policy is even more important
than the block replacement policy used for caches.
On Unix-family machines, the time command not only tells how long your program ran, but also
how many page faults it caused. Note that since the OS runs every time a page fault occurs, it
can keep track of the number of faults. This is very different from a cache miss: although a cache
miss seems similar to a page fault in many ways, the key point is that a cache miss is handled solely in
hardware, so no program can count the number of misses.4

Note that in a VM system each memory access becomes two memory accesses: the page table read
and the memory access itself. This would kill performance, so there is a special cache just for the
page table, called the Translation Lookaside Buffer.
A.3 Array Issues

A.3.1 Storage
It is important to understand how compilers store arrays in memory, an overview of which will now
be presented.
Consider the array declaration
int y[100];
3 Or caches, plural, as there are often multiple levels of caches in today's machines.
4 Note by the way that cache misses, though harmful to program speed, aren't as catastrophic as page faults, as
the disk is not involved.
The compiler will store this in 100 consecutive words of memory. You may recall that in C/C++,
an expression consisting of an array name, no subscript, is a pointer to the array. Well, more
specifically, it is the address of the first element of the array.
An array element, say y[8], actually means the same as the C/C++ pointer expression *(y+8), which
in turn means the word 8 ints past the beginning of y.

Two-dimensional arrays, say

int z[3][10];

exist only in our imagination. They are actually treated as one-dimensional arrays, in the above
case consisting of 3 × 10 = 30 elements. C/C++ arranges this in row-major order, meaning that
all of row 0 comes first, then all of row 1 and so on. So for instance z[2][5] is stored in element
2×10 + 5 = 25 of z, and we could for example set that element to 8 with the code

z[25] = 8;

or

*(z+25) = 8;

Note that if we have a c-column two-dimensional array, element (i,j) is stored in word i×c+j of
the array. You'll see this fact used a lot in this book, and in general in code written in the parallel
processing community.
A.3.2 Subarrays
The considerations in the last section can be used to access subarrays. For example, here is code
to find the sum of a float array of length k:
float sum(float *x, int k)
{  float s = 0.0;  int i;
   for (i = 0; i < k; i++) s += x[i];
   return s;
}
Quite ordinary, but suppose we wish to find the sum in row 2 of the two-dimensional array z above.
We could do this as sum(z+20,10).
A.3.3 Memory Allocation
Very often one needs to set up an array whose size is not known at compile time. You are probably
accustomed to doing this via malloc() or new. However, in large parallel programs, this approach
may be quite slow.
With an array whose size is known at compile time, and which is declared local to some function, the array will
be allocated on the stack, and you might run out of stack space. The easiest solution is probably
to make the array global, of fixed size.
To accommodate larger arrays under gcc on a 64-bit system, use the -mcmodel=medium command line option.
Appendix B
Review of Matrix Algebra

B.1 Terminology and Notation
A matrix is a rectangular array of numbers. A vector is a matrix with only one row (a row
vector) or only one column (a column vector).
The expression, the (i,j) element of a matrix, will mean its element in row i, column j.
Please note the following conventions:
Capital letters, e.g. A and X, will be used to denote matrices and vectors.
Lower-case letters with subscripts, e.g. a2,15 and x8 , will be used to denote their elements.
Capital letters with subscripts, e.g. A13 , will be used to denote submatrices and subvectors.
If A is a square matrix, i.e. one with equal numbers n of rows and columns, then its diagonal
elements are aii , i = 1,...,n.
A square matrix is called upper-triangular if aij = 0 whenever i > j, with a corresponding
definition for lower-triangular matrices.
B.1.1 Matrix Addition and Multiplication

For two matrices that have the same numbers of rows and the same numbers of columns, addition is
defined elementwise, e.g.

$$\begin{pmatrix} 1 & 5 \\ 0 & 3 \\ 4 & 8 \end{pmatrix} + \begin{pmatrix} 6 & 2 \\ 0 & 1 \\ 4 & 0 \end{pmatrix} = \begin{pmatrix} 7 & 7 \\ 0 & 4 \\ 8 & 8 \end{pmatrix} \qquad (B.2)$$
Multiplication of a matrix by a scalar is also defined elementwise, e.g.

$$0.4 \begin{pmatrix} 7 & 7 \\ 0 & 4 \\ 8 & 8 \end{pmatrix} = \begin{pmatrix} 2.8 & 2.8 \\ 0 & 1.6 \\ 3.2 & 3.2 \end{pmatrix} \qquad (B.3)$$
The inner product or dot product of equal-length vectors X and Y is defined to be

$$\sum_{k=1}^{n} x_k y_k \qquad (B.4)$$
The product of matrices A and B is defined if the number of rows of B equals the number of
columns of A (A and B are said to be conformable). In that case, the (i,j) element of the
product C is defined to be
$$c_{ij} = \sum_{k=1}^{n} a_{ik} b_{kj} \qquad (B.5)$$
For instance,
$$\begin{pmatrix} 7 & 6 \\ 0 & 4 \\ 8 & 8 \end{pmatrix} \begin{pmatrix} 1 & 6 \\ 2 & 4 \end{pmatrix} = \begin{pmatrix} 19 & 66 \\ 8 & 16 \\ 24 & 80 \end{pmatrix} \qquad (B.6)$$
It is helpful to visualize c_{ij} as the inner product of row i of A and column j of B; for instance, the
(2,2) element of the product above, 16, is the inner product of (0,4), row 2 of the first factor, and
(6,4), column 2 of the second:

$$\begin{pmatrix} 7 & 6 \\ 0 & 4 \\ 8 & 8 \end{pmatrix} \begin{pmatrix} 1 & 6 \\ 2 & 4 \end{pmatrix} = \begin{pmatrix} 19 & 66 \\ 8 & 16 \\ 24 & 80 \end{pmatrix} \qquad (B.7)$$
Matrix multiplication is associative and distributive, but in general not commutative:

$$A(BC) = (AB)C \qquad (B.8)$$

$$A(B + C) = AB + AC \qquad (B.9)$$

$$AB \neq BA \qquad (B.10)$$

B.2 Matrix Transpose

The transpose of a matrix A, denoted A', is obtained by exchanging the rows and columns of A.

If A + B is defined, then

$$(A + B)' = A' + B' \qquad (B.12)$$

Also,

$$(AB)' = B'A' \qquad (B.13)$$
B.3 Linear Independence

Vectors X_1, ..., X_k are said to be linearly independent if it is impossible for

$$a_1 X_1 + ... + a_k X_k = 0 \qquad (B.14)$$

to hold unless all the coefficients a_i are 0.
B.4 Determinants
Let A be an nxn matrix. The definition of the determinant of A, det(A), involves an abstract
formula featuring permutations. It will be omitted here, in favor of the following computational
method.
Let A(i,j) denote the submatrix of A obtained by deleting its ith row and jth column. Then the
determinant can be computed recursively across the kth row of A as
$$\det(A) = \sum_{m=1}^{n} (-1)^{k+m} a_{km} \det(A_{(k,m)}) \qquad (B.15)$$
where
$$\det \begin{pmatrix} s & t \\ u & v \end{pmatrix} = sv - tu \qquad (B.16)$$

Generally, determinants are mainly of theoretical importance, but they often can clarify one's
understanding of concepts.
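In R, det() computes determinants; e.g. for a 2×2 matrix in the notation of (B.16) (a small illustration of my own):

A <- matrix(c(5,7,2,4), nrow=2)   # s=5, t=2, u=7, v=4
det(A)                            # sv - tu = 5*4 - 2*7 = 6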
B.5 Matrix Inverse
The identity matrix I of size n has 1s in all of its diagonal elements but 0s in all off-diagonal
elements. It has the property that AI = A and IA = A whenever those products are defined.
If A is a square matrix and AB = I, then B is said to be the inverse of A, denoted A^{-1}.
Then BA = I will hold as well.

A^{-1} exists if and only if the rows (or columns) of A are linearly independent.
A matrix U is said to be orthogonal if its rows each have norm 1 and are orthogonal to each other,
i.e. their inner product is 0. U thus has the property that UU' = I, i.e. U^{-1} = U'.

The inverse of a triangular matrix is easily obtained by something called back substitution.
Typically one does not compute matrix inverses directly. A common alternative is the QR decomposition: For a matrix A, matrices Q and R are calculated so that A = QR, where Q is an
orthogonal matrix and R is upper-triangular.
If A is square and invertible, A^{-1} is easily found:

$$A^{-1} = (QR)^{-1} = R^{-1} Q' \qquad (B.18)$$
Again, though, in some cases A is part of a more complex system, and the inverse is not explicitly
computed.
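For instance, in R (illustration only), qr() produces the decomposition and (B.18) can be checked directly:

A <- matrix(c(2,1,1,3), nrow=2)
qrA <- qr(A)
Q <- qr.Q(qrA); R <- qr.R(qrA)
solve(R) %*% t(Q)   # same as solve(A), i.e. the inverse of A
solve(A)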
B.6 Eigenvalues and Eigenvectors

Let A be an n x n matrix.1 A scalar λ and a nonzero vector X that satisfy

$$AX = \lambda X \qquad (B.19)$$

are called an eigenvalue and eigenvector of A. If A is diagonalizable (as is the case, for instance, when
A is symmetric), it can be written as

$$A = UDU^{-1} \qquad (B.20)$$

for a diagonal matrix D. The elements of D are the eigenvalues of A, and the columns of U
are the eigenvectors of A.
1
For nonsquare matrices, the discussion here would generalize to the topic of singular value decomposition.
B.7 Matrix Algebra in R
The R programming language has extensive facilities for matrix algebra, introduced here.
Note first that R matrix subscripts, like those of vectors, begin at 1, rather than 0 as in C/C++.
For instance:
> m < r b i n d ( 3 : 4 , c ( 1 , 8 ) )
> m
[ ,1] [ ,2]
[1 ,]
3
4
[2 ,]
1
8
> m[ 2 , 2 ]
[1] 8
Next, it is important to know that R uses column-major order, i.e. its elements are stored in
memory column-by-column. In the case of the matrix m above, for instance, the element 1 will be
the second one in the internal memory storage of m, while the 8 will be the fourth.
This is also reflected in how R inputs data when a matrix is constructed, e.g.
> d < matrix ( c ( 1 , 1 , 0 , 0 , 3 , 8 ) , nrow=2)
> d
[ ,1] [ ,2] [ ,3]
[1 ,]
1
0
3
[2 ,]
1
0
8
> c < a %% b
> c
[ ,1] [ ,2] [ ,3]
[1 ,]
14
32
50
[2 ,]
68 167 266
> c + matrix ( c ( 1 , 1 , 0 , 0 , 3 , 8 ) , nrow=2) # 2 d i f f e r e n t c s !
[ ,1] [ ,2] [ ,3]
[1 ,]
15
32
53
[2 ,]
67 167 274
> c %% c ( 1 , 5 , 6 )
[ ,1]
[ 1 , ] 474
[ 2 , ] 2499
> t ( a ) # matrix t r a n s p o s e
[ ,1] [ ,2]
[1 ,]
1
10
[2 ,]
2
11
[3 ,]
3
12
> # matrix inverse
> u <- matrix(runif(9),nrow=3)
> u
           [,1]       [,2]      [,3]
[1,] 0.08446154 0.86335270 0.6962092
[2,] 0.31174324 0.35352138 0.7310355
[3,] 0.56182226 0.02375487 0.2950227
> uinv <- solve(u)
> uinv
           [,1]      [,2]      [,3]
[1,]  0.5818482 -1.594123  2.576995
[2,]  2.1333965 -2.451237  1.039415
[3,] -1.2798127  3.233115 -1.601586
> u %*% uinv  # check, but note roundoff error
             [,1]          [,2]          [,3]
[1,] 1.000000e+00 -1.680513e-16 -2.283330e-16
[2,] 6.651580e-17  1.000000e+00  4.412703e-17
[3,] 2.287667e-17 -3.539920e-17  1.000000e+00
> # eigenvalues and eigenvectors
> eigen(u)
$values
[1]  1.2456220+0.0000000i -0.2563082+0.2329172i -0.2563082-0.2329172i

$vectors
              [,1]                  [,2]                  [,3]
[1,] -0.6901599+0i -0.6537478+0.0000000i -0.6537478+0.0000000i
[2,] -0.5874584+0i -0.1989163-0.3827132i -0.1989163+0.3827132i
[3,] -0.4225778+0i  0.5666579+0.2558820i  0.5666579-0.2558820i
> # diagonal matrices (off-diagonals 0)
> diag(3)
     [,1] [,2] [,3]
[1,]    1    0    0
[2,]    0    1    0
[3,]    0    0    1
> diag(c(5,12,13))
     [,1] [,2] [,3]
[1,]    5    0    0
[2,]    0   12    0
[3,]    0    0   13
> m
     [,1] [,2] [,3]
[1,]    5    6    7
[2,]   10   11   12
> diag(m) <- c(8,88)
> m
     [,1] [,2] [,3]
[1,]    8    6    7
[2,]   10   88   12
Appendix C
R Quick Start
Here we present a quick introduction to the R data/statistical programming language. Further
learning resources are listed at http://heather.cs.ucdavis.edu/~/matloff/r.html.
R syntax is similar to that of C. It is object-oriented (in the sense of encapsulation, polymorphism
and everything being an object) and is a functional language (i.e. almost no side effects, every
action is a function call, etc.).
C.1 Correspondences

aspect                            C/C++                              R
assignment                        =                                  <- (or =)
array terminology                 array                              vector, matrix, array
subscripts                        start at 0                         start at 1
array notation                    m[2][3]                            m[2,3]
2-D array storage                 row-major order                    column-major order
mixed container                   struct, members accessed by .      list, members accessed by $ or [[ ]]
return mechanism                  return                             return() or last value computed
primitive types                   int, float, double, char, bool     integer, float, double, character, logical
logical values                    true, false                        TRUE, FALSE (abbreviated T, F)
mechanism for combining modules   include, link                      library()
run method                        batch                              interactive, batch
C.2 Starting R
To invoke R, just type "R" into a terminal window. On a Windows machine, you probably have
an R icon to click.

If you prefer to run from an IDE, you may wish to consider ESS for Emacs, StatET for Eclipse or
RStudio, all open source. ESS is the favorite among the hard core coder types, while the colorful,
easy-to-use RStudio is a big general crowd pleaser. If you are already an Eclipse user, StatET will
be just what you need.

R is normally run in interactive mode, with > as the prompt. Among other things, that makes it
easy to try little experiments to learn from; remember my slogan, "When in doubt, try it out!"
C.3 First Sample Programming Session

Below is a commented R session, to introduce the concepts. I had a text editor open in another
window, constantly changing my code, then loading it via R's source() command. The original
contents of the file odd.R were:
oddcount <- function(x)  {
   k <- 0  # assign 0 to k
   for (n in x)  {
      if (n %% 2 == 1) k <- k+1  # %% is the modulo operator
   }
   return(k)
}
> # print any object simply by typing its name; otherwise use print(), e.g. print(x+y)
> oddcount  # a function is an object, so can print it
function(x)  {
   k <- 0  # assign 0 to k
   for (n in x)  {
      if (n %% 2 == 1) k <- k+1  # %% is the modulo operator
   }
   return(k)
}
# TRUE and FALSE are treated as 1 and 0
> oddcount <- function(x) sum(x %% 2 == 1)
# make sure you understand the steps that that involves:  x is a vector,
# and thus x %% 2 is a new vector, the result of applying the mod 2
# operation to every element of x; then x %% 2 == 1 applies the == 1
# operation to each element of that result, yielding a new vector of TRUE
# and FALSE values; sum() then adds them (as 1s and 0s)
> ocy
$odds
[1]  5 13

$numodds
[1] 2

> ocy$odds
[1]  5 13
> ocy[[1]]  # can get list elements using [[ ]] instead of $
[1]  5 13
> ocy[[2]]
[1] 2
Note that the function of the R function function() is to produce functions! Thus assignment is
used. For example, here is what odd.R looked like at the end of the above session:
oddcount <- function(x)  {
   x1 <- x[x %% 2 == 1]
   return(list(odds=x1, numodds=length(x1)))
}
We created some code, and then used function() to create a function object, which we assigned
to oddcount.
Note that we eventually vectorized our function oddcount(). This means taking advantage of
the vector-based, functional language nature of R, exploiting R's built-in functions instead of loops.
This changes the venue from interpreted R to C level, with a potentially large increase in speed.
For example:
1 > x < r u n i f ( 1 0 0 0 0 0 0 ) # 1000000 random numbers from t h e i n t e r v a l ( 0 , 1 )
2 > system . time ( sum ( x ) )
3
u s e r system e l a p s e d
4
0.008
0.000
0.006
5 > system . time ( { s < 0 ; f o r ( i i n 1 : 1 0 0 0 0 0 0 ) s < s + x [ i ] } )
6
u s e r system e l a p s e d
7
2.776
0.004
2.859
C.4 Second Sample Programming Session
A matrix is a special case of a vector, with added class attributes, the numbers of rows and columns.
     [,1] [,2]
[1,]    3    5
[2,]    4    6
> m1 * m3  # elementwise multiplication
     [,1] [,2]
[1,]    3   10
[2,]   20   48
> 2.5 * m3  # scalar multiplication (but see below)
     [,1] [,2]
[1,]  7.5 12.5
[2,] 10.0 15.0
> m1 %*% m3  # linear algebra matrix multiplication
     [,1] [,2]
[1,]   11   17
[2,]   47   73
> # matrices are special cases of vectors, so can treat them as vectors
> sum(m1)
[1] 16
> ifelse(m2 %% 3 == 1,0,m2)  # (see below)
     [,1] [,2] [,3]
[1,]    0    3    5
[2,]    2    0    6
The scalar multiplication above is not quite what you may think, even though the result may
be. Here's why:

In R, scalars don't really exist; they are just one-element vectors. However, R usually uses recycling, i.e. replication, to make vector sizes match. In the example above in which we evaluated
the expression 2.5 * m3, the number 2.5 was recycled to the matrix

$$\begin{pmatrix} 2.5 & 2.5 \\ 2.5 & 2.5 \end{pmatrix} \qquad (C.1)$$
R's ifelse() function takes three vector expressions, say ifelse(vectorexpression1, vectorexpression2, vectorexpression3).
All three vector expressions must be the same length, though R will lengthen some via recycling.
The action will be to return a vector of the same length (and if matrices are involved, then the
result also has the same shape). Each element of the result will be set to its corresponding element
in vectorexpression2 or vectorexpression3, depending on whether the corresponding element
in vectorexpression1 is TRUE or FALSE.
In the ifelse() call above, the expression m2 %% 3 == 1 evaluated to the matrix

$$\begin{pmatrix} T & F & F \\ F & T & F \end{pmatrix} \qquad (C.2)$$

while the 0 was recycled to

$$\begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix} \qquad (C.3)$$

C.5
C.6 Lists
The R list type is, after vectors, the most important R construct. A list is like a vector, except
that the components are generally of mixed types.
C.6.1 The Basics

C.6.2 The Reduce() Function
One often needs to combine elements of a list in some way. One approach to this is to use Reduce():
> x < l i s t ( 4 : 6 , c ( 1 , 6 , 8 ) )
> x
[[1]]
[1] 4 5 6
317
[[2]]
[1] 1 6 8
> sum ( x )
E r r o r i n sum ( x ) : i n v a l i d type ( l i s t ) o f argument
> Reduce ( sum , x )
[ 1 ] 30
Here Reduce() cumulatively applied R's sum() to x. Of course, you can use it with functions you
write yourself too.

Continuing the above example:

> Reduce(c, x)
[1] 4 5 6 1 6 8

C.6.3 S3 Classes

R is an object-oriented (and functional) language. It features two types of classes, S3 and S4. I'll
introduce S3 here.
An S3 object is simply a list, with a class name added as an attribute:

> j <- list(name="Joe", salary=55000, union=T)
> class(j) <- "employee"

So now we have an object of a class we've chosen to name "employee". Note the quotation
marks.

We can write class generic functions:
> print.employee <- function(wrkr) {
+    cat(wrkr$name, "\n")
+    cat("salary", wrkr$salary, "\n")
+    cat("union member", wrkr$union, "\n")
+ }
> print(j)
Joe
salary 55000
union member TRUE
> j
Joe
salary 55000
union member TRUE
What just happened? Well, print() in R is a generic function, meaning that it is just a placeholder
for a function specific to a given class. When we printed j above, the R interpreter searched for a
function print.employee(), which we had indeed created, and that is what was executed. Lacking
this, R would have used the print function for R lists, as before:
> rm(print.employee)
> j
$name
[1] "Joe"

$salary
[1] 55000

$union
[1] TRUE

attr(,"class")
[1] "employee"
C.6.4 Handy Utilities

R functions written by others, e.g. in base R or in the CRAN repository for user-contributed code,
often return values which are class objects. It is common, for instance, to have lists within lists. In
many cases these objects are quite intricate, and not thoroughly documented. In order to explore
the contents of an object, even one you write yourself, here are some handy utilities:
names(): Returns the names of a list.
str(): Shows the first few elements of each component.
summary(): General function. The author of a class x can write a version specific to x,
i.e. summary.x(), to print out the important parts; otherwise the default will print some
bare-bones information.
For example:
> z < l i s t ( a = r u n i f ( 5 0 ) , b
> z
$a
[ 1 ] 0.301676229 0.679918518
0.412388038
[ 7 ] 0.900498062 0.119936222
0.979945937
[ 1 3 ] 0.902377363 0.941813898
0.049504986
[ 1 9 ] 0.092011899
0.562163986
[ 2 5 ] 0.360718988
0.148819125
[ 3 1 ] 0.381143870
0.417984331
[ 3 7 ] 0.777219084
0.856198893
[ 4 3 ] 0.629269146
0.940457376
[ 4 9 ] 0.228829858
319
$b
$b$u
[ 1 ] 33 67 32 76 29
86 40 43
3 42 54 97 41 57 87 36 92 81 31 78 12 85 73 26 44
$b$v
[ 1 ] b l u e sky
> names ( z )
[ 1 ] a b
> str (z)
List of 2
$ a : num [ 1 : 5 0 ] 0 . 3 0 2 0 . 6 8 0 . 2 0 9 0 . 5 1 0 . 4 0 5 . . .
$ b : List of 2
. . $ u : i n t [ 1 : 2 5 ] 33 67 32 76 29 3 42 54 97 41 . . .
. . $ v : c h r b l u e sky
> names ( z$b )
[ 1 ] u v
> summary ( z )
Length C l a s s Mode
a 50
none numeric
b 2
none l i s t
C.7 Data Frames
Another workhorse in R is the data frame. A data frame works in many ways like a matrix, but
differs from a matrix in that it can mix data of different modes. One column may consist of integers,
while another can consist of character strings and so on. Within a column, though, all elements
must be of the same mode, and all columns must have the same length.
We might have a 4-column data frame on people, for instance, with columns for height, weight, age
and name: 3 numeric columns and 1 character string column.
Technically, a data frame is an R list, with one list element per column; each column is a vector.
Thus columns can be referred to by name, using the $ symbol as with all lists, or by column number,
as with matrices. The matrix a[i,j] notation for the element of a in row i, column j, applies to
data frames. So do the rbind() and cbind() functions, and various other matrix operations, such
as filtering.
Here is an example using the dataset airquality, built in to R for illustration purposes. You can
learn about the data through R's online help, i.e.

> ?airquality
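For instance, here are a few typical operations on that data frame (a small sketch of my own; the column names are those of the built-in dataset):

names(airquality)                           # Ozone, Solar.R, Wind, Temp, Month, Day
airquality$Wind[1:5]                        # a column accessed by name, as with lists
airquality[2, 3]                            # row 2, column 3, as with matrices
hot <- airquality[airquality$Temp > 80, ]   # filtering, as with matrices
nrow(hot)                                   # how many hot days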
C.8 Graphics

R excels at graphics, offering a rich set of capabilities, from beginning to advanced. In addition to
the functions in base R, extensive graphics packages are available, such as lattice and ggplot2.

One point of confusion for beginners involves saving an R graph that is currently displayed on
the screen to a file. Here is a function for this, which I include in my R startup file, .Rprofile, in
my home directory:
pr2file
function (filename)
{
   origdev <- dev.cur()
   parts <- strsplit(filename,".",fixed=TRUE)
   nparts <- length(parts[[1]])
   suff <- parts[[1]][nparts]
   if (suff == "pdf") {
      pdf(filename)
   }
   else if (suff == "png") {
      png(filename)
   }
   else jpeg(filename)
   devnum <- dev.cur()
   dev.set(origdev)
   dev.copy(which = devnum)
   dev.set(devnum)
   dev.off()
   dev.set(origdev)
}
The code, which I won't go into here, mostly involves manipulation of various R graphics devices.
I've set it up so that you can save to a file of type either PDF, PNG or JPEG, implied by the file
name you give.
C.9 Packages
The analog of a library in C/C++ in R is called a package (and often loosely referred to as a
library). Some are already included in base R, while others can be downloaded, or written by
yourself.
> library(parallel)  # load the package named "parallel"
> ls("package:parallel")  # let's see what functions it gave us
 [1] "clusterApply"        "clusterApplyLB"      "clusterCall"
 [4] "clusterEvalQ"        "clusterExport"       "clusterMap"
 [7] "clusterSetRNGStream" "clusterSplit"        "detectCores"
[10] "makeCluster"         "makeForkCluster"     "makePSOCKcluster"
[13] "mc.reset.stream"     "mcaffinity"          "mccollect"
[16] "mclapply"            "mcMap"               "mcmapply"
[19] "mcparallel"          "nextRNGStream"       "nextRNGSubStream"
[22] "parApply"            "parCapply"           "parLapply"
[25] "parLapplyLB"         "parRapply"           "parSapply"
[28] "parSapplyLB"         "pvec"                "setDefaultCluster"
[31] "splitIndices"        "stopCluster"
> ?pvec  # let's see how one of them works
The CRAN repository of contributed R code has thousands of R packages available. It also includes
a number of tables of contents for specific areas, say time series, in the form of CRAN Task Views.
See the R home page, or simply Google "CRAN Task View".

> install.packages("cts","/myr")  # download into desired directory
Please select a CRAN mirror for use in this session
...
downloaded 533 Kb

The downloaded binary packages are in
/var/folders/jk/dh9zkds97sj23kjcfkr5v6q00000gn/T//RtmplkKzOU/downloaded_packages
> ?library
> library(cts, lib.loc="/myr")
Attaching package: 'cts'
...

C.10
There are tons of resources for R on the Web. You may wish to start with the links at http:
//heather.cs.ucdavis.edu/~matloff/r.html.
C.11 Online Help

R's help() function, which can be invoked also with a question mark, gives short descriptions of
the R functions. For example, typing

> ?rep

will give you a description of R's rep() function.
An especially nice feature of R is its example() function, which gives nice examples of whatever
function you wish to query. For instance, typing

> example(wireframe())

will show examples, R code and resulting pictures, of wireframe(), one of R's 3-dimensional
graphics functions.
C.12 Debugging in R
The internal debugging tool in R, debug(), is usable but rather primitive. Here are some alternatives:
The RStudio IDE has a built-in debugging tool.
The StatET IDE for R on Eclipse has a nice debugging tool. Works on all major platforms,
but can be tricky to install.
My own debugging tool, debugR, is extensive and easy to install, but for the time being is limited to Linux, Mac and other Unix-family systems. See http://heather.cs.ucdavis.edu/debugR.html.
C.13 Complex Numbers
If you have need for complex numbers, R does handle them. Here is a sample of use of the main
functions of interest:
> za < complex ( r e a l =2, i m a g i n a r y =3.5)
> za
[ 1 ] 2+3.5 i
> zb < complex ( r e a l =1, i m a g i n a r y=5)
> zb
[ 1 ] 15 i
> za zb
[ 1 ] 19 .5 6.5 i
> Re ( za )
[1] 2
> Im ( za )
[ 1 ] 3.5
> za 2
[ 1 ] 8.25+14 i
> abs ( za )
[ 1 ] 4.031129
> exp ( complex ( r e a l =0 , i m a g i n a r y=p i / 4 ) )
324
[ 1 ] 0.7071068+0.7071068 i
> cos ( pi /4)
[ 1 ] 0.7071068
> s i n ( pi /4)
[ 1 ] 0.7071068
Note that operations with complex-valued vectors and matrices work as usual; there are no special
complex functions.
C.14 Further Reading
For further information about R as a programming language, there is my book, The Art of R
Programming: a Tour of Statistical Software Design, NSP, 2011.
For R's statistical functions, a plethora of excellent books is available, such as The R Book (2nd
Ed.), Michael Crawley, Wiley, 2012. I also very much like R in a Nutshell (2nd Ed.), Joseph Adler,
O'Reilly, 2012.