Parallel Optimization: Theory, Algorithms, and Applications
Foreword
This book is a must for anyone interested in entering the fascinating new
world of parallel optimization using parallel processors: computers capable
of doing an enormous number of complex operations in a nanosecond.
The authors are among the pioneers of this fascinating new world and
they tell us what new applications they explored, what algorithms appear
to work best, how parallel processors differ in their design, and what the
comparative results were using different types of algorithms on different
types of parallel processors to solve them.
According to an old adage, the whole can sometimes be much more
than the sum of its parts. I am thoroughly in agreement with the authors'
belief in the added value of bringing together Applications, Mathematical
Algorithms and Parallel Computing techniques. This is exactly what they
found true in their own research and report on in the book.
Many years ago, I, too, experienced the thrill of combining three di-
verse disciplines: the Application (in my case Linear Programs), the Solu-
tion Algorithm (the Simplex Method), and the then New Tool (the Serial
Computer). The union of the three made possible the optimization of many
real-world problems. Parallel processors are the new generation and they
have the power to tackle applications which require solution in real time, or
have model parameters which are not known with certainty, or have a vast
number of variables and constraints.
Image restoration, tomography, radiation therapy, finance, industrial
planning, transportation and economics are the sources for many of the
interesting practical problems used by the authors to test the methodology.
George B. Dantzig
Stanford University, 1996
Preface
As the sun eclipses the stars by his brilliancy, so the one of
knowledge will eclipse the fame of others in the assemblies of the people
if he proposes algebraic problems, and still more if he solves
them.
Brahmagupta, 650 AD.
ization and combinatorial optimization; see Section 1.5 for references. This
book focuses on parallel optimization methods for large-scale constrained
optimization problems and structured linear programs. Hence, it provides
a comprehensive chart of part of the vast intersection between parallel com-
puting and optimization. We set out to describe a domain where parallel
computing is having a great impact, precisely because of the large-scale
nature of the applications, and where many of the recent research devel-
opments have occurred. Even within this domain we do not claim that the
material about theory, parallel algorithms and applications presented here
is exhaustive. However, related developments that are not treated in the
book are discussed in extensive "Notes and References" sections at the end
of each chapter.
What, then, has determined our choice of theory, algorithms and ap-
plications that were included in the book? We have focused on methods
where substantial computational experience has been accumulated over
the years, and where, we feel, substantial integration has been achieved
between the theory, the algorithms and the applications.
Quite often the implementation of an algorithm changes one's perspective
of what the important features of the algorithm are, and such accumulated
experience, which we have acquired through our own work in the field,
is reflected in our treatment. The intricacies of exploiting the problem
structure are also fully revealed only during an implementation. Finally, it
is only with computational experiments that we can have full confidence in
the efficiency and robustness of an algorithm. The material presented in
this book leads to implementable parallel algorithms that have undergone
the scrutiny of implementation on a variety of parallel architectures. In
addition, our choice of topics is broad enough so that readers can get a
comprehensive view of the landscape of parallel optimization methods.
While not all currently known parallel algorithms are discussed, the
book introduces three broad families of algorithms for con-
strained optimization. Those are defined later in the book as (i) iterative
projection algorithms, (ii) model decomposition algorithms, and (iii) in-
terior point algorithms. When viewed from the proper perspective these
algorithms satisfy the design characteristics of "good" parallel algorithms.
The book starts with a basic introduction to parallel computers: what
they are, how to assess their performance, how to design and implement par-
allel algorithms. This core knowledge on parallel computers is then linked
with the theoretical algorithms. Mathematical algorithms and parallel
computing techniques are then brought together to bear on the solution
of several important applications: image reconstruction from projections,
matrix balancing, network optimization, nonlinear programming for plan-
ning under uncertainty, and financial planning. We also address implement-
ation issues and study results from recent numerical works that highlight
the efficiency of the developed algorithms, when implemented on suitable
sparse and structured matrices and graph problems (e.g., the multicommod-
ity transportation problems and the stochastic networks of Sections 12.2.2,
12.4.3, and 13.8, respectively). Hence, the material can be used to intro-
duce students to the art of implementing algorithms on parallel machines.
Such a course will provide some motivation by introducing applications
from Chapters 9, 10, 12 and 13. For each application an algorithm can
be introduced (specific implementable algorithms are found in the applic-
ations chapters), and references made to the implementation techniques of
Chapter 14. The course should follow the sequence of topics as discussed in
Chapter 14, but before each section of this chapter is presented in class the
material from the corresponding application chapter should be introduced
first. Finally, Chapter 15 can be used as a reference for students who wish
to test the efficiency of their implementation, or the performance of their
parallel machine.
Acknowledgments
Part of the material presented in this book is based on our own published
work. We express our appreciation to our past and present collaborators
from whom we learned a great deal. Particular thanks for many useful
discussions and support, and for reading various parts of the book, are
extended to Marty Altschuler, Dimitri Bertsekas, Dan Butnariu, Charlie
Byrne, Alvaro De Pierro, Jitka Dupacova, Jonathan Eckstein, Tommy
Elfving, Gabor Herman, Dan Gordon, Alfredo Iusem, Elizabeth Jessup,
Arnold Lent, Robert Lewitt, Olvi Mangasarian, Jill Mesirov, Bob Meyer,
John Mulvey, Soren Nielsen, Mustafa Pinar, Simeon Reich, Uri Rothblum,
Michael Schneider, Jay Udupa, Paul Tseng and Dafeng Yang.
A draft version of the book was read by Dimitri Bertsekas, Jonathan
Eckstein, Tommy Elfving, Michael Ferris and Gabor Herman. We thank
them for their constructive comments. Any remaining errors or imperfec-
tions are, of course, our sole responsibility.
Part of the work of Yair Censor in this area was done in collaboration
with the Medical Image Processing Group (MIPG) at the Department of
Radiology, University of Pennsylvania. The support and encouragement of
Gabor Herman for this continued collaboration is gratefully acknowledged.
The work of Stavros A. Zenios was done while he was on the faculty at
the Department of Operations and Information Management, The Wharton
School, University of Pennsylvania, and while on leave at the Operations
Research Center of the Sloan School, Massachusetts Institute of Techno-
logy, and with Thinking Machines Corporation, Cambridge, MA. Substan-
tial parts of Stavros Zenios' work on this book were done during his visits
to the University of Bergamo, Italy, and the support of Marida Bertocchi
in making these visits possible is gratefully acknowledged. We express our
appreciation to these organizations as well as to our current institutions,
the University of Haifa, Haifa, Israel, and the University of Cyprus, Nico-
sia, Cyprus, for creating an environment where international collaborations
can be fostered and where long-term undertakings, such as the writing of
this book, are encouraged and supported.
This text grew out of lecture notes that we prepared and delivered at the
19th Brazilian Mathematical Colloquium, which took place in Rio de Janeiro
in the summer of 1993. We thank the organizers of that Colloquium and
Professor Jacob Palis, the Director of the Instituto de Matemática Pura
e Aplicada (IMPA) where the Colloquium took place, for giving us the
opportunity to present our work at this forum, and thereby giving us the
impetus to complete the project of writing this book.
This work was supported by grants from the National Institutes of
Health (HL-28438), the Air Force Office of Scientific Research (AFOSR-
91-0168) and the National Science Foundation (CCR-9104042 and SES-
9100216) in the USA, and the National Research Council (CNR), Italy.
Computing resources were made available through the Northeast Parallel
Architectures Center (NPAC) at Syracuse University, the Army High-
Performance Computing Research Center (AHPCRC) at the University
of Minnesota, Thinking Machines Corporation, Cray Research Inc., and
Argonne National Laboratory. We also thank Ms. Danielle Friedlander
and Ms. Giuseppina La Mantia for their work in processing parts of the
manuscript in LaTeX. Finally, we express our appreciation to Senior Ed-
itor Bill Zobrist and Editorial Assistant Krysia Bebick from the Oxford
University Press office in New York for their help.