
PARALLEL OPTIMIZATION

Theory, Algorithms and Applications


Series on Numerical Mathematics and Scientific Computation
PARALLEL OPTIMIZATION
Theory, Algorithms and Applications
Yair Censor
Department of Mathematics and Computer Science
University of Haifa
Stavros A. Zenios
Department of Public and Business Administration
University of Cyprus

New York Oxford


Oxford University Press
1997
To Erga, Aviv, Nitzan and Keren - Y.C.
To Christiana, Efy and Elena - S.A.Z.

Foreword
This book is a must for anyone interested in entering the fascinating new
world of parallel optimization using parallel processors, computers capable
of doing an enormous number of complex operations in a nanosecond.
The authors are among the pioneers of this fascinating new world and
they tell us what new applications they explored, what algorithms appear
to work best, how parallel processors differ in their design, and what the
comparative results were using different types of algorithms on different
types of parallel processors to solve them.
According to an old adage, the whole can sometimes be much more
than the sum of its parts. I am thoroughly in agreement with the authors'
belief in the added value of bringing together Applications, Mathematical
Algorithms and Parallel Computing techniques. This is exactly what they
found true in their own research and report on in the book.
Many years ago, I, too, experienced the thrill of combining three diverse
disciplines: the Application (in my case Linear Programs), the Solution
Algorithm (the Simplex Method), and the then New Tool (the Serial
Computer). The union of the three made possible the optimization of many
real-world problems. Parallel processors are the new generation and they
have the power to tackle applications which require solution in real time, or
have model parameters which are not known with certainty, or have a vast
number of variables and constraints.
Image restoration, tomography, radiation therapy, finance, industrial
planning, transportation and economics are the sources for many of the
interesting practical problems used by the authors to test the methodology.

George B. Dantzig
Stanford University, 1996

Preface
As the sun eclipses the stars by his brilliancy so the one of
knowledge will eclipse the fame of the assemblies of the people
if he proposes algebraic problems, and still more if he solves
them.
Brahmagupta, 650 AD.

Problems of mathematical optimization are encountered in diverse areas of
the exact sciences, the natural sciences, the social sciences and engineering.
Many of them are rooted in real-world applications. Developments
in the vast field of optimization are, to a great extent, motivated by these
applications and have drawn, over the years, both from mathematics and
from computer science. Mathematics creates the foundation for the design
and analysis of optimization algorithms. Computer science provides the
tools for the design of data-structures, and for the translation of the
mathematical algorithms into numerical procedures that are implementable on
a computer. The efficient and robust implementation of an optimization
algorithm becomes crucial when one deals with the solution of large-scale,
real-world applications.
Recent technological innovations, with the introduction of parallel computer
architectures, are having a significant impact on every area of scientific
computing where large-scale problems are attacked. In this book
we give an introduction to methods of parallel optimization. We do so by
introducing parallel computing ideas and techniques into both optimization
theory and into numerical algorithms for large-scale optimization problems.
We also examine significant broad areas of application where the problems
are particularly suitable for solution on parallel machines, and where
substantial progress has been made in recent years with the application of
parallel optimization algorithms.
Some mathematical algorithms that are recognized today as being parallel
algorithms date back to the 1920s, and some efforts in using parallel
computers to solve optimization problems were made in the late 1970s, with
the introduction of the Illiac IV array processor at the University of Illinois.
However, it was in the early 1980s that concentrated and systematic efforts
by several researchers in the field of parallel optimization began. Several
of the contributions that were made over the last two decades have matured
to the point where a coherent theoretical framework has been developed,
extensive numerical experiments have been carried out, and large-scale
problems from diverse areas of application have been solved successfully.
This book gives a comprehensive account of these developments. The
coverage is unavoidably not exhaustive, since parallel computing technology
has influenced recent developments in all areas of optimization. A series of
books could be written on parallel computing for linear programming,
large-scale constrained optimization, unconstrained optimization, global
optimization and combinatorial optimization; see Section 1.5 for references. This
book focuses on parallel optimization methods for large-scale constrained
optimization problems and structured linear programs. Hence, it provides
a comprehensive chart of part of the vast intersection between parallel
computing and optimization. We set out to describe a domain where parallel
computing is having a great impact, precisely because of the large-scale
nature of the applications, and where many of the recent research
developments have occurred. Even within this domain we do not claim that the
material about theory, parallel algorithms and applications presented here
is exhaustive. However, related developments that are not treated in the
book are discussed in extensive "Notes and References" sections at the end
of each chapter.
What, then, has determined our choice of theory, algorithms and applications
that were included in the book? We have focused on methods
where substantial computational experience has been accumulated over
the years, and where, we feel, substantial integration has been achieved
between the theory, the algorithms and the applications.
Quite often the implementation of an algorithm changes one's perspective
of what the important features of the algorithm are, and such accumulated
experience, which we have acquired through our own work in the field,
is reflected in our treatment. The intricacies of exploiting the problem
structure are also fully revealed only during an implementation. Finally, it
is only with computational experiments that we can have full confidence in
the efficiency and robustness of an algorithm. The material presented in
this book leads to implementable parallel algorithms that have undergone
the scrutiny of implementation on a variety of parallel architectures. In
addition, our choice of topics is broad enough so that readers can get a
comprehensive view of the landscape of parallel optimization methods.
While not all currently known parallel algorithms are discussed, the
book introduces algorithms from three broad families of algorithms for
constrained optimization. Those are defined later in the book as (i) iterative
projection algorithms, (ii) model decomposition algorithms, and (iii) interior
point algorithms. When viewed from the proper perspective these
algorithms satisfy the design characteristics of "good" parallel algorithms.
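To fix ideas about the first of these families: the generalized projections involved are built from generalized distances of the Bregman type. In common notation (a sketch only; Chapter 2 states the precise conditions that the function f must satisfy), the generalized distance generated by a suitable convex function f is

    D_f(x, y) = f(x) - f(y) - \langle \nabla f(y), x - y \rangle,

and the generalized projection of a point y onto a closed convex set C is the minimizer of D_f(x, y) over all x \in C. The choice f(x) = \frac{1}{2}\|x\|^2 recovers the ordinary orthogonal projection.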
The book starts with a basic introduction to parallel computers: what
they are, how to assess their performance, how to design and implement
parallel algorithms. This core knowledge on parallel computers is then linked
with the theoretical algorithms. The combined mathematical algorithms and
parallel computing techniques are brought to bear on the solution
of several important applications: image reconstruction from projections,
matrix balancing, network optimization, nonlinear programming for planning
under uncertainty, and financial planning. We also address implementation
issues and study results from recent numerical works that highlight
the efficiency of the developed algorithms, when implemented on suitable
parallel computer platforms.


We believe that the value of bringing together applications, mathematical
algorithms and parallel computing techniques extends beyond the
successful solution of the specific problems at hand. Namely, it introduces
the reader to the complete process, from the modeling of a problem, through
the design of solution algorithms, to the art and science of parallel
computations. It is not possible to study these three disciplinary efforts,
modeling, mathematics of algorithms and parallel computing, in isolation
from each other. The successful solution of real-world problems in scientific
computing is the result of coordinated efforts across all three fronts. We
hope that this book will help the reader to develop such a broad perspective
and, thus, follow Brahmagupta's admonishment.
To keep the size of the book reasonable we had to make some decisions
on what topics to exclude and about the prerequisites that are assumed of
the reader. Many important topics related to the subject matter of the book
have been left out or are only mentioned casually. These include questions
of rate of convergence, computational complexity, stopping criteria, behavior
of the algorithms in inconsistent cases, and so on.
Regarding prerequisites, we assume that the reader has been systematically
exposed to differential and integral calculus, linear algebra, convex
analysis and optimization theory. Sections 10.2, 10.3 and Chapter 13 assume
familiarity with notions from probability theory.
Finally, in spite of the large bibliography included at the end of the book,
we might have missed relevant references or erred in crediting work done
by others. We will be grateful to readers who bring such omissions to our
attention so that we can correct them in the future.
Organization of the Book
The material of this book is organized in three parts. First, Chapter 1
introduces the fundamental topics on parallel computing.
Part I: Theory, develops the theory of generalized distances and generalized
projections (Chapter 2) and the theory for their use in solving linear
programming problems via proximal minimization (Chapter 3). The theory
of penalty and barrier methods and augmented Lagrangians is developed
in Chapter 4. This material provides the theoretical foundation upon which
the algorithms in Chapters 5 to 8 are developed.
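As a small illustration of the Chapter 3 material (a sketch in common notation, not the book's full statement): a proximal minimization step with a D-function replaces the problem of minimizing an objective f by a sequence of regularized subproblems of the form

    x^{k+1} = \arg\min_x \{ f(x) + \lambda_k D_h(x, x^k) \}, \qquad \lambda_k > 0,

where D_h is a generalized distance of the kind constructed in Chapter 2. Suitable choices of h make each subproblem easy to solve and lead, in particular, to methods for linear programming.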
Part II: Algorithms, develops iterative projection algorithms, model decomposition
algorithms, and interior point algorithms. Chapter 5 discusses
iterative algorithms for the solution of convex feasibility problems, using
the theory of generalized projections. Similarly, Chapter 6 uses the theory
of generalized distances and generalized projections to develop algorithms
for linearly constrained optimization problems. Chapter 7 develops model
decomposition algorithms, based on the theory of penalty methods and
augmented Lagrangians. Chapter 8 introduces interior point algorithms for
linear and quadratic programming, and explains ways in which the structure
of some large-scale optimization problems can be exploited by these
algorithms for parallel computations.

[Fig. 0.1: Organization of the book.]
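To convey the flavor of the algorithms of Chapter 5: for a convex feasibility problem, which asks for a point in the intersection C = C_1 \cap \cdots \cap C_m of closed convex sets, the prototypical relaxed projection step has the generic form

    x^{k+1} = x^k + \lambda_k ( P_{C_{i(k)}}(x^k) - x^k ), \qquad 0 < \lambda_k < 2,

where P_C denotes the projection onto C, the control sequence \{i(k)\} determines which set is projected onto at iteration k, and \lambda_k is a relaxation parameter. This generic form is given only for orientation; block-iterative variants, which project onto several sets simultaneously and combine the results, are a natural source of parallelism, and Chapter 5 develops the precise algorithms and their convergence theory.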
Part III: Applications, discusses applications from several diverse real-world
domains where the parallel algorithms are applicable. Each chapter
contains a description of the real-world application, develops one or more
mathematical models for each problem, and discusses solution algorithms
from one or more of the algorithm classes of Part II. Chapter 9 discusses
problems of matrix estimation. Chapter 10 discusses problems of image
reconstruction from projections. Chapter 11 discusses the problem of
radiation therapy treatment planning. Chapter 12 discusses problems in
transportation and the multicommodity flow problem. Chapter 13 discusses
problems of planning under uncertainty using stochastic programming and
robust optimization models.
Finally, two chapters are devoted to the parallel implementation and
testing of the algorithms. Implementations are discussed in Chapter 14.
The issue of implementations is an important one, but it is bound to be
linked closely with the computer architecture. To the extent possible our
discussion is linked to a whole class of machines and not just to a specific
hardware model. Chapter 15 summarizes numerical experiences with several
of the algorithms that demonstrate their effectiveness for solving large-scale
problems when implemented on parallel machines.
Figure 0.1 illustrates the interdependencies among the chapters. The
sequence of chapters indicated in this diagram must be followed in order to
appreciate fully a line of development from its theoretical foundations, to
the algorithms and their applications. While we emphasize the importance
of studying the continuum of theory-algorithms-applications, the chapters
of the book are written in such a way that they can be used as a reference,
without the need to study them sequentially. Readers who are interested
only in the applications and the mathematical models may read the relevant
chapters from Part III, without first reading the chapters on algorithms
from Part II. Of course, in order to fully appreciate the solution algorithms
for the models one has to read the earlier chapters as well. But even then,
a solution algorithm can be understood without referring to the relevant
chapters on theory from Part I, unless the reader wishes to understand the
proof of convergence as well. The book can, therefore, be used either as a
textbook or as a reference book.

Suggested course outlines


The book is organized in a way that allows it to be used as a text for
graduate courses in large-scale optimization, parallel computing or large-scale
mathematical modeling. There are three different avenues that an
instructor may follow in teaching this material, especially bearing in mind
that the whole book cannot be covered in the usual time frame of a
one-semester course. No matter which avenue an instructor may decide to
follow, Chapter 1 gives a general introduction to the material and should
be covered first.
One approach is to teach a course on theory and algorithms for constrained
optimization and feasibility problems. Such a course will cover the
theory part of the book, Chapters 2, 3 and 4, followed by the algorithms in
Chapters 5, 6, 7 and 8. Any one of Chapters 3, 7 or 8 could be omitted
without loss of continuity, but a balanced treatment of different families of
optimization algorithms should include Chapters 6, 7 and 8. This course
focuses on the theoretical aspects of parallel optimization, and references
to parallel computations can be cursory.
A second approach is to teach a course on numerical methods for large-
scale structured optimization problems. Such a course will focus on the
algorithms part, without prior introduction to the theory that is essential
for establishing convergence. The emphasis is on developing the students'
understanding of the structure of an algorithm, taking for granted the theory
that provides the foundation for its correctness. Such a course will cover
the material from Chapters 6, 7 and 8. These chapters present general
algorithms. Selected sections from Chapters 9, 10, 12 and 13 illustrate the
development of algorithms for specific problems, starting from the general
algorithms. In this course the exploitation of special structure is a key
issue, and references to parallel computing become crucial. This course
could also discuss the implementation of algorithms on parallel machines,
with coverage of the material in Chapter 14.
Yet a third approach is to focus on applications of optimization, and
present, in a cookbook fashion, implementable algorithms for the solution
of real-world instances of large-scale problems. Such a course will teach
material from the chapters on applications, and refer to the corresponding
chapters in Part II where specific algorithms have been developed for the
applications at hand. This course will start with the iterative optimization
algorithms of Chapter 6 and move on to the applications in Chapters 9, 10, 12
and 13. As in the previous course, the exploitation of special structure is a
key issue and references to parallel computing become crucial. This course
could also discuss the implementation of algorithms on parallel machines as
given in Chapter 14.
The material of the book can also be used for a course on parallel computing.
Several of the algorithms are simple enough so that students with
little background in optimization can readily understand them, and such
algorithms can be the focus for implementation exercises. Furthermore, the
structures of the underlying models vary from the very simple dense matrix
(e.g., the dense transportation problems of Sections 12.2.1 and 12.4.1) to
sparse and structured matrices and graph problems (e.g., the multicommodity
transportation problems and the stochastic networks of Sections 12.2.2,
12.4.3, and 13.8, respectively). Hence, the material can be used to introduce
students to the art of implementing algorithms on parallel machines.
Such a course will provide some motivation by introducing applications
from Chapters 9, 10, 12 and 13. For each application an algorithm can
be introduced (specific implementable algorithms are found in the applications
chapters), and references made to the implementation techniques of
Chapter 14. The course should follow the sequence of topics as discussed in
Chapter 14, but before each section of this chapter is presented in class, the
material from the corresponding application chapter should be introduced
first. Finally, Chapter 15 can be used as a reference for students who wish
to test the efficiency of their implementation, or the performance of their
parallel machine.
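For such tests, the customary yardsticks (treated in Section 1.4; the notation here is the common one and is given only as a sketch) are the speedup and the efficiency of a parallel algorithm on p processors,

    S_p = \frac{T_1}{T_p}, \qquad E_p = \frac{S_p}{p},

where T_1 is the solution time of the best serial implementation and T_p is the solution time of the parallel implementation on p processors; an efficiency close to 1 indicates that the processors are well utilized.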

Acknowledgments
Part of the material presented in this book is based on our own published
work. We express our appreciation to our past and present collaborators
from whom we learned a great deal. Particular thanks for many useful
discussions and support, and for reading various parts of the book, are
extended to Marty Altschuler, Dimitri Bertsekas, Dan Butnariu, Charlie
Byrne, Alvaro De Pierro, Jitka Dupacova, Jonathan Eckstein, Tommy
Elfving, Gabor Herman, Dan Gordon, Alfredo Iusem, Elizabeth Jessup,
Arnold Lent, Robert Lewitt, Olvi Mangasarian, Jill Mesirov, Bob Meyer,
John Mulvey, Soren Nielsen, Mustafa Pinar, Simeon Reich, Uri Rothblum,
Michael Schneider, Jay Udupa, Paul Tseng and Dafeng Yang.
A draft version of the book was read by Dimitri Bertsekas, Jonathan
Eckstein, Tommy Elfving, Michael Ferris and Gabor Herman. We thank
them for their constructive comments. Any remaining errors or imperfections
are, of course, our sole responsibility.
Part of the work of Yair Censor in this area was done in collaboration
with the Medical Image Processing Group (MIPG) at the Department of
Radiology, University of Pennsylvania. The support and encouragement of
Gabor Herman for this continued collaboration is gratefully acknowledged.
The work of Stavros A. Zenios was done while he was on the faculty at
the Department of Operations and Information Management, The Wharton
School, University of Pennsylvania, and while on leave at the Operations
Research Center of the Sloan School, Massachusetts Institute of Technology,
and with Thinking Machines Corporation, Cambridge, MA. Substantial
parts of Stavros Zenios' work on this book were done during his visits
to the University of Bergamo, Italy, and the support of Marida Bertocchi
in making these visits possible is gratefully acknowledged. We express our
appreciation to these organizations as well as to our current institutions,
the University of Haifa, Haifa, Israel, and the University of Cyprus, Nicosia,
Cyprus, for creating an environment where international collaborations
can be fostered and where long-term undertakings, such as the writing of
this book, are encouraged and supported.
This text grew out of lecture notes that we prepared and delivered at the
19th Brazilian Mathematical Colloquium, which took place in Rio de Janeiro
in the summer of 1993. We thank the organizers of that Colloquium and
Professor Jacob Palis, the Director of the Instituto de Matematica Pura
e Aplicada (IMPA) where the Colloquium took place, for giving us the
opportunity to present our work at this forum, and thereby giving us the
impetus to complete the project of writing this book.
This work was supported by grants from the National Institutes of
Health (HL-28438), the Air Force Office of Scientific Research (AFOSR-91-0168)
and the National Science Foundation (CCR-9104042 and SES-9100216)
in the USA, and the National Research Council (CNR), Italy.
Computing resources were made available through the North-East Parallel
Architectures Center (NPAC) at Syracuse University, the Army High-Performance
Computing Research Center (AHPCRC) at the University
of Minnesota, Thinking Machines Corporation, Cray Research Inc., and
Argonne National Laboratory. We also thank Ms. Danielle Friedlander
and Ms. Giuseppina La Mantia for their work in processing parts of the
manuscript in LaTeX. Finally, we express our appreciation to Senior Editor
Bill Zobrist and Editorial Assistant Krysia Bebick from the Oxford
University Press office in New York for their help.

Haifa, Nicosia and Philadelphia Yair Censor


December 1996 Stavros A. Zenios
Contents
Foreword, by George B. Dantzig vi
Preface vii
Glossary of Symbols xxii
1 Introduction 2
1.1 Parallel Computers 4
1.1.1 Taxonomy of parallel architectures 5
1.1.2 Unifying concepts of parallel computing 7
1.1.3 Control and data parallelism 10
1.2 How Does Parallelism Affect Computing? 12
1.3 A Classification of Parallel Algorithms 14
1.3.1 Parallelism due to algorithm structure: Iterative projection algorithms 15
1.3.2 Parallelism due to problem structure: Model decomposition and interior point algorithms 18
1.4 Measuring the Performance of Parallel Algorithms 21
1.5 Notes and References 24
PART I THEORY
2 Generalized Distances and Generalized Projections 29
2.1 Bregman Functions and Generalized Projections 30
2.2 Generalized Projections onto Hyperplanes 35
2.3 Bregman Functions on the Whole Space 39
2.4 Characterization of Generalized Projections 42
2.5 Csiszár φ-divergences 44
2.6 Notes and References 47
3 Proximal Minimization with D-Functions 49
3.1 The Proximal Minimization Algorithm 50
3.2 Convergence Analysis of the PMD Algorithm 51
3.3 Special Cases: Quadratic and Entropic PMD 57
3.4 Notes and References 58
4 Penalty Methods, Barrier Methods and Augmented Lagrangians 60
4.1 Penalty Methods 60
4.2 Barrier Methods 63
4.3 The Primal-Dual Algorithmic Scheme 65
4.4 Augmented Lagrangian Methods 69
4.5 Notes and References 74
PART II ALGORITHMS
5 Iterative Methods for Convex Feasibility Problems 79
5.1 Preliminaries: Control Sequences and Relaxation Parameters 80
5.2 The Method of Successive Orthogonal Projections 82
5.3 The Cyclic Subgradient Projections Method 83
5.4 The Relationship of CSP with Other Methods 86
5.4.1 The method of successive orthogonal projections 87
5.4.2 A remotest-set-controlled subgradient projections method 88
5.4.3 The scheme of Oettli 88
5.4.4 The linear feasibility problem: solving linear inequalities 89
5.4.5 Kaczmarz's algorithm for systems of linear equations and its nonlinear extension 90
5.5 The (   )-Algorithm 92
5.6 The Block-Iterative Projections Algorithm 100
5.6.1 Convergence of the BIP algorithm 102
5.7 The Block-Iterative (   )-Algorithm 106
5.8 The Method of Successive Generalized Projections 107
5.9 The Multiprojections Algorithm 110
5.9.1 The product space setup 110
5.9.2 Generalized projections in the product space 112
5.9.3 The simultaneous multiprojections algorithm and the split feasibility problem 114
5.10 Automatic Relaxation for Linear Interval Feasibility Problems 116
5.11 Notes and References 122
6 Iterative Algorithms for Linearly Constrained Optimization Problems 127
6.1 The Problem, Solution Concepts and the Special Environment 128
6.1.1 The problem 128
6.1.2 Approaches and solution concepts 128
6.1.3 The special computational environment 131
6.2 Row-Action Methods 131
6.3 Bregman's Algorithm for Inequality Constrained Problems 133
6.4 Algorithm for Interval-Constrained Problems 142
6.5 Row-Action Algorithms for Norm Minimization 147
6.5.1 The algorithm of Kaczmarz 148
6.5.2 The algorithm of Hildreth 149
6.5.3 ART4 - An algorithm for norm-minimization over linear intervals 150
6.6 Row-Action Algorithms for Shannon's Entropy Optimization 153
6.7 Block-Iterative MART Algorithm 155
6.8 Underrelaxation Parameters and Extension of the Family of Bregman Functions 160
6.9 The Hybrid Algorithm: A Computational Simplification 172
6.9.1 Hybrid algorithms for Shannon's entropy 177
6.9.2 Algorithms for the Burg entropy function 179
6.9.3 Renyi's entropy function 185
6.10 Notes and References 187
7 Model Decomposition Algorithms 191
7.1 General Framework of Model Decompositions 192
7.1.1 Problem modifiers 193
7.1.2 Solution algorithms 196
7.2 The Linear-Quadratic Penalty (LQP) Algorithm 202
7.2.1 Analysis of the ε-smoothed linear-quadratic penalty function 205
7.2.2 ε-exactness properties of the LQP function 210
7.3 Notes and References 215
8 Decompositions in Interior Point Algorithms 218
8.1 The Primal-Dual Path Following Algorithm for Linear Programming 219
8.1.1 Choosing the step lengths 223
8.1.2 Choosing the barrier parameter 224
8.2 The Primal-Dual Path Following Algorithm for Quadratic Programming 224
8.3 Parallel Matrix Factorization Procedures for the Interior Point Algorithm 228
8.3.1 The matrix factorization procedure for the dual step calculation 229
8.4 Notes and References 234
PART III APPLICATIONS
9 Matrix Estimation Problems 239
9.1 Applications of Matrix Balancing 240
9.2 Mathematical Models for Matrix Balancing 246
9.2.1 Matrix estimation formulations 246
9.2.2 Entropy optimization models for matrix balancing 250
9.3 Iterative Algorithms for Matrix Balancing 252
9.3.1 The range-RAS algorithm (RRAS) 253
9.3.2 The RAS scaling algorithm 257
9.3.3 The range-DSS algorithm (RDSS) 258
9.3.4 The diagonal similarity scaling (DSS) algorithm 261
9.4 Notes and References 262
10 Image Reconstruction from Projections 265
10.1 Transform Methods and the Fully Discretized Model 267
10.2 A Fully Discretized Model for Positron Emission Tomography 276
10.2.1 The Expectation-Maximization algorithm 279
10.3 A Justification for Entropy Maximization in Image Reconstruction 280
10.4 Algebraic Reconstruction Technique (ART) for Systems of Equations 284
10.5 Iterative Data Refinement in Image Reconstruction 286
10.5.1 The fundamentals of iterative data refinement 288
10.5.2 Applications in medical imaging 296
10.6 On the Selective Use of Iterative Algorithms for Inversion Problems in Image Reconstruction 302
10.7 Notes and References 305
11 The Inverse Problem in Radiation Therapy Treatment Planning 309
11.1 Problem Definition and the Continuous Model 311
11.1.1 The continuous forward problem 312
11.1.2 The continuous inverse problem 314
11.2 Discretization of the Feasibility Problem 315
11.3 Computational Inversion of the Data 321
11.4 Consequences and Limitations 322
11.5 Experimental Results 323
11.6 Combination of Plans in Radiotherapy 330
11.6.1 Basic definitions and mathematical modeling 331
11.6.2 The feasible case 333
11.6.3 The infeasible case 336
11.7 Notes and References 341
12 Multicommodity Network Flow Problems 344
12.1 Preliminaries 345
12.2 Problem Formulations 346
12.2.1 Transportation problems 347
12.2.2 Multicommodity network flow problems 349
12.3 Sample Applications 352
12.3.1 Example 1: Covering positions in stock options 352
12.3.2 Example 2: Air-traffic control 353
12.3.3 Example 3: Routing of traffic 354
12.4 Iterative Algorithms for Multicommodity Network Flow Problems 355
12.4.1 Row-action algorithm for quadratic transportation problems 355
12.4.2 Extensions to generalized networks 359
12.4.3 Row-action algorithm for quadratic multicommodity transportation problems 365
12.5 A Model Decomposition Algorithm for Multicommodity Network Flow Problems 368
12.5.1 The linear-quadratic penalty (LQP) algorithm 369
12.6 Notes and References 371
13 Planning Under Uncertainty 375
13.1 Preliminaries 376
13.2 The Newsboy Problem 377
13.3 Stochastic Programming Problems 378
13.3.1 Anticipative models 379
13.3.2 Adaptive models 380
13.3.3 Recourse models 381


13.4 Robust Optimization Problems 385
13.5 Applications 388
13.5.1 Robust optimization for the diet problem 389
13.5.2 Robust optimization for planning capacity expansion 391
13.5.3 Robust optimization for matrix balancing 396
13.6 Stochastic Programming for Portfolio Management 400
13.6.1 Notation 401
13.6.2 Model formulation 403
13.7 Stochastic Network Models 405
13.7.1 Split variable formulation of stochastic network models 407
13.7.2 Algebraic representation of the stochastic network problem 409
13.8 Iterative Algorithm for Stochastic Network Optimization 410
13.9 Notes and References 418
14 Decompositions for Parallel Computing 422
14.1 Vector-Random Access Machine (V-RAM) 423
14.1.1 Parallel prefix operations 424
14.2 Mapping Data to Processors 425
14.2.1 Mapping a dense matrix 425
14.2.2 Mapping a sparse matrix 426
14.3 Parallel Computing for Matrix Balancing 430
14.3.1 Data parallel computing with RAS 430
14.3.2 Control parallel computing with RAS 432
14.4 Parallel Computing for Image Reconstruction 433
14.4.1 Parallelism within a block 435
14.4.2 Parallelism with independent blocks 436
14.4.3 Parallelism between views 437
14.5 Parallel Computing for Network-structured Problems 439
14.5.1 Solving dense transportation problems 439
14.5.2 Solving sparse transportation problems 440
14.5.3 Solving sparse transshipment graphs 443
14.5.4 Solving network structured problems 443
14.6 Parallel Computing with Interior Point Algorithms 443
14.6.1 The communication schemes on a hypercube 444
14.6.2 The parallel implementation on a hypercube 446
14.6.3 An alternative parallel implementation 447
14.7 Notes and References 448


15 Numerical Investigations 451
15.1 Reporting Computational Experiments on Parallel Machines 453
15.2 Matrix Balancing 454
15.2.1 Data parallel implementations 455
15.2.2 Control parallel implementations 456
15.3 Image Reconstruction 458
15.4 Multicommodity Network Flows 461
15.4.1 Row-action algorithm for transportation problems 461
15.4.2 Row-action algorithm for multicommodity transportation problems 463
15.4.3 Linear-quadratic penalty (LQP) algorithm for multicommodity network flow problems 464
15.5 Planning Under Uncertainty 466
15.5.1 Interior point algorithm 468
15.5.2 Row-action algorithm for nonlinear stochastic networks 471
15.6 Proximal Minimization with D-functions 475
15.6.1 Solving linear network problems 476
15.6.2 Solving linear stochastic network problems 477
15.7 Description of Parallel Machines 483
15.7.1 Alliant FX/8 483
15.7.2 Connection Machine CM-2 483
15.7.3 Connection Machine CM-5 483
15.7.4 CRAY X-MP and Y-MP 484
15.7.5 Intel iPSC/860 484
15.8 Notes and References 484
Bibliography 487