Bundle Adjustment - A Modern Synthesis
Bill Triggs, Philip McLauchlan, Richard Hartley, Andrew Fitzgibbon
Introduction
This paper is a survey of the theory and methods of bundle adjustment aimed at the computer
vision community, and more especially at potential implementors who already know a little
about bundle methods. Most of the results appeared long ago in the photogrammetry and
geodesy literatures, but many seem to be little known in vision, where they are gradually
being reinvented. By providing an accessible modern synthesis, we hope to forestall some
of this duplication of effort, correct some common misconceptions, and speed progress in
visual reconstruction by promoting interaction between the vision and photogrammetry
communities.
Bundle adjustment is the problem of refining a visual reconstruction to produce jointly
optimal 3D structure and viewing parameter (camera pose and/or calibration) estimates.
Optimal means that the parameter estimates are found by minimizing some cost function
that quantifies the model fitting error, and jointly that the solution is simultaneously optimal
with respect to both structure and camera variations. The name refers to the bundles
This work was supported in part by the European Commission Esprit LTR project Cumuli (B. Triggs), the UK EPSRC project GR/L34099 (P. McLauchlan), and the Royal Society
(A. Fitzgibbon). We would like to thank A. Zisserman, A. Grün and W. Förstner for valuable
comments and references.
B. Triggs, A. Zisserman, R. Szeliski (Eds.): Vision Algorithms'99, LNCS 1883, pp. 298-372, 2000.
© Springer-Verlag Berlin Heidelberg 2000
of light rays leaving each 3D feature and converging on each camera centre, which are
adjusted optimally with respect to both feature and camera positions. Equivalently
(unlike independent model methods, which merge partial reconstructions without updating
their internal structure), all of the structure and camera parameters are adjusted together
in one bundle.
Bundle adjustment is really just a large sparse geometric parameter estimation problem,
the parameters being the combined 3D feature coordinates, camera poses and calibrations.
Almost everything that we will say can be applied to many similar estimation problems in
vision, photogrammetry, industrial metrology, surveying and geodesy. Adjustment computations are a major common theme throughout the measurement sciences, and once the
basic theory and methods are understood, they are easy to adapt to a wide variety of problems. Adaptation is largely a matter of choosing a numerical optimization scheme that
exploits the problem structure and sparsity. We will consider several such schemes below
for bundle adjustment.
Classically, bundle adjustment and similar adjustment computations are formulated
as nonlinear least squares problems [19, 46, 100, 21, 22, 69, 5, 73, 109]. The cost function
is assumed to be quadratic in the feature reprojection errors, and robustness is provided
by explicit outlier screening. Although it is already very flexible, this model is not really
general enough. Modern systems often use non-quadratic M-estimator-like distributional
models to handle outliers more integrally, and many include additional penalties related to
overfitting, model selection and system performance (priors, MDL). For this reason, we
will not assume a least squares / quadratic cost model. Instead, the cost will be modelled
as a sum of opaque contributions from the independent information sources (individual
observations, prior distributions, overfitting penalties . . . ). The functional forms of these
contributions and their dependence on fixed quantities such as observations will usually be
left implicit. This allows many different types of robust and non-robust cost contributions to
be incorporated, without unduly cluttering the notation or hiding essential model structure.
It fits well with modern sparse optimization methods (cost contributions are usually sparse
functions of the parameters) and object-centred software organization, and it avoids many
tedious displays of chain-rule results. Implementors are assumed to be capable of choosing
appropriate functions and calculating derivatives themselves.
One aim of this paper is to correct a number of misconceptions that seem to be common
in the vision literature:
Optimization / bundle adjustment is slow: Such statements often appear in papers
introducing yet another heuristic Structure from Motion (SFM) iteration. The claimed
slowness is almost always due to the unthinking use of a general-purpose optimization routine that completely ignores the problem structure and sparseness. Real bundle
routines are much more efficient than this, and usually considerably more efficient and
flexible than the newly suggested method (§6, §7). That is why bundle adjustment remains the dominant structure refinement technique for real applications, after 40 years
of research.
Only linear algebra is required: This is a recent variant of the above, presumably
meant to imply that the new technique is especially simple. Virtually all iterative refinement techniques use only linear algebra, and bundle adjustment is simpler than many
in that it only solves linear systems: it makes no use of eigen-decomposition or SVD,
which are themselves complex iterative methods.
B. Triggs et al.
Any sequence can be used: Many vision workers seem to be very resistant to the idea
that reconstruction problems should be planned in advance (§11), and results checked
afterwards to verify their reliability (§10). System builders should at least be aware of the
basic techniques for this, even if application constraints make it difficult to use them.
The extraordinary extent to which weak geometry and lack of redundancy can mask
gross errors is too seldom appreciated, cf. [34, 50, 30, 33].
Point P is reconstructed accurately: In reconstruction, just as there are no absolute
references for position, there are none for uncertainty. The 3D coordinate frame is
itself uncertain, as it can only be located relative to uncertain reconstructed features or
cameras. All other feature and camera uncertainties are expressed relative to the frame
and inherit its uncertainty, so statements about them are meaningless until the frame
and its uncertainty are specified. Covariances can look completely different in different
frames, particularly in object-centred versus camera-centred ones. See §9.
There is a tendency in vision to develop a profusion of ad hoc adjustment iterations. Why
should you use bundle adjustment rather than one of these methods?
Flexibility: Bundle adjustment gracefully handles a very wide variety of different 3D
feature and camera types (points, lines, curves, surfaces, exotic cameras), scene types
(including dynamic and articulated models, scene constraints), information sources (2D
features, intensities, 3D information, priors) and error models (including robust ones).
It has no problems with missing data.
Accuracy: Bundle adjustment gives precise and easily interpreted results because it uses
accurate statistical error models and supports a sound, well-developed quality control
methodology.
Efficiency: Mature bundle algorithms are comparatively efficient even on very large
problems. They use economical and rapidly convergent numerical methods and make
near-optimal use of problem sparseness.
In general, as computer vision reconstruction technology matures, we expect that bundle
adjustment will predominate over alternative adjustment methods in much the same way as
it has in photogrammetry. We see this as an inevitable consequence of a greater appreciation
of optimization (notably, more effective use of problem structure and sparseness), and of
systems issues such as quality control and network design.
Coverage: We will touch on a good many aspects of bundle methods. We start by considering the camera projection model and the parametrization of the bundle problem (§2), and
the choice of error metric or cost function (§3). §4 gives a rapid sketch of the optimization
theory we will use. §5 discusses the network structure (parameter interactions and characteristic sparseness) of the bundle problem. The following three sections consider three
types of implementation strategies for adjustment computations: §6 covers second order
Newton-like methods, which are still the most often used adjustment algorithms; §7 covers
methods with only first order convergence (most of the ad hoc methods are in this class);
and §8 discusses solution updating strategies and recursive filtering bundle methods. §9
returns to the theoretical issue of gauge freedom (datum deficiency), including the theory
of inner constraints. §10 goes into some detail on quality control methods for monitoring
the accuracy and reliability of the parameter estimates. §11 gives some brief hints on network design, i.e. how to place your shots to ensure accurate, reliable reconstruction. §12
completes the body of the paper by summarizing the main conclusions and giving some
provisional recommendations for methods. There are also several appendices. §A gives a
brief historical overview of the development of bundle methods, with literature references.
§B gives some technical details of matrix factorization, updating and covariance calculation methods. §C gives some hints on designing bundle software, and pointers to useful
resources on the Internet. The paper ends with a glossary and references.
General references: Cultural differences sometimes make it difficult for vision workers
to read the photogrammetry literature. The collection edited by Atkinson [5] and the
manual by Karara [69] are both relatively accessible introductions to close-range (rather
than aerial) photogrammetry. Other accessible tutorial papers include [46, 21, 22]. Kraus
[73] is probably the most widely used photogrammetry textbook. Brown's early survey
of bundle methods [19] is well worth reading. The often-cited manual edited by Slama
[100] is now quite dated, although its presentation of bundle adjustment is still relevant.
Wolf & Ghilani [109] is a text devoted to adjustment computations, with an emphasis
on surveying. Hartley & Zisserman [62] is an excellent recent textbook covering vision
geometry from a computer vision viewpoint. For nonlinear optimization, Fletcher [29]
and Gill et al [42] are the traditional texts, and Nocedal & Wright [93] is a good modern
introduction. For linear least squares, Björck [11] is superlative, and Lawson & Hanson is
introduction. For linear least squares, Bjorck [11] is superlative, and Lawson & Hanson is
a good older text. For more general numerical linear algebra, Golub & Van Loan [44] is
the standard. Duff et al [26] and George & Liu [40] are the standard texts on sparse matrix
techniques. We will not discuss initialization methods for bundle adjustment in detail, but
appropriate reconstruction methods are plentiful and well-known in the vision community.
See, e.g., [62] for references.
Notation: The structure, cameras, etc., being estimated will be parametrized by a single
large state vector x. In general the state belongs to a nonlinear manifold, but we linearize
this locally and work with small linear state displacements denoted δx. Observations (e.g.
measured image features) are denoted z̄. The corresponding predicted values at parameter
value x are denoted z = z(x), with residual prediction error ∆z(x) ≡ z̄ − z(x). However,
observations and prediction errors usually only appear implicitly, through their influence
on the cost function f(x) = f(∆z(x)). The cost function's gradient is g ≡ df/dx, its
Hessian is H ≡ d²f/dx², and the observation-state Jacobian is J ≡ dz/dx. The dimensions
of x, z are n_x, n_z.
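As a concrete illustration of this notation, the gradient g, Hessian H and their values at a cost minimum can be checked numerically by finite differences. The toy cost below is purely illustrative (a stand-in for a real bundle cost, not anything from the paper); a minimal sketch:

```python
import numpy as np

def cost(x):
    # Illustrative stand-in for the total cost f(x); not a real bundle cost.
    return 0.5 * (x[0] - 1.0) ** 2 + 0.5 * (x[0] * x[1]) ** 2

def gradient(f, x, h=1e-6):
    # Central-difference estimate of the gradient g = df/dx.
    g = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2 * h)
    return g

def hessian(f, x, h=1e-4):
    # Central-difference estimate of the Hessian H = d^2 f / dx^2.
    n = len(x)
    H = np.zeros((n, n))
    for i in range(n):
        e = np.zeros(n)
        e[i] = h
        H[:, i] = (gradient(f, x + e, h) - gradient(f, x - e, h)) / (2 * h)
    return 0.5 * (H + H.T)  # symmetrize against round-off
```

At the minimizer x = (1, 0) the gradient vanishes and the Hessian is the identity, which is easy to verify by hand.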
2 Projection Model and Problem Parametrization

2.1 The Projection Model
We begin the development of bundle adjustment by considering the basic image projection
model and the issue of problem parametrization. Visual reconstruction attempts to recover a
model of a 3D scene from multiple images. As part of this, it usually also recovers the poses
(positions and orientations) of the cameras that took the images, and information about their
internal parameters. A simple scene model might be a collection of isolated 3D features,
e.g., points, lines, planes, curves, or surface patches. However, far more complicated scene
models are possible, involving, e.g., complex objects linked by constraints or articulations,
photometry as well as geometry, dynamics, etc. One of the great strengths of adjustment
computations (and one reason for thinking that they have a considerable future in vision)
is their ability to take such complex and heterogeneous models in their stride. Almost
any predictive parametric model can be handled, i.e. any model that predicts the values
of some known measurements or descriptors on the basis of some continuous parametric
representation of the world, which is to be estimated from the measurements.
Similarly, many possible camera models exist. Perspective projection is the standard,
but the affine and orthographic projections are sometimes useful for distant cameras, and
more exotic models such as push-broom and rational polynomial cameras are needed for
certain applications [56, 63]. In addition to pose (position and orientation), and simple
internal parameters such as focal length and principal point, real cameras also require various types of additional parameters to model internal aberrations such as radial distortion
[17-19, 100, 69, 5].
For simplicity, suppose that the scene is modelled by individual static 3D features X_p,
p = 1 . . . n, imaged in m shots with camera pose and internal calibration parameters P_i,
i = 1 . . . m. There may also be further calibration parameters C_c, c = 1 . . . k, constant
across several images (e.g., depending on which of several cameras was used). We are
given uncertain measurements x̄_ip of some subset of the possible image features x_ip (the
true image of feature X_p in image i). For each observation x̄_ip, we assume that we have
a predictive model x_ip = x(C_c, P_i, X_p) based on the parameters, that can be used to
derive a feature prediction error:

    ∆x_ip(C_c, P_i, X_p) ≡ x̄_ip − x(C_c, P_i, X_p)    (1)
In the case of image observations the predictive model is image projection, but other
observation types such as 3D measurements can also be included.
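To make eq. (1) concrete for the common case of image observations, here is a minimal pinhole-projection sketch. The calibration matrix K, pose (R, t) and the particular numbers are illustrative stand-ins for the abstract parameters (C_c, P_i, X_p), not values from the paper:

```python
import numpy as np

def project(K, R, t, X):
    # Predictive model x(C, P, X): pinhole projection of a 3D point X
    # through a camera with internal calibration K and pose (R, t).
    u = K @ (R @ X + t)   # homogeneous image point
    return u[:2] / u[2]   # dehomogenize to pixel coordinates

def prediction_error(x_obs, K, R, t, X):
    # Feature prediction error of eq. (1): observed minus predicted.
    return x_obs - project(K, R, t, X)

# A point on the optical axis of an identity-pose camera lands on the
# principal point (320, 240) for this (assumed) calibration.
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
x_pred = project(K, np.eye(3), np.zeros(3), np.array([0.0, 0.0, 2.0]))
```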
To estimate the unknown 3D feature and camera parameters from the observations,
and hence reconstruct the scene, we minimize some measure (discussed in §3) of their total
prediction error. Bundle adjustment is the model refinement part of this, starting from given
initial parameter estimates (e.g., from some approximate reconstruction method). Hence,
it is essentially a matter of optimizing a complicated nonlinear cost function (the total
prediction error) over a large nonlinear parameter space (the scene and camera parameters).
We will not go into the analytical forms of the various possible feature and image
projection models, as these do not affect the general structure of the adjustment network,
and only tend to obscure its central simplicity. We simply stress that the bundle framework
is flexible enough to handle almost any desired model. Indeed, there are so many different
combinations of features, image projections and measurements, that it is best to regard
them as black boxes, capable of giving measurement predictions based on their current
parameters. (For optimization, first, and possibly second, derivatives with respect to the
parameters are also needed.)
For much of the paper we will take quite an abstract view of this situation, collecting the
scene and camera parameters to be estimated into a large state vector x, and representing
the cost (total fitting error) as an abstract function f(x). The cost is really a function of
the feature prediction errors ∆x_ip = x̄_ip − x(C_c, P_i, X_p). But as the observations x̄_ip are
constants during an adjustment calculation, we leave the cost's dependence on them and
on the projection model x(·) implicit, and display only its dependence on the parameters
x actually being adjusted.
2.2
Bundle Parametrization
Projectively, infinity is just like any other place. Affine parametrization (X Y Z 1)ᵀ is
acceptable for points near the origin with close-range convergent camera geometries, but
it is disastrous for distant ones because it artificially cuts away half of the natural parameter
space, and hides the fact by sending the resulting edge to infinite parameter values. Instead,
you should use a homogeneous parametrization (X Y Z W)ᵀ for distant points, e.g. with
spherical normalization ∑_i X_i² = 1.
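A minimal sketch of the difference, with assumed coordinates: spherical normalization keeps ordinary and infinite points on the same numerical footing, whereas the affine form (X/W, Y/W, Z/W) blows up as W → 0.

```python
import numpy as np

def spherical(Xh):
    # Homogeneous point (X Y Z W) normalized so that sum_i X_i^2 = 1.
    Xh = np.asarray(Xh, dtype=float)
    return Xh / np.linalg.norm(Xh)

near = spherical([1.0, 2.0, 2.0, 1.0])          # ordinary point
at_infinity = spherical([1.0, 0.0, 0.0, 0.0])   # direction: W = 0 is unproblematic
```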
Rotations: Similarly, experience suggests that quasi-global 3 parameter rotation parametrizations such as Euler angles cause numerical problems unless one can be certain to
avoid their singularities and regions of uneven coverage. Rotations should be parametrized
using either quaternions subject to ‖q‖² = 1, or local perturbations R → δR · R or R → R · δR of
an existing rotation R, where δR can be any well-behaved 3 parameter small rotation
approximation, e.g. δR = (I + [δr]×), the Rodrigues formula, local Euler angles, etc.
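The exact Rodrigues formula and the first-order approximation I + [δr]× can be compared directly; a sketch:

```python
import numpy as np

def skew(r):
    # Cross-product matrix [r]_x.
    return np.array([[0.0, -r[2], r[1]],
                     [r[2], 0.0, -r[0]],
                     [-r[1], r[0], 0.0]])

def rodrigues(r):
    # Exact rotation for an axis-angle vector r (Rodrigues formula):
    # R = I + sin(t)/t [r]_x + (1 - cos(t))/t^2 [r]_x^2,  t = ||r||.
    r = np.asarray(r, dtype=float)
    t = np.linalg.norm(r)
    K = skew(r)
    if t < 1e-12:
        return np.eye(3) + K  # first-order limit
    return np.eye(3) + (np.sin(t) / t) * K + ((1 - np.cos(t)) / t**2) * (K @ K)
```

For a small rotation the linear approximation I + [δr]× is close, but only the exact formula returns a strictly orthonormal matrix.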
State updates: Just as state vectors x represent points in some nonlinear space, state
updates x → x + δx represent displacements in this nonlinear space that often can not
be represented exactly by vector addition. Nevertheless, we assume that we can locally
linearize the state manifold, locally resolving any internal constraints and freedoms that
it may be subject to, to produce an unconstrained vector δx parametrizing the possible
local state displacements. We can then, e.g., use Taylor expansion in δx to form a local
cost model f(x + δx) ≈ f(x) + (df/dx) δx + ½ δxᵀ (d²f/dx²) δx, from which we can estimate the
state update δx that optimizes this model (§4). The displacement δx need not have the
same structure or representation as x (indeed, if a well-behaved local parametrization is
used to represent δx, it generally will not have), but we must at least be able to update
the state with the displacement to produce a new state estimate. We write this operation
as x → x + δx, even though it may involve considerably more than vector addition. For
example, apart from the change of representation, an updated quaternion q → q + dq will
need to have its normalization ‖q‖² = 1 corrected, and a small rotation update of the form
R → R(I + [δr]×) will not in general give an exact rotation matrix.
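A minimal sketch of such a nonlinear update for a quaternion state, with the normalization ‖q‖² = 1 restored after each step. The (w, x, y, z) storage order, the Hamilton product convention and the half-angle factor are assumptions of this sketch:

```python
import numpy as np

def quat_mult(q, p):
    # Hamilton product of quaternions stored as (w, x, y, z).
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = p
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

def update(q, dr):
    # State update q -> q "+" dr: compose with the first-order small
    # rotation (1, dr/2), then renormalize so ||q||^2 = 1 holds exactly.
    dq = np.concatenate([[1.0], 0.5 * np.asarray(dr, dtype=float)])
    q_new = quat_mult(q, dq)
    return q_new / np.linalg.norm(q_new)
```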
3 Error Modelling
We now turn to the choice of the cost function f(x), which quantifies the total prediction
(image reprojection) error of the model parametrized by the combined scene and camera
parameters x. Our main conclusion will be that robust statistically-based error metrics
based on total (inlier + outlier) log likelihoods should be used, to correctly allow for the
presence of outliers. We will argue this at some length as it seems to be poorly understood.
The traditional treatments of adjustment methods consider only least squares (albeit with
data trimming for robustness), and most discussions of robust statistics give the impression
that the choice of robustifier or M-estimator is wholly a matter of personal whim rather
than data statistics.
Bundle adjustment is essentially a parameter estimation problem. Any parameter estimation paradigm could be used, but we will consider only optimal point estimators,
whose output is by definition the single parameter vector that minimizes a predefined cost
function designed to measure how well the model fits the observations and background
knowledge. This framework covers many practical estimators including maximum likelihood (ML) and maximum a posteriori (MAP), but not explicit Bayesian model averaging.
Robustification, regularization and model selection terms are easily incorporated in the
cost.
A typical ML cost function would be the summed negative log likelihoods of the
prediction errors of all the observed image features. For Gaussian error distributions,
this reduces to the sum of squared covariance-weighted prediction errors (§3.2). A MAP
estimator would typically add cost terms giving certain structure or camera calibration
parameters a bias towards their expected values.
The cost function is also a tool for statistical interpretation. To the extent that lower
costs are uniformly better, it provides a natural model preference ordering, so that cost
iso-surfaces above the minimum define natural confidence regions. Locally, these regions
are nested ellipsoids centred on the cost minimum, with size and shape characterized by
the dispersion matrix (the inverse of the cost function Hessian H = d²f/dx² at the minimum).
Also, the residual cost at the minimum can be used as a test statistic for model validity
(§10). E.g., for a negative log likelihood cost model with Gaussian error distributions,
twice the residual is a χ² variable.
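For a Gaussian cost these ellipsoidal confidence regions can be checked by simulation. The 2×2 covariance below is an arbitrary illustrative choice, and 5.991 is the 95% point of χ² with 2 degrees of freedom:

```python
import numpy as np

rng = np.random.default_rng(0)
Sigma = np.array([[2.0, 0.5], [0.5, 1.0]])  # assumed posterior covariance
H = np.linalg.inv(Sigma)                    # cost Hessian at the minimum

# For Gaussian errors, twice the cost 2 f(x) = (x - mu)^T H (x - mu) is a
# chi-squared variable; the ellipsoid {x : 2 f(x) <= 5.991} should contain
# about 95% of the probability mass (5.991 = 95% quantile of chi^2_2).
samples = rng.multivariate_normal(np.zeros(2), Sigma, size=20000)
two_f = np.einsum('ni,ij,nj->n', samples, H, samples)
coverage = np.mean(two_f <= 5.991)
```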
3.1

[Footnote 1:] Cost is additive, so as measurements of the same type are added, the entire cost surface grows in
direct proportion to the amount of data n_z. This means that the relative sizes of the cost and all of
Under mild regularity conditions on the observation distributions, the posterior distribution of the ML estimate converges asymptotically in probability to a Gaussian with
covariance equal to the dispersion matrix.
The ML estimate asymptotically has zero bias and the lowest variance that any unbiased
estimator can have. So in this sense, ML estimation is at least as good as any other
method.²
Non-asymptotically, the dispersion is not necessarily a good approximation for the
covariance of the ML estimator. The asymptotic limit is usually assumed to be valid
for well-designed highly-redundant photogrammetric measurement networks, but recent
sampling-based empirical studies of posterior likelihood surfaces [35, 80, 68] suggest that
the case is much less clear for small vision geometry problems and weaker networks. More
work is needed on this.
The effect of incorrect error models: It is clear that incorrect modelling of the observation
distributions is likely to disturb the ML estimate. Such mismodelling is to some extent
inevitable because error distributions stand for influences that we can not fully predict or
control. To understand the distortions that unrealistic error models can cause, first realize
that geometric fitting is really a special case of parametric probability density estimation.
For each set of parameter values, the geometric image projection model and the assumed
observation error models combine to predict a probability density for the observations.
Maximizing the likelihood corresponds to tting this predicted observation density to the
observed data. The geometry and camera model only enter indirectly, via their influence
on the predicted distributions.
Accurate noise modelling is just as critical to successful estimation as accurate geometric modelling. The most important mismodelling is failure to take account of the
possibility of outliers (aberrant data values, caused e.g., by blunders such as incorrect
feature correspondences). We stress that so long as the assumed error distributions model
the behaviour of all of the data used in the fit (including both inliers and outliers), the
above properties of ML estimation including asymptotic minimum variance remain valid
in the presence of outliers. In other words, ML estimation is naturally robust: there is no
[Footnote 1, continued:] its derivatives, and hence the size r of the region around the minimum over which the second
order Taylor terms dominate all higher order ones, remain roughly constant as n_z increases.
Within this region, the total cost is roughly quadratic, so if the cost function was taken to be the
posterior log likelihood, the posterior distribution is roughly Gaussian. However the curvature of
the quadratic (i.e. the inverse dispersion matrix) increases as data is added, so the posterior standard
deviation shrinks as O(σ/√(n_z − n_x)), where O(σ) characterizes the average standard deviation
from a single observation. For n_z − n_x ≫ (σ/r)², essentially the entire posterior probability
mass lies inside the quadratic region, so the posterior distribution converges asymptotically in
probability to a Gaussian. This happens at any proper isolated cost minimum at which second
order Taylor expansion is locally valid. The approximation gets better with more data (stronger
curvature) and smaller higher order Taylor terms.
[Footnote 2:] This result follows from the Cramér-Rao bound (e.g. [23]), which says that the covariance of any
unbiased estimator is bounded below by the Fisher information or mean curvature of the posterior
log likelihood surface: ⟨(x̂ − x̄)(x̂ − x̄)ᵀ⟩ ⪰ ⟨−d² log p/dx²⟩⁻¹, where p is the posterior probability, x the
parameters being estimated, x̂ the estimate given by any unbiased estimator, x̄ the true underlying
x value, and A ⪰ B denotes positive semidefiniteness of A − B. Asymptotically, the posterior
distribution becomes Gaussian and the Fisher information converges to the inverse dispersion (the
curvature of the posterior log likelihood surface at the cost minimum), so the ML estimate attains
the Cramér-Rao bound.
Fig. 2. Beware of treating any bell-shaped observation distribution as a Gaussian. Despite being
narrower in the peak and broader in the tails, the probability density function of a Cauchy distribution,
p(x) = 1/(π(1 + x²)), does not look so very different from that of a Gaussian (top left). But their
negative log likelihoods are very different (bottom left), and large deviations (outliers) are much
more probable for Cauchy variates than for Gaussian ones (right). In fact, the Cauchy distribution
has infinite covariance.
need to robustify it so long as realistic error distributions were used in the first place. A
distribution that models both inliers and outliers is called a total distribution. There is no
need to separate the two classes, as ML estimation does not care about the distinction. If the
total distribution happens to be an explicit mixture of an inlier and an outlier distribution
(e.g., a Gaussian with a locally uniform background of outliers), outliers can be labeled
after fitting using likelihood ratio tests, but this is in no way essential to the estimation
process.
It is also important to realize the extent to which superficially similar distributions can
differ from a Gaussian, or equivalently, how extraordinarily rapidly the tails of a Gaussian
distribution fall away compared to more realistic models of real observation errors. See
figure 2. In fact, unmodelled outliers typically have very severe effects on the fit. To see this,
suppose that the real observations are drawn from a fixed (but perhaps unknown) underlying
distribution p₀(z). The law of large numbers says that their empirical distributions (the observed distribution of each set of samples) converge asymptotically in probability to p₀(z).
So for each model, the negative log likelihood cost sum −∑ᵢ log p_model(z̄ᵢ|x) converges
to −n_z ∫ p₀(z) log(p_model(z|x)) dz. Up to a model-independent constant, this is n_z times
the relative entropy or Kullback-Leibler divergence ∫ p₀(z) log(p₀(z)/p_model(z|x)) dz
of the model distribution w.r.t. the true one p₀(z). Hence, even if the model family does
not include p₀, the ML estimate converges asymptotically to the model whose predicted
observation distribution has minimum relative entropy w.r.t. p₀. (See, e.g. [96, proposition
2.2]). It follows that ML estimates are typically very sensitive to unmodelled outliers, as
regions which are relatively probable under p0 but highly improbable under the model
make large contributions to the relative entropy. In contrast, allowing for outliers where
none actually occur causes relatively little distortion, as no region which is probable under
p₀ will have a large −log p_model.
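The tail behaviour shown in figure 2 is easy to quantify with closed-form tail probabilities; a sketch using only the standard library:

```python
import math

def gaussian_tail(t):
    # P(|X| > t) for a standard Gaussian, via the complementary error function.
    return math.erfc(t / math.sqrt(2.0))

def cauchy_tail(t):
    # P(|X| > t) for a standard Cauchy, p(x) = 1 / (pi (1 + x^2)).
    return 1.0 - (2.0 / math.pi) * math.atan(t)

# At 5 "sigma" the Cauchy puts vastly more mass in the tails.
ratio_at_5 = cauchy_tail(5.0) / gaussian_tail(5.0)
```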
In summary, if there is a possibility of outliers, non-robust distribution models such
as Gaussians should be replaced with more realistic long-tailed ones such as: mixtures of
a narrow inlier and a wide outlier density, Cauchy or α-densities, or densities defined
piecewise with a central peaked inlier region surrounded by a constant outlier region.³
We emphasize again that poor robustness is due entirely to unrealistic distributional assumptions: the maximum likelihood framework itself is naturally robust provided that the
total observation distribution including both inliers and outliers is modelled. In fact, real
observations can seldom be cleanly divided into inliers and outliers. There is a hard core
of outliers such as feature correspondence errors, but there is also a grey area of features
that for some reason (a specularity, a shadow, poor focus, motion blur . . . ) were not as
accurately located as other features, without clearly being outliers.
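One simple total distribution is a Gaussian inlier density mixed with a broad uniform outlier density. Its negative log likelihood grows quadratically for small residuals but saturates for large ones, so a single blunder cannot dominate the fit. The mixture weight and widths below are illustrative assumptions, not values from the paper:

```python
import math

def robust_nll(r, sigma=1.0, outlier_frac=0.1, outlier_width=100.0):
    # Negative log likelihood of one residual r under a "total"
    # inlier + outlier distribution: Gaussian inlier density mixed with
    # a broad uniform outlier density of height outlier_frac/outlier_width.
    inlier = (1.0 - outlier_frac) * math.exp(-0.5 * (r / sigma) ** 2) \
             / (sigma * math.sqrt(2.0 * math.pi))
    outlier = outlier_frac / outlier_width
    return -math.log(inlier + outlier)
```

Near r = 0 this behaves like the quadratic Gaussian cost; for |r| far into the tails it flattens out near −log(outlier_frac/outlier_width), whereas a pure Gaussian cost would keep growing as r²/2.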
3.2 Nonlinear Least Squares
One of the most basic parameter estimation methods is nonlinear least squares. Suppose
that we have vectors of observations zi predicted by a model zi = zi (x), where x is a
vector of model parameters. Then nonlinear least squares takes as estimates the parameter
values that minimize the weighted Sum of Squared Error (SSE) cost function:

    f(x) ≡ ½ ∑ᵢ ∆zᵢ(x)ᵀ Wᵢ ∆zᵢ(x)    (2)
Here, ∆zᵢ(x) ≡ z̄ᵢ − zᵢ(x) is the feature prediction error and Wᵢ is an arbitrary symmetric positive
definite (SPD) weight matrix. Modulo normalization terms independent of x, the weighted
SSE cost function coincides with the negative log likelihood for observations z̄ᵢ perturbed
by Gaussian noise of mean zero and covariance Wᵢ⁻¹. So for least squares to have a useful
statistical interpretation, the Wᵢ should be chosen to approximate the inverse measurement
covariance of zᵢ. Even for non-Gaussian noise with this mean and covariance, the Gauss-Markov theorem [37, 11] states that if the models zᵢ(x) are linear, least squares gives the
Best Linear Unbiased Estimator (BLUE), where best means minimum variance.⁴
Any weighted least squares model can be converted to an unweighted one (Wᵢ = 1)
by pre-multiplying zᵢ, z̄ᵢ, ∆zᵢ by any Lᵢᵀ satisfying Wᵢ = Lᵢ Lᵢᵀ. Such an Lᵢ can be calculated efficiently from Wᵢ or Wᵢ⁻¹ using Cholesky decomposition (§B.1). Lᵢᵀ ∆zᵢ
is called a standardized residual, and the resulting unweighted least squares problem
min_x ½ ∑ᵢ ‖Lᵢᵀ ∆zᵢ(x)‖² is said to be in standard form. One advantage of this is that optimization methods based on linear least squares solvers can be used in place of ones based
on linear (normal) equation solvers, which allows ill-conditioned problems to be handled
more stably (§B.2).
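The conversion to standard form can be sketched in a few lines; the weight matrix and residual below are arbitrary illustrative values:

```python
import numpy as np

def standardize(W, dz):
    # With W = L L^T (Cholesky, L lower triangular), the standardized
    # residual L^T dz satisfies ||L^T dz||^2 = dz^T W dz, turning
    # weighted least squares into unweighted least squares.
    L = np.linalg.cholesky(W)
    return L.T @ dz

W = np.array([[4.0, 1.0], [1.0, 3.0]])  # assumed SPD weight matrix
dz = np.array([0.5, -1.0])              # assumed residual
z_std = standardize(W, dz)
```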
Another peculiarity of the SSE cost function is its indifference to the natural boundaries between the observations. If observations zi from any sources are assembled into a
³ The latter case corresponds to a hard inlier / outlier decision rule: for any observation in the outlier
region, the density is constant so the observation has no influence at all on the fit. Similarly, the
mixture case corresponds to a softer inlier / outlier decision rule.
⁴ It may be possible (and even useful) to do better with either biased (towards the correct solution),
or nonlinear estimators.
compound observation vector z ≡ (z₁, . . . , z_k), and their weight matrices Wᵢ are assembled into a compound block diagonal weight matrix W ≡ diag(W₁, . . . , W_k), the weighted
squared error f(x) ≡ ½ ∆z(x)ᵀ W ∆z(x) is the same as the original SSE cost function
½ ∑ᵢ ∆zᵢ(x)ᵀ Wᵢ ∆zᵢ(x). The general quadratic form of the SSE cost is preserved under
such compounding, and also under arbitrary linear transformations of z that mix components from different observations. The only place that the underlying structure is visible
is in the block structure of W. Such invariances do not hold for essentially any other cost
function, but they simplify the formulation of least squares considerably.
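This invariance is easy to verify numerically; the residuals and weights below are arbitrary illustrative values:

```python
import numpy as np

def block_diag(*mats):
    # Assemble a compound block-diagonal weight matrix W = diag(W1, ..., Wk).
    n = sum(m.shape[0] for m in mats)
    out = np.zeros((n, n))
    i = 0
    for m in mats:
        k = m.shape[0]
        out[i:i+k, i:i+k] = m
        i += k
    return out

W1 = np.array([[2.0, 0.3], [0.3, 1.0]])
W2 = np.array([[5.0]])
dz1 = np.array([0.2, -0.4])
dz2 = np.array([1.5])

# Compound form: 1/2 dz^T W dz with dz = (dz1, dz2), W = diag(W1, W2),
# versus the sum of the per-observation weighted squared errors.
dz = np.concatenate([dz1, dz2])
W = block_diag(W1, W2)
f_compound = 0.5 * dz @ W @ dz
f_summed = 0.5 * (dz1 @ W1 @ dz1) + 0.5 * (dz2 @ W2 @ dz2)
```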
3.3 Robustified Least Squares
The main problem with least squares is its high sensitivity to outliers. This happens because
the Gaussian has extremely small tails compared to most real measurement error distributions. For robust estimates, we must choose a more realistic likelihood model (§3.1). The
exact functional form is less important than the general way in which the expected types
of outliers enter. A single blunder such as a correspondence error may affect one or a few
of the observations, but it will usually leave all of the others unchanged. This locality is
the whole basis of robustication. If we can decide which observations were affected, we
can down-weight or eliminate them and use the remaining observations for the parameter
estimates as usual. If all of the observations had been affected about equally (e.g. as by
an incorrect projection model), we might still know that something was wrong, but not be
able to x it by simple data cleaning.
We will adopt a single layer robustness model, in which the observations are partitioned into independent groups zi , each group being irreducible in the sense that it is
accepted, down-weighted or rejected as a whole, independently of all the other groups.
The partitions should reflect the types of blunders that occur. For example, if feature correspondence errors are the most common blunders, the two coordinates of a single image
point would naturally form a group as both would usually be invalidated by such a blunder,
while no other image point would be affected. Even if one of the coordinates appeared to
be correct, if the other were incorrect we would usually want to discard both for safety.
On the other hand, in stereo problems, the four coordinates of each pair of corresponding
image points might be a more natural grouping, as a point in one image is useless without
its correspondent in the other one.
Henceforth, when we say observation we mean irreducible group of observations
treated as a unit by the robustifying model. I.e., our observations need not be scalars, but
they must be units, probabilistically independent of one another irrespective of whether
they are inliers or outliers.
As usual, each independent observation zi contributes an independent term fi (x | zi ) to
the total cost function. This could have more or less any form, depending on the expected
total distribution of inliers and outliers for the observation. One very natural family are the
radial distributions, which have negative log likelihoods of the form:

    fi(x) ≡ (1/2) ρi( Δzi(x)^T Wi Δzi(x) )        (3)

Here, ρi(s) can be any increasing function with ρi(0) = 0 and (d/ds) ρi(0) = 1. (These guarantee that at Δzi = 0, fi vanishes and d²fi/dΔzi² = Wi.) Weighted SSE has ρi(s) = s, while more robust variants have sublinear ρi, often tending to a constant at infinity so that distant
outliers are entirely ignored. The dispersion matrix Wi^-1 determines the spatial spread of Δzi, and up to scale its covariance (if this is finite). The radial form is preserved under arbitrary affine transformations of Δzi, so within a group, all of the observations are on an equal
footing in the same sense as in least squares. However, non-Gaussian radial distributions
are almost never separable: the observations in zi can neither be split into independent
subgroups, nor combined into larger groups, without destroying the radial form. Radial
cost models do not have the remarkable isotropy of non-robust SSE, but this is exactly
what we wanted, as it ensures that all observations in a group will be either left alone, or
down-weighted together.
As an example of this, for image features polluted with occasional large outliers caused
by correspondence errors, we might model the error distribution as a Gaussian central peak
plus a uniform background of outliers.
This would give negative log likelihood contributions of the form f(x) = −log( exp(−(1/2) χ²ip) + ε ) instead of the non-robust weighted SSE model f(x) = (1/2) χ²ip, where χ²ip ≡ Δx^T_ip Wip Δx_ip is the squared weighted residual error (which is a χ² variable for a correct model and Gaussian error distribution), and ε parametrizes the frequency of outliers.
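For concreteness, this robustified cost is easy to evaluate and compare against the SSE model; a small sketch (numpy assumed, with an illustrative outlier frequency ε = 0.01):

```python
import numpy as np

def robust_cost(chi2, eps=0.01):
    """Negative log likelihood -log(exp(-chi2/2) + eps) for a Gaussian
    central peak plus a uniform outlier background of weight eps."""
    return -np.log(np.exp(-0.5 * chi2) + eps)

def sse_cost(chi2):
    """Non-robust weighted SSE cost chi2 / 2."""
    return 0.5 * chi2

# For small residuals the two costs nearly agree; for large ones the
# robust cost saturates near -log(eps), so distant outliers have bounded
# influence, while the SSE cost grows without limit.
for chi2 in (0.1, 1.0, 100.0):
    print(chi2, sse_cost(chi2), robust_cost(chi2))
```

The saturation level −log ε is exactly the point at which the uniform outlier background dominates the Gaussian peak.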
3.4 Intensity-Based Methods
The above models apply not only to geometric image features, but also to intensity-based
matching of image patches. In this case, the observables are image gray-scales or colors
I rather than feature coordinates u, and the error model is based on intensity residuals.
To get from a point projection model u = u(x) to an intensity based one, we simply compose with the assumed local intensity model I = I(u) (e.g. obtained from an image template or another image that we are matching against), premultiply point Jacobians by point-to-intensity Jacobians dI/du, etc. The full range of intensity models can be implemented within this framework: pure translation, affine, quadratic or homographic patch deformation models, 3D model based intensity predictions, coupled affine or spline patches for surface coverage, etc. [1, 52, 55, 9, 110, 94, 53, 97, 76, 104, 102]. The structure of intensity based bundle problems is very similar to that of feature based ones, so all of the techniques studied below can be applied.
We will not go into more detail on intensity matching, except to note that it is the real basis of feature based methods. Feature detectors are optimized for detection not localization. To localize a detected feature accurately we need to match (some function of) the image intensities in its region against either an idealized template or another image of the feature, using an appropriate geometric deformation model, etc. For example, suppose that the intensity matching model is f(u) ≡ (1/2) ∫ ρ( ‖δI(u)‖² ), where the integration is over some image patch, δI is the current intensity prediction error, u parametrizes the local geometry (patch translation & warping), and ρ() is some intensity error robustifier. Then the cost gradient in terms of u is gu = df/du = ∫ ρ′ δI^T (dδI/du). Similarly, the cost Hessian in u in a Gauss-Newton approximation is Hu ≈ ∫ ρ′ (dδI/du)^T (dδI/du). In a feature based model, we express u = u(x) as a function of the bundle parameters, so if Ju = du/dx we have a corresponding cost gradient and Hessian contribution gx = gu Ju and Hx = Ju^T Hu Ju. In other words, the intensity matching model is locally equivalent to a quadratic feature matching one on the 'features' u(x), with effective weight (inverse covariance) matrix Wu = Hu. All image feature error models in vision are ultimately based on such an underlying intensity matching model. As feature covariances are a function of intensity gradients ∫ ρ′ (dδI/du)^T (dδI/du), they can be both highly variable between features (depending on how much local gradient there is), and highly anisotropic (depending on how directional the gradients are). E.g., for points along a 1D intensity edge, the uncertainty is large in the along-edge direction and small in the across-edge one.
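The anisotropy of such gradient-based weight matrices is easy to see numerically. A sketch (numpy assumed) builds the Gauss-Newton weight Wu as the sum of outer products of image gradients over a synthetic patch containing a vertical intensity edge, with ρ′ = 1 for simplicity:

```python
import numpy as np

# Synthetic 9x9 patch with a vertical intensity edge (a smooth step in x).
x = np.arange(9)
patch = np.tile(1.0 / (1.0 + np.exp(-(x - 4.0))), (9, 1))

# Image gradients (dI/dy, dI/dx) and the Gauss-Newton weight matrix
# W_u = sum over the patch of grad(I) grad(I)^T (a 2x2 structure tensor).
gy, gx = np.gradient(patch)
W = np.array([[np.sum(gx * gx), np.sum(gx * gy)],
              [np.sum(gx * gy), np.sum(gy * gy)]])

# The localization weight is large across the edge (x direction, strong
# gradient) and (near) zero along it (y direction): highly anisotropic.
evals = np.linalg.eigvalsh(W)
print(W)
print(evals)
```

For this idealized edge the along-edge eigenvalue is exactly zero, i.e. the feature position is unconstrained along the edge, the aperture problem in matrix form.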
3.5 Implicit Models
Sometimes observations are most naturally expressed in terms of an implicit observation-constraining model h(x, z) = 0, rather than an explicit observation-predicting one z = z(x). (The associated image error still has the form f(z̄ − z).) For example, if the model
is a 3D curve and we observe points on it (the noisy images of 3D points that may lie
anywhere along the 3D curve), we can predict the whole image curve, but not the exact
position of each observation along it. We only have the constraint that the noiseless image
of the observed point would lie on the noiseless image of the curve, if we knew these. There
are basically two ways to handle implicit models: nuisance parameters and reduction.
Nuisance parameters: In this approach, the model is made explicit by adding additional
nuisance parameters representing something equivalent to model-consistent estimates
of the unknown noise free observations, i.e. to z with h(x, z) = 0. The most direct way
to do this is to include the entire parameter vector z as nuisance parameters, so that we
have to solve a constrained optimization problem on the extended parameter space (x, z),
minimizing f(z̄ − z) over (x, z) subject to h(x, z) = 0. This is a sparse constrained problem, which can be solved efficiently using sparse matrix techniques (6.3). In fact, for image observations, the subproblems in z (optimizing f(z̄ − z) over z for fixed z̄ and x) are small and for typical f rather simple. So in spite of the extra parameters z, optimizing this model is not significantly more expensive than optimizing an explicit one z = z(x) [14, 13, 105, 106]. For example, when estimating matching constraints between
image pairs or triplets [60, 62], instead of using an explicit 3D representation, pairs or
triplets of corresponding image points can be used as features zi , subject to the epipolar
or trifocal geometry contained in x [105, 106].
However, if a smaller nuisance parameter vector than z can be found, it is wise to use it. In the case of a curve, it suffices to include just one nuisance parameter per observation, saying where along the curve the corresponding noise free observation is predicted to lie. This model exactly satisfies the constraints, so it converts the implicit model to an unconstrained explicit one z = z(x, λ), where λ are the along-curve nuisance parameters.
The advantage of the nuisance parameter approach is that it gives the exact optimal
parameter estimate for x, and jointly, optimal x-consistent estimates for the noise free
observations z.
Reduction: Alternatively, we can regard h(x, z̄) rather than z̄ as the observation vector, and hence fit the parameters to the explicit log likelihood model for h(x, z̄). To do this, we must transfer the underlying error model / distribution f(z̄) on z̄ to one f(h) on h(x, z̄). In principle, this should be done by marginalization: the density for h is given by integrating that for z̄ over all z̄ giving the same h. Within the point estimation framework, it can be approximated by replacing the integration with maximization. Neither calculation is easy in general, but in the asymptotic limit where the first order Taylor expansion h(x, z̄) = h(x, z + δz) ≈ 0 + (dh/dz) δz is valid, the distribution of h is a marginalization or maximization of that of δz over affine subspaces. This can be evaluated in closed form for some robust distributions. Also, standard covariance propagation gives (more precisely, this applies to the h and z dispersions):

    h(x, z̄) ~ ( 0 , (dh/dz) W^-1 (dh/dz)^T )        (4)

where W^-1 is the covariance of z̄. So at least for an outlier-free Gaussian model, the reduced distribution remains Gaussian (albeit with x-dependent covariance).
4 Basic Numerical Optimization

Having chosen a suitable model quality metric, we must optimize it. This section gives a
very rapid sketch of the basic local optimization methods for differentiable functions. See
[29, 93, 42] for more details. We need to minimize a cost function f(x) over parameters x,
starting from some given initial estimate x of the minimum, presumably supplied by some
approximate visual reconstruction method or prior knowledge of the approximate situation.
As in 2.2, the parameter space may be nonlinear, but we assume that local displacements can be parametrized by a local coordinate system / vector of free parameters δx. We try to find a displacement x → x + δx that locally minimizes or at least reduces the cost function. Real cost functions are too complicated to minimize in closed form, so instead we minimize an approximate local model for the function, e.g. based on Taylor expansion or some other approximation at the current point x. Although this does not usually give the exact minimum, with luck it will improve on the initial parameter estimate and allow us to iterate to convergence. The art of reliable optimization is largely in the details that make this happen even without luck: which local model, how to minimize it, how to ensure that the estimate is improved, and how to decide when convergence has occurred. If you are not interested in such subjects, use a professionally designed package (C.2): details are important here.
4.1 Second Order Methods
The reference for all local models is the quadratic Taylor series one:

    f(x + δx) ≈ f(x) + g^T δx + (1/2) δx^T H δx        (5)

where g ≡ df/dx (x) is the gradient vector and H ≡ d²f/dx² (x) is the Hessian matrix. For now, assume that the Hessian H is positive definite (but see below and 9). The local model is then a simple quadratic with a unique global minimum, which can be found explicitly using linear algebra. Setting df/dx (x + δx) ≈ H δx + g to zero for the stationary point gives the Newton step:

    δx = −H^-1 g        (6)
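Numerically, the Newton step is obtained by solving H δx = −g rather than inverting H, e.g. by Cholesky factorization; a minimal sketch on an arbitrary quadratic cost (numpy assumed):

```python
import numpy as np

# Quadratic test cost f(x) = 1/2 x^T H x - b^T x with known minimizer H^-1 b.
H = np.array([[3.0, 1.0],
              [1.0, 2.0]])       # positive definite Hessian
b = np.array([1.0, 1.0])
x = np.array([1.0, -2.0])        # current estimate
g = H @ x - b                    # gradient at x

# Newton step dx = -H^-1 g, computed via the Cholesky factor of H
# (two triangular solves) instead of forming H^-1 explicitly.
L = np.linalg.cholesky(H)        # H = L @ L.T
y = np.linalg.solve(L, -g)       # forward substitution
dx = np.linalg.solve(L.T, y)     # back substitution

x_new = x + dx
print(x_new)                     # the exact minimizer, reached in one step
```

Because the cost here is exactly quadratic, the Newton step lands on the minimizer in a single iteration, which is the behaviour the local model (5) assumes.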
4.2 Step Control
Unfortunately, Newton's method can fail in several ways. It may converge to a saddle
point rather than a minimum, and for large steps the second order cost prediction may be
inaccurate, so there is no guarantee that the true cost will actually decrease. To guarantee
convergence to a minimum, the step must follow a local descent direction (a direction
with a non-negligible component down the local cost gradient, or if the gradient is zero
near a saddle point, down a negative curvature direction of the Hessian), and it must make
reasonable progress in this direction (neither so little that the optimization runs slowly
or stalls, nor so much that it greatly overshoots the cost minimum along this direction).
It is also necessary to decide when the iteration has converged, and perhaps to limit any
over-large steps that are requested. Together, these topics form the delicate subject of step
control.
To choose a descent direction, one can take the Newton step direction if this descends (it may not near a saddle point), or more generally some combination of the Newton and gradient directions. Damped Newton methods solve a regularized system to find the step:

    (H + λ W) δx = −g        (7)

Here, λ is some weighting factor and W is some positive definite weight matrix (often the identity, so that as λ → ∞ the step tends towards gradient descent, δx ∝ −g). λ can be chosen to limit the step to a dynamically chosen maximum size (trust region methods), or manipulated more heuristically, to shorten the step if the prediction is poor (Levenberg-Marquardt methods).
Given a descent direction, progress along it is usually assured by a line search method, of which there are many based on quadratic and cubic 1D cost models. If the suggested (e.g. Newton) step is δx, line search finds the α that actually minimizes f along the line x + α δx, rather than simply taking the estimate α = 1.
There is no space for further details on step control here (again, see [29, 93, 42]). However, note that poor step control can make a huge difference in reliability and convergence rates, especially for ill-conditioned problems. Unless you are familiar with these issues, it is advisable to use professionally designed methods.
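A minimal damped Newton (Levenberg-Marquardt style) loop can be sketched as follows; the test function (Rosenbrock) and the simple multiply-or-divide-λ heuristic are illustrative choices only (numpy assumed):

```python
import numpy as np

def cost_grad_hess(x):
    """Rosenbrock test cost with analytic gradient and Hessian."""
    a, b = 1.0, 100.0
    f = (a - x[0])**2 + b * (x[1] - x[0]**2)**2
    g = np.array([-2*(a - x[0]) - 4*b*x[0]*(x[1] - x[0]**2),
                  2*b*(x[1] - x[0]**2)])
    H = np.array([[2 - 4*b*(x[1] - 3*x[0]**2), -4*b*x[0]],
                  [-4*b*x[0],                   2*b]])
    return f, g, H

x, lam = np.array([-1.2, 1.0]), 1e-3
f, g, H = cost_grad_hess(x)
for _ in range(100):
    # Damped Newton step (7) with W = I: (H + lam*I) dx = -g.
    dx = np.linalg.solve(H + lam * np.eye(2), -g)
    f_new, g_new, H_new = cost_grad_hess(x + dx)
    if f_new < f:                      # step accepted: relax the damping
        x, f, g, H = x + dx, f_new, g_new, H_new
        lam *= 0.1
    else:                              # step rejected: shorten the step
        lam *= 10.0
    if np.linalg.norm(g) < 1e-8:
        break
print(x)   # should approach the minimum (1, 1)
```

Raising λ on rejection both shrinks the step and rotates it towards the (descent) gradient direction, which is exactly the behaviour described above.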
4.3 Gauss-Newton and Least Squares
Consider the nonlinear weighted SSE cost model f(x) ≡ (1/2) Δz(x)^T W Δz(x) (3.2) with prediction error Δz(x) = z̄ − z(x) and weight matrix W. Differentiation gives the gradient and Hessian in terms of the Jacobian or design matrix of the predictive model, J ≡ dz/dx:

    g ≡ (df/dx)^T = −J^T W Δz ,    H ≡ d²f/dx² = J^T W J + Σi (Δz^T W)i (d²Δzi/dx²)        (8)

These formulae could be used directly in a damped Newton method, but the d²Δzi/dx² term in H is likely to be small in comparison to the corresponding components of J^T W J if either: (i) the prediction error Δz(x) is small; or (ii) the model is nearly linear, d²Δzi/dx² ≈ 0. Dropping the second term gives the Gauss-Newton approximation to the least squares Hessian, H ≈ J^T W J. With this approximation, the Newton step prediction equations become the Gauss-Newton or normal equations:

    (J^T W J) δx = J^T W Δz        (9)
In fact, the normal equations are just one of many methods of solving the weighted linear least squares problem⁵ min_δx (1/2) (J δx − Δz)^T W (J δx − Δz). Another notable method is that based on QR decomposition (B.2, [11, 44]), which is up to a factor of two slower than the normal equations, but much less sensitive to ill-conditioning in J⁶.
Whichever solution method is used, the main disadvantage of the Gauss-Newton approximation is that when the discarded terms are not negligible, the convergence rate is greatly reduced (7.2). In our experience, such reductions are indeed common in highly nonlinear problems with (at the current step) large residuals. For example, near a saddle point the Gauss-Newton approximation is never accurate, as its predicted Hessian is always at least positive semidefinite. However, for well-parametrized (i.e. locally near linear, 2.2) bundle problems under an outlier-free least squares cost model evaluated near the cost minimum, the Gauss-Newton approximation is usually very accurate. Feature extraction errors and hence Δz and W^-1 have characteristic scales of at most a few pixels. In contrast, the nonlinearities of z(x) are caused by nonlinear 3D feature-camera geometry (perspective effects) and nonlinear image projection (lens distortion). For typical geometries and lenses, neither effect varies significantly on a scale of a few pixels. So the nonlinear corrections are usually small compared to the leading order linear terms, and bundle adjustment behaves as a near-linear small residual problem.
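The Gauss-Newton iteration (9) on a small synthetic near-linear, zero-residual problem illustrates this rapid convergence; the exponential model and starting point here are arbitrary stand-ins for a reprojection model, with W = I (numpy assumed):

```python
import numpy as np

# Gauss-Newton on a tiny nonlinear model z(x) = x0 + exp(x1 * t).
t = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
x_true = np.array([0.5, 0.8])
z_bar = x_true[0] + np.exp(x_true[1] * t)     # noise-free observations

x = np.array([0.0, 0.5])                      # rough initial estimate
for _ in range(20):
    z = x[0] + np.exp(x[1] * t)
    dz = z_bar - z                            # prediction error delta-z
    # Jacobian J = dz/dx of the predictive model.
    J = np.stack([np.ones_like(t), t * np.exp(x[1] * t)], axis=1)
    # Normal equations (J^T J) dx = J^T dz : the Gauss-Newton step.
    dx = np.linalg.solve(J.T @ J, J.T @ dz)
    x = x + dx
    if np.linalg.norm(dx) < 1e-12:
        break
print(x)   # recovers x_true on this noise-free problem
```

Because the residuals vanish at the solution, the discarded second-derivative term in (8) vanishes there too, and the iteration converges essentially as fast as full Newton.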
However, note that this does not extend to robust cost models. Robustification works by introducing strong nonlinearity into the cost function at the scale of typical feature reprojection errors. For accurate step prediction, the optimization routine must take account of this. For radial cost functions (3.3), a reasonable compromise is to take account of the exact second order derivatives of the robustifiers ρi(), while retaining only the first order Gauss-Newton approximation for the predicted observations zi(x). If ρ′i and ρ″i are respectively the first and second derivatives of ρi at the current evaluation point, we have a robustified Gauss-Newton approximation:

    gi = ρ′i Ji^T Wi Δzi ,    Hi ≈ Ji^T ( ρ′i Wi + 2 ρ″i (Wi Δzi)(Wi Δzi)^T ) Ji        (10)

So robustification has two effects: (i) it down-weights the entire observation (both gi and Hi) by ρ′i; and (ii) it makes a rank-one reduction⁷ of the curvature Hi in the radial (Δzi) direction, to account for the way in which the weight changes with the residual. There are reweighting-based optimization methods that include only the first effect. They still find the true cost minimum g = 0 as the gi are evaluated exactly⁸, but convergence may
⁵ Here, the dependence of J on x is ignored, which amounts to the same thing as ignoring the d²Δzi/dx² term in H.
⁶ The QR method gives the solution to a relative error of about O(C ε), as compared to O(C² ε) for the normal equations, where C is the condition number (the ratio of the largest to the smallest singular value) of J, and ε is the machine precision (about 10⁻¹⁶ for double precision floating point).
⁷ The useful robustifiers ρi are sublinear, with ρ′i < 1 and ρ″i < 0 in the outlier region.
⁸ Reweighting is also sometimes used in vision to handle projective homogeneous scale factors rather than error weighting. E.g., suppose that image points (u/w, v/w) are generated by a homogeneous projection equation (u, v, w) = P (X, Y, Z, 1)^T, where P is the 3×4 homogeneous image projection matrix. A scale factor reweighting scheme might take derivatives w.r.t. u, v while treating the inverse weight w as a constant within each iteration. Minimizing the resulting globally bilinear linear least squares error model over P and (X, Y, Z) does not give the true cost minimum: it zeros the gradient-ignoring-w-variations, not the true cost gradient. Such schemes should not be used for precise work as the bias can be substantial, especially for wide-angle lenses and close geometries.
be slowed owing to inaccuracy of H, especially for the mainly radial deviations produced by non-robust initializers containing outliers. Hi has a direction of negative curvature if ρ″i Δzi^T Wi Δzi < −(1/2) ρ′i, but if not we can even reduce the robustified Gauss-Newton model to a local unweighted SSE one for which linear least squares methods can be used. For simplicity suppose that Wi has already been reduced to 1 by premultiplying Δzi and Ji by Li^T, where Li Li^T = Wi. Then minimizing the effective squared error (1/2) ‖Δz̃i(x) − J̃i δx‖² gives the correct second order robust state update, where ν ≡ RootOf( (1/2) ν² − ν − (ρ″i/ρ′i) ‖Δzi‖² ) and:

    J̃i ≡ √ρ′i ( 1 − ν (Δzi Δzi^T)/‖Δzi‖² ) Ji ,    Δz̃i(x) ≡ ( √ρ′i / (1 − ν) ) Δzi(x)        (11)

In practice, if −ρ″i ‖Δzi‖² ≈ (1/2) ρ′i, we can use the same formulae but limit ν ≤ 1 − ε for some small ε. However, the full curvature correction is not applied in this case.
4.4 Constrained Problems
More generally, we may want to minimize a function f(x) subject to a set of constraints
c(x) = 0 on x. These might be scene constraints, internal consistency constraints on the
parametrization (2.2), or constraints arising from an implicit observation model (3.5).
Given an initial estimate x of the solution, we try to improve this by optimizing the quadratic local model for f subject to a linear local model of the constraints c. This linearly constrained quadratic problem has an exact solution in linear algebra. Let g, H be the gradient and Hessian of f as before, and let the first order expansion of the constraints be c(x + δx) ≈ c(x) + C δx, where C ≡ dc/dx. Introduce a vector of Lagrange multipliers λ for c. We seek the x + δx that optimizes f + λ^T c subject to c = 0, i.e. 0 = d/dx (f + λ^T c)(x + δx) ≈ g + H δx + C^T λ and 0 = c(x + δx) ≈ c(x) + C δx. Combining these gives the Sequential Quadratic Programming (SQP) step:
    ( H C^T ; C 0 ) ( δx ; λ ) = −( g ; c ) ,    f(x + δx) ≈ f(x) − (1/2) ( g^T c^T ) ( H C^T ; C 0 )^-1 ( g ; c )        (12)
    ( H C^T ; C 0 )^-1 = ( H^-1 − H^-1 C^T D^-1 C H^-1    H^-1 C^T D^-1 ; D^-1 C H^-1    −D^-1 ) ,    where D ≡ C H^-1 C^T        (13)
The constraints can also be eliminated before solving (12): partition the variables so that C = (C1 C2) with C1 square and invertible, solve the linearized constraint C1 δx1 + C2 δx2 = −c for δx1, and substitute into the local cost model to leave a reduced unconstrained problem in δx2 with a reduced Hessian H22.
(These matrices can be evaluated efficiently using simple matrix factorization schemes [11]). This method is stable provided that the chosen C1 is well-conditioned. It works well
for dense problems, but is not always suitable for sparse ones because if C is dense, the
reduced Hessian H22 becomes dense too.
For least squares cost models, constraints can also be handled within the linear least
squares framework, e.g. see [11].
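The SQP step (12) amounts to solving one augmented (KKT) linear system; a minimal sketch with an arbitrary quadratic cost and a single linear constraint (numpy assumed):

```python
import numpy as np

# Quadratic cost f(x) = 1/2 x^T H x - b^T x, one linear constraint
# c(x) = C x - 1 = 0 (all values illustrative).
H = np.array([[2.0, 0.0],
              [0.0, 4.0]])
b = np.array([1.0, 1.0])
C = np.array([[1.0, 1.0]])        # constraint Jacobian dc/dx

x = np.zeros(2)                   # current (infeasible) estimate
g = H @ x - b                     # gradient of f at x
c = C @ x - 1.0                   # constraint residual

# Assemble and solve the SQP / KKT system [H C^T; C 0][dx; lam] = -[g; c].
K = np.block([[H, C.T],
              [C, np.zeros((1, 1))]])
sol = np.linalg.solve(K, -np.concatenate([g, c]))
dx, lam = sol[:2], sol[2:]

x_new = x + dx
print(x_new, C @ x_new - 1.0)     # constraint satisfied exactly (linear case)
```

For this problem the constrained minimizer can be checked by hand via the Lagrange conditions: x_new = (2/3, 1/3) with multiplier λ = −1/3.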
4.5 General Implementation Issues
Before going into details, we mention a few points of good numerical practice for large-scale optimization problems such as bundle adjustment:
Exploit the problem structure: Large-scale problems are almost always highly structured
and bundle adjustment is no exception. In professional cartography and photogrammetric
site-modelling, bundle problems with thousands of images and many tens of thousands
of features are regularly solved. Such problems would simply be infeasible without a
thorough exploitation of the natural structure and sparsity of the bundle problem. We will
have much to say about sparsity below.
Use factorization effectively: Many of the above formulae contain matrix inverses. This is a convenient short-hand for theoretical calculations, but numerically, matrix inversion is almost never used. Instead, the matrix is decomposed into its Cholesky, LU, QR, etc., factors and these are used directly, e.g. linear systems are solved using forwards and backwards substitution. This is much faster and numerically more accurate than explicit use of the inverse, particularly for sparse matrices such as the bundle Hessian, whose factors are still quite sparse, but whose inverse is always dense. Explicit inversion is required only occasionally, e.g. for covariance estimates, and even then only a few of the entries may be needed (e.g. diagonal blocks of the covariance). Factorization is the heart of the optimization iteration, where most of the time is spent and where most can be done to improve efficiency (by exploiting sparsity, symmetry and other problem structure) and numerical stability (by pivoting and scaling). Similarly, certain matrices (subspace projectors, Householder matrices) have (diagonal)+(low rank) forms which should not be explicitly evaluated as they can be applied more efficiently in pieces.
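The factor-once, solve-many pattern looks like this in practice; a sketch using SciPy's Cholesky helpers (library availability assumed; the matrix is an arbitrary stand-in for a bundle Hessian):

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

# An arbitrary symmetric positive definite "Hessian".
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 50))
H = A @ A.T + 50 * np.eye(50)

# Factor once...
factor = cho_factor(H)            # Cholesky factorization of H

# ...then reuse the factor for every solve: each cho_solve is just a
# pair of triangular substitutions, far cheaper than forming H^-1.
b1, b2 = rng.standard_normal(50), rng.standard_normal(50)
x1 = cho_solve(factor, b1)
x2 = cho_solve(factor, b2)

# If a few covariance entries are needed, solve against unit vectors
# instead of inverting H: this yields column i of H^-1 on demand.
e0 = np.zeros(50)
e0[0] = 1.0
cov_col0 = cho_solve(factor, e0)
print(np.allclose(H @ x1, b1), np.allclose(H @ x2, b2))
```

The same pattern carries over to sparse Cholesky codes, where the gap between a (sparse) factor and a (dense) explicit inverse is even larger.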
Use stable local parametrizations: As discussed in 2.2, the parametrization used for
step prediction need not coincide with the global one used to store the state estimate. It is
more important that it should be finite, uniform and locally as nearly linear as possible.
If the global parametrization is in some way complex, highly nonlinear, or potentially
ill-conditioned, it is usually preferable to use a stable local parametrization based on
perturbations of the current state for step prediction.
Scaling and preconditioning: Another parametrization issue that has a profound and too-rarely recognized influence on numerical performance is variable scaling (the choice of units or reference scale to use for each parameter), and more generally preconditioning (the choice of which linear combinations of parameters to use). These represent the linear part of the general parametrization problem. The performance of gradient descent and most other linearly convergent optimization methods is critically dependent on preconditioning, to the extent that for large problems, they are seldom practically useful without it.
One of the great advantages of the Newton-like methods is their theoretical independence of such scaling issues⁹. But even for these, scaling makes itself felt indirectly in
Fig. 3. The network graph, parameter connection graph, Jacobian structure and Hessian structure for a toy bundle problem with five 3D features A–E, four images 1–4 and two camera calibrations K1 (shared by images 1,2) and K2 (shared by images 3,4). Feature A is seen in images 1,2; B in 1,2,4; C in 1,3; D in 2–4; and E in 3,4.
several ways: (i) Step control strategies including convergence tests, maximum step size limitations, and damping strategies (trust region, Levenberg-Marquardt) are usually all based on some implicit norm ‖δx‖², and hence change under linear transformations of x (e.g., damping makes the step more like the non-invariant gradient descent one). (ii) Pivoting strategies for factoring H are highly dependent on variable scaling, as they choose large elements on which to pivot. Here, 'large' should mean 'in which little numerical cancellation has occurred', but with uneven scaling it becomes 'with the largest scale'. (iii) The choice of gauge (datum, 9) may depend on variable scaling, and this can significantly influence convergence [82, 81].
For all of these reasons, it is important to choose variable scalings that relate meaningfully to the problem structure. This involves a judicious comparison of the relative influence of, e.g., a unit of error on a nearby point, a unit of error on a very distant one,
a camera rotation error, a radial distortion error, etc. For this, it is advisable to use an
ideal Hessian or weight matrix rather than the observed one, otherwise the scaling might
break down if the Hessian happens to become ill-conditioned or non-positive during a few
iterations before settling down.
5 Network Structure
Adjustment networks have a rich structure, illustrated in figure 3 for a toy bundle problem.
The free parameters subdivide naturally into blocks corresponding to: 3D feature coordinates A, . . . , E; camera poses and unshared (single image) calibration parameters 1,
. . . , 4; and calibration parameters shared across several images K1 , K2 . Parameter blocks
⁹ The Newton step δx = −H^-1 g transforms correctly under a linear change of variables x → T x, whereas the gradient descent step δx ∝ −g varies incorrectly as x → T x. The Newton and gradient descent steps agree only when T^T T = H.
interact only via their joint influence on image features and other observations, i.e. via their joint appearance in cost function contributions. The abstract structure of the measurement network can be characterized graphically by the network graph (top left), which shows which features are seen in which images, and the parameter connection graph (top right), which details the sparse structure by showing which parameter blocks have direct interactions. Blocks are linked if and only if they jointly influence at least one observation. The cost function Jacobian (bottom left) and Hessian (bottom right) reflect this sparse structure. The shaded boxes correspond to non-zero blocks of matrix entries. Each block of rows in the Jacobian corresponds to an observed image feature and contains contributions from each of the parameter blocks that influenced this observation. The Hessian contains an off-diagonal block for each edge of the parameter connection graph, i.e. for each pair of parameters that couple to at least one common feature / appear in at least one common cost contribution¹⁰.
Two layers of structure are visible in the Hessian. The primary structure consists of the subdivision into structure (A–E) and camera (1–4, K1 K2) submatrices. Note that the structure submatrix is block diagonal: 3D features couple only to cameras, not to other features. (This would no longer hold if inter-feature measurements such as distances or angles between points were present). The camera submatrix is often also block diagonal, but in this example the sharing of unknown calibration parameters produces off-diagonal blocks. The secondary structure is the internal sparsity pattern of the structure-camera Hessian submatrix. This is dense for small problems where all features are seen in all images, but in larger problems it often becomes quite sparse because each image only sees a fraction of the features.
All worthwhile bundle methods exploit at least the primary structure of the Hessian, and advanced methods exploit the secondary structure as well. The secondary structure is particularly sparse and regular in surface coverage problems such as grids of photographs in aerial cartography. Such problems can be handled using a fixed nested dissection variable reordering (6.3). But for the more irregular connectivities of close range problems, general sparse factorization methods may be required to handle secondary structure.
Bundle problems are by no means limited to the above structures. For example, for more complex scene models with moving or articulated objects, there will be additional connections to object pose or joint angle nodes, with linkages reflecting the kinematic chain structure of the scene. It is often also necessary to add constraints to the adjustment, e.g. coplanarity of certain points. One of the greatest advantages of the bundle technique is its ability to adapt to almost arbitrarily complex scene, observation and constraint models.
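The connection-graph construction can be made concrete for the toy problem of figure 3; a small sketch in plain Python (block names are just labels):

```python
import itertools

# Visibility from figure 3: which images see each 3D feature; images 1,2
# share calibration K1 and images 3,4 share K2.
sees = {'A': [1, 2], 'B': [1, 2, 4], 'C': [1, 3], 'D': [2, 3, 4], 'E': [3, 4]}
calib = {1: 'K1', 2: 'K1', 3: 'K2', 4: 'K2'}

# Each observation (feature, image) couples the feature block, the image
# pose block and its shared calibration block; every pair of blocks that
# jointly influences an observation gives a non-zero Hessian block.
edges = set()
for feat, images in sees.items():
    for im in images:
        blocks = [feat, im, calib[im]]
        for a, b in itertools.combinations(blocks, 2):
            edges.add(frozenset((a, b)))

# The structure submatrix is block diagonal: no feature-feature couplings
# ever arise, because each observation involves exactly one feature.
feature_pairs = [e for e in edges
                 if all(isinstance(v, str) and v in sees for v in e)]
print(len(edges), feature_pairs)   # feature_pairs is empty
```

Note that images 1 and 2 acquire no direct edge between themselves: they interact only through the shared block K1, which is exactly the off-diagonal camera-calibration coupling visible in the figure.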
¹⁰ The Jacobian structure can be described more directly by a bipartite graph whose nodes correspond on one side to the observations, and on the other to the parameter blocks that influence them. The parameter connection graph is then obtained by deleting each observation node and linking each pair of parameter nodes that it connects to. This is an example of elimination graph processing (see below).
6 Implementation Strategy 1: Second Order Adjustment Methods

The next three sections cover implementation strategies for optimizing the bundle adjustment cost function f(x) over the complete set of unknown structure and camera parameters
x. This section is devoted to second-order Newton-style approaches, which are the basis of the great majority of current implementations. Their most notable characteristics are rapid (second order) asymptotic convergence but relatively high cost per iteration, with an emphasis on exploiting the network structure (the sparsity of the Hessian H = d²f/dx²) for efficiency. In fact, the optimization aspects are more or less standard (4, [29, 93, 42]), so we will concentrate entirely on efficient methods for solving the linearized Newton step prediction equations δx = −H^-1 g, (6). For now, we will assume that the Hessian H is non-singular. This will be amended in 9 on gauge freedom, without changing the conclusions reached here.
6.1 The Schur Complement and the Reduced Bundle System
Schur complement: Any partitioned matrix M ≡ ( A B ; C D ) with A square and invertible can be factored as

    M = ( A B ; C D ) = ( 1 0 ; C A^-1 1 ) ( A 0 ; 0 D̄ ) ( 1 A^-1 B ; 0 1 ) ,    D̄ ≡ D − C A^-1 B        (16)

and hence inverted as

    ( A B ; C D )^-1 = ( 1 −A^-1 B ; 0 1 ) ( A^-1 0 ; 0 D̄^-1 ) ( 1 0 ; −C A^-1 1 )
                     = ( A^-1 + A^-1 B D̄^-1 C A^-1    −A^-1 B D̄^-1 ; −D̄^-1 C A^-1    D̄^-1 )        (17)

Here A must be square and invertible, and for (17), the whole matrix must also be square and invertible. D̄ is called the Schur complement of A in M. If both A and D are invertible, complementing on D rather than A gives

    ( A B ; C D )^-1 = ( Ā^-1    −Ā^-1 B D^-1 ; −D^-1 C Ā^-1    D^-1 + D^-1 C Ā^-1 B D^-1 ) ,    Ā ≡ A − B D^-1 C

Equating the top left blocks gives the matrix inversion lemma:

    ( A − B D^-1 C )^-1 = A^-1 + A^-1 B ( D − C A^-1 B )^-1 C A^-1        (18)

This is the usual method of updating the inverse of a nonsingular matrix A after an update (especially a low rank one) A → A − B D^-1 C. (See 8.1).
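These block identities are straightforward to check numerically; a sketch verifying the matrix inversion lemma with arbitrary well-conditioned random blocks (numpy assumed):

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 4, 2
# Diagonal dominance keeps every matrix involved safely invertible.
A = rng.standard_normal((n, n)) + 5 * np.eye(n)
B = rng.standard_normal((n, m))
C = rng.standard_normal((m, n))
D = rng.standard_normal((m, m)) + 5 * np.eye(m)

Ainv = np.linalg.inv(A)
# Matrix inversion lemma:
# (A - B D^-1 C)^-1 = A^-1 + A^-1 B (D - C A^-1 B)^-1 C A^-1
lhs = np.linalg.inv(A - B @ np.linalg.inv(D) @ C)
rhs = Ainv + Ainv @ B @ np.linalg.inv(D - C @ Ainv @ B) @ C @ Ainv
print(np.allclose(lhs, rhs))   # True
```

Explicit inverses appear here only for the purposes of the check; as stressed in 4.5, production code should use factorizations instead.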
Reduction: Now consider the linear system ( A B ; C D ) ( x1 ; x2 ) = ( b1 ; b2 ). Pre-multiplying by ( 1 0 ; −C A^-1 1 ) gives ( A B ; 0 D̄ ) ( x1 ; x2 ) = ( b1 ; b̄2 ) with b̄2 ≡ b2 − C A^-1 b1, and hence:

    D̄ x2 = b̄2        reduced system
    A x1 = b1 − B x2        back-substitution        (19)
Note that the reduced system entirely subsumes the contribution of the x_1 rows and columns to the network. Once we have reduced, we can pretend that the problem does not involve x_1 at all: it can be found later by back-substitution if needed, or ignored if not. This is the basis of all recursive filtering methods. In bundle adjustment, if we use the primary subdivision into feature and camera variables and subsume the structure ones, we get the reduced camera system \bar H_{CC}\, \delta x_C = \bar g_C, where:
\bar H_{CC} \equiv H_{CC} - H_{CS} H_{SS}^{-1} H_{SC}
           = H_{CC} - \sum_p H_{Cp} H_{pp}^{-1} H_{pC}
\qquad (20)
\bar g_C \equiv g_C - H_{CS} H_{SS}^{-1} g_S
         = g_C - \sum_p H_{Cp} H_{pp}^{-1} g_p
Here, S selects the structure block and C the camera one. H_{SS} is block diagonal, so the reduction can be calculated rapidly by a sum of contributions from the individual 3D features p in S. Brown's original 1958 method for bundle adjustment [16, 19, 100] was based on finding the reduced camera system as above, and solving it using Gaussian elimination. Profile Cholesky decomposition (§B.3) offers a more streamlined method of achieving this.
Occasionally, long image sequences have more camera parameters than structure ones. In this case it is more efficient to reduce the camera parameters, leaving a reduced structure system.
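The per-feature accumulation in (20) can be sketched as follows. The camera/point counts and the random block-sparse Jacobian are made-up stand-ins for a real bundle problem; each synthetic observation couples exactly one camera and one point, so the point-point block of the Hessian is block diagonal as described above.

```python
import numpy as np

rng = np.random.default_rng(1)
nc, cpar, npts, ppar = 3, 6, 5, 3          # illustrative sizes
nC, nS = nc * cpar, npts * ppar

# Each observation (camera i, point p) contributes two residual rows whose
# only nonzero entries lie in camera i's and point p's columns, so the
# point-point block H_SS of H = J^T J is block diagonal.
rows = []
for i in range(nc):
    for p in range(npts):
        for _ in range(2):
            row = np.zeros(nC + nS)
            row[i*cpar:(i+1)*cpar] = rng.standard_normal(cpar)
            row[nC + p*ppar : nC + (p+1)*ppar] = rng.standard_normal(ppar)
            rows.append(row)
J = np.array(rows)
H = J.T @ J + np.eye(nC + nS)              # damping keeps H well conditioned
g = J.T @ rng.standard_normal(len(rows))

HCC, HCS = H[:nC, :nC], H[:nC, nC:]
HSC, HSS = H[nC:, :nC], H[nC:, nC:]
gC, gS = g[:nC], g[nC:]

# reduced camera system (20), accumulated feature by feature
Hbar, gbar = HCC.copy(), gC.copy()
for p in range(npts):
    s = slice(p * ppar, (p + 1) * ppar)
    Hpp_inv = np.linalg.inv(HSS[s, s])
    Hbar -= HCS[:, s] @ Hpp_inv @ HSC[s, :]
    gbar -= HCS[:, s] @ Hpp_inv @ gS[s]

# the reduced solution matches the camera part of the full solution
xC = np.linalg.solve(Hbar, gbar)
x_full = np.linalg.solve(H, g)
assert np.allclose(xC, x_full[:nC])
# back-substitution for the structure part, per (19)
xS = np.linalg.solve(HSS, gS - HSC @ xC)
assert np.allclose(xS, x_full[nC:])
```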
6.2
Triangular Decompositions
If D in (16) is further subdivided into blocks, the factorization process can be continued recursively. In fact, there is a family of block (lower triangular)*(diagonal)*(upper triangular) factorizations A = L D U:

\begin{pmatrix}
A_{11} & A_{12} & \cdots & A_{1n} \\
A_{21} & A_{22} & \cdots & A_{2n} \\
\vdots & \vdots & \ddots & \vdots \\
A_{n1} & A_{n2} & \cdots & A_{nn}
\end{pmatrix}
=
\begin{pmatrix}
L_{11} & & & \\
L_{21} & L_{22} & & \\
\vdots & \vdots & \ddots & \\
L_{n1} & L_{n2} & \cdots & L_{nn}
\end{pmatrix}
\begin{pmatrix}
D_1 & & & \\
 & D_2 & & \\
 & & \ddots & \\
 & & & D_n
\end{pmatrix}
\begin{pmatrix}
U_{11} & U_{12} & \cdots & U_{1n} \\
 & U_{22} & \cdots & U_{2n} \\
 & & \ddots & \vdots \\
 & & & U_{nn}
\end{pmatrix}
\qquad (21)
See §B.1 for computational details. The main advantage of triangular factorizations is that they make linear algebra computations with the matrix much easier. In particular, if the input matrix A is square and nonsingular, linear equations A x = b can be solved by a sequence of three recursions that implicitly implement multiplication by A^{-1} = U^{-1} D^{-1} L^{-1}:
L c = b :   c_i \leftarrow L_{ii}^{-1} \big( b_i - \sum_{j<i} L_{ij}\, c_j \big)        forward substitution   (22)
D d = c :   d_i \leftarrow D_i^{-1}\, c_i                                               diagonal solution      (23)
U x = d :   x_i \leftarrow U_{ii}^{-1} \big( d_i - \sum_{j>i} U_{ij}\, x_j \big)        back-substitution      (24)
Forward substitution corrects for the influence of earlier variables on later ones, diagonal solution solves the transformed system, and back-substitution propagates corrections due to later variables back to earlier ones. In practice, this is the usual method of solving linear equations such as the Newton step prediction equations. It is stabler and much faster than explicitly inverting A and multiplying by A^{-1}.
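The three recursions (22)-(24) can be sketched for the symmetric case U = L^T. This is a minimal unpivoted LDL^T implementation assuming a positive definite input, not production code.

```python
import numpy as np

def ldl_decompose(A):
    """Unit-lower-triangular L and diagonal d with A = L diag(d) L^T.
    No pivoting: assumes A is symmetric positive definite."""
    n = len(A)
    L = np.eye(n)
    d = np.zeros(n)
    for j in range(n):
        d[j] = A[j, j] - L[j, :j] @ (d[:j] * L[j, :j])
        for i in range(j + 1, n):
            L[i, j] = (A[i, j] - L[i, :j] @ (d[:j] * L[j, :j])) / d[j]
    return L, d

def ldl_solve(L, d, b):
    """Solve A x = b via the recursions (22)-(24), with U = L^T."""
    n = len(b)
    c = np.zeros(n)
    x = np.zeros(n)
    for i in range(n):                          # forward substitution (22)
        c[i] = b[i] - L[i, :i] @ c[:i]
    c = c / d                                   # diagonal solution (23)
    for i in reversed(range(n)):                # back-substitution (24)
        x[i] = c[i] - L[i+1:, i] @ x[i+1:]      # U_{ij} = L_{ji}
    return x

rng = np.random.default_rng(2)
M = rng.standard_normal((6, 6))
A = M @ M.T + 6 * np.eye(6)                     # SPD test matrix
b = rng.standard_normal(6)
L, d = ldl_decompose(A)
x = ldl_solve(L, d, b)
assert np.allclose(A @ x, b)
```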
The diagonal blocks L_{ii}, D_i, U_{ii} can be set arbitrarily provided that the product L_{ii} D_i U_{ii} remains constant. This gives a number of well-known factorizations, each optimized for a different class of matrices. Pivoting (row and/or column exchanges designed to improve the conditioning of L and/or U, §B.1) is also necessary in most cases, to ensure stability. Choosing L_{ii} = D_i = 1 gives the (block) LU decomposition A = L U, the matrix representation of (block) Gaussian elimination. Pivoted by rows, this is the standard method for non-symmetric matrices. For symmetric A, roughly half of the work of factorization can be saved by using a symmetry-preserving LDL^T factorization, for which D is symmetric and U = L^T. The pivoting strategy must also preserve symmetry in this case, so it has to permute columns in the same way as the corresponding rows. If A is symmetric positive definite we can further set D = 1 to get the Cholesky decomposition A = L L^T. This is stable even without pivoting, and hence extremely simple to implement. It is the standard decomposition method for almost all unconstrained optimization problems including bundle adjustment, as the Hessian is positive definite near a non-degenerate cost minimum (and in the Gauss-Newton approximation, almost everywhere else, too). If A is symmetric but only positive semidefinite, diagonally pivoted Cholesky decomposition can be used. This is the case, e.g., in subset selection methods of gauge fixing (§9.5). Finally, if A is symmetric but indefinite, it is not possible to reduce D stably to 1. Instead, the Bunch-Kaufman method is used. This is a diagonally pivoted LDL^T method, where D has a mixture of 1x1 and 2x2 diagonal blocks. The augmented Hessian \begin{pmatrix} H & C^T \\ C & 0 \end{pmatrix} of the Lagrange multiplier method for constrained optimization problems (§12) is always symmetric indefinite, so Bunch-Kaufman is the recommended method for solving constrained bundle problems. (It is something like 40% faster than Gaussian elimination, and about equally stable.)
Another use of factorization is matrix inversion. Inverses can be calculated by factoring, inverting each triangular factor by forwards or backwards substitution (52), and multiplying out: A^{-1} = U^{-1} D^{-1} L^{-1}. However, explicit inverses are rarely used in numerical analysis, it being both stabler and much faster in almost all cases to leave them implicit and work by forward/backward substitution w.r.t. a factorization, rather than multiplication by the inverse. One place where inversion is needed in its own right is to calculate the dispersion matrix (inverse Hessian, which asymptotically gives the posterior covariance) as a measure of the likely variability of parameter estimates. The dispersion can be calculated by explicit inversion of the factored Hessian, but often only a few of its entries are needed, e.g. the diagonal blocks and a few key off-diagonal parameter covariances. In this case (53) can be used, which efficiently calculates the covariance entries corresponding to just the nonzero elements of L, D, U.
6.3
Sparse Factorization
To apply the above decompositions to sparse matrices, we must obviously avoid storing and manipulating the zero blocks. But there is more to the subject than this. As a sparse matrix is decomposed, zero positions tend to rapidly fill in (become non-zero), essentially because decomposition is based on repeated linear combination of matrix rows, which is generically non-zero wherever any one of its inputs is. Fill-in depends strongly on the order in which variables are eliminated, so efficient sparse factorization routines attempt to minimize either operation counts or fill-in by re-ordering the variables. (The Schur process is fixed in advance, so this is the only available freedom.) Globally minimizing either operations or fill-in is NP complete, but reasonably good heuristics exist (see below).
Variable order affects stability (pivoting) as well as speed, and these two goals conflict to some extent. Finding heuristics that work well on both counts is still a research problem.
Algorithmically, fill-in is characterized by an elimination graph derived from the parameter coupling / Hessian graph [40, 26, 11]. To create this, nodes (blocks of parameters) are visited in the given elimination ordering, at each step linking together all unvisited nodes that are currently linked to the current node. The coupling of block i to block j via visited block k corresponds to a non-zero Schur contribution L_{ik} D_k^{-1} U_{kj}, and at each stage the subgraph on the currently unvisited nodes is the coupling graph of the current reduced Hessian. The amount of fill-in is the number of new graph edges created in this process.
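The elimination-graph process just described can be sketched directly. The star graph at the end is a made-up example showing how strongly the ordering affects fill-in: eliminating the hub first links all of its neighbours pairwise, while eliminating the leaves first creates no new edges.

```python
def fill_in(adj, order):
    """Count fill-in edges for a given elimination order.
    adj: dict node -> set of neighbours (symmetric graph)."""
    adj = {v: set(nbrs) for v, nbrs in adj.items()}   # work on a copy
    eliminated, new_edges = set(), 0
    for v in order:
        nbrs = adj[v] - eliminated                    # unvisited neighbours
        for a in nbrs:                                # link them pairwise
            for b in nbrs:
                if a < b and b not in adj[a]:
                    adj[a].add(b)
                    adj[b].add(a)
                    new_edges += 1
        eliminated.add(v)
    return new_edges

# star graph: hub 0 coupled to leaves 1, 2, 3
star = {0: {1, 2, 3}, 1: {0}, 2: {0}, 3: {0}}
assert fill_in(star, [0, 1, 2, 3]) == 3   # hub first: leaves become a clique
assert fill_in(star, [1, 2, 3, 0]) == 0   # leaves first: no fill-in
```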
Pattern Matrices: We seek variable orderings that approximately minimize the total operation count or fill-in over the whole elimination chain. For many problems a suitable ordering can be fixed in advance, typically giving one of a few standard pattern matrices such as band or arrowhead matrices, perhaps with such structure at several levels.
[(25): sparsity patterns of a typical bundle Hessian (left) and a general arrowhead matrix (right).]
The most prominent pattern structure in bundle adjustment is the primary subdivision of the Hessian into structure and camera blocks. To get the reduced camera system (19), we treat the Hessian as an arrowhead matrix with a broad final column containing all of the camera parameters. Arrowhead matrices are trivial to factor or reduce by block 2x2 Schur complementation, c.f. (16, 19). For bundle problems with many independent images and only a few features, one can also complement on the image parameter block to get a reduced structure system.
Another very common pattern structure is the block tridiagonal one, which characterizes all singly coupled chains (sequences of images with only pairwise overlap, Kalman filtering and other time recursions, simple kinematic chains). Tridiagonal matrices are factored or reduced by recursive block 2x2 Schur complementation starting from one end. The L and U factors are also block tridiagonal, but the inverse is generally dense.
Pattern orderings are often very natural, but it is unwise to think of them as immutable: structure often occurs at several levels, and deeper structure or simply changes in the relative sizes of the various parameter classes may make alternative orderings preferable. For more difficult problems there are two basic classes of on-line ordering strategies. Bottom-up methods try to minimize fill-in locally and greedily at each step, at the risk of global short-sightedness. Top-down methods take a divide-and-conquer approach, recursively splitting the problem into smaller sub-problems which are solved quasi-independently and later merged.
Top-Down Ordering Methods: The most common top-down method is called nested dissection or recursive partitioning [64, 57, 19, 38, 40, 11]. The basic idea is to recursively split the factorization problem into smaller sub-problems, solve these independently, and
Fig. 4. A bundle Hessian for an irregular coverage problem with only local connections, and its
Cholesky factor in natural (structure-then-camera), minimum degree, and reverse Cuthill-McKee
ordering.
then glue the solutions together along their common boundaries. Splitting involves choosing a separating set of variables, whose deletion will separate the remaining variables into two or more independent subsets. This corresponds to finding a (vertex) graph cut of the elimination graph, i.e. a set of vertices whose deletion will split it into two or more disconnected components. Given such a partitioning, the variables are reordered into connected components, with the separating set ones last. This produces an arrowhead matrix, e.g.:
\begin{pmatrix} A_{11} & A_{12} & \\ A_{21} & A_{22} & A_{23} \\ & A_{32} & A_{33} \end{pmatrix}
\;\longrightarrow\;
\begin{pmatrix} A_{11} & & A_{12} \\ & A_{33} & A_{32} \\ A_{21} & A_{23} & A_{22} \end{pmatrix}
\qquad (26)
The arrowhead matrix is factored by blocks, as in reduction or profile Cholesky, taking account of any internal sparsity in the diagonal blocks and the borders. Any suitable factorization method can be used for the diagonal blocks, including further recursive partitionings.
Nested dissection is most useful when comparatively small separating sets can be found. A trivial example is the primary structure of the bundle problem: the camera variables separate the 3D structure into independent features, giving the standard arrowhead form of the bundle Hessian. More interestingly, networks with good geometric or temporal locality (surface- and site-covering networks, video sequences) tend to have small separating sets based on spatial or temporal subdivision. The classic examples are geodesic and aerial cartography networks with their local 2D connections: spatial bisection gives simple and very efficient recursive decompositions for these [64, 57, 19].
For sparse problems with less regular structure, one can use graph partitioning algorithms to find small separating sets. Finding a globally minimal partition sequence is NP complete, but several effective heuristics exist. This is currently an active research field. One promising family are multilevel schemes [70, 71, 65, 4], which decimate (subsample) the graph, partition using e.g. a spectral method, then refine the result to the original graph. (These algorithms should also be very well-suited to graph based visual segmentation and matching.)
Bottom-Up Ordering Methods: Many bottom-up variable ordering heuristics exist. Probably the most widespread and effective is minimum degree ordering. At each step, this eliminates the variable coupled to the fewest remaining ones (i.e. the elimination graph node with the fewest unvisited neighbours), so it minimizes the number O(n_{neighbours}^2) of changed matrix elements and hence FLOPs for the step. The minimum degree ordering can also be computed quite rapidly without explicit graph chasing. A related ordering, minimum deficiency, minimizes the fill-in (newly created edges) at each step, but this is considerably slower to calculate and not usually so effective.
Fill-in or operation minimizing strategies tend to produce somewhat fragmentary matrices that require pointer- or index-based sparse matrix implementations (see fig. 4). This increases complexity and tends to reduce cache locality and pipeline-ability. An alternative is to use profile matrices, which (for lower triangles) store all elements in each row between the first non-zero one and the diagonal in a contiguous block. This is easy to implement (see §B.3), and practically efficient so long as about 30% or more of the stored elements are actually non-zero. Orderings for this case aim to minimize the sum of the profile lengths rather than the number of non-zero elements. Profiling enforces a multiply-linked chain structure on the variables, so it is especially successful for linear / chain-like / one dimensional problems, e.g. space or time sequences. The simplest profiling strategy is reverse Cuthill-McKee, which chooses some initial variable (very preferably one from one end of the chain), adds all variables coupled to that, then all variables coupled to those, etc., then reverses the ordering (otherwise, any highly-coupled variables get eliminated early on, which causes disastrous fill-in). More sophisticated are the so-called banker's strategies, which maintain an active set of all the variables coupled to the already-eliminated ones, and choose the next variable from the active set (King [72]), it and its neighbours (Snay [101]), or all uneliminated variables (Levy [75]), to minimize the new size of the active set at each step. In particular, Snay's banker's algorithm is reported to perform well on geodesy and aerial cartography problems [101, 24].
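A minimal reverse Cuthill-McKee sketch, assuming a connected graph; ties are broken by node index, which a real implementation would handle more carefully (it would also choose the start node automatically).

```python
from collections import deque

def reverse_cuthill_mckee(adj, start):
    """Breadth-first from `start`, visiting neighbours in order of
    increasing degree, then reverse. Assumes a connected graph."""
    ordering, seen = [start], {start}
    queue = deque([start])
    while queue:
        v = queue.popleft()
        for n in sorted(adj[v] - seen, key=lambda u: (len(adj[u]), u)):
            seen.add(n)
            ordering.append(n)
            queue.append(n)
    return ordering[::-1]

# a 5-node chain: starting from one end gives the natural order, reversed
chain = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}
assert reverse_cuthill_mckee(chain, 0) == [4, 3, 2, 1, 0]
```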
For all of these automatic ordering methods, it often pays to do some of the initial work by hand, e.g. it might be appropriate to enforce the structure / camera division beforehand and only order the reduced camera system. If there are nodes of particularly high degree such as inner gauge constraints, the ordering calculation will usually run faster and the quality may also be improved by removing these from the graph and placing them last by hand.
The above ordering methods apply to both Cholesky / LDL^T decomposition of the Hessian and QR decomposition of the least squares Jacobian. Sparse QR methods can be implemented either with Givens rotations or (more efficiently) with sparse Householder transformations. Row ordering is important for the Givens methods [39]. For Householder ones (and some Givens ones too) the multifrontal organization is now usual [41, 11], as it captures the natural parallelism of the problem.
7 Implementation Strategy 2: First Order Adjustment Methods

We have seen that for large problems, factoring the Hessian H to compute the Newton step can be both expensive and (if done efficiently) rather complex. In this section we consider alternative methods that avoid the cost of exact factorization. As the Newton step can not be calculated, such methods generally only achieve first order (linear) asymptotic convergence: when close to the final state estimate, the error is asymptotically reduced by a constant (and in practice often depressingly small) factor at each step, whereas quadratically convergent Newton methods roughly double the number of significant digits at each step.
So first order methods require more iterations than second order ones, but each iteration is usually much cheaper. The relative efficiency depends on the relative sizes of these two effects, both of which can be substantial. For large problems, the reduction in work per iteration is usually at least O(n), where n is the problem size. But whereas Newton methods converge from O(1) to O(10^{-16}) error in about 1 + \log_2 16 \approx 5 iterations, linearly convergent ones take respectively \log 10^{-16} / \log(1 - \gamma) \approx 16, 350, 3700 iterations for reductions of \gamma = 0.9, 0.1, 0.01 per iteration. Unfortunately, reductions of only 1% or less are by no means unusual in practice (§7.2), and the reduction tends to decrease as n increases.
7.1 First Order Iterations
We first consider a number of common first order methods, before returning to the question of why they are often slow.
Gradient descent: The simplest first order method is gradient descent, which slides down the gradient by taking \delta x \propto -g, i.e. H_a = \lambda\, 1. Line search is needed, to find an appropriate scale for the step. For most problems, gradient descent is spectacularly inefficient unless the Hessian actually happens to be very close to a multiple of 1. This can be arranged by preconditioning with a linear transform L: x \to L x, g \to L^{-T} g and H \to L^{-T} H L^{-1}, where L^T L \approx H is an approximate Cholesky factor (or other left square root) of H, so that H \to L^{-T} H L^{-1} \approx 1. In this very special case, preconditioned gradient descent approximates the Newton method. Strictly speaking, gradient descent is a cheat: the gradient is a covector (linear form on vectors), not a vector, so it does not actually define a direction in the search space. Gradient descent's sensitivity to the coordinate system is one symptom of this.
Alternation: Another simple approach is alternation: partition the variables into groups and cycle through the groups, optimizing over each in turn with the other groups held fixed. This is most appropriate when the subproblems are significantly easier to optimize than the full one. A natural and often-rediscovered alternation for the bundle problem is resection-intersection, which interleaves steps of resection (finding the camera poses and if necessary calibrations from fixed 3D features) and intersection (finding the 3D features from fixed camera poses and calibrations). The subproblems for individual features and cameras are independent, so only the diagonal blocks of H are required.
Alternation can be used in several ways. One extreme is to optimize (or perhaps only perform one step of optimization) over each group in turn, with a state update and re-evaluation of (the relevant components of) g, H after each group. Alternatively, some of the re-evaluations can be simulated by evaluating the linearized effects of the parameter group update on the other groups. E.g., for resection-intersection with structure update \delta x_S = -H_{SS}^{-1}\, g_S(x_S, x_C) (where S selects the structure variables and C the camera ones), the updated camera gradient is exactly the gradient of the reduced camera system: g_C(x_S + \delta x_S, x_C) \approx g_C(x_S, x_C) + H_{CS}\, \delta x_S = g_C - H_{CS} H_{SS}^{-1} g_S = \bar g_C. So the total update for the cycle is

\begin{pmatrix} \delta x_S \\ \delta x_C \end{pmatrix}
= -\begin{pmatrix} H_{SS}^{-1} & 0 \\ -H_{CC}^{-1} H_{CS} H_{SS}^{-1} & H_{CC}^{-1} \end{pmatrix}
  \begin{pmatrix} g_S \\ g_C \end{pmatrix}
= -\begin{pmatrix} H_{SS} & 0 \\ H_{CS} & H_{CC} \end{pmatrix}^{-1}
  \begin{pmatrix} g_S \\ g_C \end{pmatrix}.

In general, this correction propagation amounts to solving the system as if the above-diagonal triangle of H were zero. Once we have cycled through the variables, we can update the full state and relinearize. This is the nonlinear Gauss-Seidel method. Alternatively, we can split the above-diagonal triangle of H off as a correction (back-propagation) term and continue iterating

\begin{pmatrix} H_{SS} & 0 \\ H_{CS} & H_{CC} \end{pmatrix}
\begin{pmatrix} \delta x_S \\ \delta x_C \end{pmatrix}^{(k)}
= -\begin{pmatrix} g_S \\ g_C \end{pmatrix}
  - \begin{pmatrix} 0 & H_{SC} \\ 0 & 0 \end{pmatrix}
    \begin{pmatrix} \delta x_S \\ \delta x_C \end{pmatrix}^{(k-1)}

until (hopefully) it converges to the full Newton step \delta x = -H^{-1} g. This is the linear Gauss-Seidel method applied to solving the Newton step prediction equations. Finally, alternation methods always tend to underestimate the size of the Newton step because they fail to account for the cost-reducing effects of including the back-substitution terms. Successive Over-Relaxation (SOR) methods improve the convergence rate by artificially lengthening the update steps by a heuristic factor 1 < \gamma < 2.
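The linear Gauss-Seidel iteration for the two-block Newton system can be sketched as follows; the synthetic SPD Hessian and block sizes are arbitrary choices for illustration, and a real implementation would exploit the block-diagonal structure of the subproblems rather than dense solves.

```python
import numpy as np

rng = np.random.default_rng(3)
nS, nC = 4, 3
M = rng.standard_normal((nS + nC, nS + nC))
H = M @ M.T + (nS + nC) * np.eye(nS + nC)      # SPD synthetic "Hessian"
g = rng.standard_normal(nS + nC)

HSS, HSC = H[:nS, :nS], H[:nS, nS:]
HCS, HCC = H[nS:, :nS], H[nS:, nS:]

# iterate: solve the lower block triangle exactly each sweep, feeding the
# above-diagonal term back from the previous iterate
dx = np.zeros(nS + nC)
for _ in range(100):
    rhs = -g - np.concatenate([HSC @ dx[nS:], np.zeros(nC)])
    dxS = np.linalg.solve(HSS, rhs[:nS])
    dxC = np.linalg.solve(HCC, rhs[nS:] - HCS @ dxS)
    dx = np.concatenate([dxS, dxC])

newton = np.linalg.solve(H, -g)                # full Newton step
assert np.allclose(dx, newton, atol=1e-6)      # Gauss-Seidel converged to it
```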
Most if not all of the above alternations have been applied to both the bundle problem and the independent model one many times, e.g. [19, 95, 2, 108, 91, 20]. Brown considered the relatively sophisticated SOR method for aerial cartography problems as early as 1964, before developing his recursive decomposition method [19]. None of these alternations are very effective for traditional large-scale problems, although §7.4 below shows that they can sometimes compete for smaller highly connected ones.
Krylov subspace methods: Another large family of iterative techniques are the Krylov subspace methods, based on the remarkable properties of the power subspaces Span(\{A^k b \mid k = 0 \ldots n\}) for fixed A, b as n increases. Krylov iterations predominate in many large-scale linear algebra applications, including linear equation solving.
The earliest and greatest Krylov method is the conjugate gradient iteration for solving a positive definite linear system or optimizing a quadratic cost function. By augmenting the gradient descent step with a carefully chosen multiple of the previous step, this manages to minimize the quadratic model function over the entire k-th Krylov subspace at the k-th iteration, and hence (in exact arithmetic) over the whole space at the n_x-th one. This no longer holds when there is round-off error, but O(n_x) iterations usually still suffice to find the Newton step. Each iteration is O(n_x^2), so this is not in itself a large gain over explicit factorization. However convergence is significantly faster if the eigenvalues of H are tightly clustered away from zero: if the eigenvalues are covered by intervals [a_i, b_i], i = 1 \ldots k, convergence occurs in O\big(\sum_{i=1}^{k} \sqrt{b_i / a_i}\big) iterations [99, 47, 48]^{11}. Preconditioning (see below)
^{11} For other eigenvalue-based analyses of the bundle adjustment covariance, see [103, 92].
Fig. 5. An example of the typical behaviour of first and second order convergent methods near the minimum. This is a 2D projection of a small but ill-conditioned bundle problem along the two most variable directions. The second order methods converge quite rapidly, whether they use an exact (Gauss-Newton) or iterative (diagonally preconditioned conjugate gradient) linear solver for the Newton equations. In contrast, first order methods such as resection-intersection converge slowly near the minimum owing to their inaccurate model of the Hessian. The effects of mismodelling can be reduced to some extent by adding a line search.
aims at achieving such clustering. As with alternation methods, there is a range of possible update / re-linearization choices, ranging from a fully nonlinear method that relinearizes after each step, to solving the Newton equations exactly using many linear iterations. One major advantage of conjugate gradient is its simplicity: there is no factorization, all that is needed is multiplication by H. For the full nonlinear method, H is not even needed: one simply makes a line search to find the cost minimum along the direction defined by g and the previous step.
One disadvantage of nonlinear conjugate gradient is its high sensitivity to the accuracy of the line search. Achieving the required accuracy may waste several function evaluations at each step. One way to avoid this is to make the information obtained by the conjugation process more explicit by building up an explicit approximation to H or H^{-1}. Quasi-Newton methods such as the BFGS method do this, and hence need less accurate line searches. The quasi-Newton approximation to H or H^{-1} is dense and hence expensive to store and manipulate, but Limited Memory Quasi-Newton (LMQN) methods often get much of the desired effect by maintaining only a low-rank approximation.
There are variants of all of these methods for least squares (Jacobian rather than Hessian based) and for constrained problems (non-positive definite matrices).
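A compact preconditioned conjugate gradient sketch for the Newton equations H δx = −g, using a block-Jacobi (diagonal-block) preconditioner in the spirit of the experiments of §7.4. All dimensions and the 3-wide blocks are assumptions made for the example.

```python
import numpy as np

def pcg(H, b, Minv_apply, iters=100, tol=1e-10):
    """Preconditioned conjugate gradient for SPD H x = b."""
    x = np.zeros_like(b)
    r = b - H @ x
    z = Minv_apply(r)
    p = z.copy()
    rz = r @ z
    for _ in range(iters):
        Hp = H @ p
        alpha = rz / (p @ Hp)
        x += alpha * p
        r -= alpha * Hp
        if np.linalg.norm(r) < tol:
            break
        z = Minv_apply(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

rng = np.random.default_rng(4)
n, blk = 12, 3
M = rng.standard_normal((n, n))
H = M @ M.T + n * np.eye(n)               # SPD synthetic Hessian
g = rng.standard_normal(n)

# block-diagonal preconditioner: invert each 3x3 diagonal block of H
blocks = [np.linalg.inv(H[i:i+blk, i:i+blk]) for i in range(0, n, blk)]
def Minv_apply(r):
    return np.concatenate([B @ r[i*blk:(i+1)*blk]
                           for i, B in enumerate(blocks)])

dx = pcg(H, -g, Minv_apply)
assert np.allclose(H @ dx, -g, atol=1e-6)
```

Only products H @ p and the cheap block solves are needed per iteration, which is the point made above: no factorization of H is ever formed.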
7.2 Why Are First Order Methods Slow?
To understand why first order methods often have slow convergence, consider the effect of approximating the Hessian in Newton's method. Suppose that in some local parametrization x centred at a cost minimum x = 0, the cost function is well approximated by a quadratic near 0: f(x) \approx \frac{1}{2}\, x^T H\, x and hence g(x) \approx H\, x, where H is the true Hessian. For most first order methods, the predicted step is linear in the gradient g. If we adopt a Newton-like state update \delta x = -H_a^{-1}\, g(x) based on some approximation H_a to H, we get an iteration:

x^{k+1} = x^k - H_a^{-1}\, g(x^k) \;\approx\; (1 - H_a^{-1} H)\, x^k \;\approx\; \cdots \;\approx\; (1 - H_a^{-1} H)^{k+1}\, x^0
\qquad (27)
The iteration converges iff all eigenvalues of 1 - H_a^{-1} H have modulus less than one; depending on the exact method and parameter values, the iterates either zigzag or converge slowly and monotonically.
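Iteration (27) can be illustrated on a made-up 2x2 quadratic with a diagonal Hessian approximation: the error contracts by roughly the spectral radius of 1 − H_a^{-1} H per step, which is exactly the "constant factor" linear convergence described above.

```python
import numpy as np

# true quadratic Hessian and a diagonal approximation Ha (an assumption
# mimicking methods that ignore the off-diagonal coupling)
H = np.array([[4.0, 1.0],
              [1.0, 3.0]])
Ha = np.diag(np.diag(H))

T = np.eye(2) - np.linalg.inv(Ha) @ H        # error propagation matrix of (27)
rho = max(abs(np.linalg.eigvals(T)))         # spectral radius, ~0.29 here

x = np.array([1.0, 1.0])
for _ in range(50):
    x = x - np.linalg.solve(Ha, H @ x)       # g(x) = H x for the quadratic
assert rho < 1                               # so the iteration converges,
assert np.linalg.norm(x) < 1e-25             # linearly, at rate ~rho per step
```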
Line search: The above behaviour can often be improved significantly by adding a line search to the method. In principle, the resulting method converges for any positive definite H_a. However, accurate modelling of H is still highly desirable. Even with no rounding errors, an exactly quadratic (but otherwise unknown) cost function and exact line searches (i.e. the minimum along the line is found exactly), the most efficient generic line search based methods such as conjugate gradient or quasi-Newton require at least O(n_x) iterations to converge. For large bundle problems with thousands of parameters, this can already be prohibitive. However, if knowledge about H is incorporated via a suitable preconditioner, the number of iterations can often be reduced substantially.
7.3
Preconditioning
Gradient descent and Krylov methods are sensitive to the coordinate system, and their practical success depends critically on good preconditioning. The aim is to find a linear transformation x \to T x, and hence g \to T^{-T} g and H \to T^{-T} H T^{-1}, for which the transformed H is near 1, or at least has only a few clusters of eigenvalues well separated from the origin. Ideally, T should be an accurate, low-cost approximation to the left Cholesky factor of H. (Exactly evaluating this would give the expensive Newton method again.) In the experiments below, we tried conjugate gradient with preconditioners based on the diagonal blocks of H, and on partial Cholesky decomposition, dropping either all filled-in elements, or all that are smaller than a preset size, when performing Cholesky decomposition. These methods were not competitive with the exact Gauss-Newton ones in the strip experiments below, but for large enough problems it is likely that a preconditioned Krylov method would predominate, especially if more effective preconditioners could be found.
An exact Cholesky factor of H from a previous iteration is often a quite effective preconditioner. This gives hybrid methods in which H is only evaluated and factored every few iterations, with the Newton step at these iterations and well-preconditioned gradient descent or conjugate gradient at the others.
7.4
Experiments
Figure 6 shows the relative performance of several methods on two synthetic projective bundle adjustment problems. In both cases, the number of 3D points increases in proportion to the number of images, so the dense factorization time is O(n^3), where n is the number of points or images. The following methods are shown: Sparse Gauss-Newton: sparse Cholesky decomposition with variables ordered naturally (features then cameras); Dense Gauss-Newton: the same, but (inefficiently) ignoring all sparsity of the Hessian; Diag. Conj. Gradient: the Newton step is found by an iterative conjugate gradient linear system solver, preconditioned using the Cholesky factors of the diagonal blocks of the Hessian; Resect-Intersect: the state is optimized by alternate steps of resection and intersection, with relinearization after each. In the spherical cloud problem, the points are uniformly distributed within a spherical cloud, all points are visible in all images, and the camera geometry is strongly convergent. These are ideal conditions, giving a low diameter network graph and a well-conditioned, nearly diagonal-dominant Hessian. All of the methods converge quite rapidly. Resection-intersection is a competitive method for
[Fig. 6 plots: total operations (10^6 to 10^11, log scale) vs. number of images (2 to 128) for the two geometries. Legend for the "Computation vs. Bundle Size -- Weak Geometry" panel: Resect-Intersect (941 steps), Dense Gauss-Newton (5.9 steps), Diag. Conj. Gradient (5.4 steps), Sparse Gauss-Newton (5.7 steps).]
Fig. 6. Relative speeds of various bundle optimization methods for strong spherical cloud and weak
strip geometries.
larger problems owing to its low cost per iteration. Unfortunately, although this geometry is often used for testing computer vision algorithms, it is atypical for large-scale bundle problems. The strip experiment has a more representative geometry. The images are arranged in a long strip, with each feature seen in about 3 overlapping images. The strip's long thin weakly-connected network structure gives it large scale low stiffness flexing modes, with correspondingly poor Hessian conditioning. The off-diagonal terms are critical here, so the approximate methods perform very poorly. Resection-intersection is slower even than dense Cholesky decomposition ignoring all sparsity. For 16 or more images it fails to converge even after 3000 iterations. The sparse Cholesky methods continue to perform reasonably well, with the natural, minimum degree and reverse Cuthill-McKee orderings all giving very similar run times in this case. For all of the methods that we tested, including resection-intersection with its linear per-iteration cost, the total run time for long chain-like geometries scaled roughly as O(n^3).
8 Implementation Strategy 3: Updating and Recursion

8.1 Updating Rules
For updates involving a previously unseen 3D feature or image, new variables must also be added to the system. This is easy. We simply choose where to put the variables in the elimination sequence, and extend H and its L, D, U factors with the corresponding rows and columns, setting all of the newly created positions to zero (except for the unit diagonals of the LDL^T and LU decompositions' L factor). The factorization can then be updated as usual, presumably adding enough cost terms to make the extended Hessian nonsingular and couple the new parameters into the old network. If a direct covariance update is needed, the Woodbury formula (18) can be used on the old part of the matrix, then (17) to fill in the new blocks (equivalently, invert (54), with D_1 = A representing the old blocks and D_2 = 0 the new ones).
Conversely, it may be necessary to delete parameters, e.g. if an image or 3D feature has lost most or all of its support. The corresponding rows and columns of the Hessian H (and rows of g, columns of J) must be deleted, and all cost contributions involving the deleted parameters must also be removed using the usual factorization downdates (55, 56). To delete the rows and columns of block b in a matrix A, we first delete the b-th rows and columns of L, D, U. This maintains triangularity and gives the correct trimmed A, except that the blocks in the lower right corner, A_{ij} = \sum_{k \le \min(i,j)} L_{ik} D_k U_{kj} with i, j > b, are missing a term L_{ib} D_b U_{bj} from the deleted column b of L / row b of U. This is added using an update +L_{ib} D_b U_{bj} over the blocks i, j > b. To update A^{-1} when rows and columns of A are deleted, permute the deleted rows and columns to the end and use (17) backwards: (A_{11})^{-1} = (A^{-1})_{11} - (A^{-1})_{12} \big((A^{-1})_{22}\big)^{-1} (A^{-1})_{21}.
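The trimmed-inverse rule just quoted can be checked numerically; the sizes and the random well-conditioned matrix are arbitrary choices for the check.

```python
import numpy as np

rng = np.random.default_rng(5)
n, k = 6, 2                                  # delete the trailing k rows/cols
A = rng.standard_normal((n, n)) + n * np.eye(n)   # well conditioned
Ai = np.linalg.inv(A)

trimmed = np.linalg.inv(A[:n-k, :n-k])       # direct inverse of trimmed A
# same result from blocks of A^{-1} alone, per the rule above
from_inv = (Ai[:n-k, :n-k]
            - Ai[:n-k, n-k:] @ np.linalg.inv(Ai[n-k:, n-k:]) @ Ai[n-k:, :n-k])
assert np.allclose(trimmed, from_inv)
```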
It is also possible to freeze some live parameters at fixed (current or default) values, or to add extra parameters / unfreeze some previously frozen ones, c.f. (48, 49) below. In this case, rows and columns corresponding to the frozen parameters must be deleted or added, but no other change to the cost function is required. Deletion is as above. To insert rows and columns A_{b\cdot}, A_{\cdot b} at block b of matrix A, we open space in row and column b of L, D, U and fill these positions with the usual recursively defined values (51). For i, j > b, the sum (51) will now have a contribution L_{ib} D_b U_{bj} that it should not have, so to correct this we downdate the lower right submatrix i, j > b with a cost-cancelling contribution -L_{ib} D_b U_{bj}.
8.2 Recursive Methods and Reduction
Each update computation is roughly quadratic in the size of the state vector, so if new features and images are continually added, the situation will eventually become unmanageable. We must limit what we compute. In principle parameter refinement never stops: each observation update affects all components of the state estimate and its covariance. However, the refinements are in a sense trivial for parameters that are not directly coupled to the observation. If these parameters are eliminated using reduction (19), the observation update can be applied directly to the reduced Hessian and gradient.^{12} The eliminated parameters can then be updated by simple back-substitution (19) and their covariances by (17). In particular, if we cease to receive new information relating to a block of parameters (an image that has been fully treated, a 3D feature that has become invisible), they and all the observations relating to them can be subsumed once-and-for-all in a reduced Hessian and gradient on the remaining parameters. If required, we can later re-estimate the

^{12} In (19), only \bar D and \bar b_2 are affected by the observation, as it is independent of the subsumed components A, B, C, b_1. So applying the update to \bar D, \bar b_2 has the same effect as applying it to D, b_2.
B. Triggs et al.
[Figure 7: plot of reconstruction error (0-2000) against rolling window size, from "simple" to "batch", for runs with 30%, 85% and 100% of the data.]
Fig. 7. The residual state estimation error of the VSDF sequential bundle algorithm for progressively
increasing sizes of rolling time window. The residual error at image t = 16 is shown for rolling
windows of 15 previous images, and also for a batch method (all previous images) and a simple
one (reconstruction / intersection is performed independently of camera location / resection). To
simulate the effects of decreasing amounts of image data, 0%, 15% and 70% of the image measurements are randomly deleted to make runs with 100%, 85% and only 30% of the supplied image data.
The main conclusion is that window size has little effect for strong data, but becomes increasingly
important as the data becomes weaker.
nonlinearly over just the current state, assuming all previous ones to be linearized). The effects of variable window size on the Variable State Dimension Filter (VSDF) sequential bundle algorithm [85, 86, 83, 84] are shown in figure 7.
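The key fact behind such reduction-based updating, that an observation touching only the retained parameters commutes with reduction of the subsumed block, is easy to check numerically. A toy numpy sketch (ours, not the paper's implementation; block sizes and names are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)

# Partitioned normal equations H = [[A, B], [B^T, D]], g = (b1, b2):
# block 1 = parameters receiving no further data, block 2 = active ones.
n1, n2 = 3, 4
M = rng.standard_normal((n1 + n2, n1 + n2))
Hfull = M @ M.T + (n1 + n2) * np.eye(n1 + n2)
A, B, D = Hfull[:n1, :n1], Hfull[:n1, n1:], Hfull[n1:, n1:]
b1, b2 = rng.standard_normal(n1), rng.standard_normal(n2)

# Subsume block 1 once and for all (reduction, as in (19)).
Dbar = D - B.T @ np.linalg.solve(A, B)
b2bar = b2 - B.T @ np.linalg.solve(A, b1)

# A new observation touching only block 2.
J = rng.standard_normal((2, n2))
dD, db2 = J.T @ J, rng.standard_normal(n2)

# (a) apply the update directly to the reduced system ...
x2_reduced = np.linalg.solve(Dbar + dD, -(b2bar + db2))
# (b) ... or update the full system and re-reduce: same answer.
x2_full = np.linalg.solve(D + dD - B.T @ np.linalg.solve(A, B),
                          -(b2 + db2 - B.T @ np.linalg.solve(A, b1)))
```

Because the Schur complement of A is unchanged by an update confined to the second block, route (a) never has to revisit the subsumed parameters.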
9 Gauge Freedom
Coordinates are a very convenient device for reducing geometry to algebra, but they come
at the price of some arbitrariness. The coordinate system can be changed at any time,
without affecting the underlying geometry. This is very familiar, but it leaves us with two
problems: (i) algorithmically, we need some concrete way of deciding which particular
coordinate system to use at each moment, and hence breaking the arbitrariness; (ii) we
need to allow for the fact that the results may look quite different under different choices,
even though they represent the same underlying geometry.
Consider the choice of 3D coordinates in visual reconstruction. The only objects in the
3D space are the reconstructed cameras and features, so we have to decide where to place
the coordinate system relative to these . . . Or in coordinate-centred language, where to
place the reconstruction relative to the coordinate system. Moreover, bundle adjustment
updates and uncertainties can perturb the reconstructed structure almost arbitrarily, so
we must specify coordinate systems not just for the current structure, but also for every
possible nearby one. Ultimately, this comes down to constraining the coordinate values of certain aspects of the reconstructed structure (features, cameras or combinations of these), whatever the rest of the structure might be. Saying this more intrinsically, the coordinate frame is specified and held fixed with respect to the chosen reference elements,
and the rest of the geometry is then expressed in this frame as usual. In measurement science such a set of coordinate system specifying rules is called a datum, but we will follow the wider mathematics and physics usage and call it a gauge13. The freedom in the choice of coordinate fixing rules is called gauge freedom.
As a gauge anchors the coordinate system rigidly to its chosen reference elements, perturbing the reference elements has no effect on their own coordinates. Instead, it changes the coordinate system itself and hence systematically changes the coordinates of all the other features, while leaving the reference coordinates fixed. Similarly, uncertainties in the reference elements do not affect their own coordinates, but appear as highly correlated uncertainties in all of the other reconstructed features. The moral is that structural perturbations and uncertainties are highly relative. Their form depends profoundly on the gauge, and especially on how this changes as the state varies (i.e. which elements it holds fixed). The effects of disturbances are not restricted to the coordinates of the features actually disturbed, but may appear almost anywhere depending on the gauge.
In visual reconstruction, the differences between object-centred and camera-centred
gauges are often particularly marked. In object-centred gauges, object points appear to be
relatively certain while cameras appear to have large and highly correlated uncertainties.
In camera-centred gauges, it is the camera that appears to be precise and the object points
that appear to have large correlated uncertainties. One often sees statements like "the reconstructed depths are very uncertain". This may be true in the camera frame, yet the object may be very well reconstructed in its own frame: it all depends on what fraction of the total depth fluctuations are simply due to global uncertainty in the camera location, and hence identical for all object points.
Besides 3D coordinates, many other types of geometric parametrization in vision involve arbitrary choices, and hence are subject to gauge freedoms [106]. These include the
choice of: homogeneous scale factors in homogeneous-projective representations; supporting points in supporting-point based representations of lines and planes; reference
plane in plane + parallax representations; and homographies in homography-epipole representations of matching tensors. In each case the symptoms and the remedies are the
same.
9.1 General Formulation
The general set up is as follows: We take as our state vector x the set of all of the 3D feature
coordinates, camera poses and calibrations, etc., that enter the problem. This state space
has internal symmetries related to the arbitrary choices of 3D coordinates, homogeneous
scale factors, etc., that are embedded in x. Any two state vectors that differ only by such
choices represent the same underlying 3D geometry, and hence have exactly the same image
projections and other intrinsic properties. So under change-of-coordinates equivalence, the
state space is partitioned into classes of intrinsically equivalent state vectors, each class
representing exactly one underlying 3D geometry. These classes are called gauge orbits.
Formally, they are the group orbits of the state space action of the relevant gauge group
(coordinate transformation group), but we will not need the group structure below. A state
space function represents an intrinsic function of the underlying geometry if and only if
it is constant along each gauge orbit (i.e. coordinate system independent). Such quantities
13
Here, gauge just means reference frame. The sense is that of a reference against which something is judged (O.Fr. jauger, gauger). Pronounced /ɡeɪdʒ/.
are called gauge invariants. We want the bundle adjustment cost function to quantify
intrinsic merit, so it must be chosen to be gauge invariant.
In visual reconstruction, the principal gauge groups are the 3 + 3 + 1 = 7 dimensional group of 3D similarity (scaled Euclidean) transformations for Euclidean reconstruction, and the 15 dimensional group of projective 3D coordinate transformations for
projective reconstruction. But other gauge freedoms are also present. Examples include:
(i) The arbitrary scale factors of homogeneous projective feature representations, with their 1D rescaling gauge groups. (ii) The arbitrary positions of the points in two-point line parametrizations, with their two 1D motion-along-line groups. (iii) The underspecified 3×3 homographies used for homography + epipole parametrizations of matching tensors [77, 62, 106]. For example, the fundamental matrix can be parametrized as $F = [\,e\,]_\times H$, where e is its left epipole and H is the inter-image homography induced by any 3D plane. The choice of plane gives a freedom $H \to H + e\, a^\top$, where a is an arbitrary 3-vector, and hence a 3D linear gauge group.
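This particular gauge freedom is trivial to verify numerically, since $[e]_\times e = 0$ implies that $e\,a^\top$ is annihilated. A small numpy sketch (our illustration; names are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)

def skew(e):
    """[e]_x, the matrix with [e]_x v = e x v (cross product)."""
    return np.array([[0.0, -e[2], e[1]],
                     [e[2], 0.0, -e[0]],
                     [-e[1], e[0], 0.0]])

e = rng.standard_normal(3)       # left epipole
H = rng.standard_normal((3, 3))  # inter-image homography of some 3D plane
a = rng.standard_normal(3)       # arbitrary change of 3D plane

F1 = skew(e) @ H
F2 = skew(e) @ (H + np.outer(e, a))   # gauge motion H -> H + e a^T
```

F1 and F2 agree: moving H along the 3D gauge orbit leaves the fundamental matrix, the underlying geometric object, unchanged.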
Now consider how to specify a gauge, i.e. a rule saying how each possible underlying
geometry near the current one should be expressed in coordinates. Coordinatizations are
represented by state space points, so this is a matter of choosing exactly one point (structure
coordinatization) from each gauge orbit (underlying geometry). Mathematically, the gauge
orbits foliate (fill without crossing) the state space, and a gauge is a local transversal cross-section G through this foliation. See fig. 8. Different gauges represent different but geometrically equivalent coordinatization rules. Results can be mapped between gauges by pushing them along gauge orbits, i.e. by applying local coordinate transformations that vary depending on the particular structure involved. Such transformations are called S-transforms (similarity transforms) [6, 107, 22, 25]. Different gauges through the same central state represent coordinatization rules that agree for the central geometry but differ for perturbed ones: the S-transform is the identity at the centre but not elsewhere.
Given a gauge, only state perturbations that lie within the gauge cross-section are authorized. This is what we want, as such state perturbations are in one-to-one correspondence
with perturbations of the underlying geometry. Indeed, any state perturbation is equivalent
to some on-gauge one under the gauge group (i.e. under a small coordinate transformation
that pushes the perturbed state along its gauge orbit until it meets the gauge cross-section).
State perturbations along the gauge orbits are uninteresting, because they do not change
the underlying geometry at all.
Covariances are averages of squared perturbations and must also be based on on-gauge perturbations (they would be infinite if we permitted perturbations along the gauge orbits, as there is no limit to these: they do not change the cost at all). So covariance matrices are gauge-dependent and in fact represent ellipsoids tangent to the gauge cross-section at the cost minimum. They can look very different for different gauges. But, as with states, S-transforms map them between gauges by projecting along gauge orbits / state equivalence classes.
Note that there is no intrinsic notion of orthogonality on state space, so it is meaningless
to ask which state-space directions are orthogonal to the gauge orbits. This would involve deciding when two different structures have been expressed in the same coordinate
system, so every gauge believes its own cross section to be orthogonal and all others to
be skewed.
9.2 Gauge Constraints
We will work near some point x of state space, perhaps a cost minimum or a running state estimate. Let $n_x$ be the dimension of x and $n_g$ the dimension of the gauge orbits. Let f, g, H be the cost function and its gradient and Hessian, and G be any $n_x \times n_g$ matrix whose columns span the local gauge orbit directions at x14. By the exact gauge invariance of f, its gradient and Hessian vanish along orbit directions: $g^\top G = 0$ and $H\, G = 0$. Note that the gauged Hessian H is singular with (at least) rank deficiency $n_g$ and null space G. This is called gauge deficiency. Many numerical optimization routines assume nonsingular H, and must be modified to work in gauge invariant problems. The singularity is an expression of indifference: when we come to calculate state updates, any two updates ending on the same gauge orbit are equivalent, both in terms of cost and in terms of the change in the underlying geometry. All that we need is a method of telling the routine which particular update to choose.
Gauge constraints are the most direct means of doing this. A gauge cross-section G can be specified in two ways: (i) constrained form: specify $n_g$ local constraints d(x), with d(x) = 0 for points on G; (ii) parametric form: specify a function x(y) of $n_x - n_g$ independent local parameters y, with x = x(y) being the points of G. For example, a trivial gauge is one that simply freezes the values of $n_g$ of the parameters in x (usually feature or camera coordinates). In this case we can take d(x) to be the parameter freezing constraints and y to be the remaining unfrozen parameters. Note that once the gauge is fixed the problem is no longer gauge invariant: the whole purpose of d(x), x(y) is to break the underlying gauge invariance.
Examples of trivial gauges include: (i) using several visible 3D points as a projective basis for reconstruction (i.e. fixing their projective 3D coordinates to simple values, as in [27]); and (ii) fixing the components of one projective 3 × 4 camera matrix as (I 0), as in [61] (this only partially fixes the 3D projective gauge: 3 projective 3D degrees of freedom remain unfixed).
14
A suitable G is easily calculated from the infinitesimal action of the gauge group on x. For example, for spatial similarities the columns of G would be the $n_g = 3 + 3 + 1 = 7$ state velocity vectors describing the effects of infinitesimal translations, rotations and changes of spatial scale on x.
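The footnote's recipe can be sketched in numpy. This toy example (ours, with illustrative names) takes the structure to be a bare point cloud; since the toy cost below is Euclidean-invariant but not scale-invariant, we build only the translation and rotation columns of G ($n_g = 6$), and check that the cost gradient is orthogonal to them, $g^\top G \approx 0$:

```python
import numpy as np

rng = np.random.default_rng(3)

m = 5                                   # toy "structure": m 3D points
X = rng.standard_normal((m, 3))
x0 = X.ravel()                          # state vector, n_x = 3m

def f(xflat):
    """A Euclidean-invariant toy cost: sum of squared pairwise distances."""
    P = xflat.reshape(m, 3)
    d = P[:, None, :] - P[None, :, :]
    return 0.5 * (d ** 2).sum()

# Columns of G: state velocities of infinitesimal translations (3) and
# rotations (3), applied to every point simultaneously.
cols = [np.tile(np.eye(3)[k], m) for k in range(3)]           # translations e_k
cols += [np.cross(np.tile(np.eye(3)[k], (m, 1)), X).ravel()   # rotations w x X
         for k in range(3)]
G = np.column_stack(cols)               # n_x x 6 orbit-direction matrix

# Central-difference gradient of f at x0.
eps = 1e-6
g = np.array([(f(x0 + eps * e) - f(x0 - eps * e)) / (2 * eps)
              for e in np.eye(x0.size)])
```

Adding the seventh (scale) column X itself would give the full similarity gauge group, but then f would have to be scale invariant too (e.g. built from distance ratios).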
Linearized gauge: Let the local linearizations of the gauge functions be:
$$d(x + \delta x) \;\approx\; d(x) + D\,\delta x\,, \qquad D \equiv \frac{dd}{dx} \qquad (28)$$
$$x(y + \delta y) \;\approx\; x(y) + Y\,\delta y\,, \qquad Y \equiv \frac{dx}{dy} \qquad (29)$$
Compatibility between the two gauge specification methods requires d(x(y)) = 0 for all y, and hence $D\,Y = 0$. Also, since G must be transversal to the gauge orbits, $D\,G$ must have full rank $n_g$ and $(Y\ G)$ must have full rank $n_x$. Assuming that x itself is on G, a perturbation $x + \delta x_G$ is on G to first order iff $D\,\delta x_G = 0$, i.e. $\delta x_G = Y\,\delta y$ for some $\delta y$.
Two $n_x \times n_x$, rank $n_x - n_g$ matrices characterize G. The gauge projection matrix $P_G$ implements linearized projection of state displacement vectors $\delta x$ along their gauge orbits onto the local gauge cross-section: $\delta x \to \delta x_G = P_G\,\delta x$. (The projection is usually non-orthogonal: $P_G^\top \ne P_G$.) The gauged covariance matrix $V_G$ plays the role of the inverse Hessian. It gives the cost-minimizing Newton step within G, $\delta x_G = -V_G\, g$, and also the asymptotic covariance of $\delta x_G$. $P_G$ and $V_G$ have many special properties and equivalent forms. For convenience, we display some of these now15; let $V \equiv (H + D^\top B\, D)^{-1}$, where B is any nonsingular symmetric $n_g \times n_g$ matrix, and let G' be any other gauge:
$$P_G \;\equiv\; 1 - G\,(D\,G)^{-1} D \;=\; Y\,(Y^\top H\, Y)^{-1}\, Y^\top H \;=\; V\, H \;=\; V_G\, H \;=\; P_G\, P_{G'} \qquad (30)$$
$$P_G\, G = 0\,, \qquad P_G\, Y = Y \qquad (31)$$
$$D\, P_G = 0\,, \qquad D\, V_G = 0 \qquad (32)$$
$$g^\top P_G = g^\top, \qquad H\, P_G = H \qquad (33)$$
$$V_G\, g = V\, g \qquad (34)$$
These relations can be summarized by saying that $V_G$ is the G-supported generalized inverse of H and that $P_G$: (i) projects along gauge orbits ($P_G\, G = 0$); (ii) projects onto the gauge cross-section G ($D\, P_G = 0$, $P_G\, Y = Y$, $P_G\,\delta x = \delta x_G$ and $V_G = P_G\, V_G\, P_G^\top$); and (iii) preserves gauge invariants (e.g. $f(x + P_G\,\delta x) = f(x + \delta x)$, $g^\top P_G = g^\top$ and $H\, P_G = H$). Both $V_G$ and H have rank $n_x - n_g$. Their null spaces $D^\top$ and G are transversal but otherwise unrelated. $P_G$ has left null space D and right null space G.
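Several of these identities are cheap to check numerically. The sketch below (ours, with illustrative names) builds a gauge-deficient Hessian H with exact null space G, a random gauge D with D G invertible, a cross-section basis Y with D Y = 0, and then verifies (30)-(33):

```python
import numpy as np

rng = np.random.default_rng(4)
nx, ng = 8, 2

G = rng.standard_normal((nx, ng))             # gauge orbit directions

# Gauge-deficient "Hessian": symmetric PSD with exact null space G.
Q, _ = np.linalg.qr(np.column_stack([G, rng.standard_normal((nx, nx - ng))]))
Yo = Q[:, ng:]                                # orthogonal complement of G
M = rng.standard_normal((nx - ng, nx - ng))
H = Yo @ (M @ M.T + np.eye(nx - ng)) @ Yo.T   # H G = 0

# A gauge: D G must be invertible (transversality).
D = G.T + 0.1 * rng.standard_normal((ng, nx))
_, _, Vt = np.linalg.svd(D)
Y = Vt[ng:].T                                 # cross-section basis, D Y = 0

PG = np.eye(nx) - G @ np.linalg.inv(D @ G) @ D   # gauge projector
VG = Y @ np.linalg.inv(Y.T @ H @ Y) @ Y.T        # gauged covariance
```

The assertions below confirm, among other things, that the two seemingly different expressions for $P_G$ in (30) coincide and that $V_G$ acts as a generalized inverse of H supported on the cross-section.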
State updates: It is straightforward to add gauge fixing to the bundle update equations. First consider the constrained form. Enforcing the gauge constraints $d(x + \delta x_G) = 0$ with Lagrange multipliers gives an SQP step:
$$\begin{pmatrix} H & D^\top \\ D & 0 \end{pmatrix} \begin{pmatrix} \delta x_G \\ \lambda \end{pmatrix} = -\begin{pmatrix} g \\ d \end{pmatrix}, \qquad \begin{pmatrix} H & D^\top \\ D & 0 \end{pmatrix}^{-1} = \begin{pmatrix} V_G & G\,(D\,G)^{-1} \\ (D\,G)^{-\top}\, G^\top & 0 \end{pmatrix} \qquad (35)$$
so
$$\delta x_G = -\left( V_G\, g + G\,(D\,G)^{-1}\, d \right), \qquad \lambda = 0 \qquad (36)$$
This is a rather atypical constrained problem. For typical cost functions the gradient has a component pointing away from the constraint surface, so $g \ne 0$ at the constrained minimum
15
These results are most easily proved by inserting strategic factors of $(Y\ G)\,(Y\ G)^{-1}$, where $(Y\ G)^{-1} = \begin{pmatrix} (Y^\top H\, Y)^{-1}\, Y^\top H \\ (D\,G)^{-1}\, D \end{pmatrix}$, and using $H\,G = 0$, $D\,Y = 0$, e.g. $\begin{pmatrix} Y^\top \\ G^\top \end{pmatrix} (H + D^\top B\, D)\,(Y\ G) = \begin{pmatrix} Y^\top H\, Y & 0 \\ 0 & (D\,G)^\top B\,(D\,G) \end{pmatrix}$. If B is nonsingular, $V = (H + D^\top B\, D)^{-1} = Y\,(Y^\top H\, Y)^{-1}\, Y^\top + G\,(D\,G)^{-1}\, B^{-1}\, (D\,G)^{-\top}\, G^\top$.
and a non-vanishing force $\lambda \ne 0$ is required to hold the solution on the constraints. Here, the cost function and its derivatives are entirely indifferent to motions along the orbits. Nothing actively forces the state to move off the gauge, so the constraint force $\lambda$ vanishes everywhere, g vanishes at the optimum, and the constrained minimum value of f is identical to the unconstrained minimum. The only effect of the constraints is to correct any gradual drift away from G that happens to occur, via the d term in $\delta x_G$.
A simpler way to get the same effect is to add a gauge-invariance breaking term such as $\frac{1}{2}\, d(x)^\top B\, d(x)$ to the cost function, where B is some positive $n_g \times n_g$ weight matrix. Note that $\frac{1}{2}\, d(x)^\top B\, d(x)$ has a unique minimum of 0 on each orbit at the point d(x) = 0, i.e. for x on G. As f is constant along gauge orbits, optimization of $f(x) + \frac{1}{2}\, d(x)^\top B\, d(x)$ along each orbit enforces G and hence returns the orbit's f value, so global optimization will find the global constrained minimum of f. The cost function $f(x) + \frac{1}{2}\, d(x)^\top B\, d(x)$ is nonsingular with Newton step $\delta x_G = -V\,(g + D^\top B\, d)$, where $V = (H + D^\top B\, D)^{-1}$ is the new inverse Hessian. By (34, 30), this is identical to the SQP step (36), so the SQP and cost-modifying methods are equivalent. This strategy works only because no force is required to keep the state on-gauge: if this were not the case, the weight B would have to be infinite. Also, for dense D this form is not practically useful because $H + D^\top B\, D$ is dense and hence slow to factorize, although updating formulae can be used.
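This equivalence is easy to exhibit on a synthetic gauge-deficient system (our illustrative numpy sketch, not production code; the on-gauge case d = 0 is taken for simplicity):

```python
import numpy as np

rng = np.random.default_rng(5)
nx, ng = 8, 2

G = rng.standard_normal((nx, ng))             # gauge orbit directions
Q, _ = np.linalg.qr(np.column_stack([G, rng.standard_normal((nx, nx - ng))]))
Yo = Q[:, ng:]
M = rng.standard_normal((nx - ng, nx - ng))
H = Yo @ (M @ M.T + np.eye(nx - ng)) @ Yo.T   # gauge-deficient Hessian, H G = 0
g = H @ rng.standard_normal(nx)               # gauge-invariant gradient, G^T g = 0

D = G.T + 0.1 * rng.standard_normal((ng, nx)) # gauge constraints; d = 0 at x
B = np.eye(ng)                                # any nonsingular symmetric weight

# SQP step (35): bordered KKT system with d = 0.
KKT = np.block([[H, D.T], [D, np.zeros((ng, ng))]])
sol = np.linalg.solve(KKT, -np.concatenate([g, np.zeros(ng)]))
dx_sqp, lam = sol[:nx], sol[nx:]

# Cost-modifying form: Newton step of f + (1/2) d^T B d.
dx_pen = -np.linalg.solve(H + D.T @ B @ D, g)
```

The two steps coincide, the multiplier comes out zero (no force is needed to stay on-gauge), and the step satisfies the gauge constraints.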
Finally, consider the parametric form x = x(y) of G. Suppose that we already have a current reduced state estimate y. We can approximate $f(x(y + \delta y))$ to get a reduced system for $\delta y$, solve this, and find $\delta x_G$ afterwards if necessary:
$$(Y^\top H\, Y)\,\delta y = -Y^\top g\,, \qquad \delta x_G = Y\,\delta y = -V_G\, g \qquad (37)$$
gauge invariants, e.g. $f(x + P_G\,\delta x) = f(x + \delta x)$, and it cancels the effects of projection onto any other gauge: $P_G\, P_{G'} = P_G$.
9.3 Inner Constraints
Given the wide range of gauges and the significant impact that they have on the appearance of the state updates and covariance matrix, it is useful to ask which gauges give the smallest or best behaved updates and covariances. This is useful for interpreting and comparing results, and it also gives beneficial numerical properties. Basically it is a matter of deciding which features or cameras we care most about and tying the gauge to some stable average of them, so that gauge-induced correlations in them are as small as possible. For object reconstruction the resulting gauge will usually be object-centred, for vehicle navigation camera-centred. We stress that such choices are only a matter of superficial appearance: in principle, all gauges are equivalent and give identical values and covariances for all gauge invariants.
Another way to say this is that it is only for gauge invariants that we can find meaningful (coordinate system independent) values and covariances. But one of the most fruitful ways to create invariants is to locate features w.r.t. a basis of reference features, i.e. w.r.t. the gauge based on them. The choice of inner constraints is thus a choice of a stable basis of compound features w.r.t. which invariants can be measured. By including an average of many features in the compound, we reduce the invariants' dependence on the basis features.
As a performance criterion we can minimize some sort of weighted average size, either of the state update or of the covariance. Let W be an $n_x \times n_x$ information-like weight matrix encoding the relative importance of the various error components, and L be any left square root for it, $L\, L^\top = W$. The local gauge at x that minimizes the weighted size of the state update $\delta x_G^\top\, W\, \delta x_G$, the weighted covariance sum $\mathrm{Trace}(W\, V_G) = \mathrm{Trace}(L^\top V_G\, L)$, and the L2 or Frobenius norm of $L^\top V_G\, L$, is given by the inner constraints [87, 89, 6, 22, 25]16:
$$D\,\delta x = 0 \qquad \text{where} \qquad D \equiv G^\top W \qquad (38)$$
The corresponding covariance $V_G$ is given by (30) with $D = G^\top W$, and the state update is $\delta x_G = -V_G\, g$ as usual. Also, if W is nonsingular, $V_G$ is given by the weighted rank $n_x - n_g$
The inner constraints are covariant under global transformations $x \to t(x)$ provided that W is transformed in the usual information matrix / Hessian way $W \to T^{-\top}\, W\, T^{-1}$, where $T = \frac{dt}{dx}$17. However, such transformations seldom preserve the form of W (diagonality, W = 1, etc.). If W represents an isotropic weighted sum over 3D points18, its form is preserved under global 3D Euclidean transformations, and rescaled under scalings. But this extends neither to points under projective transformations, nor to camera poses, 3D planes and other non-point-like features even under Euclidean ones. (The choice of origin has a significant influence for poses, planes, etc.: changes of origin propagate rotational uncertainties into translational ones.)
Inner constraints were originally introduced in geodesy in the case W = 1 [87]. The meaning of this is entirely dependent on the chosen 3D coordinates and variable scaling. In bundle adjustment there is little to recommend W = 1 unless the coordinate origin has been carefully chosen and the variables carefully pre-scaled as above, i.e. $x \to L^\top x$ and hence $H \to L^{-1}\, H\, L^{-\top}$, where $W \equiv L\, L^\top$ is a fixed weight matrix that takes account of the fact that the covariances of features, camera translations and rotations, focal lengths, aspect ratios and lens distortions all have entirely different units, scales and relative importances. For W = 1, the gauge projection $P_G$ becomes orthogonal and symmetric.
9.4 Free Networks
Gauges can be divided roughly into outer gauges, which are locked to predefined external reference features, giving a fixed network adjustment, and inner gauges, which are locked only to the recovered structure, giving a free network adjustment. (If their weight W is concentrated on the external reference, the inner constraints give an outer gauge). As above, well-chosen inner gauges do not distort the intrinsic covariance structure so much as most outer ones, so they tend to have better numerical conditioning and give a more representative idea of the true accuracy of the network. It is also useful to make another, slightly different fixed / free distinction. In order to control the gauge deficiency, any gauge fixing method must at least specify which motions are locally possible at each iteration. However, it is not indispensable for these local decisions to cohere to enforce a global gauge. A method is globally fixed if it does enforce a global gauge (whether inner or outer), and globally free if not. For example, the standard photogrammetric inner constraints [87, 89, 22, 25] give a globally free inner gauge. They require that the cloud of reconstructed points should not be translated, rotated or rescaled under perturbations (i.e. the centroid and average directions and distances from the centroid remain unchanged). However, they do not specify where the cloud actually is and how it is oriented and scaled, and they do not attempt to correct for any gradual drift in the position that may occur during the optimization iterations, e.g. owing to accumulation of truncation errors. In contrast, McLauchlan globally fixes the inner gauge by locking it to the reconstructed centroid and scatter matrix [82, 81]. This seems to give good numerical properties (although more testing is required to determine whether there is much improvement over a globally free
inner gauge), and it has the advantage of actually fixing the coordinate system so that direct comparisons of solutions, covariances, etc., are possible. Numerically, a globally fixed gauge can be implemented either by including the d term in (36), or simply by applying a rectifying gauge transformation to the estimate, at each step or when it drifts too far from the chosen gauge.
9.5 Implementation of Gauge Fixing
Given that all gauges are in principle equivalent, it does not seem worthwhile to pay a high computational cost for gauge fixing during step prediction, so methods requiring large dense factorizations or (pseudo-)inverses should not be used directly. Instead, the main computation can be done in any convenient, low cost gauge, and the results later transformed into the desired gauge using the gauge projector19 $P_G = 1 - G\,(D\,G)^{-1} D$. It is probably easiest to use a trivial gauge for the computation. This is simply a matter of deleting the rows and columns of g, H corresponding to $n_g$ preselected parameters, which should be chosen to give a reasonably well-conditioned gauge. The choice can be made automatically by a subset selection method (c.f., e.g. [11]). H is left intact and factored as usual, except that the final dense (owing to fill-in) submatrix is factored using a stable pivoted method, and the factorization is stopped $n_g$ columns before completion. The remaining $n_g \times n_g$ block (and the corresponding block of the forward-substituted gradient g) should be zero owing to gauge deficiency. The corresponding rows of the state update are set to zero (or anything else that is wanted) and back-substitution gives the remaining update components as usual. This method effectively finds the $n_g$ parameters that are least well constrained by the data, and chooses the gauge constraints that freeze these by setting the corresponding $\delta x_G$ components to zero.
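A toy numpy version of this pipeline (our illustration; it uses explicit dense solves rather than the truncated pivoted factorization described above, and freezes an arbitrary transversal set of parameters instead of the least-constrained ones):

```python
import numpy as np

rng = np.random.default_rng(6)
nx, ng = 8, 2

G = rng.standard_normal((nx, ng))             # gauge orbit directions
Q, _ = np.linalg.qr(np.column_stack([G, rng.standard_normal((nx, nx - ng))]))
Yo = Q[:, ng:]
M = rng.standard_normal((nx - ng, nx - ng))
H = Yo @ (M @ M.T + np.eye(nx - ng)) @ Yo.T   # gauge-deficient Hessian, H G = 0
g = H @ rng.standard_normal(nx)               # gauge-invariant gradient, G^T g = 0

# Trivial gauge: freeze the last ng parameters (delete their rows/columns),
# solve the trimmed system, and set the frozen components to zero.
keep = np.arange(nx - ng)
dx_triv = np.zeros(nx)
dx_triv[keep] = np.linalg.solve(H[np.ix_(keep, keep)], -g[keep])

# Transform the step into the inner gauge (D = G^T with W = 1, so the
# projector P_G is orthogonal here).
PG = np.eye(nx) - G @ np.linalg.inv(G.T @ G) @ G.T
dx_inner = PG @ dx_triv
```

Both steps solve the same gauge-deficient normal equations; the projected one additionally has no component along the gauge orbits, and (being an orthogonal projection) is never longer than the trivial-gauge step.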
10 Quality Control
This section discusses quality control methods for bundle adjustment, giving diagnostic
tests that can be used to detect outliers and characterize the overall accuracy and reliability
of the parameter estimates. These techniques are not well known in vision so we will go
into some detail. Skip the technical details if you are not interested in them.
Quality control is a serious issue in measurement science, and it is perhaps here that
the philosophical differences between photogrammetrists and vision workers are greatest:
the photogrammetrist insists on good equipment, careful project planning, exploitation
of prior knowledge and thorough error analyses, while the vision researcher advocates a
more casual, flexible point-and-shoot approach with minimal prior assumptions. Many applications demand a judicious compromise between these virtues.
A basic maxim is quality = accuracy + reliability20 . The absolute accuracy of the
system depends on the imaging geometry, number of measurements, etc. But theoretical
19
The projector $P_G$ itself is never calculated. Instead, it is applied in pieces, multiplying by D, etc. The gauged Newton step $\delta x_G$ is easily found like this, and selected blocks of the covariance $V_G = P_G\, V_{G'}\, P_G^\top$ can also be found in this way, expanding $P_G$ and using (53) for the leading term, and for the remaining ones finding $L^{-1} D^\top$, etc., by forwards substitution.
20
Accuracy is sometimes called precision in photogrammetry, but we have preferred to retain the familiar meanings from numerical analysis: precision means numerical error / number of working digits and accuracy means statistical error / number of significant digits.
precision by itself is not enough: the system must also be reliable in the face of outliers, small modelling errors, and so forth. The key to reliability is the intelligent use of
redundancy: the results should represent an internally self-consistent consensus among
many independent observations, no aspect of them should rely excessively on just a few
observations.
The photogrammetric literature on quality control deserves to be better known in vision,
especially among researchers working on statistical issues. Forstner [33, 34] and Grun
[49, 50] give introductions with some sobering examples of the effects of poor design.
See also [7, 8, 21, 22]. All of these papers use least squares cost functions and scalar
measurements. Our treatment generalizes this to allow robust cost functions and vector
measurements, and is also slightly more self-consistent than the traditional approach. The
techniques considered are useful for data analysis and reporting, and also to check whether
design requirements are realistically attainable during project planning. Several properties should be verified. Internal reliability is the ability to detect and remove large aberrant observations using internal self-consistency checks. This is provided by traditional outlier detection and/or robust estimation procedures. External reliability is the extent to which any remaining undetected outliers can affect the estimates. Sensitivity analysis gives useful criteria for the quality of a design. Finally, model selection tests attempt to decide which of several possible models is most appropriate and whether certain parameters can be eliminated.
10.1 Cost Perturbations
$$f(x_0) \;\approx\; f_+(x_+) - f_-(x_-) \,+\, O\!\left(\|g\| \,/\, (n_z\, n_x)\right) \qquad (39)$$
Sketch proof: From the Newton steps $\delta x_\pm \equiv x_\pm - x_0 \approx -H_\pm^{-1}\, g_\pm(x_0)$ at $x_0$, we find that $f_\pm(x_\pm) \approx f_\pm(x_0) - \frac{1}{2}\,\delta x_\pm^\top H_\pm\,\delta x_\pm$, and hence $f_+(x_+) - f_-(x_-) \approx f(x_0) - \frac{1}{2}\,\delta x_+^\top H_+\,\delta x_+ + \frac{1}{2}\,\delta x_-^\top H_-\,\delta x_-$. The estimate is unbiased to relatively high order: by the central limit property of ML estimators, the asymptotic distributions of $\delta x_\pm$ are Gaussian $\mathcal{N}(0, H_\pm^{-1})$, so the expectation of both $\delta x_\pm^\top H_\pm\,\delta x_\pm$ is asymptotically the number of free model parameters $n_x$.
Note that by combining values at two known evaluation points $x_\pm$, we simulate a value at a third unknown one, $x_0$. The estimate is not perfect, but it is the best that we can do in the circumstances.
There are usually many observations to test, so to avoid having to refit the model many times we approximate the effects of adding or removing observations. Working at $x_-$ and using the fact that $g_-(x_-) = 0$, the Newton step $\delta x \equiv x_+ - x_- \approx -H_+^{-1}\, g(x_-)$ implies a change in fitted residual of:
$$f_+(x_+) - f_-(x_-) \;\approx\; f(x_-) - \tfrac{1}{2}\,\delta x^\top H_+\,\delta x \;\approx\; f(x_-) - \tfrac{1}{2}\, g(x_-)^\top (H_+)^{-1}\, g(x_-) \qquad (40)$$
where the unsubscripted f, g, H denote the cost, gradient and Hessian contributions of the observation itself, and22 $(H_+)^{-1} = H_-^{-1} - H_-^{-1}\, H\, (H_+)^{-1} \approx H_-^{-1} - H_-^{-1}\, H\, H_-^{-1}$. Even without the approximation, this involves at most a $k \times k$ factorization or inverse. Indeed, for least squares H is usually of even lower rank (= the number of independent observations in f), so the Woodbury formula (18) can be used to calculate the inverse even more efficiently.
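For concreteness, here is the Woodbury rank-k update of the inverse used above, in a small numpy sketch (ours; names illustrative) for adding an observation with Jacobian J and weight W:

```python
import numpy as np

rng = np.random.default_rng(7)
nx, k = 10, 2

M = rng.standard_normal((nx, nx))
H = M @ M.T + nx * np.eye(nx)       # current (gauge-fixed, nonsingular) Hessian
Hinv = np.linalg.inv(H)

J = rng.standard_normal((k, nx))    # Jacobian of the observation being added
W = np.eye(k)                       # its weight (inverse covariance)

# Woodbury-style rank-k update of H^-1: only a k x k matrix is inverted,
# O(nx^2 k) work instead of the O(nx^3) of re-inverting H + J^T W J.
Hplus_inv = Hinv - Hinv @ J.T @ np.linalg.inv(
    np.linalg.inv(W) + J @ Hinv @ J.T) @ J @ Hinv
```

Deletion of an observation is the same formula with the sign of W flipped.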
10.2 Inner Reliability and Outlier Detection
In robust cost models nothing special needs to be done with outliers: they are just normal measurements that happen to be downweighted owing to their large deviations. But in non-robust models such as least squares, explicit outlier detection and removal are essential for inner reliability. An effective diagnostic is to estimate $f(x_0)$ using (39, 40), and significance-test it against its distribution under the null hypothesis that the observation is an inlier. For the least squares cost model, the null distribution of $2\, f(x_0)$ is $\chi^2_k$, where k is the number of independent observations contributing to f. So if $\alpha$ is a suitable $\chi^2_k$ significance threshold, the typical one-sided significance test is:
(41)
(42)
As usual we approximate $H_+^{-1} \approx H_-^{-1}$, and use $x_-$ results for additions and $x_+$ ones for deletions. These tests require the fitted covariance matrix $H^{-1}$ (or, if relatively few tests will be run, an equivalent factorization of H), but given this they are usually fairly economical owing to the sparseness of the observation gradients $g(x)$. Equation (42) is for the nonlinear least squares model with residual error $\triangle z_i(x) \equiv \bar{z}_i - z_i(x)$, cost $\frac{1}{2}\,\triangle z_i(x)^\top W_i\,\triangle z_i(x)$ and Jacobian $J_i = \frac{d z_i}{dx}$. Note that even though $\delta x$ induces a change in all components of the observation residual $\triangle z$ via its influence on x, only the immediately involved components $\triangle z_i$ are required in (42). The bias-correction-induced change of weight matrix $W_i \to W_i - W_i\, J_i\, H^{-1}\, J_i^\top\, W_i$ accounts for the others. For non-quadratic cost functions, the above framework still applies, but the cost function's native distribution of negative log likelihood values must be used instead of the Gaussian's $\frac{1}{2}\chi^2$.
In principle, the above analysis is only valid when at most one outlier causes a relatively small perturbation $\delta x$. In practice, the observations are repeatedly scanned for outliers, at each stage removing any discovered outliers (and perhaps reinstating previously discarded observations that have become inliers) and refitting. The net result is a form of M-estimator routine with an abruptly vanishing weight function: outlier deletion is just a roundabout way of simulating a robust cost function. (Hard inlier/outlier rules correspond to total likelihood functions that become strictly constant in the outlier region).
The tests (41, 42) give what is needed for outlier decisions based on fitted state estimates $x_\pm$, but for planning purposes it is also useful to know how large a gross error must typically be w.r.t. the true state $x_0$ before it is detected. Outlier detection is based on the uncertain fitted state estimates, so we can only give an average case result. No adjustment for $\delta x$ is needed in this case, so the average minimum detectable gross error is simply:
(43)
10.3 Outer Reliability
Ideally, the state estimate should be as insensitive as possible to any remaining errors in
the observations. To estimate how much a particular observation influences the final state
estimate, we can directly monitor the displacement δx ≡ x⁺ − x⁻ ≈ H⁻¹ g_i(x̂).
For example, we might define an importance weighting on the state parameters with a
criterion matrix U and monitor absolute displacements ‖U δx‖ ≈ ‖U H⁻¹ g_i(x̂)‖, or
compare the displacement δx to the covariance H⁻¹ of x̂ by monitoring δx⊤ H δx ≈
g_i(x̂)⊤ H⁻¹ g_i(x̂). A bound on g_i(x̂) of the form²³ g g⊤ ≼ V for some positive
semidefinite V implies a bound δx δx⊤ ≼ H⁻¹ V H⁻¹ on δx, and hence a bound ‖U δx‖² ≤
N(U H⁻¹ V H⁻¹ U⊤), where N(·) can be the L2 norm, trace or Frobenius norm. For a robust
²³ This is a convenient intermediate form for deriving bounds. For positive semidefinite matrices A, B, we say that B dominates A, B ≽ A, if B − A is positive semidefinite. It follows
that N(U A U⊤) ≤ N(U B U⊤) for any matrix U and any matrix function N(·) that is non-decreasing under positive additions. Rotationally invariant non-decreasing functions
N(·) include all non-decreasing functions of the eigenvalues, e.g. the L2 norm max_i λ_i, the trace Σ_i λ_i, and the Frobenius
norm √(Σ_i λ_i²). For a vector a and positive definite B, a⊤ B a ≤ k if and only if a a⊤ ≼ k B⁻¹. (Proof: conjugate by B^{1/2} and then by a (B^{1/2} a)-reducing Householder rotation to reduce the question to the
equivalence of 0 ≼ Diag(k − u², k, …, k) and u² ≤ k, where u² ≡ ‖B^{1/2} a‖².) Bounds of
the form ‖U a‖² ≤ k N(U B⁻¹ U⊤) follow for any U and any N(·) for which N(v v⊤) = ‖v‖²,
e.g. L2 norm, trace, Frobenius norm.
cost model in which ‖g_i‖ is bounded, this directly bounds the influence of the observation. For the least squares model, the expected influence is
E[‖U δx‖²] = Trace(J_i H⁻¹ U⊤ U H⁻¹ J_i⊤ W_i), where W_i⁻¹ is the nominal covariance of z_i. Note that these bounds
are based on changes in the estimated state x̂. They do not directly control perturbations
w.r.t. the true one x₀. The combined influence of several (k ≤ n_z − n_x) observations is
given by summing their g's.
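The influence monitoring described above can be sketched on synthetic data as follows; the criterion matrix U, the weights and all values below are illustrative stand-ins, not quantities from the paper.

```python
# Sketch of outer-reliability monitoring: the displacement caused by each
# observation is dx_i ~ H^-1 g_i, and we weight it with a criterion matrix U
# selecting the parameters of interest.
import numpy as np

rng = np.random.default_rng(1)
n_obs, n_par = 20, 4
J = rng.normal(size=(n_obs, n_par))
W = np.ones(n_obs)                          # unit weights for simplicity
z = J @ rng.normal(size=n_par) + 0.05 * rng.normal(size=n_obs)

H = J.T @ (W[:, None] * J)
x_hat = np.linalg.solve(H, J.T @ (W * z))
dz = z - J @ x_hat
g = J * (W * dz)[:, None]                   # rows g_i = J_i' W_i dz_i
dx = np.linalg.solve(H, g.T).T              # rows dx_i = H^-1 g_i

U = np.eye(n_par)[:2]                       # criterion matrix: first two parameters only
influence = np.linalg.norm(dx @ U.T, axis=1)   # |U dx_i| per observation
worst = int(np.argmax(influence))           # most influential observation
```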
10.4 Sensitivity Analysis
This section gives some simple figures of merit that can be used to quantify network
redundancy and hence reliability. First, in δf_i(x₀) ≈ f_i(x⁺) + ½ g_i(x⁺)⊤ H⁻¹ g_i(x⁺),
each cost contribution δf_i(x₀) is split into two parts: the visible residual f_i(x⁺) at the fitted
state x⁺; and ½ δx⊤ H δx, the change in the base cost f₋(x) due to the state perturbation
δx = H⁻¹ g_i(x⁺) induced by the observation. Ideally, we would like the state perturbation
to be small (for stability) and the residual to be large (for outlier detectability). In other
words, we would like the following masking factor to be small (m_i ≪ 1) for each
observation:

    m_i ≡ g_i(x⁺)⊤ H⁻¹ g_i(x⁺) / (2 f_i(x⁺) + g_i(x⁺)⊤ H⁻¹ g_i(x⁺))    (46)
(Here, f_i should be normalized to have minimum value 0 for an exact fit.) If m_i is known,
the outlier test becomes 2 f_i(x⁺)/(1 − m_i) ≥ α. The masking m_i depends on the relative
size of g_i and f_i, which in general depends on the functional form of f_i and the specific
deviation involved. For robust cost models, a bound on g_i may be enough to bound m_i
for outliers. However, for the least squares case (Δz form), and more generally for quadratic
cost models (such as robust models near the origin), m_i depends only on the direction of
Δz_i, not on its size, and we have a global L2 matrix norm based bound:

    m_i ≤ ν/(1 + ν),  where  ν ≡ ‖L⊤ J_i H⁻¹ J_i⊤ L‖₂ ≤ Trace(J_i H⁻¹ J_i⊤ W_i)    (47)

and L L⊤ = W_i is a Cholesky decomposition of W_i. (These bounds become equalities for
scalar observations.)
The stability of the state estimate is determined by the total cost Hessian (information
matrix) H. A large H implies a small state estimate covariance H⁻¹, and also small responses
δx ≈ H⁻¹ δg to cost perturbations δg. The sensitivity numbers s_i ≡ Trace(H₊⁻¹ H_i)
are a useful measure of the relative amount of information contributed to H₊ by each
observation. They sum to the model dimension, Σ_i s_i = n_x, because Σ_i H_i = H₊,
so they count "how many parameters' worth" of the total information the observation
contributes. Some authors prefer to quote redundancy numbers r_i ≡ n_i − s_i, where
n_i is the effective number of independent observations contained in z_i. The redundancy
numbers sum to n_z − n_x, the total redundancy of the system. In the least squares case,
s_i = Trace(J_i H₊⁻¹ J_i⊤ W_i), and m_i = s_i for scalar observations, so the scalar outlier test
becomes 2 f_i(x⁺)/r_i ≥ α. Sensitivity numbers can also be defined for subgroups of the
parameters in the form Trace(U H₊⁻¹ H_i), where U is an orthogonal projection matrix that
selects the parameters of interest. Ideally, the sensitivities of each subgroup should be
spread evenly across the observations: a large s_i indicates a heavily weighted observation,
whose incorrectness might significantly compromise the estimate.
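A quick numerical check of the sensitivity and redundancy numbers on a synthetic weighted least squares problem (data and weights below are illustrative; the sums Σ s_i = n_x and Σ r_i = n_z − n_x hold exactly):

```python
# Sensitivity numbers s_i = Trace(J_i H^-1 J_i' W_i) and redundancy numbers
# r_i = n_i - s_i for scalar observations (n_i = 1 each).
import numpy as np

rng = np.random.default_rng(2)
n_obs, n_par = 25, 5
J = rng.normal(size=(n_obs, n_par))         # one Jacobian row per scalar observation
W = rng.uniform(0.5, 2.0, size=n_obs)       # per-observation weights

H = J.T @ (W[:, None] * J)                  # total information matrix
Hinv = np.linalg.inv(H)
s = W * np.einsum('ij,jk,ik->i', J, Hinv, J)   # s_i = J_i H^-1 J_i' W_i
r = 1.0 - s                                 # redundancy numbers

print(s.sum())   # total information = model dimension n_par
print(r.sum())   # total redundancy = n_obs - n_par
```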
10.5 Model Selection
It is often necessary to choose between several alternative models of the cameras or scene,
e.g. additional parameters for lens distortion, camera calibrations that may or may not have
changed between images, coplanarity or non-coplanarity of certain features. Over-special
models give biased results, while over-general ones tend to be noisy and unstable. We
will consider only nested models, for which a more general model is specialized to a
more specific one by freezing some of its parameters at default values (e.g. zero skew or
lens distortion, equal calibrations, zero deviation from a plane). Let: x be the parameter
vector of the more general model; f(x) be its cost function; c(x) = 0 be the parameter
freezing constraints enforcing the specialization; k be the number of parameters frozen; x₀
be the true underlying state; x_g be the optimal state estimate for the general model (i.e. the
unconstrained minimum of f(x)); and x_s be the optimal state estimate for the specialized
one (i.e. the minimum of f(x) subject to the constraints c(x) = 0). Then, under the
null hypothesis that the specialized model is correct, c(x₀) = 0, and in the asymptotic
limit in which x_g − x₀ and x_s − x₀ become Gaussian and the constraints become locally
approximately linear across the width of this Gaussian, the difference in fitted residuals
2 (f(x_s) − f(x_g)) has a χ²_k distribution²⁴. So if 2 (f(x_s) − f(x_g)) is less than some suitable
χ²_k decision threshold α, we can accept the hypothesis that the additional parameters take
their default values, and use the specialized model rather than the more general one²⁵.
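A minimal sketch of this nested-model test: a line (the specialized model, with the quadratic coefficient frozen at zero) is compared against a quadratic (the general model) on synthetic data actually generated from a line. The data, noise level and threshold below are illustrative assumptions.

```python
# Nested-model selection: accept the specialized model when twice the
# difference in fitted residuals is below a chi^2_k threshold (k = number
# of frozen parameters; here k = 1, the quadratic coefficient).
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(3)
t = np.linspace(0, 1, 40)
sigma = 0.05
z = 1.0 + 2.0 * t + sigma * rng.normal(size=t.size)   # true model: a line

def fit_cost(degree):
    """Least squares fit; returns f(x) = 0.5 * sum(residual^2) / sigma^2."""
    A = np.vander(t, degree + 1)
    x, *_ = np.linalg.lstsq(A, z, rcond=None)
    res = z - A @ x
    return 0.5 * np.sum(res**2) / sigma**2

f_s = fit_cost(1)          # specialized model: line (quadratic term frozen at 0)
f_g = fit_cost(2)          # general model: quadratic
stat = 2.0 * (f_s - f_g)   # ~ chi^2_1 under the null (specialized correct)
alpha = chi2.ppf(0.95, df=1)
use_specialized = stat < alpha
```

In line with footnote 25, a larger threshold than the 95% point used here would be chosen in practice, so that the general model only wins on strong evidence.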
As before, we can avoid fitting one of the models by using a linearized analysis. First
suppose that we start with a fit of the more general model x_g. Let the linearized constraints
at x_g be c(x_g + δx) ≈ c(x_g) + C δx, where C ≡ dc/dx. A straightforward Lagrange
multiplier calculation gives:

    x_s ≈ x_g − H⁻¹ C⊤ (C H⁻¹ C⊤)⁻¹ c(x_g),   2 (f(x_s) − f(x_g)) ≈ c(x_g)⊤ (C H⁻¹ C⊤)⁻¹ c(x_g)    (48)
Conversely, starting from a fit of the more specialized model, the unconstrained minimum is
given by the Newton step: x_g ≈ x_s − H⁻¹ g(x_s), and 2 (f(x_s) − f(x_g)) ≈ g(x_s)⊤ H⁻¹ g(x_s),
where g(x_s) is the residual cost gradient at x_s. This requires the general-model covariance
H⁻¹ (or an equivalent factorization of H), which may not have been worked out. Suppose
that the additional parameters were simply appended to the model, x → (x, y), where x is
now the reduced parameter vector of the specialized model and y contains the additional
parameters. Let the general-model cost gradient at (x_s, y_s) be (0; h), where h ≡ df/dy,
and let the general-model Hessian there be ((H, A); (A⊤, B)), where H is the specialized-model
Hessian. Then the Newton step gives:

    (x_g; y_g) ≈ (x_s; y_s) + (H⁻¹ A; −1) (B − A⊤ H⁻¹ A)⁻¹ h    (49)

so that 2 (f(x_s) − f(x_g)) ≈ h⊤ (B − A⊤ H⁻¹ A)⁻¹ h.

²⁴ This happens irrespective of the observation distributions because (unlike the case of adding
an observation) the same observations and cost function are used for both fits.
²⁵ In practice, small models are preferable as they have greater stability and predictive power and
less computational cost. So the threshold α is usually chosen to be comparatively large, to ensure
that the more general model will not be chosen unless there is strong evidence for it.
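Because the step (49) is exact for a quadratic cost, it can be checked numerically against a full Newton solve on a synthetic quadratic problem; all matrices below are random, purely illustrative stand-ins.

```python
# Check of the appended-parameter step (49): with full Hessian
# [[H, A], [A', B]] and gradient (0, h) at the specialized optimum, the
# general optimum is offset by (H^-1 A; -1) (B - A' H^-1 A)^-1 h.
import numpy as np

rng = np.random.default_rng(4)
nx, ny = 5, 2
M = rng.normal(size=(nx + ny, nx + ny))
Hfull = M @ M.T + (nx + ny) * np.eye(nx + ny)   # SPD full Hessian
H = Hfull[:nx, :nx]                              # specialized-model Hessian
A = Hfull[:nx, nx:]                              # cross terms d2f/dxdy
B = Hfull[nx:, nx:]                              # d2f/dy2
h = rng.normal(size=ny)                          # residual gradient df/dy at (xs, ys)

# Full Newton step from (xs, ys): solve Hfull d = -(0, h)
d_full = np.linalg.solve(Hfull, -np.concatenate([np.zeros(nx), h]))

# Schur-complement form (49)
S = B - A.T @ np.linalg.solve(H, A)              # reduced Hessian on y
dy = -np.linalg.solve(S, h)
dx = -np.linalg.solve(H, A @ dy)
d_schur = np.concatenate([dx, dy])

assert np.allclose(d_full, d_schur)
```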
11 Network Design
Network design is the problem of planning camera placements and numbers of images
before a measurement project, to ensure that sufficiently accurate and reliable estimates
of everything that needs to be measured are found. We will not say much about design,
merely outlining the basic considerations and giving a few useful rules of thumb. See [5,
chapter 6], [79, 78], [73, Vol.2 §4] for more information.
Factors to be considered in network design include: scene coverage, occlusion / visibility and feature viewing angle; field of view, depth of field, resolution and workspace
constraints; and geometric strength, accuracy and redundancy. The basic quantitative aids
to design are covariance estimation in a suitably chosen gauge (see §9) and the quality
control tests from §10. Expert systems have been developed [79], but in practice most
designs are still based on personal experience and rules of thumb.
In general, geometric stability is best for convergent (close-in, wide baseline, high
perspective) geometries, using wide angle lenses to cover as much of the object as possible,
and large film or CCD formats to maximize measurement precision. The wide coverage
maximizes the overlap between different sub-networks and hence overall network rigidity,
while the wide baselines maximize the sub-network stabilities. The practical limitations
on closeness are workspace, field of view, depth of field, resolution and feature viewing
angle constraints.
Maximizing the overlap between sub-networks is very important. For objects with
several faces such as buildings, images should be taken from corner positions to tie the
face sub-networks together. For large projects, large scale overview images can be used to
tie together close-in densifying ones. When covering individual faces or surfaces, overlap
and hence stability are improved by taking images with a range of viewing angles rather than
strictly fronto-parallel ones (e.g., for the same number of images, pan-move-pan-move or
interleaved left-looking and right-looking images are stabler than a simple fronto-parallel
track). Similarly, for buildings or turntable sequences, using a mixture of low and high
viewpoints helps stability.
For reliability, one usually plans to see each feature point in at least four images.
Although two images in principle suffice for reconstruction, they offer little redundancy
and no resistance against feature extraction failures. Even with three images, the internal
reliability is still poor: isolated outliers can usually be detected, but it may be difficult to
say which of the three images they occurred in. Moreover, 3–4 image geometries with
widely spaced (i.e. non-aligned) centres usually give much more isotropic feature error
distributions than two image ones.
If the bundle adjustment will include self-calibration, it is important to include a range
of viewing angles. For example, for a flat, compact object, views might be taken at regularly
spaced points along a 30–45° half-angle cone centred on the object, with 90° optical axis
rotations between views.
12 Summary
This survey was written in the hope of making photogrammetric know-how about bundle
adjustment (the simultaneous optimization of structure and camera parameters in visual
reconstruction) more accessible to potential implementors in the computer vision community. Perhaps the main lessons are the extraordinary versatility of adjustment methods,
the critical importance of exploiting the problem structure, and the continued dominance
of second order (Newton) algorithms, in spite of all efforts to make the simpler first order
methods converge more rapidly.
We will finish by giving a series of recommendations for methods. At present, these
must be regarded as very provisional, and subject to revision after further testing.
Parametrization: (§2.2, 4.5) During step prediction, avoid parameter singularities, infinities, strong nonlinearities and ill-conditioning. Use well-conditioned local (current value
+ offset) parametrizations of nonlinear elements when necessary to achieve this: the local
step prediction parametrization can be different from the global state representation one.
The ideal is to make the parameter space error function as isotropic and as near-quadratic
as possible. Residual rotation or quaternion parametrizations are advisable for rotations,
and projective homogeneous parametrizations for distant points, lines and planes (i.e. 3D
features near the singularity of their affine parametrizations, affine infinity).
Cost function: (§3) The cost should be a realistic approximation to the negative log
likelihood of the total (inlier + outlier) error distribution. The exact functional form of the
distribution is not too critical, however: (i) Undue weight should not be given to outliers
by making the tails of the distribution (the predicted probability of outliers) unrealistically
small. (NB: Compared to most real-world measurement distributions, the tails of a Gaussian
are unrealistically small.) (ii) The dispersion matrix or inlier covariance should be a realistic
estimate of the actual inlier measurement dispersion, so that the transition between inliers
and outliers is in about the right place, and the inlier errors are correctly weighted during
fitting.
Optimization method: (§4, 6, 7) For batch problems use a second order Gauss-Newton
method with sparse factorization (see below) of the Hessian, unless:
– The problem is so large that exact sparse factorization is impractical. In this case consider
either iterative linear system solvers such as Conjugate Gradient for the Newton step,
or related nonlinear iterations such as Conjugate Gradient, or preferably Limited Memory Quasi-Newton or (if memory permits) full Quasi-Newton (§7, [29, 93, 42]). (None
of these methods require the Hessian.) If you are in this case, it would pay to investigate professional large-scale optimization codes such as MINPACK-2, LANCELOT, or
commercial methods from NAG or IMSL (see §C.2).
– If the problem is medium or large but dense (which is unusual), and if it has strong
geometry, alternation of resection and intersection may be preferable to a second order
method. However, in this case Successive Over-Relaxation (SOR) would be even better,
and Conjugate Gradient is likely to be better yet.
– In all of the above cases, good preconditioning is critical (§7.3).
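As a small illustration of diagonal (Jacobi) preconditioning for Conjugate Gradient, the sketch below uses SciPy's solver on a synthetic badly scaled SPD system; the matrix is a random stand-in, not a real bundle Hessian.

```python
# Jacobi-preconditioned Conjugate Gradient for an SPD system, as one might
# use for approximate Newton steps when exact sparse factorization is
# impractical.
import numpy as np
from scipy.sparse import diags, random as sparse_random
from scipy.sparse.linalg import cg, LinearOperator

rng = np.random.default_rng(5)
n = 200
R = sparse_random(n, n, density=0.02, random_state=5)
A = R @ R.T + diags(np.logspace(0, 4, n))   # SPD but badly scaled diagonal
b = rng.normal(size=n)

d = A.diagonal()                            # Jacobi preconditioner: M ~ diag(A)^-1
M = LinearOperator((n, n), matvec=lambda v: v / d)

x, info = cg(A, b, M=M, maxiter=2000)       # info == 0 signals convergence
```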
For on-line problems (rather than batch ones), use factorization updating rather than
matrix inverse updating or re-factorization (§B.5). In time-series problems, investigate the
effect of changing the time window (§8.2, [83, 84]), and remember that Kalman filtering
is only the first half-iteration of a full nonlinear method.
Factorization method: (§6.2, B.1) For speed, preserve the symmetry of the Hessian during factorization by using: Cholesky decomposition for positive definite Hessians (e.g.
unconstrained problems in a trivial gauge); pivoted Cholesky decomposition for positive
semi-definite Hessians (e.g. unconstrained problems with gauge fixing by subset selection, §9.5); and Bunch-Kaufman decomposition (§B.1) for indefinite Hessians (e.g. the
augmented Hessians of constrained problems, §4.4). Gaussian elimination is stable but a
factor of two slower than these.
Variable ordering: (§6.3) The variables can usually be ordered by hand for regular networks, but for more irregular ones (e.g. close range site-modelling) some experimentation
may be needed to find the most efficient overall ordering method. If reasonably compact
profiles can be found, profile representations (§6.3, B.3) are simpler to implement and
faster than general sparse ones (§6.3).
– For dense networks use a profile representation and a natural variable ordering: either
features then cameras, or cameras then features, with whichever has the fewest parameters last. An explicit reduced system based implementation such as Brown's method
[19] can also be used in this case (§6.1, A).
– If the problem has some sort of 1D temporal or spatial structure (e.g. image streams,
turntable problems), try a profile representation with natural (simple connectivity) or
Snay's banker's (more complex connectivity) orderings (§6.3, [101, 24]). A recursive
on-line updating method might also be useful in this case.
– If the problem has 2D structure (e.g. cartography and other surface coverage problems)
try nested dissection, with hand ordering for regular problems (cartographic blocks),
and a multilevel scheme for more complex ones (§6.3). A profile representation may or
may not be suitable.
– For less regular sparse networks, the choice is not clear. Try minimum degree ordering
with a general sparse representation, Snay's banker's with a profile representation, or
multilevel nested dissection.
– For all of the automatic variable ordering methods, try to order any especially highly
connected variables last by hand, before invoking the method.
Gauge fixing: (§9) For efficiency, use either a trivial gauge or a subset selection method as
a working gauge for calculations, and project the results into whatever gauge you want later
by applying a suitable gauge projector P_G (32). Unless you have a strong reason to use an
external reference system, the output gauge should probably be an inner gauge centred on
the network elements you care most about, i.e. the observed features for a reconstruction
problem, and the cameras for a navigation one.
Quality control and network design: (§10) A robust cost function helps, but for overall
system reliability you still need to plan your measurements in advance (until you have
developed a good intuition for this), and check the results afterwards for outlier sensitivity
and over-modelling, using a suitable quality control procedure. Do not underestimate the
extent to which either low redundancy, or weak geometry, or over-general models can
make gross errors undetectable.
Historical Overview
This appendix gives a brief history of the main developments in bundle adjustment, including literature references.
Least squares: The theory of combining measurements by minimizing the sum of their
squared residuals was developed independently by Gauss and Legendre around 1795–1820
[37, 74], [36, Vol.IV, 193], about 40 years after robust L1 estimation [15]. Least squares
was motivated by estimation problems in astronomy and geodesy, and extensively applied
to both fields by Gauss, whose remarkable 1823 monograph [37, 36] already contains
almost the complete modern theory of least squares, including elements of the theory of
probability distributions, the definition and properties of the Gaussian distribution, and a
discussion of bias and the Gauss-Markov theorem, which states that least squares gives
the Best Linear Unbiased Estimator (BLUE) [37, 11]. It also introduces the LDL⊤ form of
symmetric Gaussian elimination and the Gauss-Newton iteration for nonlinear problems,
essentially in their modern forms although without explicitly using matrices. The 1828
supplement on geodesy introduced the Gauss-Seidel iteration for solving large nonlinear
systems. The economic and military importance of surveying led to extensive use of least
squares and several further developments: Helmert's nested dissection [64] (probably the
first systematic sparse matrix method) in the 1880s, Cholesky decomposition around
1915, Baarda's theory of reliability of measurement networks in the 1960s [7, 8], and
Meissl's [87, 89] and Baarda's [6] theories of uncertain coordinate frames and free networks
[22, 25]. We will return to these topics below.
Second order bundle algorithms: Electronic computers capable of solving reasonably
large least squares problems first became available in the late 1950s. The basic photogrammetric bundle method was developed for the U.S. Air Force by Duane C. Brown and his
co-workers in 1957–9 [16, 19]. The initial focus was aerial cartography, but by the late
1960s bundle methods were also being used for close-range measurements²⁶. The links
with geodesic least squares and the possibility of combining geodesic and other types
of measurements with the photogrammetric ones were clear right from the start. Initially
²⁶ Close range means essentially that the object has significant depth relative to the camera distance,
i.e. that there is significant perspective distortion. For aerial images the scene is usually shallow
compared to the viewing height, so focal length variations are very difficult to disentangle from
depth variations.
[Figure: timeline of the main developments in bundle adjustment, 1800–2000. Helmert ~1880s: recursive partitioning. Brown 1958-9: calibrated bundle adjustment, ~10s of images. Meissl 1962-5: free network adjustment, uncertain frames, inner covariance & constraints. Baarda 1964-75: inner & outer reliability, data snooping. Brown 1964-72: self-calibration, ~5x less error using empirical camera models. Baarda 1973: S transforms & criterion matrices. More recently: gauge freedom & uncertainty modelling, modern robust statistics & model selection, image-based matching, modern sparse matrix techniques.]
the cameras were assumed to be calibrated²⁷, so the optimization was over object points
and camera poses only. Self-calibration (the estimation of internal camera parameters
during bundle adjustment) was first discussed around 1964 and implemented by 1968
[19]. Camera models were greatly refined in the early 1970s, with the investigation of
many alternative sets of additional (distortion) parameters [17–19]. Even with stable
and carefully calibrated aerial photogrammetry cameras, self-calibration significantly improved accuracies (by factors of around 2–10). This led to rapid improvements in camera
design as previously unmeasurable defects like film platten non-flatness were found and
corrected. Much of this development was led by Brown and his collaborators. See [19]
for more of the history and references.
Brown's initial 1958 bundle method [16, 19] uses block matrix techniques to eliminate the structure parameters from the normal equations, leaving only the camera pose
parameters. The resulting reduced camera subsystem is then solved by dense Gaussian
elimination, and back-substitution gives the structure. For self-calibration, a second reduction from pose to calibration parameters can be added in the same way. Brown's method is
probably what most vision researchers think of as bundle adjustment, following descriptions by Slama [100] and Hartley [58, 59]. It is still a reasonable choice for small dense
networks²⁸, but it rapidly becomes inefficient for the large sparse ones that arise in aerial
cartography and large-scale site modelling.
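The reduction can be sketched on a toy problem: the block diagonal point part of the Hessian is eliminated one small block at a time, the reduced camera system is solved, and the structure follows by back-substitution. All blocks below are random stand-ins for real bundle Jacobian products, and a real implementation would exploit the sparsity pattern throughout.

```python
# Schur-complement reduction of the normal equations H d = g, with
# H = [[Hpp, Hpc], [Hpc', Hcc]] and Hpp block diagonal (one 3x3 block per
# 3D point). Only the small point blocks are ever inverted.
import numpy as np

rng = np.random.default_rng(6)
n_pts, pt_dim, cam_dim = 10, 3, 6

Hpp_blocks = []
for _ in range(n_pts):
    Mb = rng.normal(size=(pt_dim, pt_dim))
    Hpp_blocks.append(Mb @ Mb.T + pt_dim * np.eye(pt_dim))
Hpc = rng.normal(size=(n_pts * pt_dim, cam_dim))   # point-camera coupling
Mc = rng.normal(size=(cam_dim, cam_dim))
Hcc = Mc @ Mc.T + 100.0 * np.eye(cam_dim)          # keep the system well conditioned
gp = rng.normal(size=n_pts * pt_dim)               # gradient, structure part
gc = rng.normal(size=cam_dim)                      # gradient, camera part

# Reduced camera system: (Hcc - Hpc' Hpp^-1 Hpc) dc = gc - Hpc' Hpp^-1 gp
S = Hcc.copy()
Hpp_inv_gp = np.empty_like(gp)
slices = [slice(i * pt_dim, (i + 1) * pt_dim) for i in range(n_pts)]
for Bi, sl in zip(Hpp_blocks, slices):
    S -= Hpc[sl].T @ np.linalg.solve(Bi, Hpc[sl])
    Hpp_inv_gp[sl] = np.linalg.solve(Bi, gp[sl])
r = gc - Hpc.T @ Hpp_inv_gp

dc = np.linalg.solve(S, r)                 # camera update
dp = np.empty_like(gp)                     # back-substitute the structure update
for Bi, sl in zip(Hpp_blocks, slices):
    dp[sl] = np.linalg.solve(Bi, gp[sl] - Hpc[sl] @ dc)
```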
For larger problems, more of the natural sparsity has to be exploited. In aerial cartography, the regular structure makes this relatively straightforward. The images are arranged
in blocks (rectangular or irregular grids designed for uniform ground coverage) formed
from parallel 1D strips of images with about 50–70% forward overlap giving adjacent
stereo pairs or triplets, about 10–20% side overlap, and a few known ground control points
²⁷ Calibration always denotes internal camera parameters (interior orientation) in photogrammetric terminology. External calibration is called pose or (exterior) orientation.
²⁸ A photogrammetric network is dense if most of the 3D features are visible in most of the images,
and sparse if most features appear in only a few images. This corresponds directly to the density
or sparsity of the off-diagonal block (feature-camera coupling matrix) of the bundle Hessian.
sprinkled sparsely throughout the block. Features are shared only between neighbouring
images, and images couple in the reduced camera subsystem only if they share common
features. So if the images are arranged in strip or cross-strip ordering, the reduced camera system has a triply-banded block structure (the upper and lower bands representing,
e.g., right and left neighbours, and the central band forward and backward ones). Several
efficient numerical schemes exist for such matrices. The first was Gyer & Brown's 1967 recursive partitioning method [57, 19], which is closely related to Helmert's 1880 geodesic
method [64]. (Generalizations of these have become one of the major families of modern
sparse matrix methods [40, 26, 11].) The basic idea is to split the rectangle into two halves,
recursively solving each half and gluing the two solutions together along their common
boundary. Algebraically, the variables are reordered into left-half-only, right-half-only and
boundary variables, with the latter (representing the only coupling between the two halves)
eliminated last. The technique is extremely effective for aerial blocks and similar problems
where small separating sets of variables can be found. Brown mentions adjusting a block
of 162 photos on a machine with only 8k words of memory, and 1000 photo blocks were
already feasible by mid-1967 [19]. For less regular networks such as site modelling ones
it may not be feasible to choose an appropriate variable ordering beforehand, but efficient
on-line ordering methods exist [40, 26, 11] (see §6.3).
Independent model methods: These approximate bundle adjustment by calculating a
number of partial reconstructions independently and merging them by pairwise 3D alignment. Even when the individual models and alignments are separately optimal, the result
is suboptimal because the stresses produced by alignment are not propagated back
into the individual models. (Doing so would amount to completing one full iteration of
an optimal recursive decomposition style bundle method, see §8.2.) Independent model
methods were at one time the standard in aerial photogrammetry [95, 2, 100, 73], where
they were used to merge individual stereo pair reconstructions within aerial strips into
a global reconstruction of the whole block. They are always less accurate than bundle
methods, although in some cases the accuracy can be comparable.
First order & approximate bundle algorithms: Another recurrent theme is the use of
approximations or iterative methods to avoid solving the full Newton update equations.
Most of the plausible approximations have been rediscovered several times, especially
variants of alternate steps of resection (finding the camera poses from known 3D points) and
intersection (finding the 3D points from known camera poses), and the linearized version of
this, the block Gauss-Seidel iteration. Brown's group had already experimented with Block
Successive Over-Relaxation (BSOR, an accelerated variant of Gauss-Seidel) by 1964
[19], before they developed their recursive decomposition method. Both Gauss-Seidel and
BSOR were also applied to the independent model problem around this time [95, 2]. These
methods are mainly of historical interest. For large sparse problems such as aerial blocks,
they can not compete with efficiently organized second order methods. Because some
of the inter-variable couplings are ignored, corrections propagate very slowly across the
network (typically one step per iteration), and many iterations are required for convergence
(see §7).
Quality control: In parallel with this algorithmic development, two important theoretical
developments took place. Firstly, the Dutch geodesist W. Baarda led a long-running working group that formulated a theory of statistical reliability for least squares estimation [7,
8]. This greatly clarified the conditions (essentially redundancy) needed to ensure that
outliers could be detected from their residuals (inner reliability), and that any remaining
undetected outliers had only a limited effect on the final results (outer reliability). A. Grun
[49, 50] and W. Forstner [30, 33, 34] adapted this theory to photogrammetry around 1980,
and also gave some early correlation and covariance based model selection heuristics designed to control over-fitting problems caused by over-elaborate camera models in self-calibration.
Least squares matching: All of the above developments originally used manually extracted image points. Automated image processing was clearly desirable, but it only gradually became feasible owing to the sheer size and detail of photogrammetric images. Both
feature based, e.g. [31, 32], and direct (region based) [1, 52, 55, 110] methods were studied,
the latter especially for matching low-contrast natural terrain in cartographic applications.
Both rely on some form of least squares matching (as image correlation is called in photogrammetry). Correlation based matching techniques remain the most accurate methods
of extracting precise translations from images, both for high contrast photogrammetric
targets and for low contrast natural terrain. Starting from around 1985, Grun and his co-workers combined region based least squares matching with various geometric constraints.
Multi-photo geometrically constrained matching optimizes the match over multiple images simultaneously, subject to the inter-image matching geometry [52, 55, 9]. For each
surface patch there is a single search over patch depth and possibly slant, which simultaneously moves it along epipolar lines in the other images. Initial versions assumed known
camera matrices, but a full patch-based bundle method was later investigated [9]. Related
methods in computer vision include [94, 98, 67]. Globally enforced least squares matching [53, 97, 76] further stabilizes the solution in low-signal regions by enforcing continuity
constraints between adjacent patches. Patches are arranged in a grid and matched using
local affine or projective deformations, with additional terms to penalize mismatching at
patch boundaries. Related work in vision includes [104, 102]. The inter-patch constraints
give a sparsely-coupled structure to the least squares matching equations, which can again
be handled efficiently by recursive decomposition.
Matrix Factorization
This appendix covers some standard material on matrix factorization, including the technical details of factorization, factorization updating, and covariance calculation methods.
See [44, 11] for more details.
Terminology: Depending on the factorization, L stands for lower triangular, U or R
for upper triangular, D or S for diagonal, Q or U,V for orthogonal factors.
B.1 Triangular Decompositions
A general block triangular decomposition A = L D U has the form:

    ( A11 A12 ⋯ A1n )   ( L11         ) ( D1         ) ( U11 U12 ⋯ U1n )
    ( A21 A22 ⋯ A2n ) = ( L21 L22     ) (    D2      ) (     U22 ⋯ U2n )    (50)
    (  ⋮   ⋮   ⋱    )   (  ⋮   ⋮   ⋱  ) (       ⋱ Dr ) (          ⋱ ⋮  )

with L (block) lower triangular, D block diagonal, and U (block) upper triangular. The
blocks can be computed recursively:

    L_ii D_i U_ii = Ā_ii                 (i = j)
    L_ij = Ā_ij U_jj⁻¹ D_j⁻¹             (i > j)        (51)
    U_ij = D_i⁻¹ L_ii⁻¹ Ā_ij             (i < j)
    where Ā_ij ≡ A_ij − Σ_{k < min(i,j)} L_ik D_k U_kj

Here, the diagonal blocks D₁ … D_r must be chosen to be square and invertible, and r
is determined by the rank of A. The recursion (51) follows immediately from the product
A_ij = (L D U)_ij = Σ_{k ≤ min(i,j)} L_ik D_k U_kj. Given such a factorization, linear equations
can be solved by forwards and backwards substitution as in (22–24).
The diagonal blocks of L, D, U can be chosen freely subject to L_ii D_i U_ii = Ā_ii,
but once this is done the factorization is uniquely defined. Choosing L_ii = D_i = 1 so
that U_ii = Ā_ii gives the (block) LU decomposition A = L U, the matrix representation
of (block) Gaussian elimination. Choosing L_ii = U_ii = 1 so that D_i = Ā_ii gives the
LDU decomposition. If A is symmetric, the LDU decomposition preserves the symmetry
and becomes the LDL⊤ decomposition A = L D L⊤, where U = L⊤ and D⊤ = D. If A
is symmetric positive definite we can set D = 1 to get the Cholesky decomposition
A = L L⊤, where L_ii L_ii⊤ = Ā_ii (recursively) defines the Cholesky factor L_ii of the positive
definite matrix Ā_ii. (For a scalar, Chol(a) = √a.) If all of the blocks are chosen to be
1×1, we get the conventional scalar forms of these decompositions. These decompositions
are obviously equivalent, but for speed and simplicity it is usual to use the most specific
one that applies: LU for general matrices, LDL⊤ for symmetric ones, and Cholesky for
symmetric positive definite ones. For symmetric matrices such as the bundle Hessian,
LDL⊤ / Cholesky are 1.5–2 times faster than LDU / LU. We will use the general form (50)
below as it is trivial to specialize to any of the others.
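The scalar symmetric case of the recursion (51) gives a compact LDL⊤ routine. The implementation below is purely illustrative; pivoted library factorizations should be preferred in practice.

```python
# Scalar LDL' factorization written directly from the recursion (51),
# specialized to symmetric A (U = L', D diagonal, no pivoting).
import numpy as np

def ldl(A):
    """Return (L, d) with A = L @ diag(d) @ L.T, L unit lower triangular."""
    n = A.shape[0]
    L = np.eye(n)
    d = np.zeros(n)
    for j in range(n):
        # bar(A)_jj = A_jj - sum_{k<j} L_jk d_k L_jk
        d[j] = A[j, j] - np.sum(L[j, :j] ** 2 * d[:j])
        for i in range(j + 1, n):
            # L_ij = bar(A)_ij / d_j, with the same partial-sum correction
            L[i, j] = (A[i, j] - np.sum(L[i, :j] * L[j, :j] * d[:j])) / d[j]
    return L, d

rng = np.random.default_rng(7)
M = rng.normal(size=(5, 5))
A = M @ M.T + 5 * np.eye(5)           # symmetric positive definite test matrix
L, d = ldl(A)
assert np.allclose(L @ np.diag(d) @ L.T, A)
assert np.all(d > 0)                   # positive definite => positive pivots
```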
Loop ordering: From (51), the ij block of the decomposition depends only on the
upper left (m−1) × (m−1) submatrix and the first m elements of row i and column j of
A, where m = min(i, j). This allows considerable freedom in the ordering of operations
during decomposition, which can be exploited to enhance parallelism and improve memory
cache locality.
Fill-in: If A is sparse, its L and U factors tend to become ever denser as the decomposition progresses. Recursively expanding Ā_ik and Ā_kj in (51) gives contributions of the form A_ik A_kk^{-1} A_kl ⋯ A_pq A_qq^{-1} A_qj for k, l, …, p, q < min(i, j). So even if A_ij is zero, if there is any path of the form i → k → l → … → p → q → j via non-zero A_kl with k, l, …, p, q < min(i, j), the ij block of the decomposition will generically fill in (become non-zero). The amount of fill-in is strongly dependent on the ordering of the variables (i.e. of the rows and columns of A). Sparse factorization methods (§6.3) manipulate this ordering to minimize either fill-in or total operation counts.
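The effect of variable ordering on fill-in can be seen on a toy "arrowhead" Hessian, sketched here in NumPy (the matrix and orderings are illustrative, not from the text): with the dense "camera" block ordered last, the Cholesky factor stays sparse, while ordering it first fills the factor completely.

```python
import numpy as np

# Hypothetical toy Hessian: 5 "feature" variables coupled only to one
# "camera" variable, i.e. an arrowhead matrix. Cholesky fill-in depends
# on where the dense row/column is placed by the variable ordering.
n = 6
A = np.eye(n) * 4.0
A[-1, :] = A[:, -1] = 1.0
A[-1, -1] = 10.0            # camera variable last: features-first ordering

L_good = np.linalg.cholesky(A)
P = np.eye(n)[[5, 0, 1, 2, 3, 4]]   # permute the camera variable to the front
L_bad = np.linalg.cholesky(P @ A @ P.T)

nnz = lambda L: int(np.sum(np.abs(L) > 1e-12))
print(nnz(L_good), nnz(L_bad))  # features-first stays sparse; camera-first fills in
```

Every path i → camera → j makes the camera-first factor generically dense, exactly as the path argument above predicts.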
Pivoting: For positive definite matrices, the above factorizations are very stable because the pivots Ā_ii must themselves remain positive definite. More generally, the pivots may become ill-conditioned, causing the decomposition to break down. To deal with this, it is usual to search the undecomposed part of the matrix for a large pivot at each step, and permute this into the leading position before proceeding. The stablest policy is full pivoting, which searches the whole submatrix, but usually a less costly partial pivoting search over just the current column (column pivoting) or row (row pivoting) suffices. Pivoting ensures that L and/or U are relatively well-conditioned and postpones ill-conditioning in D for as long as possible, but it can not ultimately make D any better conditioned than A is. Column pivoting is usual for the LU decomposition, but if applied to a symmetric matrix it destroys the symmetry and hence doubles the workload. Diagonal pivoting preserves symmetry by searching for the largest remaining diagonal element and permuting both its row and its column to the front. This suffices for positive semidefinite matrices (e.g. gauge deficient Hessians). For general symmetric indefinite matrices (e.g. the augmented Hessians $\begin{pmatrix} H & C^\top \\ C & 0 \end{pmatrix}$ of constrained problems (12)), off-diagonal pivots can not be avoided²⁹, but there are fast, stable, symmetry-preserving pivoted LDL⊤ decompositions with block diagonal D having 1×1 and 2×2 blocks. Full pivoting is possible (Bunch-Parlett decomposition), but Bunch-Kaufman decomposition, which searches the diagonal and only one or at most two columns, usually suffices. This method is nearly as fast as pivoted Cholesky decomposition (to which it reduces for positive definite matrices), and as stable as LU decomposition with partial pivoting. Aasen's method has similar speed and stability but produces a tridiagonal D. The constrained Hessian $\begin{pmatrix} H & C^\top \\ C & 0 \end{pmatrix}$ has further special properties owing to its zero block, but we will not consider these here; see [44, §4.4.6 Equilibrium Systems].
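A minimal numerical sketch of why pivoting matters (an illustrative 2×2 example, not from the text): eliminating on a tiny leading pivot destroys the computed solution, while permuting the large pivot to the front recovers it.

```python
import numpy as np

def lu_solve_2x2(A, b, pivot):
    """Solve a 2x2 system by Gaussian elimination, with optional row
    pivoting (a sketch of why pivoting matters, not a general solver)."""
    A, b = A.astype(float), b.astype(float)
    if pivot and abs(A[1, 0]) > abs(A[0, 0]):
        A, b = A[::-1], b[::-1]          # permute the large pivot to the front
    l21 = A[1, 0] / A[0, 0]              # elimination multiplier
    u22 = A[1, 1] - l21 * A[0, 1]
    x2 = (b[1] - l21 * b[0]) / u22
    x1 = (b[0] - A[0, 1] * x2) / A[0, 0]
    return np.array([x1, x2])

eps = 1e-20
A = np.array([[eps, 1.0], [1.0, 1.0]])   # tiny leading pivot
b = np.array([1.0, 2.0])                 # exact solution ~ (1, 1)
print(lu_solve_2x2(A, b, pivot=False))   # catastrophic: x1 is lost entirely
print(lu_solve_2x2(A, b, pivot=True))    # row pivoting recovers (1, 1)
```

The unpivoted elimination forms the huge multiplier 1/ε, and the subsequent cancellation wipes out the information needed for x1.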
²⁹ The archetypal failure is the unstable LDL⊤ decomposition
$$
\begin{pmatrix} \epsilon & 1 \\ 1 & 0 \end{pmatrix}
= \begin{pmatrix} 1 & 0 \\ 1/\epsilon & 1 \end{pmatrix}
\begin{pmatrix} \epsilon & 0 \\ 0 & -1/\epsilon \end{pmatrix}
\begin{pmatrix} 1 & 1/\epsilon \\ 0 & 1 \end{pmatrix}
$$
of the well-conditioned symmetric indefinite matrix $\begin{pmatrix} \epsilon & 1 \\ 1 & 0 \end{pmatrix}$, for $\epsilon \to 0$. Fortunately, for small diagonal elements, permuting the dominant off-diagonal element next to the diagonal and leaving the resulting 2×2 block undecomposed in D suffices for stability.

B.2 Orthogonal Decompositions
B. Triggs et al.
B.3 Profile Cholesky Decomposition

One of the simplest sparse methods suitable for bundle problems is profile Cholesky decomposition. With natural (features then cameras) variable ordering, it is as efficient as any method for dense networks (i.e. most features visible in most images, giving dense camera-feature coupling blocks in the Hessian). With suitable variable ordering³⁰, it is also efficient for some types of sparse problems, particularly ones with chain-like connectivity.
Figure 10 shows the complete implementation of profile Cholesky, including decomposition L L⊤ = A, forward substitution x = L⁻¹ b, and back substitution y = L⁻⊤ x. first(b), last(b) are the indices of the first and last nonzero entries of b, and first(i) is the index of the first nonzero entry in row i of A and hence L. If desired, L, x, y can overwrite A, b, x during decomposition to save storage. As always with factorizations, the loops can be reordered in several ways. These have the same operation counts but different access patterns and hence memory cache localities, which on modern machines can lead to significant performance differences for large problems. Here we store and access A and L consistently by rows.
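As a minimal scalar sketch of the profile-limited loop pattern (Python/NumPy; first(i) as defined above, 1×1 blocks, no overwriting, assuming A is positive definite):

```python
import numpy as np

def profile_cholesky(A, first):
    """Sketch of profile (skyline) Cholesky, L L^T = A, with scalar
    entries. first[i] is the index of the first nonzero in row i of A
    (and hence of L); entries left of the profile are never touched."""
    n = A.shape[0]
    L = np.zeros_like(A)
    for i in range(n):
        for j in range(first[i], i + 1):
            k0 = max(first[i], first[j])          # profile-limited inner sum
            s = A[i, j] - L[i, k0:j] @ L[j, k0:j]
            L[i, j] = s / L[j, j] if j < i else np.sqrt(s)
    return L

# A chain-like (tridiagonal) test problem: first(i) = i-1, so no fill-in.
n = 5
A = 4.0 * np.eye(n) + np.diag(-np.ones(n - 1), 1) + np.diag(-np.ones(n - 1), -1)
first = [max(i - 1, 0) for i in range(n)]
L = profile_cholesky(A, first)
assert np.allclose(L @ L.T, A)
```

The block version and the forward/back substitution routines follow the same row-oriented, profile-limited loop structure.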
B.4 Matrix Inversion and Covariances

When solving linear equations, forward-backward substitutions (22, 24) are much faster than explicitly calculating and multiplying by A⁻¹, and numerically stabler too. Explicit inverses are only rarely needed, e.g. to evaluate the dispersion (covariance) matrix H⁻¹. Covariance calculation is expensive for bundle adjustment: no matter how sparse H may be, H⁻¹ is always dense. Given a triangular decomposition A = L D U, the most obvious way to calculate A⁻¹ is via the product A⁻¹ = U⁻¹ D⁻¹ L⁻¹, where L⁻¹ (which is lower triangular) is found using a recurrence based on either L⁻¹ L = 1 or L L⁻¹ = 1 as follows (and similarly but transposed for U):
$$
(L^{-1})_{ii} = (L_{ii})^{-1}, \qquad
(L^{-1})_{ji} \;=\; -L_{jj}^{-1} \sum_{k=i}^{j-1} L_{jk}\, (L^{-1})_{ki}
\;=\; -\Bigl( \sum_{k=i+1}^{j} (L^{-1})_{jk}\, L_{ki} \Bigr) L_{ii}^{-1},
\tag{52}
$$
where the first form is evaluated for $i = 1 \ldots n$, $j = i{+}1 \ldots n$, and the second for $i = n \ldots 1$, $j = n \ldots i{+}1$.
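The first form of (52) can be checked with a few lines of NumPy (a sketch; columns computed top to bottom):

```python
import numpy as np

def lower_inverse(L):
    """Invert a lower triangular matrix column by column using the first
    recursion of (52): (L^-1)_ii = L_ii^-1, and for j > i
    (L^-1)_ji = -L_jj^-1 * sum_{k=i}^{j-1} L_jk (L^-1)_ki."""
    n = L.shape[0]
    Linv = np.zeros_like(L)
    for i in range(n):
        Linv[i, i] = 1.0 / L[i, i]
        for j in range(i + 1, n):
            # entries Linv[i:j, i] of column i are already available
            Linv[j, i] = -(L[j, i:j] @ Linv[i:j, i]) / L[j, j]
    return Linv

L = np.tril(np.random.default_rng(0).uniform(1.0, 2.0, (4, 4)))
assert np.allclose(lower_inverse(L) @ L, np.eye(4))
```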
³⁰ Snay's Banker's strategy (§6.3, [101, 24]) seems to be one of the most effective ordering strategies.
Alternatively [45, 11], the diagonal and the (zero) upper triangle of the linear system U A⁻¹ = D⁻¹ L⁻¹ can be combined with the (zero) lower triangle of A⁻¹ L = U⁻¹ D⁻¹ to give the direct recursion (i = n … 1 and j = n … i + 1):

$$
\begin{aligned}
(A^{-1})_{ji} &= -\Bigl( \sum_{k=i+1}^{n} (A^{-1})_{jk}\, L_{ki} \Bigr) L_{ii}^{-1}, \\
(A^{-1})_{ij} &= -U_{ii}^{-1} \Bigl( \sum_{k=i+1}^{n} U_{ik}\, (A^{-1})_{kj} \Bigr), \\
(A^{-1})_{ii} &= U_{ii}^{-1} \Bigl( D_i^{-1} L_{ii}^{-1} - \sum_{k=i+1}^{n} U_{ik}\, (A^{-1})_{ki} \Bigr)
\;=\; \Bigl( U_{ii}^{-1} D_i^{-1} - \sum_{k=i+1}^{n} (A^{-1})_{ik}\, L_{ki} \Bigr) L_{ii}^{-1}.
\end{aligned}
\tag{53}
$$
In the symmetric case (A⁻¹)_ji = (A⁻¹)_ij⊤, so we can avoid roughly half of the work. If only a few blocks of A⁻¹ are required (e.g. the diagonal ones), this recursion has the property that the blocks of A⁻¹ associated with the filled positions of L and U can be calculated without calculating any blocks associated with unfilled positions. More precisely, to calculate (A⁻¹)_ij for which L_ji (j > i) or U_ji (j < i) is non-zero, we do not need any block (A⁻¹)_kl for which L_lk = 0 (l > k) or U_lk = 0 (l < k)³¹. This is a significant saving if L, U are sparse, as in bundle problems. In particular, given the covariance of the reduced camera system, the 3D feature variances and feature-camera covariances can be calculated efficiently using (53) (or equivalently (17), where A ≡ H_SS is the block diagonal feature Hessian and D̄₂ is the reduced camera one).
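For the symmetric positive definite case (U = L⊤, D = 1), recursion (53) can be sketched in NumPy as follows; entries are generated from the bottom-right corner upwards, which is what makes the sparse partial-inverse property possible:

```python
import numpy as np

def inverse_from_cholesky(A):
    """Compute A^-1 for symmetric positive definite A directly from its
    Cholesky factor, using the symmetric case of recursion (53) with
    U = L^T and D = 1. Entries are produced from the bottom-right corner
    upwards, so only entries matching the fill pattern of L are needed."""
    L = np.linalg.cholesky(A)
    n = A.shape[0]
    Z = np.zeros_like(A)
    for i in range(n - 1, -1, -1):
        for j in range(n - 1, i, -1):
            # (A^-1)_ji = -(sum_{k>i} (A^-1)_jk L_ki) / L_ii
            Z[j, i] = -(Z[j, i + 1:] @ L[i + 1:, i]) / L[i, i]
            Z[i, j] = Z[j, i]                 # symmetry halves the work
        # (A^-1)_ii = (1/L_ii - sum_{k>i} L_ki (A^-1)_ki) / L_ii
        Z[i, i] = (1.0 / L[i, i] - L[i + 1:, i] @ Z[i + 1:, i]) / L[i, i]
    return Z

A = np.array([[4., 1., 0.], [1., 5., 2.], [0., 2., 6.]])
assert np.allclose(inverse_from_cholesky(A) @ A, np.eye(3))
```

In sparse form, restricting the loops to the filled positions of L yields exactly the covariance blocks the text describes (diagonals and feature-camera couplings) without forming the dense inverse.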
B.5 Factorization Updating

Given a factorization A = L D U, one often needs the factorization of a low-rank modification A → Ā ≡ A + B W C, where the correction is partitioned conformably with the blocks of A. Partitioning into a leading block and the rest,

$$
\begin{pmatrix} D_1 & \\ & D_2 \end{pmatrix}
+ \begin{pmatrix} B_1 \\ B_2 \end{pmatrix} W \begin{pmatrix} C_1 & C_2 \end{pmatrix}
= \begin{pmatrix} 1 & \\ B_2 W C_1 \bar D_1^{-1} & 1 \end{pmatrix}
\begin{pmatrix} \bar D_1 & \\ & \bar D_2 \end{pmatrix}
\begin{pmatrix} 1 & \bar D_1^{-1} B_1 W C_2 \\ & 1 \end{pmatrix},
\tag{54}
$$
$$
\bar D_1 \;\equiv\; D_1 + B_1 W C_1, \qquad
\bar D_2 \;\equiv\; D_2 + B_2 \bigl( W - W\, C_1\, \bar D_1^{-1} B_1\, W \bigr) C_2 .
$$

$\bar D_2$ is a low-rank update of $D_2$ with the same $C_2$ and $B_2$ but a different W. Evaluating this recursively and merging the resulting L and U factors into L̄ and Ū gives the updated
³¹ This holds because of the way fill-in occurs in the LDU decomposition. Suppose that we want to find (A⁻¹)_ij, where j > i and L_ji ≠ 0. For this we need (A⁻¹)_kj for all non-zero U_ik, k > i. But for these, position (j, k) receives the fill-in contribution L_ji D_i U_ik, which is generically non-zero, so (A⁻¹)_kj is associated with a filled position and will already have been evaluated.
decomposition³² Ā = L̄ D̄ Ū:

$$
\begin{aligned}
& W^{(1)} \leftarrow W;\quad B^{(1)} \leftarrow B;\quad C^{(1)} \leftarrow C; \\
& \text{for } i = 1 \text{ to } n \text{ do} \\
& \qquad \bar B_i \leftarrow B_i^{(i)}; \quad \bar C_i \leftarrow C_i^{(i)}; \quad
\bar D_i \leftarrow D_i + \bar B_i\, W^{(i)}\, \bar C_i; \\
& \qquad W^{(i+1)} \leftarrow W^{(i)} - W^{(i)} \bar C_i\, \bar D_i^{-1} \bar B_i\, W^{(i)}
\;=\; \bigl( (W^{(i)})^{-1} + \bar C_i\, D_i^{-1} \bar B_i \bigr)^{-1}; \\
& \qquad \text{for } j = i{+}1 \text{ to } n \text{ do} \\
& \qquad\qquad B_j^{(i+1)} \leftarrow B_j^{(i)} - L_{ji}\, \bar B_i; \qquad
\bar L_{ji} \leftarrow L_{ji} + B_j^{(i+1)}\, W^{(i+1)}\, \bar C_i\, \bar D_i^{-1}; \\
& \qquad\qquad C_j^{(i+1)} \leftarrow C_j^{(i)} - \bar C_i\, U_{ij}; \qquad
\bar U_{ij} \leftarrow U_{ij} + \bar D_i^{-1} \bar B_i\, W^{(i+1)}\, C_j^{(i+1)};
\end{aligned}
\tag{55}
$$
The W⁻¹ form of the W^{(i)} update is numerically stabler for additions (+ sign in A ± B W C with positive W), but is not usable unless W is invertible. In either case, the update takes time O((k² + b²) N), where A is N × N, W is k × k and the D_i are b × b. So other things being equal, k should be kept as small as possible (e.g. by splitting the update into independent rows using an initial factorization of W, and updating for each row in turn).
The scalar Cholesky form of this method for a rank one update A → A + w b b⊤ is:

$$
\begin{aligned}
& w^{(1)} \leftarrow w;\quad b^{(1)} \leftarrow b; \\
& \text{for } i = 1 \text{ to } n \text{ do} \\
& \qquad \bar b_i \leftarrow b_i^{(i)} / L_{ii}; \qquad
d_i \leftarrow 1 + w^{(i)}\, \bar b_i^{\,2}; \\
& \qquad \bar L_{ii} \leftarrow L_{ii} \sqrt{d_i}; \qquad
w^{(i+1)} \leftarrow w^{(i)} / d_i; \\
& \qquad \text{for } j = i{+}1 \text{ to } n \text{ do} \\
& \qquad\qquad b_j^{(i+1)} \leftarrow b_j^{(i)} - L_{ji}\, \bar b_i; \qquad
\bar L_{ji} \leftarrow \bigl( L_{ji} + b_j^{(i+1)}\, w^{(i+1)}\, \bar b_i \bigr) \sqrt{d_i};
\end{aligned}
\tag{56}
$$

This takes O(n²) operations. The same recursion rule (and several equivalent forms) can be derived by reducing (L b) to an upper triangular matrix using Givens rotations or Householder transformations [43, 11].
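Recursion (56) is easy to verify numerically. The following NumPy sketch (w > 0 only; downdates with w < 0 need extra safeguards) applies the update and checks it against a fresh factorization:

```python
import numpy as np

def chol_update(L, w, b):
    """Rank-one Cholesky update (56): given L L^T = A, return Lbar with
    Lbar Lbar^T = A + w b b^T (a sketch for w > 0)."""
    L, b = L.copy(), b.astype(float)
    n = L.shape[0]
    for i in range(n):
        bi = b[i] / L[i, i]                  # bbar_i
        d = 1.0 + w * bi * bi                # d_i
        b[i + 1:] -= L[i + 1:, i] * bi       # b^(i+1)
        # Lbar_ji = (L_ji + b_j^(i+1) w^(i+1) bbar_i) sqrt(d_i)
        L[i + 1:, i] = (L[i + 1:, i] + b[i + 1:] * (w / d) * bi) * np.sqrt(d)
        L[i, i] *= np.sqrt(d)                # Lbar_ii
        w /= d                               # w^(i+1)
    return L

rng = np.random.default_rng(1)
M = rng.standard_normal((4, 4))
A = M @ M.T + 4.0 * np.eye(4)
b = rng.standard_normal(4)
L = np.linalg.cholesky(A)
Lbar = chol_update(L, 2.0, b)
assert np.allclose(Lbar, np.linalg.cholesky(A + 2.0 * np.outer(b, b)))
```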
³² Here, B_j^{(i)} = B_j − Σ_{k=1}^{i−1} L_jk B̄_k and C_j^{(i)} = C_j − Σ_{k=1}^{i−1} C̄_k U_kj accumulate L⁻¹ B and C U⁻¹. For the L̄, Ū updates one can also use W^{(i+1)} C̄_i D_i^{-1} = W^{(i)} C̄_i D̄_i^{-1} and D_i^{-1} B̄_i W^{(i+1)} = D̄_i^{-1} B̄_i W^{(i)}.

Software

C.1 Software Organization
distortion models exist. If the scene is dynamic or articulated, additional nodes representing 3D transformations (kinematic chains or relative motions) may also be needed.
The main purpose of the network structure is to predict observations and their Jacobians w.r.t. the free parameters, and then to integrate the resulting first order parameter updates back into the internal 3D feature and camera state representations. Prediction is essentially a matter of systematically propagating values through the network, with heavy use of the chain rule for derivative propagation. The network representation must interface with a numerical linear algebra one that supports appropriate methods for forming and solving the sparse, damped Gauss-Newton (or other) step prediction equations. A fixed-order sparse factorization may suffice for simple networks, while automatic variable ordering is needed for more complicated networks, and iterative solution methods for large ones.
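The prediction/Jacobian role described above can be illustrated with a toy pinhole model (the camera model, names and numbers here are illustrative assumptions, not from the text); the analytic chain-rule Jacobian is checked against finite differences:

```python
import numpy as np

# Project a 3D point X through a pose (R, t) and focal length f, and
# propagate derivatives w.r.t. the feature parameters by the chain rule.
def predict(X, R, t, f):
    Y = R @ X + t                    # camera-frame point
    return f * Y[:2] / Y[2]          # pinhole projection

def jacobian_X(X, R, t, f):
    Y = R @ X + t
    du_dY = f * np.array([[1 / Y[2], 0, -Y[0] / Y[2]**2],
                          [0, 1 / Y[2], -Y[1] / Y[2]**2]])
    return du_dY @ R                 # chain rule: du/dX = du/dY * dY/dX

X, t, f = np.array([0.1, -0.2, 4.0]), np.array([0., 0., 1.]), 500.0
c, s = np.cos(0.1), np.sin(0.1)
R = np.array([[c, -s, 0.], [s, c, 0.], [0., 0., 1.]])

J = jacobian_X(X, R, t, f)
eps = 1e-6                           # central finite-difference check
J_fd = np.column_stack([(predict(X + eps * e, R, t, f)
                         - predict(X - eps * e, R, t, f)) / (2 * eps)
                        for e in np.eye(3)])
assert np.allclose(J, J_fd, atol=1e-4)
```

In a full network, each node contributes one such local derivative factor, and the products are accumulated along the paths from parameters to observations.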
Several extensible bundle codes exist, but as far as we are aware, none of them are currently available as freeware. Our own implementations include:
Carmen [59] is a program for camera modelling and scene reconstruction using iterative nonlinear least squares. It has a modular design that allows many different feature, measurement and camera types to be incorporated (including some quite exotic ones [56, 63]). It uses sparse matrix techniques similar to Brown's reduced camera system method [19] to make the bundle adjustment iteration efficient.
Horatio (http://www.ee.surrey.ac.uk/Personal/P.McLauchlan/horatio/html, [85], [86], [83], [84]) is a C library supporting the development of efficient computer vision applications. It contains support for image processing, linear algebra and visualization, and will soon be made publicly available. The bundle adjustment methods in Horatio, which are based on the Variable State Dimension Filter (VSDF) [83, 84], are being commercialized. These algorithms support sparse block matrix operations, arbitrary gauge constraints, global and local parametrizations, multiple feature types and camera models, as well as batch and sequential operation.
vxl: This modular C++ vision environment is a new, lightweight version of the TargetJr/IUE environment, which is being developed mainly by the Universities of Oxford and Leuven, and General Electric CRD. The initial public release on http://www.robots.ox.ac.uk/vxl will include an OpenGL user interface and classes for multiple view geometry and numerics (the latter being mainly C++ wrappers to well established routines from Netlib, see below). A bundle adjustment code exists for it but is not currently planned for release [28, 62].
C.2 Software Resources
A great deal of useful numerical linear algebra and optimization software is available on the Internet, although more commonly in Fortran than in C/C++. The main repository is Netlib at http://www.netlib.org/. Other useful sites include: the Guide to Available Mathematical Software GAMS at http://gams.nist.gov; the NEOS guide http://www-fp.mcs.anl.gov/otc/Guide/, which is based in part on Moré & Wright's guide book [90]; and the Object Oriented Numerics page http://oonumerics.org. For large-scale dense linear algebra, LAPACK (http://www.netlib.org/lapack, [3]) is the best package available. However it is optimized for relatively large problems (matrices of size 100 or more), so if you are solving many small ones (size less than 20 or so) it may be faster to use the older LINPACK and EISPACK routines. These libraries all use the BLAS (Basic Linear Algebra Subroutines) interface for low level matrix manipulations, optimized versions of which are available from most processor vendors. They are all Fortran based, but C/C++ versions and interfaces exist (CLAPACK, http://www.netlib.org/clapack; LAPACK++, http://math.nist.gov/lapack++). For sparse matrices there is a bewildering array of packages. One good one is Boeing's SPOOLES (http://www.netlib.org/linalg/spooles/spooles.2.2.html), which implements sparse Bunch-Kaufman decomposition in C with several ordering methods. For iterative linear system solvers, implementation is seldom difficult, but there are again many methods and implementations. The Templates book [10] contains potted code. For nonlinear optimization there are various older codes such as MINPACK, and more recent codes designed mainly for very large problems such as MINPACK-2 (ftp://info.mcs.anl.gov/pub/MINPACK-2) and LANCELOT (http://www.cse.clrc.ac.uk/Activity/LANCELOT). (Both of these latter codes have good reputations for other large scale problems, but as far as we are aware they have not yet been tested on bundle adjustment.) All of the above packages are freely available. Commercial vendors such as NAG (http://www.nag.co.uk) and IMSL (www.imsl.com) have their own optimization codes.
Glossary
This glossary includes a few common terms from vision, photogrammetry, numerical optimization
and statistics, with their translations.
Additional parameters: Parameters added to the basic perspective model to represent lens distortion and similar small image deformations.
α-distribution: A family of wide tailed probability distributions, including the Cauchy distribution (α = 1) and the Gaussian (α = 2).
Alternation: A family of simplistic and largely outdated strategies for nonlinear optimization (and also iterative solution of linear equations). Cycles through variables or groups of variables, optimizing over each in turn while holding all the others fixed. Nonlinear alternation methods usually relinearize the equations after each group, while Gauss-Seidel methods propagate first order corrections forwards and relinearize only at the end of the cycle (the results are the same to first order). Successive over-relaxation adds momentum terms to speed convergence. See separable problem. Alternation of resection and intersection is a naive and often-rediscovered bundle method.
Asymptotic limit: In statistics, the limit as the number of independent measurements is increased to infinity, or as the second order moments dominate all higher order ones so that the posterior distribution becomes approximately Gaussian.
Asymptotic convergence: In optimization, the limit of small deviations from the solution, i.e. as the solution is reached. Second order or quadratically convergent methods such as Newton's method square the norm of the residual at each step, while first order or linearly convergent methods such as gradient descent and alternation only reduce the error by a constant factor at each step.
Banker's strategy: See fill-in, §6.3.
Block: A (possibly irregular) grid of overlapping photos in aerial cartography.
Bunch-Kaufman: A numerically efficient factorization method for symmetric indefinite matrices, A = L D L⊤ where L is lower triangular and D is block diagonal with 1×1 and 2×2 blocks (§6.2, B.1).
Bundle adjustment: Any refinement method for visual reconstructions that aims to produce jointly optimal structure and camera estimates.
Calibration: In photogrammetry, this always means internal calibration of the cameras. See inner
orientation.
Central limit theorem: States that maximum likelihood and similar estimators asymptotically have
Gaussian distributions. The basis of most of our perturbation expansions.
Cholesky decomposition: A numerically efficient factorization method for symmetric positive definite matrices, A = L L⊤ where L is lower triangular.
Close Range: Any photogrammetric problem where the scene is relatively close to the camera, so that it has significant depth compared to the camera distance. Terrestrial photogrammetry as opposed to aerial cartography.
Conjugate gradient: A cleverly accelerated first order iteration for solving positive definite linear systems or minimizing a nonlinear cost function. See Krylov subspace.
Cost function: The function quantifying the total residual error that is minimized in an adjustment
computation.
Cramer-Rao bound: See Fisher information.
Criterion matrix: In network design, an ideal or desired form for a covariance matrix.
Damped Newton method: Newtons method with a stabilizing step control policy added. See
Levenberg-Marquardt.
Data snooping: Elimination of outliers based on examination of their residual errors.
Datum: A reference coordinate system, against which other coordinates and uncertainties are measured. Our principal example of a gauge.
Dense: A matrix or system of equations with so few known-zero elements that it may as well be
treated as having none. The opposite of sparse. For photogrammetric networks, dense means that
the off-diagonal structure-camera block of the Hessian is dense, i.e. most features are seen in most
images.
Descent direction: In optimization, any search direction with a downhill component, i.e. that locally
reduces the cost.
Design: The process of defining a measurement network (placement of cameras, number of images, etc.) to satisfy given accuracy and quality criteria.
Design matrix: The observation-state Jacobian J = dz/dx.
Feature based: Sparse correspondence / reconstruction methods based on geometric image features (points, lines, homographies . . . ) rather than direct photometry. See direct method.
Filtering: In sequential problems such as time series, the estimation of a current value using all of the previous measurements. Smoothing can correct this afterwards, by integrating also the information from future measurements.
Free gauge / free network: A gauge or datum that is defined internally to the measurement network, rather than being based on predefined reference features like a fixed gauge.
First order method / convergence: See asymptotic convergence.
Gauge: An internal or external reference coordinate system defined for the current state and (at least) small variations of it, against which other quantities and their uncertainties can be measured. The 3D coordinate gauge is also called the datum. A gauge constraint is any constraint fixing a specific gauge, e.g. for the current state and arbitrary (small) displacements of it. The fact that the gauge can be chosen arbitrarily without changing the underlying structure is called gauge freedom or gauge invariance. The rank-deficiency that this transformation-invariance of the cost function induces on the Hessian is called gauge deficiency. Displacements that violate the gauge constraints can be corrected by applying an S-transform, whose linear form is a gauge projection matrix P_G.
Gauss-Markov theorem: This says that for a linear system, least squares weighted by the true
measurement covariances gives the Best (minimum variance) Linear Unbiased Estimator or BLUE.
Gauss-Newton method: A Newton-like method for nonlinear least squares problems, in which the Hessian is approximated by the Gauss-Newton one H ≈ J⊤ W J, where J is the design matrix and W is a weight matrix. The normal equations are the resulting Gauss-Newton step prediction equations (J⊤ W J) δx = −(J⊤ W δz).
Gauss-Seidel method: See alternation.
Givens rotation: A 2×2 rotation used as part of orthogonal reduction of a matrix, e.g. QR, SVD. See Householder reflection.
Gradient: The derivative of the cost function w.r.t. the parameters, g = df/dx.
Gradient descent: Naive optimization method which consists of steepest descent (in some given coordinate system) down the gradient of the cost function.
Hessian: The second derivative matrix of the cost function, H = d²f/dx². Symmetric and positive (semi)definite at a cost minimum. Measures how stiff the state estimate is against perturbations. Its inverse is the dispersion matrix.
Householder reflection: A matrix representing reflection in a hyperplane, used as a tool for orthogonal reduction of a matrix, e.g. QR, SVD. See Givens rotation.
Independent model method: A suboptimal approximation to bundle adjustment developed for
aerial cartography. Small local 3D models are reconstructed, each from a few images, and then
glued together via tie features at their common boundaries, without a subsequent adjustment to
relax the internal stresses so caused.
Inner: Internal or intrinsic.
Inner constraints: Gauge constraints linking the gauge to some weighted average of the reconstructed features and cameras (rather than to an externally supplied reference system).
Inner orientation: Internal camera calibration, including lens distortion, etc.
Inner reliability: The ability to either resist outliers, or detect and reject them based on their residual
errors.
Intersection: (of optical rays). Solving for 3D feature positions given the corresponding image
features and known 3D camera poses and calibrations. See resection, alternation.
Jacobian: See design matrix.
Krylov subspace: The linear subspace spanned by the iterated products {Ak b|k = 0 . . . n} of
some square matrix A with some vector b, used as a tool for generating linear algebra and nonlinear
optimization iterations. Conjugate gradient is the most famous Krylov method.
Kullback-Leibler divergence: See relative entropy.
Least squares matching: Image matching based on photometric intensities. See direct method.
Levenberg-Marquardt: A common damping (step control) method for nonlinear least squares problems, consisting of adding a multiple λ D of some positive definite weight matrix D to the Gauss-Newton Hessian before solving for the step. Levenberg-Marquardt uses a simple rescaling based heuristic for setting λ, while trust region methods use a more sophisticated step-length based one. Such methods are called damped Newton methods in general optimization.
Local model: In optimization, a local approximation to the function being optimized, which is easy
enough to optimize that an iterative optimizer for the original function can be based on it. The
second order Taylor series model gives Newton's method.
Local parametrization: A parametrization of a nonlinear space based on offsets from some current
point. Used during an optimization step to give better local numerical conditioning than a more
global parametrization would.
LU decomposition: The usual matrix factorization form of Gaussian elimination.
Minimum degree ordering: One of the most widely used automatic variable ordering methods for
sparse matrix factorization.
Minimum detectable gross error: The smallest outlier that can be detected on average by an outlier
detection method.
Nested dissection: A top-down divide-and-conquer variable ordering method for sparse matrix
factorization. Recursively splits the problem into disconnected halves, dealing with the separating
set of connecting variables last. Particularly suitable for surface coverage problems. Also called
recursive partitioning.
Nested models: Pairs of models, of which one is a specialization of the other obtained by freezing certain parameter(s) at prespecified values.
Network: The interconnection structure of the 3D features, the cameras, and the measurements that
are made of them (image points, etc.). Usually encoded as a graph structure.
Newton method: The basic iterative second order optimization method. The Newton step state update δx = −H⁻¹ g minimizes a local quadratic Taylor approximation to the cost function at each iteration.
Normal equations: See Gauss-Newton method.
Nuisance parameter: Any parameter that had to be estimated as part of a nonlinear parameter
estimation problem, but whose value was not really wanted.
Outer: External. See inner.
Outer orientation: Camera pose (position and angular orientation).
Outer reliability: The influence of unremoved outliers on the final parameter estimates, i.e. the extent to which they are reliable even though some (presumably small or lowly-weighted) outliers may remain undetected.
Outlier: An observation that deviates significantly from its predicted position. More generally, any observation that does not fit some preconceived notion of how the observations should be distributed, and which must therefore be removed to avoid disturbing the parameter estimates. See total distribution.
Pivoting: Row and/or column exchanges designed to promote stability during matrix factorization.
Point estimator: Any estimator that returns a single best parameter estimate, e.g. maximum
likelihood, maximum a posteriori.
Pose: 3D position and orientation (angle), e.g. of a camera.
Preconditioner: A linear change of variables designed to improve the accuracy or convergence rate of a numerical method, e.g. a first order optimization iteration. Variable scaling is the diagonal part of preconditioning.
Primary structure: The main decomposition of the bundle adjustment variables into structure and
camera ones.
Profile matrix: A storage scheme for sparse matrices in which all elements between the first and the last nonzero one in each row are stored, even if they are zero. Its simplicity makes it efficient even if there are quite a few zeros.
Quality control: The monitoring of an estimation process to ensure that accuracy requirements
were met, that outliers were removed or down-weighted, and that appropriate models were used,
e.g. for additional parameters.
Radial distribution: An observation error distribution which retains the Gaussian dependence on a squared residual error r² = δz⊤ W δz, but which replaces the exponential e^{−r²/2} form with a more robust long-tailed one.
Recursive: Used of filtering-based reconstruction methods that handle sequences of images or measurements by successive updating steps.
Recursive partitioning: See nested dissection.
Reduced problem: Any problem where some of the variables have already been eliminated by
partial factorization, leaving only the others. The reduced camera system (20) is the result of
reducing the bundle problem to only the camera variables. (6.1, 8.2, 4.4).
Redundancy: The extent to which any one observation has only a small influence on the results, so that it could be incorrect or missing without causing problems. Redundant consenses are the basis of reliability. Redundancy numbers r are a heuristic measure of the amount of redundancy in an estimate.
Relative entropy: An information-theoretic measure of how badly a model probability density p₁ fits an actual one p₀: the mean (w.r.t. p₀) log likelihood contrast of p₀ to p₁, ⟨log(p₀/p₁)⟩_{p₀}.
Resection: (of optical rays). Solving for 3D camera poses and possibly calibrations, given image
features and the corresponding 3D feature positions. See intersection.
Resection-intersection: See alternation.
Residual: The error δz in a predicted observation, or its cost function value.
S-transformation: A transformation between two gauges, implemented locally by a gauge projection matrix PG .
Scaling: See preconditioner.
Schur complement: Of A in $\begin{pmatrix} A & B \\ C & D \end{pmatrix}$ is D − C A⁻¹ B. See §6.1.
Second order method / convergence: See asymptotic convergence.
Secondary structure: Internal structure or sparsity of the off-diagonal feature-camera coupling
block of the bundle Hessian. See primary structure.
Self calibration: Recovery of camera (internal) calibration during bundle adjustment.
Sensitivity number: A heuristic number s measuring the sensitivity of an estimate to a given
observation.
Separable problem: Any optimization problem in which the variables can be separated into two or more subsets, for which optimization over each subset given all of the others is significantly easier than simultaneous optimization over all variables. Bundle adjustment is separable into 3D structure and cameras. Alternation (successive optimization over each subset) is a naive approach to separable problems.
Separating set: See nested dissection.
References
[1] F. Ackermann. Digital image correlation: Performance and potential applications in photogrammetry. Photogrammetric Record, 11(64):429–439, 1984.
[2] F. Amer. Digital block adjustment. Photogrammetric Record, 4:34–47, 1962.
[3] E. Anderson, Z. Bai, C. Bischof, S. Blackford, J. Demmel, J. Dongarra, J. Du Croz, A. Greenbaum, S. Hammarling, A. McKenney, and D. Sorensen. LAPACK Users' Guide, Third Edition. SIAM Press, Philadelphia, 1999. LAPACK home page: http://www.netlib.org/lapack.
[4] C. Ashcraft and J. W.-H. Liu. Robust ordering of sparse matrices using multisection. SIAM J. Matrix Anal. Appl., 19:816–832, 1998.
[5] K. B. Atkinson, editor. Close Range Photogrammetry and Machine Vision. Whittles Publishing, Roseleigh House, Latheronwheel, Caithness, Scotland, 1996.
[6] W. Baarda. S-transformations and criterion matrices. Netherlands Geodetic Commission,
Publications on Geodesy, New Series, Vol.5, No.1 (168 pages), 1967.
[7] W. Baarda. Statistical concepts in geodesy. Netherlands Geodetic Commission Publications
on Geodesy, New Series, Vol.2, No.4 (74 pages), 1967.
[8] W. Baarda. A testing procedure for use in geodetic networks. Netherlands Geodetic Commission Publications on Geodesy, New Series, Vol.2, No.5 (97 pages), 1968.
[9] E. P. Baltsavias. Multiphoto Geometrically Constrained Matching. PhD thesis, ETH-Zurich,
1992.
[10] R. Barrett, M. W. Berry, T. F. Chan, J. Demmel, J. Donato, J. Dongarra, V. Eijkhout, R. Pozo,
C. Romine, and H. van der Vorst. Templates for the Solution of Linear Systems: Building
Blocks for Iterative Methods. SIAM Press, Philadelphia, 1993.
[11] Å. Björck. Numerical Methods for Least Squares Problems. SIAM Press, Philadelphia, PA, 1996.
[12] J. A. R. Blais. Linear least squares computations using Givens transformations. Canadian Surveyor, 37(4):225–233, 1983.
[13] P. T. Boggs, R. H. Byrd, J. E. Rodgers, and R. B. Schnabel. User's reference guide for ODRPACK 2.01: Software for weighted orthogonal distance regression. Technical Report NISTIR 92-4834, NIST, Gaithersburg, MD, June 1992.
[14] P. T. Boggs, R. H. Byrd, and R. B. Schnabel. A stable and efficient algorithm for nonlinear orthogonal regression. SIAM J. Sci. Statist. Comput., 8:1052–1078, 1987.
[15] R. J. Boscovich. De litteraria expeditione per pontificiam ditionem, et synopsis amplioris operis, ac habentur plura ejus ex exemplaria etiam sensorum impressa. Bononiensi Scientarum et Artum Instituto Atque Academia Commentarii, IV:353–396, 1757.
[16] D. C. Brown. A solution to the general problem of multiple station analytical stereotriangulation. Technical Report RCA-MTP Data Reduction Technical Report No. 43 (or AFMTC TR 58-8), Patrick Airforce Base, Florida, 1958.
[17] D. C. Brown. Close range camera calibration. Photogrammetric Engineering, XXXVII(8), August 1971.
[18] D. C. Brown. Calibration of close range cameras. Int. Archives Photogrammetry, 19(5), 1972. Unbound paper (26 pages).
[19] D. C. Brown. The bundle adjustment – progress and prospects. Int. Archives Photogrammetry, 21(3), 1976. Paper number 303 (33 pages).
[20] Q. Chen and G. Medioni. Efficient iterative solutions to m-view projective reconstruction problem. In Int. Conf. Computer Vision & Pattern Recognition, pages II:55–61. IEEE Press, 1999.
[21] M. A. R. Cooper and P. A. Cross. Statistical concepts and their application in photogrammetry and surveying. Photogrammetric Record, 12(71):637–663, 1988.
[22] M. A. R. Cooper and P. A. Cross. Statistical concepts and their application in photogrammetry and surveying (continued). Photogrammetric Record, 13(77):645–678, 1991.
[23] D. R. Cox and D. V. Hinkley. Theoretical Statistics. Chapman & Hall, 1974.
[24] P. J. de Jonge. A comparative study of algorithms for reducing the ll-in during Cholesky
factorization. Bulletin Geodesique, 66:296305, 1992.
[25] A. Dermanis. The photogrammetric inner constraints. J. Photogrammetry & Remote Sensing,
49(1):25–39, 1994.
[26] I. Duff, A. M. Erisman, and J. K. Reid. Direct Methods for Sparse Matrices. Oxford University
Press, 1986.
[27] O. Faugeras. What can be seen in three dimensions with an uncalibrated stereo rig? In
G. Sandini, editor, European Conf. Computer Vision, Santa Margherita Ligure, Italy, May
1992. Springer-Verlag.
[28] A. W. Fitzgibbon and A. Zisserman. Automatic camera recovery for closed or open image
sequences. In European Conf. Computer Vision, pages 311–326, Freiburg, 1998.
[29] R. Fletcher. Practical Methods of Optimization. John Wiley, 1987.
[30] W. Förstner. Evaluation of block adjustment results. Int. Arch. Photogrammetry, 23-III, 1980.
[31] W. Förstner. On the geometric precision of digital correlation. Int. Arch. Photogrammetry & Remote Sensing, 24(3):176–189, 1982.
[32] W. Förstner. A feature-based correspondence algorithm for image matching. Int. Arch. Photogrammetry & Remote Sensing, 26(3/3):150–166, 1984.
[33] W. Förstner. The reliability of block triangulation. Photogrammetric Engineering & Remote Sensing, 51(8):1137–1149, 1985.
[34] W. Förstner. Reliability analysis of parameter estimation in linear models with applications to mensuration problems in computer vision. Computer Vision, Graphics & Image Processing, 40:273–310, 1987.
[35] D. A. Forsyth, S. Ioffe, and J. Haddon. Bayesian structure from motion. In Int. Conf. Computer
Vision, pages 660–665, Corfu, 1999.
[61] R. Hartley, R. Gupta, and T. Chang. Stereo from uncalibrated cameras. In Int. Conf. Computer
Vision & Pattern Recognition, pages 761–764, Urbana-Champaign, Illinois, 1992.
[62] R. Hartley and A. Zisserman. Multiple View Geometry in Computer Vision. Cambridge
University Press, 2000.
[63] R. I. Hartley and T. Saxena. The cubic rational polynomial camera model. In Image Understanding Workshop, pages 649–653, 1997.
[64] F. Helmert. Die Mathematischen und Physikalischen Theorien der höheren Geodäsie, volume 1 Teil. Teubner, Leipzig, 1880.
[65] B. Hendrickson and E. Rothberg. Improving the run time and quality of nested dissection
ordering. SIAM J. Sci. Comput., 20:468–489, 1998.
[66] K. R. Holm. Test of algorithms for sequential adjustment in on-line triangulation. Photogrammetria, 43:143–156, 1989.
[67] M. Irani, P. Anandan, and M. Cohen. Direct recovery of planar-parallax from multiple frames.
In Vision Algorithms: Theory and Practice. Springer-Verlag, 2000.
[68] K. Kanatani and N. Ohta. Optimal robot self-localization and reliability evaluation. In European Conf. Computer Vision, pages 796–808, Freiburg, 1998.
[69] H. M. Karara. Non-Topographic Photogrammetry. American Society for Photogrammetry
and Remote Sensing, 1989.
[70] G. Karypis and V. Kumar. Multilevel k-way partitioning scheme for irregular graphs. J.
Parallel & Distributed Computing, 48:96–129, 1998.
[71] G. Karypis and V. Kumar. A fast and high quality multilevel scheme for partitioning irregular graphs. SIAM J. Scientific Computing, 20(1):359–392, 1999. For Metis code see http://www-users.cs.umn.edu/~karypis/.
[72] I. P. King. An automatic reordering scheme for simultaneous equations derived from network
systems. Int. J. Numer. Meth. Eng., 2:479–509, 1970.
[73] K. Kraus. Photogrammetry. Dümmler, Bonn, 1997. Vol.1: Fundamentals and Standard
Processes. Vol.2: Advanced Methods and Applications. Available in German, English &
several other languages.
[74] A. M. Legendre. Nouvelles méthodes pour la détermination des orbites des comètes. Courcier, Paris, 1805. Appendix on least squares.
[75] R. Levy. Restructuring the structural stiffness matrix to improve computational efficiency. Jet Propulsion Lab. Technical Review, 1:61–70, 1971.
[76] M. X. Li. Hierarchical Multi-point Matching with Simultaneous Detection and Location of
Breaklines. PhD thesis, KTH Stockholm, 1989.
[77] Q.-T. Luong, R. Deriche, O. Faugeras, and T. Papadopoulo. On determining the fundamental
matrix: Analysis of different methods and experimental results. Technical Report RR-1894,
INRIA, Sophia Antipolis, France, 1993.
[78] S. Mason. Expert system based design of close-range photogrammetric networks. J. Photogrammetry & Remote Sensing, 50(5):13–24, 1995.
[79] S. O. Mason. Expert System Based Design of Photogrammetric Networks. Ph.D. Thesis,
Institut für Geodäsie und Photogrammetrie, ETH Zürich, May 1994.
[80] B. Matei and P. Meer. Bootstrapping a heteroscedastic regression model with application to
3D rigid motion evaluation. In Vision Algorithms: Theory and Practice. Springer-Verlag,
2000.
[81] P. F. McLauchlan. Gauge independence in optimization algorithms for 3D vision. In Vision
Algorithms: Theory and Practice, Lecture Notes in Computer Science, Corfu, September
1999. Springer-Verlag.
[82] P. F. McLauchlan. Gauge invariance in projective 3D reconstruction. In Multi-View Modeling
and Analysis of Visual Scenes, Fort Collins, CO, June 1999. IEEE Press.
[83] P. F. McLauchlan. The variable state dimension filter. Technical Report VSSP 5/99, University
of Surrey, Dept of Electrical Engineering, December 1999.
[84] P. F. McLauchlan. A batch/recursive algorithm for 3D scene reconstruction. In Int. Conf. Computer Vision & Pattern Recognition, Hilton Head, South Carolina, 2000.
[85] P. F. McLauchlan and D. W. Murray. A unifying framework for structure and motion recovery
from image sequences. In E. Grimson, editor, Int. Conf. Computer Vision, pages 314–320,
Cambridge, MA, June 1995.
[86] P. F. McLauchlan and D. W. Murray. Active camera calibration for a Head-Eye platform using the Variable State-Dimension filter. IEEE Trans. Pattern Analysis & Machine Intelligence, 18(1):15–22, 1996.