Series Editor:
Freek D. van der Meer, Department of Earth Systems Analysis, International Institute for
Geo-Information Science and Earth Observation (ITC), Enschede, The Netherlands
& Department of Physical Geography, Faculty of Geosciences, Utrecht University,
The Netherlands
The titles published in this series are listed at the end of this volume
RADAR INTERFEROMETRY
Persistent Scatterer Technique
by
BERT M. KAMPES
German Aerospace Center (DLR), Germany
A C.I.P. Catalogue record for this book is available from the Library of Congress.
Published by Springer,
P.O. Box 17, 3300 AA Dordrecht, The Netherlands.
www.springer.com
Cover image: Estimated linear displacement rates for the Berlin test site for different thresholds
on the a posteriori variance factor. Figure 6.6(c) (also see p. 96)
Contents

Preface  xi
  Audience  xii
  Acknowledgments  xiii
Summary  xv
1 Introduction  1
  1.1 Objectives  2
  1.2 Outline  3
References  189
Nomenclature  201
Index  207
Preface
Soon after the first attempts at Delft University of Technology to apply the
radar interferometric technique for the monitoring of subsidence due to gas
extraction in the province of Groningen, the Netherlands, it was recognized by
Usai and Hanssen (1997) that man-made features remained coherent in radar
interferograms over long time spans, while their surroundings were completely
decorrelated. This particular area in the northern part of the Netherlands is
well-known for its subsidence. Due to the slow subsidence rate—the maximum
is approximately 1 cm/y—long temporal baselines needed to be used. Even
though only interferograms with short perpendicular baselines were generated,
temporal decorrelation severely limited the analysis, see (Usai, 1997, 2000;
Usai and Klees, 1999). The Groningen data set was also used by Hanssen
(1998), who analyzed artifacts of atmospheric origin in coherent interfero-
grams with short temporal baselines. Aside from temporal and geometrical
decorrelation, atmospheric signal is the main problem for the interpretation
of the interferometric signal of current-day spaceborne sensors such as those
on board ERS, ENVISAT, and RADARSAT (Hanssen, 2001).
The Permanent Scatterers (PS) Technique was developed shortly after, see
(Ferretti et al., 2000a, 2001). It aims to bypass the problem of geometrical and
temporal decorrelation by considering time-coherent pixels. Furthermore, by
using a large amount of data, atmospheric signal is estimated and corrected
for. The PS technique offers a convenient processing framework that enables
the use of all acquired images, irrespective of baseline, and a parameter
estimation strategy for interferograms with low spatial coherence. The ad-
vantages of this method can be measured from the increasing attention it
has received at major conferences. For example, in the proceedings of the
IGARSS conferences of 1999 to 2003 there are, respectively, 1, 5, 4, 17, and 26
direct references to the term Permanent Scatterer. The “Terrafirma” initiative
further underlines the high potential of this technique. This project aims to
provide a Pan-European ground motion hazard information service, to be
distributed throughout Europe via the national geological surveys. All large
towns in Europe are to be studied with the PS technique. In total, 189 towns
are identified, equalling 27% of the total population. In the longer
term, areas will be included that suffer risks from ground motions caused by,
for example, landslides or mining, see (Terrafirma, 2005).
Additionally, the demonstration by the PS technique that a large
number of images can reduce atmospheric artifacts and yield highly
precise estimates despite decorrelation sparked the development
of a number of related techniques, e.g., Coherent Target Monitoring (Van
der Kooij, 2003; Van der Kooij and Lambert, 2002), Interferometric Point
Target Analysis (Wegmuller, 2003; Werner et al., 2003), Stable Point Network
analysis (Arnaud et al., 2003), Small Baseline Subset Approach (Berardino
et al., 2003, 2002; Lanari et al., 2003; Mora et al., 2002), and Corner Reflector
Interferometry and Compact Active Transponders Interferometry (Nigel Press
Associates, 2004). These techniques partly seek to improve the PS technique
using a modified approach (some even assume distributed scattering of multi-
looked pixels, although still using concepts similar to the PS technique), but also
partly try to avoid disputes over the patent of the PS technique. The term
Persistent Scatterer Interferometry (PSI) is now used to group techniques that
analyze the phase time series of individual scatterers.
This book revisits the original PS technique and presents a new PSI
algorithm, the STUN algorithm, which is developed to provide a robust and
reliable estimation of displacement parameters and their precision.
Audience
This book is intended for scientists and students who want to understand and
work with Persistent Scatterer Interferometry. Particularly of interest for this
group of readers are the derivation of the functional and stochastic model,
the description of the estimation using integer least-squares and variance
components, and the alternative hypothesis testing procedure, see Chapter 2,
3, and 4, respectively. The software toolbox on the CDROM explain these key
concepts using practical demonstrations, see also Appendix E. The modular
programs can be easily adapted and be further developed by the interested
reader for specific problems.
Secondly, this book is intended to provide insight into the problems and
pitfalls of Persistent Scatterer Interferometry for users of PSI products and
of commercially available PSI processing software, and to enhance their un-
derstanding of this technique. This group of readers includes geo-information
professionals and high-level decision makers who do not perform PSI process-
ing themselves. The description of the reference PS technique and potential
improvements upon it, see Chapter 2, and Chapter 6 on real data processing
may prove to be most useful for this group.
The reader is assumed to be familiar with general radar concepts and
conventional radar interferometric processing, as for example described in
(Bamler and Hartl, 1998; Hanssen, 2001; Klees and Massonnet, 1999; Rosen
et al., 2000).
Acknowledgments
This study was performed at the German Aerospace Center (DLR), Ober-
pfaffenhofen, Germany, and at Delft University of Technology, Delft, The
Netherlands. Many people helped me in various ways during this time, whom I
would like to acknowledge here. I would like to thank Roland Klees, professor
of Physical and Space Geodesy at the faculty of Aerospace Engineering at
Delft University of Technology, for supervising my doctoral research. I am also
most indebted to Ramon Hanssen, associate professor at Delft University, for
his guidance through the early years of chaos and confusion, and for the many
fun times we had discussing radar interferometry and the philosophy of life.
For the same reasons I am thankful to Richard Bamler, director of the Remote
Sensing Technology Institute at the DLR, and Michael Eineder, leader of the
Image Science and SAR processing group.
The many people I met during these years have been a great motivation
to me. I would like to mention Richard Lord, Andy Hooper, and Adele Fusco,
visiting scientists at the DLR from the universities of Cape Town, Stanford,
and Sannio, respectively. From Delft University of Technology I would like to
thank Peter Teunissen and Peter Joosten for providing source code and useful
discussions, mainly concerning the integer least-squares estimation. Also the
people of the Delft radar group helped me a lot. Many thanks go to Gini
Ketelaar, Petar Marinkovic, Yue Huanyin (Paul), and especially Freek van
Leijen, who carefully checked parts of this work. This work would not have
been possible without the inventors of the PS technique. I would like to thank
Alessandro Ferretti, Fabio Rocca, and Marco Bianchi, for their encouragement
and for providing me with reference processing results for the Las Vegas test
site. I am also grateful to my colleagues at the DLR who are always prepared
to listen to me. Particularly I learned a lot from Nico Adam, my office mate,
whom I thank for patiently sharing his great understanding and skills in
the fields of radar interferometry and software engineering. The people who
used and evaluated the software during the development are also very much
acknowledged for their feedback, specifically Michaela Kircher, Jirathana
Bert Kampes
Munich, March 2006
Summary
and a Minimal Cost Flow sparse grid phase unwrapping algorithm is used to
obtain the unwrapped phase at these points. The final estimation is performed
using the unwrapped data. The precision of the estimated parameters is
described by the propagated variance-covariance matrix with respect to a
chosen reference point.
The STUN algorithm is successfully applied to two urban test areas.
Several tests are performed to assess the sensitivity of the algorithm to various
parameters such as the number of available interferograms, the distance
between points in the reference network, etc. The first test site, Berlin, was not
expected to undergo significant displacements. It was selected to validate the
developed algorithm and software. However, an uplift area is identified to the
west of Berlin, with a maximum displacement rate of ∼4 mm/y. Most likely,
this uplift is related to underground gas storage at that location. Data of two
adjacent tracks are used in a cross-comparison of the estimated displacement.
In contrast, the second test site, Las Vegas, undergoes significant displacements.
A combined linear and sinusoidal displacement model is used to model the
displacements. The maximum estimated subsidence rate is ∼20 mm/y and
the maximum amplitude of the seasonal component is ∼20 mm. The results
compare well with estimates by the reference PS technique. Finally, combined
use of ERS and ENVISAT data is demonstrated.
1 Introduction
1.1 Objectives
The PS technique, as described by Ferretti et al. (2000a, 2001), is the basis for
this study. The principal estimation strategy is not questioned, i.e., a single
master stack of complex differential interferograms is used, and the points
are estimated using a preliminary and a final estimation step. Within this
framework the central research question is formulated as:
1.2 Outline
This book is organized as follows. A review of the reference PS technique
(Ferretti et al., 2000a, 2001) is given in Chapter 2. Potential improvements
upon the reference technique are identified and the functional and stochastic
model are derived. The next chapters focus on these improvements, making
use of the derived mathematical model. Chapter 3 introduces the integer
least-squares estimator, which is used in the developed algorithm to estimate
unknown integer ambiguities and float parameters. The Spatio-Temporal Un-
wrapping Network (STUN) algorithm, which is developed for the estimation of
¹ In Ferretti et al. (2000a) the term non-linear deformation is used to indicate small
deviations from the linear model that are obtained using filtering of the residual
phase. In this study the displacement is completely parameterized.
2 The Permanent Scatterer Technique
The Permanent Scatterer (PS) technique was developed in the late 1990s
by A. Ferretti, F. Rocca, and C. Prati of the Technical University of Milan
(POLIMI) to overcome the major limitations of repeat-pass SAR interferom-
etry: temporal and geometrical decorrelation, and variations in atmospheric
conditions. The main characteristics of this multi-image processing method
are that it utilizes a single master stack of differential interferograms, and
that only time-coherent pixels, i.e., “Permanent Scatterers,” are considered.
Furthermore, this technique distinguishes itself from other common interfero-
metric processing methods by the fact that all acquired images can be used,
including those with large baselines. This is the case since pixels with point-
like scattering do not suffer from geometrical decorrelation as targets with a
distributed scattering mechanism do, and such pixels thus remain coherent in
all interferograms.
The PS technique, which is referred to as the reference technique in this
book, is described in detail in section 2.1. The term “PS technique” is used to
refer to this description. Potential improvements upon the PS technique are
identified in section 2.2. These issues are addressed in the following chapters.
The key processing steps of the PS technique are the following (see, e.g.,
Ferretti et al., 1999b,c):
1. Computation of the interferograms.
2. Computation of the differential interferograms using a digital elevation
model (DEM).
3. Preliminary estimation—at a coarse grid—of the presumably most coher-
ent pixels. These pixels are referred to as Permanent Scatterer Candidates
(PSCs).
Given K+1 SAR images (all the available images on the same track), K
interferograms are formed with respect to the same master image m. The
SAR images are oversampled by a factor of two in range and azimuth
direction before interferogram generation in order to avoid aliasing of the
complex interferometric signal. Therefore, the amount of data that needs to
be handled is considerable, even if the area of interest is limited to a city
and its surroundings (i.e., typically less than five percent of the total area
of a full-scene SAR image). For a typical project with K= 50 interferograms,
the required online storage space for convenient processing is approximately
100 GB. Although current computer systems have this storage available at
low cost, the processing time to handle these amounts of data is a factor of
importance when a processing environment is selected (regarding the amount
of memory, the speed of the disk drives, and usage of multiple CPUs). Note
that while the amount of data can be processed and stored by current
computer systems without major difficulties, there are too many unknown
parameters to perform their estimation in a single step. Aside from the amount
of data, a second difference with conventional interferometric processing is
that spectral range and azimuth filtering is not applied, since only targets
with a point-like scattering mechanism are considered.
The use of a single master image implies that the temporal, geometrical,
and/or Doppler baseline (difference in Doppler centroid frequency) will be
large for a number of interferograms, leading to decorrelation of targets that
have a distributed scattering mechanism. This may cause difficulties in the
coregistration because standard algorithms require a certain level of coherence
(see for example Hanssen, 2001). Therefore, in our implementation a newly
developed geometric coregistration procedure is applied using a DEM of the
area and precise orbit data. The offset of the slave image with respect to the
master image is computed on a grid of virtual tie-points using a zero-Doppler
iteration scheme (see, e.g., Hanssen, 2001). Using this information, the higher
order terms of the coregistration polynomial are determined. The zero-order
terms of the polynomial are estimated using a point matching procedure, since
timing errors in range and azimuth prevent an accurate geometric solution for
these terms. This algorithm is described in detail in (Adam et al., 2003).
The master image is selected such that the dispersion of the perpendicular
baselines is as low as possible, see (Colesanti et al., 2003a). In our imple-
mentation, the master image is selected maximizing the (expected) stack
coherence of the interferometric stack, which facilitates visual interpretation
of the interferograms and aids quality assessment. The stack coherence for a
stack with master m is defined as
$$\gamma^m = \frac{1}{K}\sum_{k=0}^{K} g\!\left(B_{\perp}^{k,m},\,1200\right)\cdot g\!\left(T^{k,m},\,5\right)\cdot g\!\left(f_{dc}^{k,m},\,1380\right), \tag{2.1}$$

where

$$g(x,c)=\begin{cases}1-|x|/c & \text{if } |x|<c\\ 0 & \text{otherwise,}\end{cases} \tag{2.2}$$

and $B_{\perp}^{k,m}$ is the perpendicular baseline between images $m$ and $k$ at the center
of the image, $T^{k,m}$ the temporal baseline (in years), and $f_{dc}^{k,m}$ the Doppler
baseline (the mean Doppler centroid frequency difference). The divisor $c$ in
Eq. (2.2) can be regarded as a critical baseline for which total decorrelation
is expected for targets with a distributed scattering mechanism. The values
given in Eq. (2.1) are typical for ERS, but they can be easily adapted to
any other sensor with a different wavelength, look angle, and/or bandwidth.
Fig. 2.1 shows an example of the stack coherence function for a real data stack
of Berlin, where 70 SAR images are available. The images are sorted according
to the acquisition time. Note that in general the stack coherence $\gamma^m$ is larger
when the master is selected more centrally in time, but that it decreases when
it does not lie centrally regarding the perpendicular or Doppler baseline.
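As an illustration, the following minimal Python sketch evaluates Eq. (2.1) for every candidate master and keeps the best one. It is not the implementation used in this study; the baseline arrays are synthetic placeholders, and the divisors are the ERS values quoted above.

```python
import numpy as np

def g(x, c):
    # triangle weight of Eq. (2.2): 1 - |x|/c for |x| < c, else 0
    x = np.abs(np.asarray(x, dtype=float))
    return np.where(x < c, 1.0 - x / c, 0.0)

def stack_coherence(bperp, tbase, fdc, m):
    """Expected stack coherence of Eq. (2.1) for candidate master m."""
    K = len(bperp) - 1                      # K interferograms from K+1 images
    w = (g(bperp - bperp[m], 1200.0) *      # perpendicular baseline [m]
         g(tbase - tbase[m], 5.0) *         # temporal baseline [y]
         g(fdc - fdc[m], 1380.0))           # Doppler baseline [Hz]
    return w.sum() / K                      # the k = m term contributes 1

# synthetic placeholder baselines for K+1 = 30 acquisitions
rng = np.random.default_rng(0)
bperp = rng.uniform(-1100.0, 1100.0, 30)
tbase = np.sort(rng.uniform(0.0, 8.0, 30))
fdc = rng.uniform(-400.0, 400.0, 30)

best_m = max(range(30), key=lambda m: stack_coherence(bperp, tbase, fdc, m))
print(best_m, stack_coherence(bperp, tbase, fdc, best_m))
```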
A reference digital elevation model (DEM) and precise orbit data are used to
obtain K differential interferograms. The interferometric phase component
that is induced by topography is largely eliminated using the differential
technique, see, e.g., (Bamler and Hartl, 1998; Bürgmann
¨ et al., 2000; Eineder,
2003; Massonnet and Sigmundsson, 2000; Rosen et al., 2000). The differential
interferometric phase is used in all further computations. In the following, the
term (interferometric) phase refers to the differential interferometric phase,
except when explicitly stated otherwise.
Instead of using an existing DEM, a height model can also first be gener-
ated from a subset of the available images, preferably with large perpendicular
Fig. 2.1: Example of the stack coherence function, Eq. (2.1), for 70 available
acquisitions of the Berlin area, track 165, frame 2547.
and small temporal baselines, see (Ferretti et al., 1999a). This is the standard
approach in the PS technique (Colesanti et al., 2003a). However, after the
Shuttle Radar Topography Mission (SRTM), a DEM of sufficient precision
is readily available for practically any area of interest between −57° and 60°
latitude (Suchandt et al., 2001). The DEMs have a vertical resolution of one
meter (i.e., the elevation value is given in integer meters) and a horizontal
spacing of 1 arc second (approximately 30 meters at the equator). The SRTM
DEM accuracy specifications are 16 m absolute and 6 m relative for the vertical
direction, and 20 m absolute and 16 m relative horizontally (90% confidence),
see (Rabus et al., 2003). In our implementation, the SRTM X-band DEM is
used for topographic correction since it is expected to be more precise than the
C-band DEM, due to the shorter wavelength and the mode of operation used
(Rabus et al., 2003). However, the X-band DEM does not have continuous
coverage due to its smaller swath-width. If the area of interest is not fully
covered by the X-band DEM the C-band DEM is (partially) used. Although
the best available DEM is used, the results of the PS processing do not depend
on the precision of the DEM, since for each pixel also the elevation with respect
to the DEM is estimated. Keep in mind that even an extremely precise DEM
does not allow the interferometric phase to be fully corrected, since the location of
the scatterer is not known, e.g., the backscattered echo for a pixel at a certain
range to the radar could come from the street, bounced via a wall, from a
ledge in a window, or from a rooftop, or any combination thereof.
It was noted by Colesanti et al. (2003b) that the PS analysis can also
be carried out without using a reference DEM, but only compensating the
interferograms for a flat topography, since in the PS technique a topographic
term is estimated anyway.
Functional model
The functional model that is used in the PS technique for the unwrapped
differential interferometric phase Φk for a point in interferogram k is given in
(Colesanti et al., 2003a) as
$$\Phi^k = \phi^k_{topo} + \phi^k_{defo} + \phi^k_{atmo} + \phi^k_{noise}, \tag{2.3}$$

where $\phi^k_{topo}$ is the phase due to inaccuracy of the reference DEM, $\phi^k_{defo}$ is the
phase due to displacement of the point, $\phi^k_{atmo}$ is the phase due to atmospheric
delays, and $\phi^k_{noise}$ is decorrelation noise. The topographic phase is practically
a linear function of the perpendicular baseline, and can be written as

$$\phi^k_{x,topo} = \beta^k_x \cdot \Delta h_x, \tag{2.4}$$

where $\beta^k_x$ is the height-to-phase conversion factor for point $x$, and $\Delta h_x$ is the
height of the point relative to the reference surface, referred to as DEM error
(see Eq. (2.12) on page 17 for the definition of $\beta$). A time-linear model is used
to model the displacement of each point $x$. Therefore,

$$\phi^k_{x,defo} = -\frac{4\pi}{\lambda}\, T^k \cdot \alpha(x), \tag{2.5}$$

where $\lambda$ is the wavelength of the radar carrier signal, $T^k$ is the temporal
baseline with respect to the master acquisition, and $\alpha(x)$ is the average
displacement rate at point $x$. The phase $\phi_{atmo}$ due to atmospheric signal is not
modeled, but reduced considerably by considering phase differences between
nearby points. The noise term contains all other phase contributions. If the
displacement (difference between points) deviates from a time-linear behavior
this signal is thus also contained in the noise term. In the PS technique a tem-
poral high-pass filter is used to separate temporally correlated displacement
signal from random noise. The next section describes the estimation of these
signal components in detail.
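For illustration, the following is a minimal Python sketch of the forward model of Eqs. (2.3)–(2.5), simulating the wrapped differential phase of a single point. The baseline values, noise level, and parameter values are assumptions, and the atmospheric term is omitted.

```python
import numpy as np

LAMBDA = 0.0566                                 # ERS wavelength [m]
K = 40
rng = np.random.default_rng(1)
T = np.sort(rng.uniform(-4.5, 4.5, K))          # temporal baselines [y]
beta = rng.uniform(-0.2, 0.2, K)                # height-to-phase factors [rad/m]

dh_true = 12.0                                  # DEM error [m]
alpha_true = 0.005                              # displacement rate [m/y]

phi_topo = beta * dh_true                           # Eq. (2.4)
phi_defo = -4.0 * np.pi / LAMBDA * T * alpha_true   # Eq. (2.5)
phi_noise = rng.normal(0.0, 0.3, K)
Phi = phi_topo + phi_defo + phi_noise               # Eq. (2.3), atmosphere omitted
phi_wrapped = np.angle(np.exp(1j * Phi))            # the observed (wrapped) phase
```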
integrated, yielding the unwrapped residual phase at the PSC positions with
respect to a reference point.
In the following, first the selection of the PSCs using the amplitude
dispersion index is described, then the estimation of the parameters using
the ensemble coherence, and finally the filtering that is performed to separate
atmospheric signal from temporally correlated displacement and random
noise.
The amplitude dispersion index Da , and its relation to the phase standard
deviation σφ , is defined in (Ferretti et al., 2001) as
$$\hat{\sigma}_\phi = \frac{\sigma_a}{\bar{a}} = D_a, \tag{2.6}$$
where σa is the temporal standard deviation of the amplitude and ā the
temporal mean of the amplitude for a certain pixel. Thus, a pixel that
consistently has a similar, relatively large, amplitude during all acquisitions
is expected to have a small phase dispersion. This relation enables the
identification of potentially coherent points without the need to analyze the
phase. The latter would not be possible at this moment, since the phase still
contains unknown signal contributions. Moreover, the amplitude dispersion
index, Eq. (2.6), does not regard neighboring pixels. This enables the detection
of isolated points, which is not possible if this detection is based on a spatially
estimated coherence value, as, for example, done in (Usai, 1997; Usai and
Hanssen, 1997).
Points are selected as PSCs if the amplitude dispersion is below a threshold,
typically between 0.25 and 0.4 (Colesanti et al., 2003a; Ferretti et al., 2001).
Colesanti et al. (2003a) report that the PSC density must be at least
∼3 PSC/km2 , since otherwise the atmospheric signal cannot reliably be
interpolated. The estimation of the parameters is restricted to these selected
pixels in the preliminary estimation step. Ferretti et al. (2001) have shown
using a numerical simulation that the estimation of the phase stability based
on the amplitude dispersion holds very well for $\sigma_\phi < 0.25$ rad (∼15°) if K = 33.
This experiment is repeated here, see Fig. 2.2. For larger values of the
amplitude dispersion index there is no linear relation with the phase standard
deviation. The amplitude dispersion index tends to 0.5 for low SNR, see also
(Ferretti et al., 2001). Nonetheless, points with a smaller amplitude dispersion
index are expected to have a smaller phase standard deviation. Therefore,
thresholding on the dispersion index is a very practical way of selecting points
that are expected to have the smallest phase dispersion.
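In code, the selection amounts to a per-pixel temporal mean and standard deviation of the calibrated amplitudes. The sketch below is an illustration of Eq. (2.6) under stated assumptions: the threshold and stack shape are placeholders, and amplitude_stack would come from the coregistered, calibrated SLC images.

```python
import numpy as np

def select_pscs(amplitude_stack, threshold=0.25):
    """PSC selection by amplitude dispersion, Eq. (2.6).

    amplitude_stack: array (K+1, rows, cols) of calibrated amplitudes.
    Returns a boolean PSC mask and the dispersion index D_a."""
    a_mean = amplitude_stack.mean(axis=0)
    a_std = amplitude_stack.std(axis=0)
    D_a = np.divide(a_std, a_mean, out=np.full_like(a_mean, np.inf),
                    where=a_mean > 0)
    return D_a < threshold, D_a

# e.g., with a synthetic stack of 34 images of 100 x 100 pixels:
stack = np.abs(np.random.default_rng(2).normal(1.0, 0.2, (34, 100, 100)))
mask, D_a = select_pscs(stack, threshold=0.25)
```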
The images need to be radiometrically calibrated in order to allow for
the estimation of σa and ā (Ferretti et al., 2001). In our implementation,
the data are calibrated for antenna pattern, range spreading loss, and gain
factor (relevant to the sensor, acquisition time, and processing center), see
Fig. 2.2: Numerical simulation for the amplitude dispersion index following (Ferretti
et al., 2001). A complex variable z = s+n is simulated at 5000 points. The signal was
fixed to s = 1, while the noise standard deviation on the real and imaginary parts
of n was gradually incremented from 0.05 to 0.8. 34 data sets are supposed to be
available (K = 33). The mean estimated dispersion index Da (diamonds) and their
standard deviations are plotted as a function of the noise standard deviation, together
with the phase standard deviation σφ (plus marks). Small values of the amplitude
dispersion index are a good estimate for the phase standard deviation.
also (Adam et al., 2003; Laur et al., 1998). However, this blind calibration (i.e.,
based on annotated parameters without examining the data) may not work
for all SLC images, likely due to an incorrectly annotated calibration constant
in the leader file. To ensure that the calibrated images are comparable, in our
implementation, the histograms of the calibrated intensity images are plotted
on top of each other. If the modes of the histograms vary more than what
could reasonably be expected, say 1 dB, then the histograms are all shifted to
the mode of the first image. Since decibel is used as a unit, this is equivalent
to multiplication of the intensity with a re-computed calibration constant. (To
avoid large random variation of the backscatter due to changes in soil moisture
and surface roughness, etc., the histograms are computed for a user-selected
polygonal region, e.g., ∼10 km² of inner-city area.)
The (complex) ensemble coherence

$$\hat{\gamma}_{x,y} = \frac{1}{K}\sum_{k=1}^{K} \exp\!\left(j\, e^k_{x,y}\right), \tag{2.7}$$

is used as a norm, where $j$ is the imaginary unit, and $e^k_{x,y}$ is the difference
between the observed and modeled phase between points $x$ and $y$ in interfero-
gram $k$. The “hat” in $\hat{\gamma}$ is used to stress that Eq. (2.7) is an estimate of the
coherence.
Fig. 2.3: Geometric interpretation of the complex coherence. The observed phase
(bold ×) is wrapped between −π and π. It is modeled (bold) using a linear
displacement rate α. The unwrapped phase and model are displayed using a normal
font face. The residual phase is indicated by lines. The coherence is the complex
sum (red arrow) of the residual phase between observations and model, depicted in
2.3(b). The angle of the complex coherence corresponds to the average residual S̄.
• Random noise:
PSCs are selected by thresholding the amplitude dispersion index. If a
0.4 threshold is used for the dispersion index, then according
√ to Eq. (2.6),
the noise standard deviation is expected to be below 2 · 0.40 = 0.56 rad
for all considered differences. (Note though that the amplitude dispersion
underestimates the phase noise by approximately 50% for a value of 0.40,
see Fig. 2.2.)
The absolute value of the coherence lies in the interval [0,1], where a coherence
of 1 signifies complete correspondence of the modeled phase with the observed
phase. The angle of the complex coherence is said to be an estimate for the
master atmospheric signal in (Ferretti et al., 2001), but it would be more exact
to refer to it as the average interferometric residual phase, see Fig. 2.3. The
reason that it is called master atmosphere lies in the fact that the master image
is present in all interferograms, and that, for example, a large atmospheric
delay during the master acquisition would clearly be visible in this average.
However, it is not true that the average residual phase is always caused by an
atmospheric delay during the master acquisition.
This mean is an estimate for the atmospheric phase during the master
acquisition, and it is removed because it will not pass the high-pass filter that
is performed next. The temporal high-pass filtering is performed to remove
possible temporally correlated displacement from the residual phase. Finally,
a spatial low-pass filter is applied to the temporally filtered residuals in order
to remove the random noise component. These filtering steps can be written
symbolically, cf. (Ferretti et al., 2000a), as
$$\hat{\phi}^k_{x,atmo} = \left[\left[e^k_x\right]_{\text{HP, time}}\right]_{\text{LP, space}} + \left[\bar{e}_x\right]_{\text{LP, space}}, \tag{2.10}$$

where $\hat{\phi}^k_{x,atmo}$ is the estimated atmospheric phase at PSC position $x$ in
interferogram k. In (Ferretti et al., 2000a) a triangular window of length 300
days was used for the temporal filter and a 2 × 2 km² averaging window for the
spatial filter. Note that the order of the filtering steps can be interchanged,
and also that the temporal filter could be applied before the integration step.
Ferretti et al. (2000a) suggest that these filtering steps require the un-
wrapped residual phase, but this is not really necessary. By using a complex
filter, i.e., filtering the real and imaginary parts of the complex residual signal
separately, the (wrapped) long-wavelength component can easily be obtained. In this case
there is no need for phase unwrapping, since the complex filtered residuals can
directly be subtracted from the original phase data, which is also wrapped.
(Though if unwrapping is performed, the chance of occurrence of unwrapping
errors is likely smaller after complex filtering, because the number of residuals
are likely much lower, depending on the power of the signal in the higher
frequencies.)
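A minimal sketch of such a complex low-pass filter follows, here with a simple moving-average kernel; the window size is an arbitrary assumption, not the 2 × 2 km² window quoted above.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def complex_lowpass(phase, size=65):
    """Low-pass filter wrapped phase without unwrapping it, by filtering the
    real and imaginary parts of exp(j*phase) separately."""
    z = np.exp(1j * phase)
    zf = uniform_filter(z.real, size) + 1j * uniform_filter(z.imag, size)
    return np.angle(zf)            # the (wrapped) low-frequency component

# the filtered residual can be subtracted from the wrapped data directly:
# residual_hp = np.angle(np.exp(1j * (phase - complex_lowpass(phase))))
```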
After the long-wavelength part of the atmospheric delays is estimated at the
PSC positions, it is interpolated at the original resolution of the differential
interferograms. The interpolated atmospheric signal is referred to as “atmo-
spheric phase screen” (APS). It is noted in (Colesanti et al., 2003a) that the
step of spatial low-pass filtering and interpolation of the residual phase can
also be performed simultaneously using Kriging interpolation, instead of using
the 2 × 2 km² moving averaging window.
The interpolated APSs are subtracted from the differential interferograms
at full resolution, and additional PS points are searched for. This is done on a
pixel-by-pixel basis (i.e., not between nearby pixels, although still with respect
to a reference), since there is no need anymore to consider phase differences be-
tween nearby points. After all, the computations of the preliminary estimation
step, described in the previous section, are performed between nearby points
because otherwise the atmospheric signal would prevent a correct estimation,
and this signal is now removed. The same functional model, Eq. (2.7), is used
here as during the preliminary estimation step.
Points with an estimated ensemble coherence below a certain threshold are
discarded, e.g., |γ̂ | < 0.75 (Ferretti et al., 2001). The number of points that
finally can be used is on the order of a few hundred points per square kilometer
(in urban areas), according to Ferretti et al. (2001). The same strategy of low-
pass temporal filtering, that is described at the end of the previous section,
is used to estimate temporally correlated displacements that deviate from the
linear displacement model.
Despite its widespread application, the reference PS technique does not nec-
essarily provide optimally estimated parameters under all circumstances,
particularly in cases when the assumptions on the displacement model and
properties of the signal components are not valid. Possible problem areas are
identified here, related to the following assumptions made in the PS technique:
• The functional model contains all phase components, see Eq. (2.3).
As will be shown in section 2.2.1, the sub-pixel position of the PS point
induces an additional phase that should be accounted for, particularly
when there are significant differences between the radar frequencies
and/or Doppler centroid frequencies of the acquired images. This
additional phase term was also introduced in the PS technique, i.e.,
in the reference technique, when ERS–ENVISAT cross interferometry
was discussed (Arrigoni et al., 2003; Colesanti et al., 2003d).
Moreover, in the model used in the PS technique, phase due to orbit
errors is lumped with the atmospheric signal. These terms should be
separated in the functional model.
• Displacement can be described using a constant rate, see Eq. (2.5).
The problem is over-parameterized in case a PS point does not undergo
displacement. A significance test should be used to detect whether
a displacement parameter can be significantly estimated, and if this
would not be possible, the estimation should be repeated without such
a parameter.
Moreover, the coherence does not directly provide the precision of the
estimated displacement at a certain epoch.
These issues can only be studied after rigorously deriving the functional and
stochastic model. This is the subject of sections 2.2.1 and 2.2.2.
$$\phi^k_x = W\{\phi^k_{x,topo} + \phi^k_{x,defo} + \phi^k_{x,obj} + \phi^k_{x,atmo} + \phi^k_{x,orbit} + \phi^k_{x,noise}\}, \tag{2.11}$$

where $W\{\cdot\}$ is the wrapping operator¹, $\phi_{topo}$ is the phase caused by uncom-
pensated topography, $\phi_{defo}$ is the phase caused by a displacement of the target
in the time between the acquisitions, $\phi_{obj}$ is the object scattering phase related
to the path length traveled in the resolution cell, $\phi_{atmo}$ is the atmospheric
phase accounting for signal delays, $\phi_{orbit}$ is the phase caused by imprecise
orbit data, and $\phi_{noise}$ is the additive noise term. The topographic phase is
related to the elevation of the target with respect to the reference surface
$\Delta h_x$, referred to as DEM error in this work, as (Rodriguez and Martin, 1992)

$$\phi^k_{x,topo} = -\frac{4\pi}{\lambda}\,\frac{B^{k}_{\perp,x}}{r^m_x \sin\theta^m_{x,inc}}\cdot\Delta h_x = \beta^k_x\cdot\Delta h_x, \tag{2.12}$$

where $\lambda$ is the wavelength of the carrier signal used by the radar system, $B^{k}_{\perp,x}$
is the local perpendicular baseline, $r^m_x$ is the range from the master sensor to the
pixel, and $\theta^m_{x,inc}$ is the local incidence angle, see also Fig. 2.4. The height-to-
phase conversion factor $\beta$ relates a change in height to a change in phase.
This factor is computed for each pixel using a DEM of the area. It is equal
to the phase difference between a synthetic interferogram computed from the
DEM directly and from the DEM with a bias of one meter added to it. The
displacement term equals

$$\phi^k_{x,defo} = -\frac{4\pi}{\lambda}\,\Delta r^k_x, \tag{2.13}$$

where $\Delta r^k_x$ is the line-of-sight displacement toward the radar since the
acquisition time of the master image. In order to limit the number of
parameters that need to be estimated, the displacement behavior needs to be
modeled and parameterized. The displacement since the time of the master
acquisition is modeled using a linear combination of base functions as

$$\Delta r^k_x = \sum_{d=1}^{D} \alpha_d(x)\cdot p_d(k). \tag{2.14}$$

¹ $W\{x\} = \angle\exp(jx)$
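As a small illustration of the parameterization of Eq. (2.14), the sketch below builds the displacement of one point from a set of base functions (here a linear plus a seasonal term; all values are assumptions). The factor β itself would be computed from the DEM as described above.

```python
import numpy as np

def base_functions(T):
    """Columns p_d(k) of Eq. (2.14): here a linear trend and a seasonal
    term (cf. the base functions of Eq. (3.30) used later in this book)."""
    T = np.asarray(T, dtype=float)
    return np.column_stack([T,
                            np.sin(2.0 * np.pi * T),
                            np.cos(2.0 * np.pi * T) - 1.0])

T = np.linspace(-4.0, 4.0, 40)              # temporal baselines [y]
alpha = np.array([0.005, 0.002, 0.001])     # coefficients alpha_d(x)
delta_r = base_functions(T) @ alpha         # LOS displacement [m], Eq. (2.14)
phi_defo = -4.0 * np.pi / 0.0566 * delta_r  # Eq. (2.13), ERS wavelength assumed
```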
Fig. 2.4: Satellite configuration for across-track radar interferometry. The master
sensor m and slave k go “into” the paper. A point x is observed in the pixel of the
master image at a range rxm and under a look angle θxm .
Fig. 2.5: Geometry for a point scatterer located at a sub-pixel position in (a)
azimuth and (b) ground-range. The phase in the interferogram is computed at the
pixel position corresponding to the leading edge of the resolution cell, while the
phase center of the scatterer x is actually located at the sub-pixel position ξ x in
azimuth and η x in ground-range. The observed interferometric phase, corrected for
the phase of the reference surface, still contains the contribution due to the path
length difference that the signal traveled within the resolution cell, unless the phase
is interpolated at the exact sub-pixel position of the point.
The azimuth term can also be expressed in terms of the Doppler centroid
frequency. Using a simplified rectilinear imaging geometry, the Doppler cen-
troid frequency can be written as (Bamler and Schättler, 1993; Fernandez
et al., 1999)

$$f^k_{dc} = \frac{-2v}{\lambda^k}\sin\vartheta^k, \tag{2.18}$$

where $v$ is the instantaneous velocity of the satellite in an earth-fixed coor-
dinate system. For a curved geometry a correction factor close to one needs
to be applied, which accounts for the slightly smaller beam velocity on the
ground, see also (Cumming and Wong, 2005; Raney, 1986). From Fig. 2.5(a)
it is clear that the additional range from the start of the bin to the actual
position is

$$\xi^k_x = \xi_x \sin\vartheta^k. \tag{2.19}$$

By substitution of Eq. (2.18) in Eq. (2.19) it follows that

$$\xi^k_x = \frac{\lambda^k}{-2v}\, f^k_{x,dc}\cdot\xi_x, \tag{2.20}$$

using that $\phi/(2\pi) = -2r/\lambda$ for repeat-pass interferometry. The interferometric
phase, caused by the azimuth sub-pixel position of the point scatterer, can
finally be expressed as

$$\phi^{k,m}_{\xi_x} = \frac{2\pi}{v}\left(f^m_{x,dc} - f^k_{x,dc}\right)\cdot\xi_x, \tag{2.22}$$

assuming equal sensor velocities $v$. Note that this phase depends on the
wavelength used by the radar, even though this is not directly visible from
Eq. (2.22).
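Numerically, Eq. (2.22) is a one-liner; the sketch below uses an assumed ERS-like sensor velocity of 7100 m/s (the exact value depends on the orbit).

```python
import numpy as np

def phi_azimuth_subpixel(fdc_master, fdc_slave, xi, v=7100.0):
    """Azimuth sub-pixel phase of Eq. (2.22).
    fdc_*: Doppler centroid frequencies [Hz]; xi: azimuth sub-pixel
    position [m]; v: sensor velocity [m/s] (assumed value)."""
    return 2.0 * np.pi / v * (fdc_master - fdc_slave) * xi

# a 1000 Hz Doppler difference and a 2 m azimuth offset give ~1.8 rad:
print(phi_azimuth_subpixel(500.0, -500.0, 2.0))
```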
In (Colesanti et al., 2003d), the interferometric phase caused by the range
sub-pixel position was expressed as

$$\phi_{\eta_x} = \frac{4\pi}{c}\left(\eta^m_x\,\Delta f + \frac{f^m B_\perp}{r^m\tan\theta^m}\,\eta^m_x\right), \tag{2.23}$$

where $c$ is the speed of light, $f^m$ is the radar frequency of the master sensor,
$\Delta f = f^k - f^m$ is the frequency offset of the slave sensor, and $\eta^m_x$ is the slant-
range sub-pixel position. Using $\lambda = c/f$, Eq. (2.23) can be written in terms of
the wavelength as

$$\phi_{\eta_x} = \left(\frac{4\pi}{\lambda^k} - \frac{4\pi}{\lambda^m}\right)\cdot\eta^m_x + \frac{4\pi}{\lambda^m}\,\frac{B_\perp}{r^m\tan\theta^m}\cdot\eta^m_x. \tag{2.24}$$

The range sub-pixel term in Eq. (2.17) can be approximated using $\theta^m - \theta^k \approx
B_\perp/r^m$ as

$$\begin{aligned}\phi_{\eta_x} &\approx \left(\frac{4\pi}{\lambda^k}\sin\!\left(\theta^m + \frac{B_\perp}{r^m}\right) - \frac{4\pi}{\lambda^m}\sin\theta^m\right)\cdot\eta_x\\ &\approx \left(\frac{4\pi}{\lambda^k}\left(\sin\theta^m + \frac{B_\perp}{r^m}\cos\theta^m\right) - \frac{4\pi}{\lambda^m}\sin\theta^m\right)\cdot\eta_x.\end{aligned} \tag{2.25}$$

The slant-range position of the scatterer in the master image $\eta^m_x$ is related to
the ground-range position as $\eta^m_x = \eta_x\sin\theta^m$, see Fig. 2.5(b). Substitution in
Eq. (2.25) yields

$$\phi_{\eta_x} \approx \left(\frac{4\pi}{\lambda^k} - \frac{4\pi}{\lambda^m}\right)\cdot\eta^m_x + \frac{4\pi}{\lambda^k}\,\frac{B_\perp}{r^m}\cos\theta^m\cdot\eta_x. \tag{2.26}$$

Moreover, it holds that $\eta_x\cos\theta^m = \eta^m_x/\tan\theta^m$, i.e., Eq. (2.26) can be written
as

$$\phi_{\eta_x} \approx \left(\frac{4\pi}{\lambda^k} - \frac{4\pi}{\lambda^m}\right)\cdot\eta^m_x + \frac{4\pi}{\lambda^k}\,\frac{B_\perp}{r^m\tan\theta^m}\cdot\eta^m_x. \tag{2.27}$$

By comparison of Eq. (2.27) with Eq. (2.24), it follows that Eq. (2.17) and
Eq. (2.23) are equivalent expressions for the interferometric phase caused
by range sub-pixel position. The only difference is the usage of the master
[Fig. 2.6: the phase contributions Φ [rad] as a function of (a) DEM error Δh [m], (b) displacement Δr [mm], (c) azimuth sub-pixel position ξ [m], and (d) range sub-pixel position η [m], for perpendicular baselines B⊥ of 50–1000 m and Doppler centroid differences Δf_dc of 200–3200 Hz.]
The atmospheric phase $\phi^k_{x,atmo}$ is caused by signal delay differences during the
acquisitions, mainly due to water vapor in the troposphere. The amplitude of
the atmospheric signal in the differential interferograms can be described by
a power-law model.

The phase caused by imprecise orbit data is approximated by a linear trend
over the interferogram,

$$\phi_{orbit} = a + b\cdot\xi + c\cdot\eta, \tag{2.30}$$
see also (Hanssen, 2001). The bias a indicates that a reference point in the
interferograms must be selected, with respect to which the other points are
computed. In practice such a bias also absorbs differences in the absolute
signal delay. Note that in Eq. (2.30) the symbols for azimuth ξ and range
coordinate η are relative to the leading edges of the interferogram, while in
Eq. (2.17) the symbols ξ x and η x for the sub-pixel positions are relative to
the leading edges of the resolution cell. The orbit error phase is assumed to be
small for most interferograms. Hanssen (2001) has shown that the maximum
number of residual orbit fringes is less than one (95% confidence interval)
in a 100×100 km² interferogram if 5 cm radial and 10 cm across-track rms
is assumed for the orbit precision. Since we use precise orbits estimated by
the GFZ (with comparable precision), in general the residual reference phase
caused by orbit errors is smaller than a few radians over the area of interest.
Note that a trend of the average displacement field cannot be distinguished
from the average phase caused by orbit errors. This is the case for displacement
estimation using a single interferogram as well as for the estimation using a
data stack. However, the residual orbit trends are assumed to be uncorrelated
between acquisitions, and their impact on the estimated displacement field
is thus assumed to be small. To get an impression of the impact of this
error on the estimated linear displacement rates, consider the case where ten
interferograms are available, only containing phase ramps in range direction
(caused by imprecise orbit data). Assume that the reference point is located
at the left hand side, and that the standard deviation of the residual reference
phase is one rad for points on the right hand side. If only a linear displacement
rate α is estimated at a point x on the right, the following system of equations
must be solved
$$\begin{bmatrix}\Phi^1_x\\ \Phi^2_x\\ \vdots\\ \Phi^{10}_x\end{bmatrix} = \begin{bmatrix}-\frac{4\pi}{\lambda}T^1\\ -\frac{4\pi}{\lambda}T^2\\ \vdots\\ -\frac{4\pi}{\lambda}T^{10}\end{bmatrix}\alpha(x) + e,\qquad D\{e\} = I, \tag{2.31}$$
where T k is the temporal baseline, see also Eq. (2.13), and D{e} denotes the
dispersion of the unmodeled phase components, i.e., the residual reference
phase caused by the orbit errors. (It is assumed that the unwrapped phase is
available, and a least-squares estimation is performed.) The variance of the
estimated linear displacement rate follows as
$$\hat{\sigma}^2_\alpha = \frac{1}{\left(-\frac{4\pi}{\lambda}\right)^2\sum_{k=1}^{10}\left(T^k\right)^2}. \tag{2.32}$$
Assuming $\lambda = 56.6$ mm, the wavelength used by ERS, and $T^k = k - 5.5$ years,
for $k = 1,\ldots,10$, it follows that $\hat{\sigma}^2_\alpha = 0.25$ mm²/y². If it is assumed 50 inter-
ferograms are available, equally spaced in time over this nine year period,
then $\hat{\sigma}^2_\alpha = 0.06$ mm²/y². Depending on the application, this error cannot be
neglected. It is easily derived that the error on the estimated displacement
rates caused by orbit inaccuracies is a ramp just as the orbit error phase is.
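The two numbers quoted above can be verified directly from Eq. (2.32); a short numerical check (values as in the text):

```python
import numpy as np

lam = 56.6  # ERS wavelength [mm]

def rate_variance(T):
    """Eq. (2.32): variance of the LS-estimated rate for unit (1 rad) noise."""
    T = np.asarray(T, dtype=float)
    return 1.0 / ((4.0 * np.pi / lam) ** 2 * np.sum(T ** 2))

T10 = np.arange(1, 11) - 5.5             # T^k = k - 5.5 [y]
print(rate_variance(T10))                # ~0.25 mm^2/y^2
T50 = np.linspace(-4.5, 4.5, 50)         # 50 epochs over the same nine years
print(rate_variance(T50))                # ~0.06 mm^2/y^2
```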
If it can be assumed that the displacement field does not contain a trend,
or one is not interested in this component, the phase data can be detrended.
This may be necessary for a few interferograms anyway, since orbit data is not
always precise enough. For example, precise orbits may not yet be available
for very recent acquisitions, or the quality of orbit data is degraded due to
orbit maneuvers (causing problems for orbit propagation software). Moreover,
the altimeter on board the ERS-1 satellite was switched off on June 3, 1996,
which severely degraded the quality of the estimated orbits after this date.
Finally, the noise term is caused by, among others, thermal noise, quan-
tization of the signal in the D/A converter, approximations made during the
processing, and coregistration errors. The phase noise at the considered pixels
is assumed to have a zero-mean normal distribution.
The phase components induced by elevation with respect to the reference
surface (DEM error), displacement, and the sub-pixel position are considered
part of the functional model, whereas components due to inaccurate know-
ledge of the sensor position, atmospheric signal, and other effects are considered
part of the stochastic model. If all acquisitions have the same radar frequency
and only slightly different Doppler centroid frequencies, the functional model
is written as
E{φ} = W {φtopo + φdefo },
4π
D
(2.33)
= W {β · Δh − αd · pd }.
λ
d=1
4π
E{φ} = W {β · Δh − T · α}. (2.34)
λ
The stochastic model of interferometric observations is described in section
2.2.2. It is not used in the reference PS technique, which assumes equal weights
for all observations, and no correlation between them.
$$D\{\varphi^k\} = Q_{slc_k} = Q_{noise_k} + Q_{atmo_k} = \begin{bmatrix}\sigma^2_{noise_k} & & \\ & \ddots & \\ & & \sigma^2_{noise_k}\end{bmatrix} + \begin{bmatrix}\sigma^2_{atmo_k}(0) & \sigma_{atmo_k}(l_{1,2}) & \sigma_{atmo_k}(l_{1,3}) & \ldots\\ & \sigma^2_{atmo_k}(0) & \sigma_{atmo_k}(l_{2,3}) & \ldots\\ & & \ddots & \vdots\\ & & & \sigma^2_{atmo_k}(0)\end{bmatrix}, \tag{2.36}$$

where both matrices have dimension $H \times H$.
Matrix $Q_{noise_k}$ describes thermal noise, processing noise, etc.; the noise is
assumed to be white. $Q_{atmo_k}$ is the vc-matrix that describes the atmospheric
state at acquisition $k$. The atmospheric signal $S^k$ at the time of acquisition $k$
is described by a probability density function having $E\{S^k_{a,b}\} = 0$ and
$D\{S^k_{a,b}\} = \sigma^2_{atmo_k}(l_{a,b})$, where $l_{a,b}$ is the distance between points $a$ and $b$.
Fig. 2.7 shows a covariance function that could be used to describe the residual
phase in the SLC images. An empirical covariance function could be used to
fill this matrix, for example one that is initialized using GPS measurements
taken at the time of the radar acquisition, using the model derived in (Hanssen,
2001). This covariance function could also be parameterized by an analytical
covariance function, of which the parameters are estimated using the residuals
after estimation of DEM error and displacement. Numerical simulations
Fig. 2.7: Example covariance function for residual phase in an SLC image. The
nugget at $l = 0$ corresponds to the uncorrelated noise $\sigma^2_{noise_k}$.
using fractal surfaces with fractal dimension 2.67 showed that the empirical
covariance function for the atmospheric signal can be modeled approximately
by an exponential covariance function
$$C_{atmo}(l) = \sigma^2_{atmo}\exp(-l^2 w^2). \tag{2.37}$$
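A sketch of how one such per-acquisition vc-matrix could be filled, combining the white-noise nugget of Fig. 2.7 with the covariance function of Eq. (2.37); the noise level, signal variance, and correlation parameter w are placeholder assumptions.

```python
import numpy as np

def q_slc(dists, sigma_noise, sigma_atmo, w):
    """Q_slc_k = Q_noise_k + Q_atmo_k of Eq. (2.36), using the covariance
    function C(l) = sigma_atmo^2 * exp(-l^2 w^2) of Eq. (2.37).
    dists: H x H matrix of distances l_{a,b} between the points [m]."""
    q_noise = sigma_noise ** 2 * np.eye(dists.shape[0])
    q_atmo = sigma_atmo ** 2 * np.exp(-(dists * w) ** 2)
    return q_noise + q_atmo

# e.g., four points on a line, 1 km apart:
x = np.arange(4.0)[:, None] * 1000.0
dists = np.abs(x - x.T)
Q = q_slc(dists, sigma_noise=0.5, sigma_atmo=1.0, w=1e-4)
```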
$$y = P\psi. \tag{2.42}$$

After this permutation, the order of the elements in vector $y$ is that first all
interferometric phases for the first arc are given, then for the second arc, etc.
Since

$$y = P\Omega\Lambda\,\varphi, \tag{2.43}$$

it follows that the propagated vc-matrix for the interferometric phase differ-
ences with respect to the reference pixel is given by application of the law of
propagation of variances³ as

$$Q_{ifg} = (P\Omega\Lambda)\, Q_{slc}\, (P\Omega\Lambda)^* = (P\Omega\Lambda)\, Q_{noise}\, (P\Omega\Lambda)^* + (P\Omega\Lambda)\, Q_{atmo}\, (P\Omega\Lambda)^*, \tag{2.44}$$

where $Q_{slc}$, $Q_{noise}$, and $Q_{atmo}$ are the corresponding partitioned matrices with
dimension $(K{+}1)H \times (K{+}1)H$, e.g.,

$$Q_{slc} = \begin{bmatrix}Q_{slc_0} & & \\ & \ddots & \\ & & Q_{slc_K}\end{bmatrix}. \tag{2.45}$$

It is assumed that the noise is uncorrelated between the various SLC images.
If a single large design matrix $B$ with dimension $K(H{-}1) \times 2(H{-}1)$ is defined
for the least-squares estimation of all parameters, for example in the case of
DEM error and linear displacement

$$B = I_{H-1} \otimes \underline{B}, \quad\text{where}\quad \underline{B} = \begin{bmatrix}\beta^1 & -\frac{4\pi}{\lambda}T^1\\ \vdots & \vdots\\ \beta^K & -\frac{4\pi}{\lambda}T^K\end{bmatrix}, \tag{2.46}$$

³ $v = Uu \rightarrow Q_v = U Q_u U^*$
$$\hat{b} = Q_{\hat{b}}\, B^* Q^{-1}_{ifg}\, y. \tag{2.48}$$
The phase of a reference surface must be subtracted before Eq. (2.46) is valid.
The subtraction of the reference phase does not affect the propagated vc-
matrix, because it is considered to be a deterministic process, which is not
shown here. Noise introduced by the processing, such as mis-registration of the
slave images, also is not considered here. This can effectively be incorporated
by increasing the noise level for the slave images. For example, if this noise is
assumed to be equal to the inherent noise, this becomes $Q_{noise_k} = 2Q_{noise_0}$ for
$k = 1,\ldots,K$.
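The propagation of Eq. (2.44) can be sketched for a toy configuration as follows; the per-image covariance is a placeholder, and the permutation P is omitted since it only reorders the elements.

```python
import numpy as np

H, K = 3, 2       # points per image, interferograms (so K+1 SLC images)

# toy per-image covariance: white-noise nugget plus correlated atmosphere
l = np.abs(np.subtract.outer(np.arange(H), np.arange(H))) * 1000.0
q = 0.25 * np.eye(H) + 1.0 * np.exp(-(l * 1e-4) ** 2)
Q_slc = np.kron(np.eye(K + 1), q)        # block-diagonal, Eq. (2.45)

# Lambda: interferogram k = image k minus master 0, per point
Lam = np.hstack([np.tile(-np.eye(H), (K, 1)), np.eye(K * H)])
# Omega: within each interferogram, difference w.r.t. reference point 0
Om = np.kron(np.eye(K), np.hstack([-np.ones((H - 1, 1)), np.eye(H - 1)]))

U = Om @ Lam                  # combined difference operator
Q_ifg = U @ Q_slc @ U.T       # law of propagation of variances, Eq. (2.44)
```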
Double-difference observations
$$\cdots + 2\sum_{k=1}^{K}\left(\sigma^2_{noise_k} + \sigma^2_{atmo_k}(0) - \sigma_{atmo_k}(l)\right) i_k i_k^*, \tag{2.50}$$

where $E_K$ is a $K{\times}K$ matrix filled with ones, and $i_k$ is a $K{\times}1$ vector with a
single one at position $k$. It is assumed here that all points in an interferogram
have the same inherent noise level $\sigma^2_{noise_k}$. From Eq. (2.50) it can be clearly
This expression shows that the norm $e^* Q^{-1}_{ifg} e$, which is minimized using a
least-squares approach is less sensitive to a bias in the double-differenced
interferometric phase observations than a diagonal vc-matrix would be. Thus,
if this stochastic model is used there is less need to include a parameter for
the average atmospheric phase S̄ in the functional model, as was suggested in
section 2.2.1 (page 23).
3 The Integer Least-Squares Estimator
y = Aa + Bb + e, (3.1)
where:
y is the vector of measurements (observed minus computed double-difference
carrier-phase and code measurements in the case of GPS). The underlining
indicates a vector of stochastic variables.
a is the vector of integer-valued unknown ambiguities.
b is the vector of real-valued unknowns for the parameters of interest. For
GPS these are the three baseline components. Because the system of
equations is linearized for GPS, this vector consists of increments with
respect to a priori values or the previous iteration.
A, B are the design matrices for the ambiguity terms and baseline compo-
nents, respectively.
e is the vector of measurement noise and unmodeled errors.
Since the estimation criterion is based on the principle of least-squares, the
estimates for the unknown parameters of Eq. (3.1) follow from solving the
minimization problem

$$\min_{a,b}\ \|y - Aa - Bb\|^2_{Q_y},\qquad a \in \mathbb{Z}^n,\ b \in \mathbb{R}^p, \tag{3.2}$$

where $\|\cdot\|^2_{Q_y} = (\cdot)^* Q_y^{-1}(\cdot)$ and $Q_y$ is the variance-covariance matrix of the ob-
servables (the asterisk denotes the transposition). This minimization problem
is referred to as an integer least-squares problem (Teunissen, 1994). It is a
constrained least-squares problem due to the integer constraint a ∈ Z. The
solution of the integer least-squares problem will be denoted as ǎ and b̌.
The solution of the corresponding unconstrained least-squares problem will
be denoted as â and b̂. The estimates â and b̂ are referred to as the “float solu-
tion”, and the estimates ǎ and b̌ as the “fixed solution”. The approach taken
with the LAMBDA method, is to re-parameterize the integer least-squares
problem such that an equivalent problem is obtained, but one that is much
easier to solve. It consists of two steps. First, an ambiguity transformation Z ∗
is constructed that tries to decorrelate the ambiguities. This transformation
increases the efficiency of the search for the (transformed) integer ambiguities
that minimize Eq. (3.2). In the construction of Z ∗ , use is made of integer
approximations to conditional least-squares transformations. The ambiguity
transformation allows one to transform the original ambiguities, their least-
squares estimates and their corresponding variance-covariance matrix as

$$z = Z^* a,\qquad \hat{z} = Z^*\hat{a},\qquad Q_{\hat{z}} = Z^* Q_{\hat{a}} Z. \tag{3.3}$$
Since matrix Z consists of integers only and is volume preserving, the obtained
solution also minimizes â−a (Teunissen et al., 1995a). That is, the ambiguities
that are of interest can be obtained by solving Eq. (3.4). The solution is
obtained by means of a search using a set of bounds for the transformed
ambiguities (Teunissen et al., 1995b). If the ambiguities would be totally
decorrelated, the integer ambiguities would be given by means of a simple
rounding of the float ambiguities, since that would minimize Eq. (3.4).
However, this simple rounding scheme does not produce the required integer
least-squares estimates when matrix Qẑ is non-diagonal. It was shown by
Teunissen (1994) that minimizing the objective function Eq. (3.4) is identical
to minimizing
$$\min_{z_i\in\mathbb{Z}}\ \left(\frac{(\hat{z}_1 - z_1)^2}{\sigma^2_1} + \frac{(\hat{z}_{2|1} - z_2)^2}{\sigma^2_{2|1}} + \ldots + \frac{(\hat{z}_{n|n-1} - z_n)^2}{\sigma^2_{n|n-1}}\right), \tag{3.5}$$
where

$$l_i = 1 - \frac{\chi^2_{i-1}}{\chi^2},\qquad\text{subject to}\qquad \chi^2_{i-1} = \sum_{j=1}^{i-1}\frac{(\hat{z}_{j|J} - z_j)^2}{\sigma^2_{j|J}}. \tag{3.7}$$
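A minimal sketch of the bootstrapped (sequential conditional rounding) estimator that underlies the conditional estimates in Eq. (3.5); it omits the decorrelating Z-transformation of the full LAMBDA method and is therefore only a first approximation of the integer least-squares solution.

```python
import numpy as np

def bootstrap(a_float, Q):
    """Integer bootstrapping: round, then condition the remaining float
    ambiguities on the fixed value (the conditional estimates z_hat_{i|I}
    appearing in Eq. (3.5))."""
    a = np.array(a_float, dtype=float)
    Q = np.array(Q, dtype=float)
    n = len(a)
    z = np.zeros(n, dtype=int)
    for i in range(n):
        z[i] = int(np.round(a[i]))
        if i + 1 < n:
            corr = Q[i + 1:, i] / Q[i, i]
            a[i + 1:] -= corr * (a[i] - z[i])                  # conditional mean
            Q[i + 1:, i + 1:] -= np.outer(corr, Q[i, i + 1:])  # conditional vc
    return z

# e.g., two correlated float ambiguities:
print(bootstrap([1.2, -0.4], [[0.5, 0.3], [0.3, 0.4]]))
```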
$$\Phi^k = \phi^k + 2\pi\cdot a^k,\qquad a^k \in \mathbb{Z}, \tag{3.8}$$
problems of GPS and PS are not identical. The major difference is that for
radar interferometry the problem is inherently under-determined, since each
observation has an unknown ambiguity that needs to be estimated, aside from
the parameters of interest, see Eq. (3.8). The solution to this problem can only
be obtained by using the fact that the ambiguities are integers, while for GPS
a (less precise) solution can also be obtained without using this information,
using the code observations. A more practical constraint that must be kept
in mind is that the number of estimations that need to be performed is much
larger in the case of PS than it is for GPS, because the number of points
is much larger. Moreover, the number of acquisitions, i.e., the number of
ambiguities that need to be estimated, can be significantly larger than for
GPS. The algorithm developed for this research is the first with convincing
performance on real data (Kampes and Hanssen, 2004).
The model for the unwrapped phase in interferogram k is given in
Eq. (2.11), repeated here for convenience, see also section 2.1.2
$$\Phi^k_x = \phi^k_{x,topo} + \phi^k_{x,defo} + \phi^k_{x,obj} + \phi^k_{x,atmo} + \phi^k_{x,orbit} + \phi^k_{x,noise}. \tag{3.9}$$
The functional model for the phase difference Φkx,y = Φky − Φkx between two
points x and y, is given by
$$E\{\Phi^k_{x,y}\} = \beta^k_x\cdot\Delta h_{x,y} - \frac{4\pi}{\lambda}\sum_{d=1}^{D}\alpha_d(x,y)\cdot p_d(k) + \frac{2\pi}{v}\, f^{k,m}_{x,dc}\cdot\xi_{x,y}. \tag{3.10}$$
The atmospheric, orbit, and noise phase differences are lumped in a new
random variable e with expectation E{e} = 0. In matrix notation this system
of observation equations is written as
$$E\left\{\begin{bmatrix}\phi^1\\ \phi^2\\ \vdots\\ \phi^K\end{bmatrix}\right\} = \begin{bmatrix}-2\pi & & \\ & \ddots & \\ & & -2\pi\end{bmatrix}\begin{bmatrix}a^1\\ a^2\\ \vdots\\ a^K\end{bmatrix} + \begin{bmatrix}\beta^1_x & p_1(1)\ldots p_D(1) & \frac{2\pi}{v}f^{1,m}_{dc}\\ \beta^2_x & p_1(2)\ldots p_D(2) & \frac{2\pi}{v}f^{2,m}_{dc}\\ \vdots & \vdots & \vdots\\ \beta^K_x & p_1(K)\ldots p_D(K) & \frac{2\pi}{v}f^{K,m}_{dc}\end{bmatrix}\begin{bmatrix}\Delta h\\ \alpha_1\\ \vdots\\ \alpha_D\\ \xi\end{bmatrix} \tag{3.11}$$
The index {.}x,y is dropped, but note that this system of equations refers
to the phase differences between two points. The basic task is to estimate
the K integer ambiguities and the 2+D real-valued parameters from the K
observed wrapped phase values. It is assumed here that there is significant
variation in the Doppler centroid frequency. If this is not the case (or when the
azimuth sub-pixel positions are already estimated using a point target analysis
in the amplitude images), then the azimuth sub-pixel position needs not to be
estimated here, leaving 1+D real-valued parameters. To solve this system of
equations, additional constraints have to be introduced. As in (Bianchi, 2003;
Hanssen et al., 2001), pseudo-observations y 2 are used to achieve this
$$E\left\{\begin{bmatrix}y_1\\ y_2\end{bmatrix}\right\} = \begin{bmatrix}A_1\\ A_2\end{bmatrix}a + \begin{bmatrix}B_1\\ B_2\end{bmatrix}b,\qquad D\left\{\begin{bmatrix}y_1\\ y_2\end{bmatrix}\right\} = \begin{bmatrix}Q_{y_1} & 0\\ 0 & Q_{y_2}\end{bmatrix}. \tag{3.12}$$
Matrix Qb̂ is the full variance-covariance matrix that describes the precision
of the estimated float parameters.
where

$$\Upsilon(x) = \int_{-\infty}^{x}\frac{1}{\sqrt{2\pi}}\exp\!\left(-\frac{v^2}{2}\right)dv. \tag{3.21}$$
The success rate can thus be computed in advance, using the baselines of the
acquired images, and assuming a known data noise level. It is cumbersome
to compute this probability for the LAMBDA method, but it can be shown
Estimation strategy
$$p_1(T) = \sin(2\pi T),\qquad p_2(T) = \cos(2\pi T) - 1. \tag{3.29}$$
If desired, a third base function for linear displacement could be added to this
model. Finally, dedicated base functions could also be created, for example, by
decomposing displacement that is observed with GPS or leveling into principal
components.
However, not any choice of base functions is appropriate. It may be
impossible to use a certain base function due to inadequate temporal sampling.
For example, a piecewise linear function cannot be estimated in a domain
where there is no data. Moreover, correlation between estimated parameters
may prevent significant estimation. This depends on the distribution of the
images in time, space, and Doppler frequency. For example, in the extreme
case where the perpendicular baseline is a linear function of time, the DEM
error is fully correlated with time-linear displacement, and both cannot be
estimated. A measure for the correlation between estimated parameters is
the cross-correlation coefficient that can be computed after selection of the
base functions using Eq. (3.18). Furthermore, the statistical significance of an
estimated parameter can be negligible. For example, for the DEM error this
would be the case if all interferograms had an almost zero perpendicular
baseline. Finally, sudden discontinuous displacement larger than half the
wavelength can never be detected, due to the wrapped nature of the radar
observations.
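As an illustration of such a check, the following sketch propagates a design matrix B (whose columns are the chosen base functions) and a vc-matrix Q_y to the vc-matrix of the float parameters, and normalizes it to a cross-correlation matrix; the function name is illustrative.

```python
import numpy as np

def parameter_correlation(B, Qy):
    """Cross-correlation matrix of the float parameters for a given set
    of base functions (columns of the design matrix B) and vc-matrix Qy."""
    Qb = np.linalg.inv(B.T @ np.linalg.solve(Qy, B))  # propagated vc-matrix
    d = np.sqrt(np.diag(Qb))
    return Qb / np.outer(d, d)
```

If, for example, the perpendicular baseline were a linear function of time, the columns for DEM error and linear displacement would be proportional, and the corresponding coefficient would approach ±1.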
3.4 Validation
importance for small K. The second variable in the simulations is the amount
of normally distributed noise that is added to the simulated input. The
standard deviation of the noise e is set to σ = 20, 30, 40, 50 degrees. After the
addition of the noise, the simulated phase is wrapped into the interval [−π, π).
In total, 204 different simulation scenarios are evaluated, for varying K and
e, where for each scenario 100 input sets are simulated. A linear displacement
rate with a superimposed seasonal component is modeled using three base
functions
p1 (T ) = T ,
p2 (T ) = sin(2πT ), (3.30)
p3 (T ) = cos(2πT ) − 1.
The unwrapped model phase is then computed using the forward model
Φ = Bb, where the parameters are randomly simulated with standard devi-
ations σΔh = 20 m, σα1 = 20 mm/y, σα2 = 15 mm, σα3 = 15 mm. The standard
deviation of the pseudo-observations, Qy2 , used to retrieve the input is set
to σΔh = 40 m, σα1 = 40 mm/y, σα2 = 20 mm, σα3 = 20 mm, and the a priori
standard deviation assumed for all interferometric phase differences Qy1 is set
to 50◦ in all scenarios, which is a conservative estimate.
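The following sketch mimics this simulation setup for a single point, with illustrative acquisition times and with the DEM error and Doppler terms omitted for brevity; the conversion from millimeters of displacement to phase via −4π/λ is written out explicitly.

```python
import numpy as np

rng = np.random.default_rng(1)
K = 50                                   # number of interferograms
T = np.sort(rng.uniform(-4.0, 6.0, K))   # acquisition times [y] (illustrative)
lam = 0.0566                             # ERS C-band wavelength [m]

# base functions of Eq. (3.30)
B = np.column_stack([T, np.sin(2 * np.pi * T), np.cos(2 * np.pi * T) - 1])

# randomly simulated parameters: alpha_1 [mm/y], alpha_2 [mm], alpha_3 [mm]
b = rng.normal(0.0, [20.0, 15.0, 15.0])

# forward model Phi = Bb, converted from millimeters to radians
phase = (-4.0 * np.pi / lam) * 1e-3 * (B @ b)

# add noise and wrap into the principal interval
noise = rng.normal(0.0, np.deg2rad(30.0), K)      # sigma = 30 degrees
wrapped = np.angle(np.exp(1j * (phase + noise)))
```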
Fig. 3.1 shows the individual CPU times required for the extended boot-
strap method and for the integer least-squares search for all simulations.
IDL (version 5.1) is used as the programming language, running on a SUN
workstation utilizing a single 750 MHz UltraSPARC-III CPU. Using C or
FORTRAN code would likely increase the speed by a factor of at most ten. The
reported CPU times originate from the IDL profiler. The time for the extended
bootstrap method is O(K²), since K bootstraps are always performed over
the K−1 ambiguities. The time required for the integer least-squares search
depends on both the quality and amount of data. For a low noise level, the
correct ambiguities are found extremely quickly. This is also due to
the small search bound χ² that is returned from the bootstrap estimator. The
computation time increases with an increasing noise level. The reason is that
the search is performed for a solution that is in correspondence with the
a priori precision, and in order to find such a solution, the bounds for the
search of the hyper-ellipsoid get larger. If the maximum loop count were not
introduced, the computation time for the least-squares search would become
extremely large for noisy data, and the method would become impractical.
Fig. 3.2 gives an overview of the success rate for all the simulations. The
individual success rate for the bootstrap and integer least-squares method is
not shown, since they have to be computed in all cases, and the combined
success rate is always the highest. Only when the integer least-squares search
is discontinued (using the maximum loop counter, which particularly occurs
for higher noise levels), the success rate of the bootstrap method is sometimes
larger than that of the integer least-squares estimator. An estimation is
Fig. 3.1: CPU time required by the extended bootstrap and integer least-squares
estimator as function of K and for different noise levels. The bootstrap method
is represented by the bold solid lines for all noise levels; the computation time
only depends on K. For the least-squares estimator the required computation time
increases with increasing noise level.
Fig. 3.2: Success rate P(ẑ = z) [%] as function of the number of interferograms
K, for the different noise levels σ = 10°, 20°, 30°, 40°, 50°.
It can be observed that the success rate is very high for small noise levels,
up to 30◦ , and more than 20 images. The success rate is low if there are
only 10 images available, which can be explained by taking into account that
5 float parameters and 9 integer parameters (between, say, –15 and 15) are
estimated in this case; there are simply too many possibilities that give a
good fit in this case. Furthermore, it can be observed that the overall success
rate increases with increasing number of images and decreasing noise level.
The individual success rate of the extended bootstrap method is close to that
of the integer least-squares search, and sometimes it is even higher, while
theoretically the latter has at least the same success rate. This is caused by
the maximum loop count, which is introduced in the least-squares search for
speed considerations, causing the search to be discontinued at a certain point,
which occurs in particular for higher noise levels. The same effect also explains
why the success rate of the integer least-squares estimator decreases slightly
with an increasing number of images for a constant noise level. The maximum
loop count is kept constant, and thus for a smaller number of images K, the
hyper-ellipsoid is searched through more completely before being discontinued.
However, the least-squares search is more robust, since more possible solutions
are searched for, and it is less affected by an individual noisy value.
4
The STUN Algorithm
The complex observations in a focused radar image (SLC) are given in a matrix
of pixels. Since an interferogram is defined by a point-wise multiplication of the
master image with the complex conjugate of the coregistered slave image, the
phase in the interferogram is equal to the wrapped phase difference between
the images. The unwrapped phase is not observed, since there is an unknown
integer number of cycles that the signal traveled, and only the last fractional
part can be measured. However, only the unwrapped phase can be related to
the parameters of interest. Moreover, if a time series of unwrapped phase data
is available, the estimation of these parameters is relatively straightforward.
For example, Meyer (2004) uses a set of spatially unwrapped interferograms
to estimate the topography and displacement of polar glaciers using a least-
squares approach. Furthermore, the temporal and spatial filters used in the
PS technique (see chapter 2) to estimate the atmospheric signal can as well
be applied to the unwrapped phase time series.
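For reference, the interferogram formation described above amounts to no more than the following sketch (array names are illustrative):

```python
import numpy as np

def interferogram(master, slave):
    """Point-wise product of the master SLC with the complex conjugate of
    the coregistered slave SLC; np.angle of the result is the wrapped
    phase difference between the two acquisitions."""
    return master * np.conj(slave)

# wrapped interferometric phase:
# phi = np.angle(interferogram(master, slave))
```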
In this chapter a newly developed algorithm is presented. This Spatial
Temporal Unwrapping Network (STUN) algorithm performs phase unwrap-
ping on a spatially sparse grid, utilizing the integer least-squares estimator and
a temporal displacement model. A final parameter estimation is performed
after the data are unwrapped, see also the flow chart in Fig. 4.1. After an
introduction explaining the need for phase unwrapping in section 4.1, the
pixel selection is described in section 4.2. Then, section 4.3 addresses the
estimation of the variance components to obtain the stochastic model used
by the integer least-squares estimator. The estimation of a reference network
is described in section 4.4, after which section 4.5 explains the estimation of
points with respect to this established network. Finally, section 4.6 describes
the explicit phase unwrapping and estimation using the unwrapped data.
[Flow chart, Fig. 4.1: pixel selection (Sec. 4.2), with discarded points leaving
the processing → arc unwrapping by weighted ILS (Ch. 3), yielding estimated
parameters between points → parameters at accepted points of the reference
network → connection of new points to the k nearest accepted points (Sec. 4.5)
→ arc unwrapping by weighted ILS (Ch. 3) of the wrapped phase differences
→ 2-D unwrapping of the residual phase (Sec. 4.6), yielding the unwrapped
phase at the points → estimation using the unwrapped data (Sec. 4.6), yielding
the final parameters.]
Fig. 4.1: STUN algorithm processing flow. Ovals represent processes, rectangles
represent data. Temporal data are indicated by the double rectangles. First, the
pixels are separated into three groups, and a variance component estimation is
performed to obtain the vc-matrix of the observations. Second, the parameters are
computed at the points of a reference network using the weighted integer least-
squares estimator. Third, other selected points are estimated with respect to the
established reference network. Finally, the phase is explicitly unwrapped, and a
final parameter estimation is performed.
4.1 Three-dimensional phase unwrapping
Since the phase is observed in the interval [−π, π), somehow the unwrapped
phase must be obtained. Note that the slant-range (travel time) to the
pixels cannot be used to obtain the unwrapped phase geometrically, since
the wavelength is much smaller than the precision of this measurement,
which currently is in the order of meters. It is also not possible, in general,
to obtain the correctly unwrapped phase at all points using a conventional
spatial unwrapping algorithm such as branch-and-cut (Zebker and Lu, 1998)
or Minimum Cost Flow (MCF, see for example Chen, 2001; Chen and Zebker, 2001).
that a network as sketched in Fig. 4.2(b) can be used. In this case, first the
parameters can be estimated between nearby points, after which they can be
integrated with respect to a reference point, which then yields the unwrapped
phase, i.e., the same situation as sketched in Fig. 4.2(a). It is obvious that
a possible incorrect estimation between two points propagates to the other
points, while this cannot be detected. Moreover, if the residual phase in a
single or a few interferograms is larger than π, this is not noticeable if only
temporal unwrapping is used.
Therefore, a spatio-temporal unwrapping strategy is developed combining
these two approaches of two-dimensional spatial sparse grid unwrapping
and one-dimensional temporal unwrapping. This implies using a network as
sketched in Fig. 4.2(c), where phase differences in space and time are used
to obtain the unwrapped phase at the PS points. An algorithm that uses the
three-dimensional network directly is likely to be the best possible approach
to correctly estimate the unwrapped phase at all points, in all interferograms.
Unfortunately, an efficient algorithm for this problem is not yet developed. The
integer least-squares estimator can in principle be used to solve this problem,
but the large amount of unknown ambiguities that have to be estimated
prevents a direct application.
The phase stability of the pixels in the interferograms is not known before-
hand. That is, a large number of pixels is likely to be decorrelated, particularly
for interferograms with large temporal and perpendicular baselines. Reasons
for this decorrelation are:
• The angle under which the resolution cell is observed during the two
acquisitions is different (geometrical decorrelation).
• The elementary scatterers in the resolution cell move incoherently in the
time between the acquisitions (temporal decorrelation).
• Processing-induced decorrelation, e.g., due to mis-registration of the slave
image. This cause is not considered further.
A distributed scatterer, i.e., a pixel for which the backscattered signal is the
complex sum of many uncorrelated elementary scatterers of which none is
dominant, cannot remain coherent in interferograms with either a large per-
pendicular baseline or squint angle (Doppler centroid frequency) difference, see
also (Hanssen, 2001). Therefore, only pixels that have a dominant scatterer,
i.e., point scatterers, are relevant. The signal model for such an observation
is sketched in Fig. 4.3, see also (Adam et al., 2004). A dominant scatterer
is spatially surrounded by incoherent clutter. Thus, the observed phase is
composed of a dominant signal and a superposition of the clutter. The phase
of the main scatterer is related to the distance to the sensor, while the resulting
phase caused by the clutter is random. For a PS point, the phase center of the
Fig. 4.3: Signal model for a dominant point scatterer surrounded by incoherent
background clutter. (a) depicts this in the spatial domain for a single resolution cell.
(b) shows the final backscattered signal, i.e., the complex sum of the elementary
scatterers in the resolution cell. The dashed vector z indicates a complex observation.
Also indicated are the temporal variation σ a of the amplitude a = |z| and the phase
error due to the clutter.
Δσ = σ⁰ · A .   (4.3)
The RCS of a dihedral, e.g., formed by the ground and the wall of a house
that are aligned with the flight path, is given as (Freeman, 1992)

Δσ = 8π·w²·h² / λ² ,   (4.5)

and that of a planar mirror, e.g., a metal rooftop with its normal direction in
the line-of-sight to the sensor, is (Curlander and McDonough, 1991)

Δσ = 4π·w²·h² / λ² ,   (4.6)
where h and w are the height and width of the individual panels. Assuming
these two dimensions are equal, a pixel with a normalized RCS of –2 dB thus
corresponds to elements with sides of 31 and 37 cm for the dihedral and planar
mirror, respectively.
With this method, pixels are selected if the average signal-to-clutter ratio
(SCR) of a pixel is above a certain threshold. The relation of the SCR to the
phase error is (Adam et al., 2004)

σ̂_φ = 1/√(2·SCR) .   (4.7)

Thus, a reasonable threshold SCR = 2 selects points with a phase standard
deviation σ_φ < 0.5 rad (∼30°). The length of the vectors S and C is estimated
using a point target analysis. In order to obtain estimates for the SCR a spatial
estimation window is used. The assumption is that the power of the clutter
around the pixel is equal to the power of the clutter inside the resolution cell.
This technique to estimate the SCR was developed to check the phase stability
of corner reflectors that were deployed for calibration purposes at specific
locations with low clutter power, see also Fig. 4.5 and (Freeman, 1992). If this
Fig. 4.5: Signal to clutter ratio estimation method for a corner reflector, see also,
e.g. (Freeman, 1992). The shaded regions are used to estimate the power of the
clutter, while the other pixels are used to estimate the power of the signal.
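A schematic sketch of this window-based SCR estimate and the conversion of Eq. (4.7) could look as follows; the window layout is simplified with respect to Fig. 4.5, and the names are illustrative:

```python
import numpy as np

def scr_phase_std(window):
    """Estimate the SCR of a point scatterer at the centre of a small
    intensity window and convert it to a phase standard deviation,
    Eq. (4.7). The border pixels play the role of the shaded clutter
    regions of Fig. 4.5 (simplified layout)."""
    border = np.ones(window.shape, dtype=bool)
    border[1:-1, 1:-1] = False
    clutter_power = window[border].mean()   # clutter estimated around the peak
    signal_power = window[~border].max()    # peak response of the scatterer
    scr = signal_power / clutter_power
    return 1.0 / np.sqrt(2.0 * scr)         # sigma_phi [rad]
```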
For high SCR the estimates for the phase variance using the amplitude
dispersion method and the SCR method are equivalent. Both methods are
biased, although the SCR method to a lesser extent, which is proven by the
directions. Moreover, layover effects may cause a large amplitude for a pixel
that is not a point scatterer. After a point target analysis (sub-pixel local peak
detection) at the selected pixels, 2308 unique points remain (∼575 points per
km2 ). Fig. 4.6(e) shows the selected points using the SCR method for this area.
The geometrical distribution of the points selected by this method seems to be
better than that for the amplitude thresholding method, see Fig. 4.6(f), which
can be seen, for example, in the dark triangular patch slightly off center, where
the amplitude thresholding method does not select any pixel. After a point
target analysis 1072 unique pixels were selected in this area, of originally 9489
selected points with an SCR > 2. The SCR method is used in the following
for the pixel selection, followed by a point target analysis to obtain the phase
at the estimated sub-pixel peak position.
Fig. 4.6: Pixel selection with various methods for a 2 × 2 km² area of Berlin. The top
row shows the mean intensity of seventy available images in the interval [–5, 15] dB.
The intensity images for the first and last available acquisition are shown as well.
Note that the radiometric resolution increases significantly by taking 70 temporal
looks. The second row shows the selected pixels using the different methods. The
threshold for the amplitude is N1 = 0.65K and N2 = −2 dB, resulting in the selection
of 18191 of 200000 (oversampled) pixels. Fig. 4.6(e) shows 9489 selected points using
SCR >2. Finally, Fig. 4.6(f) shows 7357 pixels selected using Da > 0.4.
σ_{atmo}^k(l), cf. Eq. (2.50). The contribution of the atmospheric signal to the
dispersion of the interferometric double-difference observations is supposed to
be much smaller than that of the inherent noise. Moreover, if a relation of
the variance component with distance is ignored, the least-squares projection
matrix is identical for all estimations, which allows for a faster estimation.
An initial estimation of the unknown parameters is required before estimation
of the variance components is possible. During this initial estimation, a
stochastic model with a priori variance components¹ is used, cf. Eq. (2.50).
This a priori model is based on the assumptions that the interferometric
phase standard deviation for point scatterers is expected to be below ∼50◦ ,
and that slight mis-registration introduces a small amount of additional
noise in the slave images. Note that the variance component estimation
can be performed iteratively, and that the choice of the stochastic model
during the initial estimation is not very important, e.g., a scaled identity
matrix σ²_noise·I_K could also be used. The vector of variance components of
the SLC images, σ = [σ²_{noise,0}, …, σ²_{noise,K}]*, is estimated using the temporal
least-squares interferometric phase residual vector ê of the initial estimation
between two points as (Verhoef, 1997)

σ̂ = N⁻¹·r ,   (4.8)

with dispersion

D{σ̂} = 2·N⁻¹ .   (4.10)
Fig. 4.7: “Network” used for the estimation of the variance components σ²_{noise,k}.

The variance component used in Eq. (2.51) is given by the mean of these
estimations. The variance of the estimated variance components is reduced
by the number of estimations.
It is not guaranteed by Eq. (4.8) that the estimated variance components
are larger than zero. If this happens, it could indicate that initially σ²_{noise,0}
is chosen too large, that the number of estimates used to estimate the variance
components is too small, or that the least-squares estimates are incorrect,
i.e., that the displacement model that is used is wrong. To avoid a possibly
non-positive-definite vc-matrix, estimated (mean) variance components
σ²_{noise,k} < (10°)² are set to (10°)².
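This guard can be stated compactly; the following sketch assumes the (mean) variance components estimated via Eq. (4.8) are already available:

```python
import numpy as np

def clamp_variance_components(sigma2, floor_deg=10.0):
    """Replace variance components below (10 deg)^2 by (10 deg)^2 so the
    resulting vc-matrix remains positive-definite."""
    floor = np.deg2rad(floor_deg)**2
    return np.maximum(np.asarray(sigma2, dtype=float), floor)
```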
In the STUN algorithm, not all pixels in the interferograms are estimated, not
even during a final estimation step as is done in the reference PS technique,
see, e.g., (Ferretti et al., 2001). The pixels that are not selected in this
step are discarded in the further processing, because it is not expected that
they contain useful phase information, see section 4.2 for a more detailed
explanation. Remember that the threshold for the pixel selection is low, such
that a large amount of pixels is selected. The thresholds that are used for
the pixel selection do not have to be extremely selective, because analysis of
the phase data will reveal if the selected pixels are coherent or not. This
selection reduces the number of points that needs to be estimated from
hundreds of millions to a few hundred thousand. Memory requirements go
down dramatically once this set is obtained. The points that are not discarded
are further divided into two groups: reference points (PSCs) and other points,
similar to the preliminary and final estimation steps of the PS technique, see
also Chapter 2. Most reference network points are expected to be coherent
in time, based on their amplitude dispersion index. The parameters are first
estimated between the points of the reference network, which is described
¹ σ²_{noise,0} = (20°)², and σ²_{noise,k} = (30°)² for k = 1, …, K
The DEM error and displacement parameters (differences) are first estimated
at all arcs of the reference network, see, e.g., Fig. 4.2(c). The integer least-
squares estimator, described in Chapter 3, is used for the estimation, instead of
the maximization of the ensemble coherence, which is used in the reference PS
technique. The dispersion of the double-difference phase observations is given
by the vc-matrix of Eq. (2.51), where the estimated variance components are
used to create the vc-matrix.
If all estimates at the arcs were correct, the parameters at the points
could be obtained by integration along any given path. Then there would be
no need to perform this many estimations; instead, the estimations could be
restricted to the arcs of the network in Fig. 4.2(b). It is noted here again that
if the observed phase data at the points are not wrapped, then a network
like in Fig. 4.2(a) can be used to obtain the parameters at each point
(assuming a known reference point). In this case, the least-squares phase
residuals can directly be used to identify incoherent points. Unfortunately,
it would be impossible to identify incorrect estimations using a network
like that of Fig. 4.2(c), since there would never be any inconsistency; the
double-differences of the unwrapped phase at the arcs are linear combinations
of the unwrapped phase observed at the points. With wrapped data, however,
inconsistencies can occur, and they can be used to identify incorrect estimations,
as described in section 4.4.3.
When the network of Fig. 4.2(c) is considered, the parameters are esti-
mated between the points indicated by the lines. It will be assumed in
the following that only a DEM error (difference) and a displacement rate
(difference) are estimated, although there is not such a restriction in the
algorithm. The DEM error at the points can be obtained with respect to
the reference point (the first point, i.e., this unknown is removed from the
vector of unknown parameters together with the corresponding column of the
design matrix) by solving a system of observation equations like
⎡ Δh_{2,1}   ⎤   ⎡ −1  0  0 ⋯  0  0 ⎤ ⎡ Δh_2     ⎤
⎢ Δh_{3,1}   ⎥   ⎢  0 −1  0 ⋯  0  0 ⎥ ⎢ Δh_3     ⎥
⎢ Δh_{4,1}   ⎥   ⎢  0  0 −1 ⋯  0  0 ⎥ ⎢ Δh_4     ⎥
⎢     ⋮      ⎥ = ⎢         ⋮        ⎥ ⎢   ⋮      ⎥ ,   (4.11)
⎢ Δh_{3,2}   ⎥   ⎢  1 −1  0 ⋯  0  0 ⎥ ⎢ Δh_{H−1} ⎥
⎢ Δh_{4,2}   ⎥   ⎢  1  0 −1 ⋯  0  0 ⎥ ⎣ Δh_H     ⎦
⎢     ⋮      ⎥   ⎢         ⋮        ⎥
⎣ Δh_{H−1,H} ⎦   ⎣  0  0  0 ⋯  1 −1 ⎦
where the design matrix corresponds to the estimated arcs. The solution for
the unknown parameters is given by Eq. (4.13). An identical design matrix is
used to “integrate” the other estimated difference parameters independently,
in this example the displacement rate differences at the arcs. This system
of equations looks very similar to that of a leveling network. However, the
difference is that here the least-squares residuals (the misclosures) at the arcs
must be exactly zero, since there are no actual observations between the points,
as there are with leveling data. If all spatial residuals are indeed equal to zero, the least-
squares estimates at the points are the same as those that would be obtained
after integration along any path.
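A minimal sketch of this integration step could look as follows; it uses an unweighted least-squares solution (the weighted solution uses Q_y as in Eq. (4.13)) and the sign convention of the rows of Eq. (4.11), with illustrative names:

```python
import numpy as np

def integrate_arcs(arcs, dh_arcs, H):
    """Integrate DEM-error differences dh_{i,j} = dh_j - dh_i at the arcs
    to DEM errors at the points; point 1 is the reference (dh_1 = 0).

    arcs    : list of (i, j) 1-based point indices per arc
    dh_arcs : estimated differences at the arcs
    H       : number of points in the network
    """
    C = np.zeros((len(arcs), H - 1))
    for r, (i, j) in enumerate(arcs):
        if j > 1:
            C[r, j - 2] = 1.0    # +dh_j
        if i > 1:
            C[r, i - 2] = -1.0   # -dh_i, matching the rows of Eq. (4.11)
    dh, *_ = np.linalg.lstsq(C, np.asarray(dh_arcs, dtype=float), rcond=None)
    return dh                    # DEM errors at points 2..H
```

An identical call integrates the displacement-rate differences, since the design matrix only depends on the network topology.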
In practice non-zero residuals are found at the arcs of the network, due to
incorrect relative estimations at certain arcs. The problem is to identify the
reason for this. Possible reasons are that a point is incoherent or that only an
individual arc is estimated incorrectly. Consider the elementary case where
three points (x, y, z) in one interferogram are available, see also Fig. 4.8. The
phase is assumed to be induced by linear displacement only, i.e., the functional
model is given as E{φ_x} = −(4π/λ)·T·α_x. Without loss of generality, the point
x is selected as the reference point, and the double-difference observations
∈ [−π, π) are given by φ_{x,y}, φ_{y,z}, and φ_{z,x}. Signal aliasing cannot be inferred
from these wrapped phase differences. For example, if the phase differences
due to displacement (difference) are equal to −(4π/λ)·T·α_{x,y} = −(4π/λ)·T·α_{y,z} = π − ε,
where ε is a small positive number, then the wrapped phase difference φ_{z,x} is
not equal to the unwrapped phase difference, which implies that the estimate
α̂_{z,x} is incorrect, resulting in a misclosure of the estimated parameters at the
arcs. This is equivalent to the way a residue is formed due to signal aliasing
in the conventional two-dimensional phase unwrapping problem.
In this example with only one interferogram, the estimated difference parameters
have an ambiguity that is one-to-one related to the phase differences.
Fig. 4.8: Elementary network for which a misclosure can occur if parameters are
estimated using the wrapped interferometric phase observed at points x, y, and z.
If the search space is not limited, there always is a solution for the
ambiguities such that all least-squares phase residuals are zero. This is due to
the fact that the problem is inherently under-determined, i.e., there are more
unknown parameters than observations. In practice, the search space must
be bounded, and the estimated parameters minimize some norm defined on
the solution space. If it is assumed that only a single point is incoherent, and
its parameters are estimated with respect to nearby points, then the location
of the minimum (i.e., the estimated parameters) depends on the noise of the
other points. This means that misclosures can occur. But even if all points
are coherent, it may happen that a single estimation (an arc) is incorrect.
Reasons for this could be that the true solution lies outside the search space
for a specific arc, or that the wrong local minimum is found. The former reason
becomes clear by considering Fig. 4.8 again. Suppose that the search space is
limited to [–20, 20], while the true parameters are α_{x,y} = α_{y,z} = 15. Then, the
estimated value α̂_{z,x} cannot be correct, and a misclosure occurs. The latter
could happen if an algorithm is used which searches the float parameters with
a too coarse resolution, for example the algorithm described in section 2.1.3.
This problem does not exist for the integer least-squares estimator, since the
integer ambiguities are searched instead of the float parameters.
Note that the least-squares phase residuals of the estimation between
two points (i.e., the phase residuals in the interferograms of the integer
least-squares estimation) are in the interval [−π, π). These residuals do not
necessarily have to be large if a point is incoherent. This is particularly true
when only a small number of interferograms is available.
The integrity of the network should be checked for the reasons described
above. Note again that the situation for the created network is not the same
as for, e.g., a leveling network. Here the misclosures must be perfectly zero,
because there is no independent measurement (noise) on the arcs. A misclosure
is solely due to incorrectly estimated parameters at an arc. Such an error
should not be adjusted; instead, its cause must be found and rectified. Moreover,
the network testing procedure does not have to be approached as strictly as
for, e.g., a leveling network. After all, if the misclosure is zero, removing
a (correctly estimated) arc does not change the parameter solution or its
precision. However, zero misclosures do not guarantee that the parameters
are estimated correctly (they can all be consistently incorrect), but it does
mean that all errors that could be found are dealt with.
In order to find outlier arcs and points, an alternative hypotheses testing
strategy is used, known as the DIA procedure (Teunissen, 2000b). First, in
the detection step, the null-hypothesis is tested against general model mis-
specifications. If this test is rejected alternative hypotheses are specified to
identify the most likely cause. In the adaption step, either the stochastic
model, or the functional model is changed to account for the identified cause.
These steps are repeated until the null-hypothesis is accepted.
First, the case is considered where only DEM error differences are esti-
mated, thus ignoring other estimated parameters. The system of observation
equations, Eq. (4.11), is written as

E{y} = C·b ,   D{y} = Q_y .   (4.12)

The design matrix C specifies the functional relation between the unknown
parameters b at the points and the differences y between points, while matrix
Q_y is the vc-matrix of the latter. The well-known least-squares formulas can
be applied to this system of equations, see also (Teunissen, 2000b)
Q_b̂ = (C*·Q_y⁻¹·C)⁻¹ ,     b̂ = Q_b̂·C*·Q_y⁻¹·y ,
Q_ŷ = C·Q_b̂·C* ,           ŷ = C·b̂ ,              (4.13)
Q_ê = Q_y − Q_ŷ ,          ê = y − ŷ ,
where Qb̂ is the estimated vc-matrix for the unknowns, Qŷ that of the adjusted
observations, Qê that of the least-squares residuals, and b̂, ŷ, and ê are the
vectors of adjusted unknowns, observations and residuals, respectively. When
there are more observations than unknown parameters, it is possible to test
the null-hypothesis, Eq. (4.12), against alternative hypotheses. An alternative
hypothesis is specified as a linear extension of the null-hypothesis
Detection
The detection step of the DIA procedure uses the overall model test (OMT)
to find model mis-specification in either or both the stochastic or functional
model. The OMT is given by
The dimension of the overall model test is equal to the redundancy r of the
problem. It is an important safeguard to indicate the validity of the null-
hypothesis. It is the most relaxed possible alternative hypothesis, imposing
no restrictions on the observables. Matrix C_q consists of r unit vectors for
this alternative hypothesis, but note that it does not need to be explicitly
specified to compute the OMT. If the OMT is rejected, alternative hypotheses
are specified to identify the most likely cause.
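A sketch of this detection step is given below. Since the explicit OMT expression is not repeated in this extract, the usual quadratic form of the least-squares residuals, ê*·Q_y⁻¹·ê, is assumed, tested against the chi-squared distribution with the redundancy r as its dimension:

```python
import numpy as np
from scipy.stats import chi2

def overall_model_test(e_hat, Qy, r, alpha=0.05):
    """Detection step of the DIA procedure: the quadratic form of the
    least-squares residuals, tested against the chi-squared distribution
    with the redundancy r as its dimension."""
    omt = e_hat @ np.linalg.solve(Qy, e_hat)
    rejected = omt > chi2.ppf(1.0 - alpha, df=r)
    return omt, rejected
```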
Identification
T^k_{q_i} / χ²_{α(q_i)} > T^l_{q_j} / χ²_{α(q_j)}   ∀ k ≠ l .   (4.22)
Adaption
After the most likely alternative hypothesis is identified as cause for rejection
of the null-hypothesis, the functional model is adapted accordingly. If an
outlier arc is identified it is removed from the reference network by deleting
the appropriate row in Eq. (4.11). If a point is identified, it is removed from the
vector of unknowns, together with all arcs connected to that point. The latter
could lead to the formation of two isolated networks, which yields a singular
problem that cannot be solved without introducing further constraints, e.g.,
selection of a second reference point. After this adaption, the estimation is
performed again, and the DIA procedure starts the next iteration with the
detection step.
Joint tests
critical value for this test is thus found under the chi-squared distribution with
dimension two times that of the individual alternative hypotheses. It can be
easily shown that the test statistic using the alternative hypothesis Eq. (4.24)
is equal to the sum of the independently computed q-dimensional tests using
Eq. (4.15)
T_{2q} = T_q + T_q′ .   (4.25)
This is the case because it is assumed that there is no correlation between the
estimated parameters. This joint test is used in order to identify outlier arcs or
points. During each iteration of the DIA procedure the most likely alternative
hypothesis is identified based on the test quotient of the joint test. In the
adaption step, the concerned observations in the networks for DEM error
and displacement rate are removed. Extension to more than two estimated
parameters is straightforward.
The a posteriori variance factor of an estimation is given by

σ̂²_x = (ê*·Q_y⁻¹·ê) / r ,   (4.26)

where ê is the vector of temporal least-squares phase residuals (differences), and
r is the redundancy (i.e., the number of interferograms minus the number of
estimated parameters).
Fig. 4.9: Connection of new points (circles) to the reference network pixels
(squares). Each selected point is connected to the nearest point in the reference
network, and a relative estimation is performed using the integer least-squares
estimator. These connections are indicated at the left hand side of this figure.
Alternatively, each new point can be connected to a couple of points of the reference
network, indicated at the right hand side.
Φ^k_x − Φ^k_R = β^k_x·Δh_x − β^k_R·Δh_R  →  W{Φ^k_x − Φ^k_R + β^k_R·Δh_R} = 2π·a^k_x + β^k_x·Δh_x ,
Φ^k_x − Φ^k_P = β^k_x·Δh_x − β^k_P·Δh_P  →  W{Φ^k_x − Φ^k_P + β^k_P·Δh_P} = 2π·a^k_x + β^k_x·Δh_x ,   (4.27)
where only two reference network points R and P , and only the DEM error
Δh, are considered for conciseness. The ambiguities are denoted by a, see
Chapter 3 for a detailed description. The system of equations is more stable
if more connections are used, because it is more likely that the parameters at
the different arcs correspond with the a priori information. The chance that
an incorrect set of ambiguities for a certain arc fits the model well due to
noise is reduced by the second arc (or third, etc.). This system of
equations can be solved using the integer least-squares estimator. However,
note that in this case the reference points are demodulated for their known
signal components, and that the unknown parameters of the new point are
thus directly estimated with respect to a zero reference. This means that the
value of the estimated parameters is likely to be larger than it would be when
a relative estimation is performed, which should be taken into account in the
values or the variances used for the pseudo-observations. Alternatively, the
phase of the new point could be demodulated using the mean of the known
parameters at the reference points, which then has to be added again when
the ambiguities are estimated.
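The demodulation on the left-hand side of Eq. (4.27) is conveniently carried out on the complex unit circle; a minimal sketch, with illustrative names and a single reference point:

```python
import numpy as np

def demodulate_reference(phi_x, phi_R, beta_R, dh_R):
    """W{Phi_x^k - Phi_R^k + beta_R^k * dh_R}: the wrapped phase difference
    to reference point R, demodulated for its known DEM error, computed on
    the complex unit circle."""
    return np.angle(np.exp(1j * (phi_x - phi_R + beta_R * dh_R)))
```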
Points with an estimated a posteriori variance factor, cf. Eq. (4.26), below a
certain threshold k are included in the spatial unwrapping,

σ̂²_x < k ,   (4.28)
with, for example, k = 2.0. The value of this threshold depends on the vc-
matrix used during the ILS estimation, because the estimated variance factor
is a multiplication factor for this matrix.
After the parameters are estimated, the unwrapped model phase Φ̂_x can be
computed for each point in all interferograms using the forward model.
Since the observed phase is wrapped, only the wrapped residual phase can be
obtained, i.e.,

W{ê_x} = W{φ_x − Φ̂_x} .   (4.30)
The residual phase at the selected points in each interferogram is expected
to contain a low-frequency component, caused by interferometric atmospheric
signal and possibly unmodeled displacement, and a small-amplitude, high-
frequency component due to random noise. This property can be exploited
by applying a spatial complex low-pass filter to the wrapped residuals.
The residual phase per interferogram can then first be demodulated for
the low-pass component, which can then be unwrapped separately from the
high-pass component. The total unwrapped field is given by addition
of the two unwrapped components. However, to unwrap the residual fields
for each interferogram, a sparse grid MCF unwrapping algorithm (Eineder
and Holzner, 1999) is directly applied in the STUN algorithm. The distance
between the points is used to generate the cost function. Once the residual
fields are unwrapped, the unwrapped phase at the selected points is obtained
by adding the unwrapped residual phase ě_x to the model phase Φ̂_x.
The unwrapped phase at a reference point must be set to zero in all interfero-
grams. This reference point does not have to be identical to the previously
selected one. However, it must be a point that has a relatively small noise
component in all interferograms, i.e., the reference point must be present
during all acquisitions.
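The two elementary operations of this step, computing the wrapped residual of Eq. (4.30) and re-adding the unwrapped residual field, can be sketched as follows (the sparse MCF unwrapping itself is not reproduced here):

```python
import numpy as np

def wrapped_residual(phi_obs, phi_model):
    """Wrapped residual phase of Eq. (4.30), W{phi - Phi_hat}."""
    return np.angle(np.exp(1j * (phi_obs - phi_model)))

def add_unwrapped_residual(phi_model, resid_unwrapped):
    """Unwrapped phase at a point: model phase plus the spatially
    unwrapped residual field."""
    return phi_model + resid_unwrapped
```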
There are many possibilities to estimate the parameters once the phase data
at the selected points are unwrapped in all interferograms. For example, the
same functional model can be used as for the estimation using the wrapped
data, and this system of equations is then described by Eq. (3.17), see also
Chapter 3.
Quality description
The use of a single factor implies that the precision of the double-difference ob-
servations scales in the same fashion for all interferograms. This is acceptable
since the atmospheric signal is the main reason for the deteriorating precision
at points with larger distances to the reference point, and the atmospheric
signal is expected to have a power-law behavior, see Eq. (2.28). Such an
approach was also suggested by Hanssen (2001), who constructed a generic
stochastic model for the phase in the interferogram that can be initialized
using a single scaling parameter (that could be initialized using, e.g., GPS
observations or analysis of a small area in the interferogram). The propagated
vc-matrix of the estimated parameters scales with the same a posteriori
variance factor as the vc-matrix of the observations.

The simulated wrapped phase of a point x in interferogram k is composed as

φ^k_x = W{φ^k_{x,topo} + φ^k_{x,defo} + φ^k_{x,atmo} + φ^k_{x,noise}} ,   (5.1)
Fig. 5.1: Baseline configuration used in the simulation. The temporal baseline is
given on the vertical axis, and the horizontal axis lists the perpendicular baselines,
relative to the master acquisition at (0, 0). The 51 selected acquisition times and
perpendicular baselines correspond to actual acquired images for ERS frame 2547,
track 165.
where the noise and atmospheric parts are simulated on the SLC images.
The amplitudes of these pixels in the interferograms are set randomly. The
pixels of the reference network are selected based on these amplitudes. A
sparsification procedure with a grid of 250 by 250 m is used to select the
pixels of the reference network, based on the amplitude dispersion index
(i.e., randomly since this is based on the simulated amplitude). The reference
network is constructed using the algorithm described in section 4.4 with four
connections per point (one in each quadrant). The number of pixels in the
reference network is 811, and the number of arcs is 2144. The reference pixel
is arbitrarily selected near the Tempelhof airport, see Fig. 5.2.
Fig. 5.2: Location of the points used in the simulation (red plus marks). 41143
points are selected in an area of ∼10 × 10 km2 . These positions correspond to actually
selected positions for acquired images for ERS frame 2547, track 165. The lines
indicate the estimations that are performed between the 811 points of the reference
network. The background shows the average intensity in dB for this area (Berlin,
Germany). This image is mirrored in the horizontal and vertical direction to obtain
an almost geo-referenced image. The dark circular structure at the bottom of this
image is the Tempelhof airport located at the center of Berlin. The white asterisk
indicates the location of the chosen reference point.
Here, the DEM error is the first estimated parameter, in meters, and the
displacement rate the second, in millimeters per year. The standard
deviations of the estimated difference parameters in this case are thus 0.17 m
and 0.12 mm/y. The correlation coefficient between the estimated parameters
is as small as –0.03, which is caused by the almost uniform distribution of
Fig. 5.3: Network for variance component estimation. The indicated arcs are
used to perform an initial estimation of the parameters with a priori variance
components. The temporal least-squares residuals are then used to estimate the
variance components of the variance components model. The mean arc length is
290 m, the standard deviation is 107 m. The total number of arcs used for the
estimation is 405. The asterisk indicates the location of the reference point.
Fig. 5.4: Estimated variance components for random noise compared to simulated
noise levels. The plus marks show the standard deviation of the Gaussian distributed
noise that is added to the phase in the SLC images. The diamonds represent the
square roots of the estimated variance components.
Table 5.1: Statistics for estimated parameters at the arcs of the reference network
and for all points connected to the reference network after integration of the
parameters. Statistics are for estimates with an estimated a posteriori variance factor
σ̂ 2 < 2.0 (i.e., 2143 of 2144 arcs for the reference network, 41137 of 41143 estimated
points). The standard deviation that follows from the propagated vc-matrix is given
in parentheses.
the images in time and space, see Fig. 5.1. The parameters at the arcs of
the reference network are estimated using the integer least-squares algorithm.
These difference parameters at the arcs are integrated using the algorithm
described in section 4.4.2, and more points are estimated with respect to
this reference network. No arcs or points are rejected by the alternative
hypotheses tests in this case. For this simple scenario the residuals are not
spatially integrated, since it can be assumed that the data are correctly
unwrapped temporally. Reported in Table 5.1 is the mean and standard
deviation of the estimated parameters. Estimates with an a posteriori variance
factor σ̂ 2 ≥ 2, cf. Eq. (4.26), are not considered. The estimated precision of
the accepted estimated parameters corresponds very well with the formally
propagated vc-matrix. This is not remarkable, since the simulated signal
only consists of normally distributed noise. The theoretical success rate of
the simple bootstrap estimator, according to Eq. (3.20), is P (ẑ = z) = 0.999.
The theoretical success rate for the simple bootstrap estimator using a priori
variance components is P (ẑ = z) = 0.843. The computations are performed
using six CPUs operating at 750 MHz. The total CPU time required for the
estimation of the 41143 points is 348 seconds, or approximately one minute
for each processor. An implementation in C of the bootstrap and integer least-
squares estimator is used.
5.3 Atmospheric phase

The atmospheric signal is simulated using a fractal with dimension 2.67,
typical for atmospheric signals, see for example (Hanssen, 2001). The
maximum variation of the simulated atmospheric signal
during the acquisitions is set randomly with a standard deviation of two
rad. The variation of the atmospheric signal in an interferogram is typically
about one fringe, since the difference between the atmospheric states during
master and slave acquisition is observed. Fig. 5.5 shows an example of the
simulated atmospheric signal. The variance components are estimated using
(a) Fractal atmosphere   (b) Structure function   (c) Covariance function
Fig. 5.5: Example of simulated atmospheric signal using a fractal with fractal
dimension 2.67. Shown is the simulated master atmosphere. This APS has a
maximum variation of 1.72 rad (1.50 rad at the considered points). Fig. 5.5(b)
shows the structure function using logarithmic axes. Indicated by the dotted lines is
the theoretical slope of 2/3, which follows from the fractal dimension 8/3, see also
(Hanssen, 2001). Fig. 5.5(c) shows the empirical covariance function for this APS.
This covariance function is modeled using an exponential model. The covariance
typically becomes negative at a certain distance, since the simulated atmosphere
has most power in the long wavelengths. For the estimation of the covariance
and structure function, 5000 randomly selected points are used.
the same network as before, see Fig. 5.3. The estimated variance components,
noise level, and variation of the simulated APS are shown in Fig. 5.6. It is
clearly visible that the estimated variance components are larger if the
atmospheric variation is larger. This is expected, since the estimated variance
component
accounts for both components. The variance components previously estimated
in the scenario without atmospheric signal are also plotted in this figure for
reference. In general, they are smaller than the variance components estimated
here, as expected. The vc-matrix of the estimated parameters is in this case
given by
Q_b̂ = ⎡  0.0411  −0.0018 ⎤ .   (5.3)
       ⎣ −0.0018   0.0180 ⎦
This vc-matrix is approximately a factor 1.4 larger than the vc-matrix of
Eq. (5.2) for the scenario without atmospheric signal, which is due
to the atmospheric phase that is added. During the alternative hypothesis
testing step of the spatial integration of the estimated parameters, four arcs
are removed that clearly are estimated incorrectly. Table 5.2 reports the
Fig. 5.6: Estimated variance components for random noise and atmospheric signal.
The plus marks show the standard deviation of the Gaussian distributed noise
that is added to the phase in the SLC images. The squares show the estimated
variance components using the short arcs. On the bottom, the maximum variation
of the atmospheric signal is plotted for each SLC image. For comparison reasons,
the diamond again shows the previously estimated variance components for the
simulation without the atmospheric signal, see section 5.2.
Table 5.2: Statistics for estimated parameters for the simulation scenario including
atmospheric signal. Given are the estimates at the arcs of the reference network and
for all points connected to the reference network after integration of the parameters.
Statistics are for estimates with an estimated a posteriori variance factor σ̂ 2 < 2.0
(i.e., 2138 of 2144 arcs for the reference network, 41139 of 41143 estimated points).
The standard deviation that follows from the propagated vc-matrix is given in
parentheses.
The estimated precision again corresponds very well with the formally propagated vc-matrix. This is
somewhat unexpected, since the power of the atmospheric signal is expected
to increase the further away the points are from the reference point, while the
propagated vc-matrix is valid only for relatively nearby points. However, the
estimated parameters are spatially correlated (not shown). Fig. 5.7 shows the
covariance function of the estimated displacement rate. The theoretical success
Fig. 5.7: Covariance function of the estimated displacement rates in the presence
of atmospheric signal. The estimated parameters are spatially correlated, because
the atmospheric signal is spatially correlated.
rate of the simple bootstrap estimator for this case of satellite distribution and
estimated precision, according to Eq. (3.20), is P(ẑ = z) = 0.990. The total CPU
time required for the estimation of the 41143 points is 340 seconds, practically
equal to the previous simulation scenario.
The residual phase at the points is unwrapped using a sparse grid MCF algo-
rithm, see (Eineder and Holzner, 1999). Since, in this case, the displacement
model fully describes the actual displacement (indeed over-parameterizes it),
it would be feasible to estimate the low frequency atmospheric component
by applying a spatial low-pass filter to the residuals. For this study, the
parameters are simply re-estimated, but now using the unwrapped data.
A variance factor is also estimated for each point, cf. Eq. (4.26). This
a posteriori factor describes the precision of the estimated parameters, taking
into account the atmospheric and random noise component, but assuming
a correct displacement model. The estimated variance factors are shown in
Fig. 5.8. The further away from the reference point, the worse the precision.
In order to verify the description of the precision, Table 5.3 reports the
percentage of points for which the actual error e lies within the given confidence
interval. Practically all estimates are within the two-sigma level.
Table 5.3: Quality description of estimated parameters for simulation with atmo-
spheric signal. Reported is the percentage (of 41143 points) for which the actual
error on the estimated parameters is below the given threshold.
        |e_x| < 0.5σ̂_x   |e_x| < σ̂_x   |e_x| < 2σ̂_x   |e_x| < 3σ̂_x
DEM          54.7             87.0           99.8           100.0
defo         60.8             90.0           99.7           100.0
both         33.3             78.3           99.5           100.0
with

f(T) = −0.15·T + 0.004·T³ ,   (5.5)

and

g(x, y) = −exp(−(x² + y²)/(N_η/3)²) ,   (5.6)

where x, y are spatial coordinates, specifically x = η − N_η/2 and y = (ξ − N_ξ/2)/5,
with N_η = 1000 and N_ξ = 5000 the number of pixels in range and azimuth,
respectively. Fig. 5.9 shows this spatio-temporal displacement function. As can
be seen, the spatial displacement pattern is very smooth, with the maximum
located at the center of the area. For the center point, the subsidence first
accumulates to approximately 7 cm in six years, followed by ∼1 cm of uplift in
two years. Additionally, a fully decorrelated phase is simulated for 2.5% of the
points, i.e., at 1029 points. During the alternative hypothesis testing step of
the network algorithm these incoherent points must be detected and removed
from the reference network. Finally, a random bias is also added to each SLC
image to account for inaccurate knowledge of the absolute signal delay and
sensor position.
In the following, first base functions are used that are capable of correctly
modeling the simulated displacement at the points and between points,
(a) Temporal displacement function   (b) Spatial displacement function
Fig. 5.9: Simulated signal. Fig. 5.9(a) shows f (T ), the temporal model of the
displacement function, and Fig. 5.9(b) the spatial model g(x, y). Indicated are the
acquisition times and position of the PS points.
cf. section 5.4.1. Second, section 5.4.2 describes the processing using only
a single base function for linear displacement, as is routinely used in the
PS technique. Since during the estimation differences between nearby points
are considered, and the spatial displacement pattern is very smooth, it can
be expected that the simulated displacement can be adequately modeled by
linear displacement. That is, the phase data at the PS points could potentially
be correctly unwrapped using a linear displacement model and the spatio-
temporal unwrapping steps of the network algorithm. However, the estimated
parameters for the linear displacement will not correspond to Eq. (5.5). The
proper way to assess the success of the method is thus a comparison between
the simulated unwrapped phase values and the estimated unwrapped phase,
and not by using the standard deviation of the estimated parameters. After all,
once the data are correctly unwrapped, the displacement model that is used
during the unwrapping is no longer relevant. However, the estimated DEM
error is used to validate the estimation. Since the DEM error and displacement
parameters are to a great extent uncorrelated (depending on the space-time
distribution of the acquisitions, i.e., it is assumed that the displacement is
a smooth function in time, and not smooth as function of the perpendicular
baseline), a correctly estimated DEM error does not necessarily imply that
the displacement is correctly estimated.
The first analysis of the simulated data is performed using base functions that
are capable of exactly reconstructing the simulated displacement. These base
functions are

p_1(k) = −(4π/λ)·T^k ,
p_2(k) = −(4π/λ)·(T^k)² ,   (5.7)
p_3(k) = −(4π/λ)·(T^k)³ .
The vc-matrix of the estimated parameters (DEM error, displacement
parameters) is

       ⎡  0.084   0.004   0.006  −0.001 ⎤
Q_b̂ = ⎢  0.004   0.252   0.025  −0.018 ⎥ ,   (5.8)
       ⎢  0.006   0.025   0.013  −0.003 ⎥
       ⎣ −0.001  −0.018  −0.003   0.002 ⎦
where a priori variance components are used, see also Fig. 5.3. The corre-
sponding correlation matrix ρ is given by

    ⎡  1.000   0.026   0.187  −0.072 ⎤
ρ = ⎢  0.026   1.000   0.447  −0.885 ⎥ .   (5.9)
    ⎢  0.187   0.447   1.000  −0.713 ⎥
    ⎣ −0.072  −0.885  −0.713   1.000 ⎦
Clearly, the second and third estimated displacement parameters are strongly
correlated with each other and with the first base function. Still, the correlation
coefficients between the estimated DEM error and the displacement parameters
are small. However, for this estimation the base functions of Eq. (5.7) are used.
The standard deviation for the pseudo-observations used by the integer least-
squares estimation is 25 mm/y for α1 , 5 mm/y2 for α2 , and 1 mm/y3 for α3 .
These values result in the same magnitude of the displacement for T = 5 year
for all three base functions.
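Indeed, at T = 5 y: 25 mm/y · 5 y = 5 mm/y² · (5 y)² = 1 mm/y³ · (5 y)³ = 125 mm.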
The estimated variance components per SLC image are practically identical
to the variance components estimated in the previous simulation scenario
(which are shown in Fig. 5.6). These variance components are used to create
the vc-matrix for the integer least-squares estimation to obtain the parameters
between the points of the reference network. After the ILS estimation, eighteen
points are removed from the reference network. These points are clearly
incoherent, since the mean of the estimated variance factors of the connecting
arcs to these points is larger than three, cf. Eq. (4.26). This pre-processing
step is performed in order to save time during the alternative hypothesis
tests. During the spatial integration step of these parameters, additionally two
points and sixteen arcs are removed based on the DIA testing procedure. In
total twenty of 811 points (2.5%) are detected and removed from the reference
network, i.e., all incoherent points are correctly identified.
The parameters at the remaining points are estimated using a single con-
nection to the reference network, cf. section 4.5. The total CPU time (sum
of six CPUs) for the estimation of the 41143 points is 411 seconds. The
extra time in comparison with the previous simulation scenarios is caused
by the larger amount of noise on the approximately thousand points, which
causes the integer least-squares search to take longer. The estimated variance
factors of each point are used to select reliable points. Table 5.4 shows the
number of points that would be selected if a certain threshold is applied to the
a posteriori variance factor. Here, a threshold σ̂² < 2.0 is used, i.e., 39913
Table 5.4: Number of selected points for different thresholds for the a posteriori
variance factor. The percentage is relative to the number of coherent points (40114)
used in the simulation.
        σ̂² < 1.0   σ̂² < 2.0   σ̂² < 3.0   σ̂² < 4.0
  #      27761       39913       39945       40217
  %      69.21       99.50       99.58      100.26
points of originally 41143 points are selected as reliable points. All 1029
incoherent points are identified, but also 201 coherent points are removed.
The wrapped residual phase is computed in each interferogram on the accepted
points, which is unwrapped using the sparse MCF algorithm. The DEM error
and displacement parameters are finally estimated using the unwrapped data.
First, the estimated DEM error is compared with the simulated topographic
signal. The parameters are estimated with respect to the reference point R
of the reference network, for which ΔhR = −0.10, α1 (R) = 3.03, α2 (R) = 0.0,
and α3 (R) = −0.081. The statistics of the difference between the simulated
input with the estimated DEM error and displacement parameters are given
in Table 5.5. It is interesting to see that all the estimated DEM errors are very
small, while some (actually eighteen) estimated displacement parameters are
incorrect. This is clearly visible when the minimum and maximum error of
the quadratic term are considered, since the simulated input has only linear
and cubic terms. The error on the estimated parameters is again spatially
correlated due to the simulated atmospheric phase, which cannot be inferred
from this table. In general, though, the estimated standard deviation of the
error agrees well with the propagated vc-matrix, see Eq. (5.8).
A better measure for the performance of the network algorithm is the error
in the unwrapped phase, since it allows for a direct comparison between several
estimations using different base functions. Of 39916 unwrapped phase values
in 50 interferograms (nearly two million values), 122 are not identical to the
Table 5.5: Statistics of the error on the estimated parameters at accepted points
for the simulation including displacement. The standard deviation that follows from
the propagated vc-matrix Eq. (5.8) is given in parentheses.
simulated input. These incorrectly unwrapped values are all due to incorrectly
estimated parameters at eighteen points. The integer least-squares estimator,
i.e., the temporal unwrapping step, found a better fit with the simulated data
at these points when the quadratic term is approximately 3 mm/y2 , instead
of zero, in combination with also incorrect linear and cubic coefficients. The
incorrectly estimated points are spatially separated, and it is by pure chance
that this occurs. Note that it would be relatively simple to identify these
incorrectly estimated points by using a threshold for the estimated coefficients
or for the estimated total amount of displacement. In conclusion it can be said
that the network algorithm is successful at 39895 of 39913 points (99.95%).
The probability of correct temporal phase unwrapping using the ILS estimator
is larger when the difference between the displacement model and the actual
displacement is smaller. The absolute value of this model error should at
least be smaller than π in most interferograms, since otherwise it will be
wrapped, preventing the retrieval of the displacement from the observed
wrapped phase values. The error is expressed in radians here. For ERS and
ENVISAT, π corresponds to ∼1.4 cm of displacement. Furthermore, other
error sources, such as atmospheric phase and random noise, are ignored.
For this simulation, the temporal displacement function, Eq. (5.5), can be
approximated reasonably well using a linear displacement rate. Additionally,
a linear displacement rate model may yield correctly unwrapped data, since,
during the estimation, the displacement between nearby points is considered,
and here the spatial displacement pattern is very smooth, see Eq. (5.6) and
Fig. 5.9.
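As a check on the ∼1.4 cm quoted above: the interferometric phase relates to line-of-sight displacement as φ = −(4π/λ)·d, so a model error of π radians corresponds to a displacement of

\[
d_{\pi} = \frac{\lambda}{4} = \frac{56.6\ \mathrm{mm}}{4} \approx 14.2\ \mathrm{mm} \approx 1.4\ \mathrm{cm}.
\]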
One could envision a processing strategy which uses the linear model,
or piecewise linear model, during the temporal phase unwrapping step, and
thereafter another displacement model during the final estimation using the
unwrapped data, for example using alternative hypothesis tests to test for
significance. For the displacement simulated here, the uplift at the end of
the time interval would not be detected if the model were not changed
once the data are unwrapped, since the displacement dominantly consists
of subsidence. Note that in the standard PS technique, where a linear
displacement model is routinely used, the danger is that the deviations from
the model are interpreted as atmospheric phase. This would certainly happen
when a temporal high-pass filter would not be used before applying the spatial
low-pass filter to isolate the APS. Moreover, note that this data set contains a
temporal gap, and that the uplift occurs at the end of the time interval. Both
effects make the application of a temporal filter cumbersome.
The following describes the estimation using a linear displacement model.
The vc-matrix of the estimated parameters (DEM error, linear displacement)
is given by
\[
Q_{\hat{b}} = \begin{bmatrix} 0.081 & 0.002 \\ 0.002 & 0.037 \end{bmatrix}, \qquad (5.10)
\]
where a priori variance components are used. The estimated variance components
are practically the same for this model as for the higher-order polynomial
model used in section 5.4.1, indicating that indeed the “actual” displacement
between nearby points is well approximated by a linear model. In a pre-processing
step, again, eighteen points are removed. The overall model test of
the spatial integration of the parameters (DEM error and linear displacement
rate) is accepted after removing four more arcs. All incoherent points are
removed from the reference network by the alternative hypothesis testing.
The threshold for the a posteriori variance factor is again set to σ̂² < 2,
which for this estimation meant that 40067 points are accepted (all 1029
incoherent points are rejected, as well as 47 coherent points). Using the same
threshold for the a posteriori variance factor in this estimation thus yields
154 more points compared to when two more displacement base functions
are used. This is explained by Eq. (4.26), since the weighted squared sum of
least-squares residuals is divided by the redundancy, which is larger in this
case. The unwrapped phase at these points is again obtained by the MCF
algorithm. For this estimation, 100 phase values are incorrectly unwrapped, at
9 points (i.e., 99.98% of the points are unwrapped correctly). Likely, the temporal
unwrapping (estimation) is incorrect at these points. The estimation using a
linear displacement model is thus able to unwrap more points correctly
than the estimation using the correct base functions. The reason is that the
simulated displacement between nearby points can be well approximated by
linear displacement. More points are estimated incorrectly when the higher-order
polynomial base functions are used, since the model has more freedom in that
case, i.e., the data can be unwrapped using quadratic and cubic coefficients,
leading to a better fit.
5.5 Conclusions
The STUN algorithm performs well using simulated data sets. Data are
simulated on 41143 points in 51 acquisitions. The simulated positions of the
points and the perpendicular and temporal baselines correspond to an existing
data set. It is shown by a simulation using random noise only that the variance
components of the stochastic model are correctly estimated, see Fig. 5.4.

6 Real Data Processing
Application of the STUN algorithm to rural areas is described in, e.g., (Kircher
et al., 2003a,b; Walter et al., 2004).
6.1 Berlin
Berlin is chosen as the first test site at the start of the DLR Permanent
Scatterer software project, mainly as a validation site for the developed
software. Berlin is the largest city and the capital of Germany. It is located at
52°30′ northern latitude, 13°20′ eastern longitude, and lies ∼200 km inland
at 100 m height (WGS84). The topographic variation is approximately 50 m
in the processed area, see also the DEM of the area provided in Fig. 6.1.
The urbanized area is approximately 20×20 km² and Berlin has a population
of ∼3.5 million people. Thus, Berlin is a typical urban area that can be
well processed using the PS technique. Moreover, a large number of ERS
acquisitions are available for Berlin. No significant displacement signal is
expected for the Berlin test site. Atmospheric conditions are typical for
a moderate continental climate. The average annual precipitation is ∼57 cm and
the average temperature is between 1°C in January and 20°C in July (Berlin
Tempelhof station, data 1991–2004, see Deutscher Wetterdienst, 2004). These
factors make Berlin a good candidate for an initial test site.
Fig. 6.1: DEM used for the Berlin test site (color shaded). Projection: UTM, zone
33, WGS84 ellipsoid. The rectangles indicate the processed areas for data of track
165 and 437, respectively.
First, the data availability for the Berlin test site is described in sec-
tion 6.1.1. Next, section 6.1.2 describes a reference processing for the Berlin
area where the estimated parameters are DEM error and linear displacement
rate. In section 6.1.3 the sensitivity of the STUN algorithm to several
algorithm settings is investigated. Finally, section 6.1.4 describes a cross-
comparison of the estimated linear displacement rates using a second stack
of data from an adjacent track. The experiments that are performed for this
test site are summarized in Table 6.1.
Table 6.1: Berlin test site experiments. Listed are the approximate number
of interferograms used K, the estimated parameters b̂, and the purpose of the
experiment. The estimated parameters are coded as H for estimated DEM error
and V for linear displacement rate. The size of the area for all tests is ∼20×20 km².
# K b̂ Purpose
I 50 H, V Reference processing. Distance between points in the reference
network is ∼1000 m and each point has ∼6 connections.
(The other scenarios are compared to this processing.)
IIa 50 H, V Sensitivity to the stochastic model.
IIb 50 H, V Sensitivity to the number and length of the arcs between
points in the reference network.
IIc 10 H, V Sensitivity to the number of available acquisitions.
IId 50 H, V Sensitivity to the choice of the testing parameters for the
arc and point test.
III 30 H, V Cross-comparison of linear displacement rates estimated
using data of an adjacent track.
For the Berlin test site, data of two descending ERS tracks are available. The
first data set, frame 2547 and track 437, is used in the experiments described
in the following sections. This data set contains more acquisitions, and has
the advantage that the city of Berlin is completely covered. The second data
set, frame 2547 and track 165, is used to perform a cross-comparison between
the estimated linear displacement rates in the overlapping area. These data
and the comparison are described in section 6.1.4. All available acquisitions
are listed in Appendix C.
The baseline distribution for the first track is shown in Fig. 6.2. In total, 70
ERS–1 and ERS–2 acquisitions are available. All 69 differential interferograms
are generated with orbit 10039 as the master image, acquired on March 22nd,
1997. As an example, fourteen differential interferograms for track 437 are
shown in Fig. 6.3. This stack is centered on the city of Berlin, hence the higher
coherence in the middle and lower coherence due to temporal decorrelation
Fig. 6.2: Baseline distribution (left) and Doppler centroid frequency (right) for the
available ERS data (70 acquisitions) for the Berlin area (track 437, frame 2547).
The (earlier acquired) ERS–1 images have a consistently larger Doppler centroid
frequency than the ERS–2 images. The Doppler centroid frequencies of ERS–2 data
acquired after February 2000 are not stable due to gyroscope failures. The 50
acquisitions selected for the experiments are shown with diamonds.
Fig. 6.3: Some differential interferograms for the Berlin area, track 437. The
interferograms are sorted from left to right according to (the absolute value of
the) perpendicular baseline, |B⊥| ∈ [52, 658] m. The master image was acquired
on March 22nd, 1997. The last panel shows the average amplitude of the processed
area. The images are in the radar coordinate system. In this case, they are roughly
geo-referenced after mirroring in the vertical axis.
Fig. 6.4: Selected points and reference network for the Berlin test site. The total
interferometrically processed area is ∼26 km wide by 24 km high. Based on their
SCR, 78779 points are selected in an area of ∼26 km wide by 18 km high (170
PS/km²). In the reference network there are 1066 points (2.3 per km²), and 6650
arcs.
using a priori values, see also section 4.3. They are estimated, cf. Eq. (4.8), at
526 independent arcs between points of the reference network. The mean arc
length during this estimation is 528 m with a standard deviation of 153 m.
The vc-matrix of the estimated parameters using the a priori stochastic model
is given as
\[
Q_{\hat{b}} = \begin{bmatrix} 0.123 & -0.004 \\ -0.004 & 0.042 \end{bmatrix}. \qquad (6.1)
\]
The DEM error is the first parameter and the displacement rate the second,
i.e., the estimated standard deviation using this a priori model is ∼0.35 m for
the estimated DEM error and ∼0.20 mm/y for the displacement rate between
points. The correlation between the estimated parameters can be neglected in
this scenario.
Fig. 6.5 shows the estimated variance components of the stochastic vari-
ance component model plotted as function of perpendicular, temporal, and
Doppler baseline, respectively. The ERS–2 images seem to have a slightly
better precision than the ERS–1 images, which could be due to improved
hardware and sensor settings. However, errors in the functional model (e.g.,
the linear displacement model may be too simple, or some points may not
be coherent during the earlier acquisitions) also lead to larger estimated
variance components for the earlier ERS–1 images. The latter cause is more
likely, since the estimated variance components are only larger for the earlier
acquired ERS–1 data. Note that the estimation of the variance components
could become self-fulfilling, i.e., data that do not fit the functional model
are down-weighted, which in turn leads to a better fit with the model. The
vc-matrix using the estimated stochastic model is given as
\[
Q_{\hat{b}} = \begin{bmatrix} 0.080 & -0.005 \\ -0.005 & 0.030 \end{bmatrix}. \qquad (6.2)
\]
The standard deviations of the estimated difference in DEM error and linear
displacement rate between nearby points are thus estimated to be ∼0.28 m
and ∼0.17 mm/y, respectively.
Next, the DEM error and the linear displacement rate (differences) are
computed at the arcs of the reference network. The ILS estimator and
estimated stochastic model are used for this estimation using wrapped data.
The standard deviation for the pseudo-observation to regularize the ILS
estimator is set to 25 m for the DEM error and to 10 mm/y for the
displacement rate. The theoretical success rate for the bootstrap estimator,
cf. Eq. (3.20), in this case is P (ẑ = z) = 0.992. The mean of the estimated
DEM error is –0.11 m and –0.06 mm/y for the displacement rate. The
mean a posteriori variance factor at the arcs is 1.26, which suggests that
the estimated variance components realistically describe the actual precision.
The estimated DEM error and displacement rate at the arcs are plotted in
Fig. 6.12 (top row).
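To illustrate how such a pseudo-observation enters the estimation, the following is a minimal sketch of the float (real-valued) solution only, with illustrative names; the integer least-squares search itself is not shown:

```python
import numpy as np

def regularized_float_solution(A, y, Qy, sigma_dh=25.0, sigma_alpha=10.0):
    """Weighted least-squares float solution for (DEM error, displacement
    rate) with zero-valued pseudo-observations that softly bound both
    parameters.  A: K x 2 design matrix, y: K observations, Qy: their
    K x K vc-matrix.  Illustrative only; the ILS search is not included."""
    y = np.asarray(y, dtype=float)
    K = len(y)
    A_aug = np.vstack([A, np.eye(2)])                  # append pseudo-obs
    y_aug = np.concatenate([y, np.zeros(2)])           # pseudo-obs values: 0
    Q_aug = np.block([[Qy, np.zeros((K, 2))],
                      [np.zeros((2, K)), np.diag([sigma_dh**2,
                                                  sigma_alpha**2])]])
    W = np.linalg.inv(Q_aug)
    N = A_aug.T @ W @ A_aug                            # normal matrix
    b_hat = np.linalg.solve(N, A_aug.T @ W @ y_aug)
    return b_hat, np.linalg.inv(N)                     # estimate, vc-matrix
```

Tightening σ_Δh and σ_α pulls the solution toward zero; the 25 m and 10 mm/y used here act only as weak a priori bounds.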
Fig. 6.5: Square roots of the estimated variance components for the Berlin test
site as function of perpendicular, temporal, and Doppler baseline. A red asterisk
corresponds to an ERS–1, and a blue diamond to an ERS–2 acquisition.
Network integration
After the estimation of the DEM error and the displacement rate at the
arcs of the reference network, they are integrated using the least-squares
adjustment and testing procedure described in section 4.4.2. A pre-processing
step is performed first to save time during the alternative hypothesis step, and
to guarantee that clearly incoherent points are removed from the reference
network. As visible in Fig. 6.12 (top row), the estimated a posteriori variance
factor is less than two for most of the estimated arcs. Therefore, all points
are removed for which the mean of the a posteriori variance factors of the
connecting arcs is larger than three. Seventeen points of the originally 1066
reference network points are removed. Thereafter, additionally 46 arcs are
removed (of 6439 remaining arcs) with an a posteriori variance factor σ̂²ₓ > 3.0,
but only if both connecting points have at least three other connections. For
the redundant network used here, each point is still connected with at least
eight arcs after this procedure. One additional point and sixteen arcs are
removed during the alternative hypothesis testing. The testing parameters
used are γ = 90% and α1 = 0.05, see also Appendix B. The mean least-squares
residual at the arcs is 0.001 m and –0.001 mm/y for the DEM error and
linear displacement rate, respectively. The standard deviation is 0.09 m and
0.11 mm/y, and the maximum absolute error after the hypothesis testing is
1.35 m and 1.78 mm/y. After integration of the parameters, the estimated
DEM error at the (remaining 1048) points is between –39.14 and 44.55 m
and between –3.48 and 4.34 mm/y for the displacement rate. The estimated
parameters are relative with respect to the selected reference point. In this
case, the reference point is selected at the center of the image. The mean
intensity of the selected point is 9.2 dB and the amplitude dispersion index
is Da = 0.12, i.e., the random noise component of the reference point seems
to be small. Note that the least-squares residuals at the arcs are not all
exactly equal to zero, which is expected from the theory, and confirmed
by the simulation experiments. This could be caused by a small number of
points that are partially incoherent, e.g., points that are not visible during
all acquisitions. The residual phase at such points during these acquisitions
could be around π during the ILS estimation of the parameters at the arcs.
This could have lead to small misclosures, although it is unclear why. Another
cause could be rounding errors or numerical instability during the adjustment
of this relatively large system of equations (designmatrix is ∼6000 ×1000,
computations are performed in IDL, using eight and four byte floating point
arithmetic). However, it is considered unlikely that this can cause residuals
in the order of one meter, particularly because no instability is reported by
the software. By continuation of the alternative hypothesis tests it could be
forced that the least-squares residuals become zero at all arcs of the reference
network. In the extreme case, the iterations continue until a network results
as sketched in Fig. 4.2(b) remains. However, no obvious outlier arcs could
be detected anymore, and it is not expected that doing so would significantly
affect the final parameter solution using the unwrapped phase. The parameters
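For reference, the integration itself amounts to a linear least-squares problem on the incidence matrix of the network, with the reference point fixed. A minimal sketch (one parameter per call, illustrative names; a dense solver is used here, while a sparse solver would be preferable for the ∼6000×1000 system mentioned above):

```python
import numpy as np

def integrate_network(n_points, arcs, dx_arcs, ref=0):
    """Least-squares integration of arc-wise parameter differences (e.g.
    DEM error) to per-point values, relative to reference point `ref`.
    arcs: (i, j) index pairs; dx_arcs: estimated differences x_j - x_i."""
    dx = np.asarray(dx_arcs, dtype=float)
    A = np.zeros((len(arcs), n_points))
    for k, (i, j) in enumerate(arcs):
        A[k, i], A[k, j] = -1.0, 1.0
    A_red = np.delete(A, ref, axis=1)        # fix the reference point to 0
    x_red = np.linalg.lstsq(A_red, dx, rcond=None)[0]
    x = np.insert(x_red, ref, 0.0)           # re-insert the reference point
    misclosures = dx - A @ x                 # least-squares residuals per arc
    return x, misclosures
```

The returned misclosures correspond to the least-squares residuals at the arcs discussed above.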
The parameters at the 77731 selected points that are not part of the reference network are now
estimated with respect to the nearest point of the reference network using the
ILS estimator. The standard deviation for the pseudo-observation to regularize
the ILS estimator is set to 25 m for the DEM error and 10 mm/y for the
displacement rate, i.e., the same values as those used during the estimation
at the arcs of the reference network. The next step of the STUN algorithm
is the phase unwrapping at selected (reliably estimated) points. These points
are selected based on the estimated variance factor, cf. Eq. (4.26). In this
case, a threshold on the estimated variance factor of 1.0 selects points with
a variance below 0.080 m² and 0.030 mm²/y² for the relatively estimated
DEM error and the linear displacement rate, respectively, see also Eq. (6.2).
If a threshold of 2.0 is used, these variances would be multiplied by a factor
two. Fig. 6.6 shows the estimated displacement rates at the selected points
using different thresholds for the a posteriori variance factor. The parameters
are estimated using the wrapped data. An uplift area to the west of Berlin
can be clearly identified (close to the Olympic Stadium). This uplift was not
anticipated since the Berlin test site was not expected to undergo significant
displacements. Most likely this uplift is related to underground gas storage.
This area was in the news on April 26th, 2004, after a gas explosion occurred
(Berliner Zeitung, 2004). The reservoir is located under a densely populated
area (particularly Berlin-Charlottenburg and Berlin-Spandau). The reservoir
has been in use since 1992 and can provide the city of Berlin with gas for one year.
Therefore, it is expected that the linear displacement model does not fully
describe the actual displacements in that area. Some of the more localized
subsidence points visible in Fig. 6.6 are likely to be incorrectly estimated.
Fig. 6.6: Estimated linear displacement rates for the Berlin test site for different
thresholds on the a posteriori variance factor. In total 78779 points are computed.
Red corresponds to 5 mm/y subsidence and blue to 5 mm/y uplift. (a) shows 11078
points with an a posteriori variance factor below 1.0 (estimates between –4.2 and
4.9 mm/y). (b) shows 28724 points below 2.0 (estimates between –18.2 and 6.6),
Fig. 6.6: (cont.) Fig. (c) 42802 points below 3.0 (estimates between –18.2 and 8.9),
and (d) 54384 points below 4.0 (estimates between –95.6 and 93.1). The location
of the reference point is marked by the black asterisk. The (blue) uplift area west
of Berlin is the most striking displacement feature. Note that for a larger threshold
the amount of incorrectly estimated points increases.
However, the estimates at most of the points plotted in Fig. 6.6(a) and
Fig. 6.6(b) are likely to be correct.
Fig. 6.7: Residual phase for first three interferograms (sorted numerically according
to orbit number). The spatial correlation suggests that atmospheric signal is present
in the residual phase. The location of the reference point is marked by the black
asterisk. A cyclic colorbar is used (one color cycle corresponds to a 2π phase
difference). (a) corresponds to the interferogram with slave orbit 10330, acquired on
July 7th, 1993, B⊥ = 890 m, (b) to orbit 10540, acquired on April 26th, 1997,
B⊥ = 450 m, and (c) to orbit 10831, acquired on August 11th, 1993, B⊥ = 85 m.
The unwrapped phase at the selected points is obtained using the MCF
sparse grid unwrapping algorithm. This phase is used to estimate the DEM
errors and the linear displacement rates. The parameter solution is visually
identical to the solution using wrapped data, see Fig. 6.6. The a posteriori
variance factors, estimated using the unwrapped data, are shown in Fig. 6.8.
These are multiplication factors for the vc-matrix given in Eq. (6.2), i.e., a
factor of five indicates a variance of about 0.40 m² and 0.15 mm²/y² for
the estimated DEM error and linear displacement rate, respectively. The
precision is described with respect to the reference point. In general, the
further away from the reference point, the worse the estimated precision of
the double-difference observations and of the estimated displacement rate
difference. The a posteriori variance factors are relatively small, i.e., the atmospheric
variation is likely to be small for the Berlin test site. However, the estimated
precision suddenly increases in the uplift area. This supports the theory that
the functional model does not fully describe the displacement in this area.
Fig. 6.8: A posteriori variance factors for the Berlin test site estimated using
unwrapped data. (blue: 0, red: ≥8).
sensitive the STUN algorithm is for this setting. This effect is confirmed
by the experiments described in section 3.4 and is explained in more detail
below.
• Number of estimated (displacement) parameters.
The larger the number of estimated parameters, the smaller the degree of
freedom, and the better the fit with the observed data. However, if the
problem is over-parameterized, the probability increases that parameters
are fitted that describe the wrapped data better, but have no relation
with the actual displacement. The more acquisitions are available the
more complex the model for the displacement can be. The vc-matrix
and corresponding correlation matrix can be used to assess whether it
is feasible to estimate individual parameters. Note that this analysis can
be performed without using actual data. Sensitivity to the estimated
parameters is studied in detail using the Las Vegas data set, see section 6.2.
The larger the distance between the points in the reference network, the
more likely it is that the parameters are incorrectly estimated using the
wrapped data, due to increased atmospheric difference signal. Moreover, if
the redundancy of the network decreases, it is more difficult to identify outlier
arcs and points. Fig. 6.9 shows the much sparser reference network used in
this section. The cell width used during the sparsification to select reference
points is set to 1500 m, resulting in the selection of 154 points. The network
is constructed using Delaunay triangulation. The mean distance between the
points is 2107 m and the standard deviation is 726 m. The number of arcs
is 435, i.e., there are ∼2.8 connections per point. Since the distance between
points in the reference network is increased on purpose, the stochastic model
previously estimated using shorter distances could be used here. However,
the stochastic model is estimated using the 154 selected points to simulate a
situation in which the point density cannot be increased, for example in rural
environments. The variance components are estimated using 77 independent
estimations, cf. Eq. (4.8). The average distance between points is 1623 m
with a standard deviation of 933 m, i.e., the arcs are approximately three
times longer than during the reference processing. The estimated variance
components are shown in Fig. 6.10. Compared to the variance components
estimated previously, see Fig. 6.5, these components seem to be slightly
larger. This is particularly noticeable for some of the ERS–2 acquisitions.
It is likely that the atmospheric conditions during these acquisitions were
more severe than during the others, for which the estimated components remain
practically unchanged. However, the parameters seem to be estimated correctly
using the wrapped phase differences between points over these distances for
the Berlin area. The vc-matrix of the
estimated parameters is propagated as
Fig. 6.9: Reference network using large distances and small redundancy for the
Berlin test site (network created using Delaunay triangulation). The redundancy of
this network is much smaller than that used for the reference processing, shown in
Fig. 6.4(b).
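Such a network can be sketched as follows: a grid-based sparsification (one point per cell; here simply the first candidate encountered, whereas in practice, e.g., the point with the lowest amplitude dispersion would be kept) followed by Delaunay triangulation. A minimal sketch with illustrative names:

```python
import numpy as np
from scipy.spatial import Delaunay

def reference_network(xy, cell=1500.0):
    """Select roughly one reference point per square grid cell and connect
    the selected points by Delaunay triangulation.
    xy: N x 2 array of point coordinates [m].  Returns indices and arcs."""
    cells = {}
    for idx, (x, y) in enumerate(xy):
        cells.setdefault((int(x // cell), int(y // cell)), idx)
    sel = np.array(sorted(cells.values()))
    tri = Delaunay(xy[sel])
    arcs = set()
    for s in tri.simplices:                   # the three edges per triangle
        for a, b in ((0, 1), (1, 2), (0, 2)):
            i, j = sorted((s[a], s[b]))
            arcs.add((int(sel[i]), int(sel[j])))
    return sel, sorted(arcs)
```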
Fig. 6.10: Square roots of the estimated variance components for the Berlin test
site using a reference network with large distances between points, as function of
perpendicular, temporal, and Doppler baseline. A red asterisk corresponds to an
ERS–1, and a blue diamond to an ERS–2 acquisition.
\[
Q_{\hat{b}} = \begin{bmatrix} 0.067 & -0.003 \\ -0.003 & 0.027 \end{bmatrix}. \qquad (6.3)
\]
Compared to the vc-matrix using the shorter arcs, see Eq. (6.2), the propa-
gated precision using these variance components is slightly better, although
the atmospheric (double-difference) signal is expected to be larger. The reason
may be that the atmospheric signal is limited for the Berlin test site or that
the (fewer) selected points contain less noise.
Using the same pre-processing procedure as during the reference
processing scenario, 24 arcs and three points are removed from the reference
network. During the alternative hypothesis tests one more point is removed.
The standard deviation of the least-squares residuals at the arcs (misclosures)
is 0.04 m and 0.04 mm/y for the DEM error and displacement rate, respec-
tively. For the selection of the points, an a posteriori variance factor threshold
of σ̂²ₓ < 3.33 is used. This threshold is chosen such that selected points have the
same threshold on the variance of the displacement rate, see the vc-matrices
in Eq. (6.2) and Eq. (6.3). Using this threshold, 43169 points are selected, i.e.,
approximately the same number as during the reference processing. Fig. 6.11
shows the estimated linear displacement rates using this reference network.
No difference with the estimation using a denser reference network can be
observed. The estimated precision is practically identical to that estimated
during the reference processing described in section 6.1.2, particularly Fig. 6.8.
The reason is that the unwrapped phase is identical in both cases. Apparently,
the atmospheric signal for the Berlin area is small.
For comparison, the estimation at the arcs of the reference network is also
performed using the a priori stochastic model, and the ensemble coherence
estimator used in the reference PS technique, see Eq. (2.7). The theoretical
success rate for the bootstrap estimator, cf. Eq. (3.20), is P(ẑ = z)=0.907
using the a priori stochastic model, i.e., somewhat smaller than the success
rate obtained using the estimated variance components. However, the meaning
of the computed success rate is limited in this case since it depends on the
precision of the observations, which is described using the a priori model.
The mean of the estimated DEM error is –0.10 m and that of the displacement
rate is 0.03 mm/y using this estimator. The average a posteriori variance
factor at the arcs is 0.96, which suggests that the precision of the observations
is described well by the a priori stochastic model. Finally, the DEM error,
displacement rate, and average interferometric residual phase (termed
master atmosphere in Ferretti et al., 2001) are estimated using the ensemble
coherence estimator used in the reference PS technique. The search space is
bounded to [–50,50] m for the DEM error and [–50,50] mm/y for the linear
displacement rate. Using the ensemble coherence estimator, the means of the
estimated parameters are –0.29 m, –0.08 mm/y, and 0.96° for the DEM error,
Fig. 6.11: Estimated displacement rates for the Berlin test site using a sparse
reference network with ∼3 arcs per point and an average distance of ∼1600 m between
points (red: 5 mm/y subsidence, blue: 5 mm/y uplift).
the displacement rate, and the bias, respectively. The average coherence is 0.80
with a standard deviation of 0.09. The average of the coherence corresponding
to the second best fitting set of parameters is 0.39, i.e., it is likely that at most
arcs the correct parameters are estimated.
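The ensemble coherence estimator referred to here can be sketched as a grid search; the grids, wavelength handling, and names below are illustrative, the phase model is reduced to DEM error and linear rate, and the average residual phase term is omitted:

```python
import numpy as np

def ensemble_coherence_search(phase, beta, tyears,
                              dh_grid=np.linspace(-50, 50, 201),
                              alpha_grid=np.linspace(-50, 50, 201)):
    """Maximize |1/K sum_k exp(j(phi_k - phi_model,k))| over DEM error [m]
    and linear rate [mm/y].  phase: K wrapped phases [rad]; beta: K
    height-to-phase conversion factors [rad/m]; tyears: temporal baselines."""
    lam = 56.6e-3                             # ERS wavelength [m]
    rate2phase = -4.0 * np.pi / lam * 1e-3    # phase [rad] per mm of LOS motion
    best = (0.0, 0.0, 0.0)                    # (coherence, dh, alpha)
    for dh in dh_grid:
        for alpha in alpha_grid:
            model = beta * dh + rate2phase * alpha * tyears
            gamma = np.abs(np.mean(np.exp(1j * (phase - model))))
            if gamma > best[0]:
                best = (gamma, dh, alpha)
    return best
```

The separation between the best and the second-best maximum (0.80 versus 0.39 above) is what makes the estimates at most arcs trustworthy.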
The estimated DEM errors and displacement rates at the arcs of the
reference network using the different methods are plotted in Fig. 6.12. The
histograms of the estimates using the different methods are visually identical
and are not shown. However, there are clear differences between the estimated
parameters using the different methods. In general these differences are
larger if the data do not agree with the mathematical model. The estimated
parameters often get unrealistically large (e.g., 100 m DEM error difference
between nearby points) when the precision of the observations is low. Although
the estimated parameters do not differ very much at most arcs using this
configuration of acquisitions and points of the reference network, an additional
Fig. 6.12: Sensitivity of the estimated parameters to the stochastic model. The top
row shows the estimated DEM errors (left) and linear displacement rates (right)
at the arcs of the reference network using the ILS estimator and the a posteriori
stochastic model. The a posteriori variance factors σ̂²ₓ, cf. Eq. (4.26), are plotted
at the bottom for each arc. The second row shows the differences between these
estimated parameters and estimates obtained using the a priori model (see footnote
at page 55). The third row shows the differences with parameters estimated using
the unweighted ensemble coherence estimator.
During the alternative hypothesis testing, points and arcs are removed based
on the computed test statistics using an iterative procedure. This testing
procedure is described in detail in section 4.4.3 and Appendix B. The choice
of the level of significance α and the power γ0 of (all) the tests influences
when the null-hypothesis is rejected, and, if so, which alternative hypothesis
is selected as most likely cause for the rejection. A higher level of significance
implies that the false alarm rate increases, i.e., that more often points and arcs
are removed that do not need to be removed. Here, the level of significance
for the one-dimensional test is fixed to α1 = 5%. The effect of changing the
power is studied using the network and data of the reference processing.
The power of the test is gradually increased from 20% to 90%. The results are
reported in Table 6.2. Initially, the reference network consists of 1066 points
Table 6.2: Experiments with testing parameters. For different values of the power
γ the level of significance for the OMT test, the non-centrality parameter λ0 , the
number of iterations before acceptance of the OMT test, and the total number
of removed points and arcs are reported. The level of significance for the one-
dimensional test α1 = 5% is fixed during all experiments.
and 6650 arcs. As expected, more points are removed if the power increases,
because the higher dimensional alternative hypothesis is more likely to be
accepted. The number of iterations also decreases for increasing power. This
is likely caused by the fact that if a point is removed, all incorrectly estimated
arcs connected to it are removed as well. However, the differences are marginal, which may
be caused by the fact that most points in the reference network are coherent,
and most estimations at the arcs are correct. A large number of arcs are
clear outliers, which are identified using all settings for the test parameters.
Moreover, the errors at the arcs are not expected to have a normal distribution.
Nonetheless, the procedure that is followed offers a way to automatically
remove the inconsistencies in the network.
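The interplay between α1, γ0, and the non-centrality parameter λ0 in this procedure (the B-method of testing, see Appendix B) can be reproduced numerically; a small sketch using scipy, assuming the one-dimensional test anchors the non-centrality:

```python
from scipy.stats import chi2, ncx2
from scipy.optimize import brentq

def noncentrality(alpha1, gamma0):
    """lambda0 such that the 1-dof chi-square test with level alpha1
    reaches power gamma0."""
    k = chi2.ppf(1.0 - alpha1, df=1)              # critical value, 1 dof
    return brentq(lambda lam: 1.0 - ncx2.cdf(k, 1, lam) - gamma0,
                  1e-6, 100.0)

def level_for_dim(q, lam0, gamma0):
    """Level of significance alpha_q of the q-dimensional test that has
    the same power gamma0 at the same non-centrality lambda0."""
    f = lambda a: 1.0 - ncx2.cdf(chi2.ppf(1.0 - a, df=q), q, lam0) - gamma0
    return brentq(f, 1e-12, 0.999)

lam0 = noncentrality(0.05, 0.90)                  # alpha1 = 5%, gamma0 = 90%
```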
During the reference processing, a power γ0 = 90% and a level of significance
α1 = 5% are used. During a pre-processing step, 17 points and 257 arcs are
removed. After this, another point and 16 arcs (in total) are removed by
the alternative hypothesis testing procedure, see also section 6.1.2. This pre-processing
step is not performed in the experiments reported in Table 6.2. Apparently,
more points are removed if this pre-processing step is used. However, it is advised
to use some kind of pre-processing step in order to guarantee the removal of all
clearly incoherent points, and to speed up the testing. The alternative hypothesis testing
procedure takes approximately one hour for 80 iterations. The computation
of the vc-matrix of the least-squares residuals and the point tests takes the
longest.
Fig. 6.13: Signal aliasing. Shown are three phase signals φ = αi t that have (some)
coinciding wrapped phase values at the sample points. This figure demonstrates
that the (unwrapped) signal can only be recovered if at least |αi |/π regularly spaced
samples are available.
∼14 mm/y. Moreover, the nominal orbit repeat cycle of the ERS satellites is
35 days, which implies that aliasing occurs for a displacement rate (difference)
larger than
\[
\alpha^{\max}_{\mathrm{ERS}} = \frac{\lambda}{4} \cdot \frac{1}{35\ \mathrm{d}} \approx 150\ \mathrm{mm/y}. \qquad (6.7)
\]
The phase induced by a DEM error is a linear function of the baseline. A phase
cycle is induced by a DEM error equal to the height of ambiguity (Hanssen,
2001), which follows from Eq. (2.12) by substitution of φ_topo = 2π as
\[
H_{\mathrm{amb}} = \frac{\lambda\, r \sin(\theta)}{-2\, B_\perp}. \qquad (6.8)
\]
For example, for K = 10 and typical ERS parameters (λ = 56.6 mm, r = 850 km,
θ = 21°, ΔB⊥ = 2000 m), the maximum DEM error between points that can be
estimated is Δh^max = 21.6 m. Since both the linear displacement rate and the
DEM error are estimated using the wrapped phase data, the minimum required
number of interferograms is the sum K_α^min + K_Δh^min of the minima in each
dimension. For example, this is the case if K_α^min temporally equally spaced
samples with a zero perpendicular baseline and K_Δh^min spatially equally spaced
samples with a zero temporal baseline are available. In this special case, the two
frequencies can be estimated independently. Due to the irregular sampling, the
sampling distance for certain interferograms is smaller than the average. This, and
the fact that only a single frequency is estimated in each dimension, allows for the
estimation of DEM errors and linear displacement rates above the Nyquist frequency
(i.e., the frequency corresponding to the Nyquist sampling rate). The effect
of irregular sampling has another advantage, namely that aliasing does not
occur at a single frequency, but the power is spread out over the estimated
spectrum.
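Both bounds can be checked numerically for the parameters used above (interpreting ΔB⊥/K as the baseline sampling interval of the K = 10 example):

```python
import numpy as np

lam   = 56.6e-3           # ERS wavelength [m]
r     = 850e3             # slant range [m]
theta = np.deg2rad(21.0)  # look angle
dB    = 2000.0 / 10       # baseline sampling interval for K = 10 [m]

# Eq. (6.7): maximum rate before temporal aliasing at a 35 day repeat cycle.
alpha_max = (lam / 4) / (35.0 / 365.25)           # [m/y]
print(f"alpha_max = {alpha_max * 1e3:.0f} mm/y")  # ~148, i.e., ~150 mm/y

# Eq. (6.8): height of ambiguity at the baseline sampling interval; the
# maximum recoverable DEM error is half of it (Nyquist).
H_amb = lam * r * np.sin(theta) / (2.0 * dB)
print(f"dh_max = {H_amb / 2.0:.1f} m")            # ~21.6 m
```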
Noise is ignored in this derivation. However, from the numerical simulation
described in section 3.4, it is clear that it can have a large impact on finding
the correct ambiguities. If the solution space is searched above the Nyquist
frequency, aliasing occurs in case of regular sampling, and the correct DEM
error or displacement rate cannot be distinguished from the aliased solution.
In case of irregular sampling, aliasing occurs to a lesser extent, but certain
solutions aside from the true solution are still likely to have a higher amplitude
in the spectrum. Due to observation noise, the incorrect solution may actually
have a higher amplitude than the true solution, which implies that the search
bound on the solution space should be chosen appropriately low if the number
of interferograms is small.
To demonstrate this, an estimation is performed using K = 10 interfero-
grams; the solution space is bounded using a standard deviation σΔh = 20 m
for the DEM error and σα = 10 mm/y for the linear displacement rate. These
“soft” bounds imply that most parameters are expected to be within ±40 m
and ±20 mm/y (two-sigma level). For regularly sampled data, the required
number of interferograms would be (using ERS parameters and ΔT = 5 y,
ΔB⊥ = 1000 m)
i.e., signal aliasing is expected to occur. The ten interferograms used during
this estimation are randomly selected from all the images used during the
reference processing (see section 6.1.2) for which |B⊥| < 500 m and |T| < 2.5 y.
The selected data set has ΔB⊥ = 795 m and ΔT = 3.1 y. The same reference
network is used as during the reference processing. The points in the reference
network are thus not selected using the amplitude dispersion index estimated
using the reduced data set. The variance components of the stochastic model
are estimated as described during the reference processing. The variance
components have similar values. The vc-matrix of the estimated parameters
using the estimated stochastic model is given as
\[
Q_{\hat{b}} = \begin{bmatrix} 1.016 & 0.471 \\ 0.471 & 1.035 \end{bmatrix}, \qquad (6.11)
\]
and the corresponding correlation matrix as
\[
\rho = \begin{bmatrix} 1.000 & 0.460 \\ 0.460 & 1.000 \end{bmatrix}. \qquad (6.12)
\]
The correlation between the estimated DEM error and linear displacement
rate increased compared to previous estimations using more data. However,
the correlation is still reasonably small. The theoretical success rate using
the bootstrap estimator, cf. Eq. (3.20), is P (ẑ = z)= 0.958 (and 0.84 using the
a priori stochastic model). However, aliasing effects are not taken into account
by this estimate for the success rate.
The DEM error and displacement rate are estimated at the 6650 arcs
of the reference network, using the integer least-squares estimator and the
estimated stochastic model. After this estimation, points and arcs that clearly
are incorrect are removed in a pre-processing step. This procedure is described
in section 6.1.2 (network integration). In total, one point and 178 arcs are
removed from the reference network. During the alternative hypothesis testing
step, the same testing parameters are used as during the reference processing.
Additionally, eighteen points and 395 arcs are removed. For this scenario, a
total of nineteen points and 573 arcs are removed, while during the reference
processing eighteen points and 62 arcs are removed. Thus, the network using
K = 10 is less consistent than the network using K = 50 interferograms, which
is to be expected. However, an internally consistent reference network could
be established by removing the identified arcs and points. The standard
deviations of the misclosures at the arcs of the reference network are 0.28 m
and 0.40 mm/y for the DEM error and displacement rate, respectively. The
estimated displacement rates at the points of the reference network are plotted
in Fig. 6.14(a). Some points of the reference network seem to be estimated
incorrectly (large values), but apparently consistently. This demonstrates that
a small closing error does not necessarily imply a correctly estimated parameter,
which is due to the fact that the arcs are not independently observed.
Fig. 6.14(b) shows the estimated displacement rates at 28462 points with
an a posteriori variance factor below one. The estimated variance factors are
significantly smaller for the estimation using only ten interferograms. The
reason for this is the reduced redundancy, i.e., the least-squares residuals are
expected to be smaller, and thus the a posteriori variance factor. The variation
of the estimated displacement rates is much larger in this case, compare, e.g.,
with Fig. 6.6. This larger variation is likely caused by unmodelled atmospheric
signal. Note that the uplift area cannot be reliably detected in this result.
6.1.4 Cross-comparison with an adjacent track

For the Berlin test site, data of the ERS–1 and ERS–2 satellites are available
for two descending tracks (i.e., adjacent tracks, with approximately 40 km
overlap at this latitude). The difference in viewing angle between these two
ERS tracks is approximately 3◦ . Due to this difference in viewing angle,
also the ground-range pixel spacing is different. For the considered area this
is approximately 9.64 m vs. 10.69 m for the first stack towards the East
(points at larger slant range) and West, respectively. A joint processing of
all data with respect to a single master image is not attempted, because the
height ambiguity would be extremely small for interferograms with such large
baselines. Aside from this, such an approach would severely limit the amount
Fig. 6.14: Estimated line-of-sight displacement rates for the Berlin test site using
ten interferograms. (a) shows the estimates at the 1047 points of the reference
network in the range from –5 to 5 mm/y (red to blue). (b) shows 28462 selected
points with an estimated a posteriori variance factor smaller than 1.0.
of points that can be estimated (only targets with an extremely wide opening
angle are expected to be coherent in both stacks). Colesanti et al. (2002)
estimated that less than 30% of PS points that are visible in data of one track
are also observed in data of the other track. Therefore, each stack is processed
on its own master image. However, it is assumed that there is a common set of
PS points located at the same objects in both stacks. The overall displacement
pattern is assumed to be spatially smooth, thus allowing a cross-comparison
of the estimated displacements at different PS points in both stacks. The
estimated DEM error cannot be compared, because the PS points are not
expected to be the same in both stacks. The processed area for both tracks
is shown in Fig. 6.15. It is not identically cropped in both stacks. The main
Fig. 6.15: Processed area for the Berlin test site for the two adjacent tracks. Shown
is the mean amplitude of all available data for each track. The data is coarsely geo-
referenced by mirroring in azimuth and range direction. The stack in the West (left),
which only partially covers the city of Berlin, corresponds to track 165 (∼28 km wide
by 29 km high). The stack in the East (right) corresponds to track 437 (∼26 km
wide by 24 km high).
reason for this is that the city of Berlin is not fully covered by the second stack
(the scene ends at the right side of the crop shown in Fig. 6.15). Moreover,
the first stack was processed before the second stack became available, and
the master image and processed area were selected without considering the
data of the second stack.
Scene selection
Fig. 6.16 shows all available data for both tracks. The first stack, track 437,
contains 70 scenes and is processed on a master acquired on March 22nd, 1997
(ERS–2). The second stack, track 165, contains 43 scenes, and is processed
relative to a master acquired on October 22nd, 1998 (ERS–2). Data are
acquired at approximately 10:03 and 10:06 UTC for the first and second track,
respectively, i.e., around 11:00 local time. For the cross-comparison of the
estimated displacement in the two data stacks, data are selected that had the
largest possible temporal overlap, see also Table 6.3. Data before December
23rd, 1995, are thus not used during this cross-validation, because they
are not available for track 165. Data after December, 1999, are not considered
for this cross-validation due to the instability of the ERS–2 platform after
this date, and resulting large variation of the Doppler centroid frequency.
The Doppler centroid frequency of the selected acquisitions of track 437 is
shown in Fig. 6.17. For ERS–1 acquisitions, the Doppler centroid frequency
Fig. 6.16: Data selection for the Berlin test site. Data for two adjacent tracks are
available; track 437 (plus marks) and track 165 (diamonds). Only data acquired
between December 23rd, 1995, and February 5th, 1999, are selected to make
the two processed stacks more comparable (dashed lines).
is approximately 400 Hz, while for ERS–2 it is approximately 200 Hz. Due
to this difference, a difference of one meter of the azimuth sub-pixel position
between two PS points induces an interferometric phase of approximately
10◦ , see Fig. 2.6 at page 22. The azimuth resolution of ERS is ∼4 m, which
implies that this effect should be included in the functional model. However,
the precision of an estimated azimuth sub-pixel position is low, because the
variation in Doppler centroid frequency is small. Assuming a priori variance
components (see section 4.3), the vc-matrix of the DEM error, the linear
displacement rate, and the azimuth sub-pixel position can be computed in
advance. For these 41 images it is given, cf. Eq. (3.18), as
\[
Q_{\hat{b}} = \begin{bmatrix} 0.127 & -0.007 & 0.033 \\ -0.007 & 0.167 & -0.110 \\ 0.033 & -0.110 & 1.648 \end{bmatrix}, \qquad (6.13)
\]
and the corresponding correlation matrix is
\[
\rho = \begin{bmatrix} 1.000 & -0.047 & 0.072 \\ -0.047 & 1.000 & -0.209 \\ 0.072 & -0.209 & 1.000 \end{bmatrix}. \qquad (6.14)
\]
The units are m, mm/y, and m for the DEM error, linear displacement rate,
and azimuth position, respectively. The standard deviation of the estimated
azimuth sub-pixel position thus would be ∼1.3 m (assuming the a priori
variance components correctly describe the precision of the data), which is
rather large compared to the azimuth resolution of ERS. In addition, as
can be inferred from Fig. 6.17, the correlation between possible unmodeled
displacement in 1996 and estimated azimuth sub-pixel position is large if these
data would be used, since almost all ERS–1 images are acquired in that year.
Therefore, only ERS–2 acquisitions are used during the comparison of the
two stacks, and only a DEM error and linear displacement rate are estimated.
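Eq. (6.14) follows from Eq. (6.13) by normalizing with the standard deviations of the estimates; the same computation yields the quoted ∼1.3 m for the azimuth sub-pixel position:

```python
import numpy as np

# vc-matrix of (DEM error [m], rate [mm/y], azimuth position [m]), Eq. (6.13)
Q = np.array([[ 0.127, -0.007,  0.033],
              [-0.007,  0.167, -0.110],
              [ 0.033, -0.110,  1.648]])

sigma = np.sqrt(np.diag(Q))            # standard deviations of the estimates
rho = Q / np.outer(sigma, sigma)       # correlation matrix, cf. Eq. (6.14)
print(np.round(sigma, 2))              # azimuth std ~1.28 m, i.e., ~1.3 m
print(np.round(rho, 3))
```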
Fig. 6.17: Doppler centroid frequency for selected data of the first stack of the
Berlin test site. The ERS–1 images have a larger Doppler centroid frequency than
the ERS–2 images.
Table 6.3 gives an overview of the finally selected data for the comparison
based on the temporal and the Doppler centroid frequency constraint.
Table 6.3: Berlin test site data used during cross-comparison of adjacent tracks.
Listed are the acquisition dates of the master, the first and the last scene. In
Appendix C a full listing of all available acquisitions is given.
The city of Berlin is in the center of the processed area, which is approximately
26 km wide by 24 km high, see also Fig. 6.16 and Fig. 6.15. Using 33 ERS–2
acquisitions, the DEM error and the linear displacement rate are estimated
at the selected points. The estimated vc-matrix is
\[
Q_{\hat{b}} = \begin{bmatrix} 0.101 & -0.002 \\ -0.002 & 0.152 \end{bmatrix}. \qquad (6.15)
\]
The parameters are estimated in a similar manner as is done for the reference
processing, described in section 6.1.2. The main difference is that during this
estimation fewer data from a shorter time span are used. Finally, 51269 of 78779
estimated points are selected as reliable points using a threshold σ̂²ₓ < 3.0
on the a posteriori variance factor (estimated using the wrapped data). The
estimated linear displacement rates at these points are shown in Fig. 6.18(b).
Track 165 is the westernmost track, only partially covering the city of Berlin,
see also Fig. 6.15. The processed area is approximately 20 km wide by 20 km
high. For this track, 29 ERS–2 acquisitions are selected, see also Table 6.3. The
processing is performed using similar parameters as for the first stack. The
number of points in the reference network is 968. The number of arcs per
point is 6.2. The average distance between points in the reference network
is 1016 m, with a standard deviation of 382 m. These numbers are very
similar for both stacks, even though this crop contains more rural area west of
the city. After estimation of the variance components, orbit 8307 is removed
from the data set because it clearly is less precise than the other images.
The perpendicular baseline for the interferogram with this slave image is
∼1200 m. Likely a coregistration problem occurred, or not all points are ideal
point scatterers, causing geometrical decorrelation for this interferogram.
The estimated precision is very similar for this stack and the first stack, see
Eq. (6.15). The larger correlation between the estimated parameters is caused
by the different distribution of the acquisitions in time and space. In particular,
the perpendicular baselines of the interferograms of the first stack are somewhat
larger. However, the correlation coefficient is 0.17, which is small. The points
are selected using the same threshold on the a posteriori variance factor. In
total, 41620 of 65137 estimated points are selected. The finally estimated
displacement rates using the unwrapped data are shown for both tracks in
Fig. 6.18(a). The reference points are chosen near to the Tempelhof airport
for both tracks.
The linear displacement rates at the PS points are estimated from fully
independent data, and can be cross-validated using the assumption that the
displacement is spatially correlated. The only variable that is not independent
is the DEM that is used during the differential interferometric processing.
However, it is not expected that the DEM affects the estimated displacement
rates, because a DEM error is also estimated. Moreover, the correlation
coefficient between these two estimated parameters is small. The estimated
points are geo-referenced to enable a comparison in the same reference frame.
Furthermore, vertical displacement is assumed, i.e., the estimated line-of-sight
displacements are mapped to the local vertical direction using the incidence
angle as
\[
\hat{\alpha}_{x,\mathrm{VERT}} = \frac{\hat{\alpha}_{x}}{\cos\theta_{x,\mathrm{inc}}}. \qquad (6.17)
\]
The impact of this mapping is negligible, because the difference in look angle is
only a few degrees, but it is performed nonetheless. For the comparison ∼2000
selected points around the uplift area are used for which the a posteriori
variance factor is σ̂ 2x < 3.0, see Fig. 6.19. Each estimated point of track
437 is compared to the closest point estimated in track 165. The averages
of the estimated displacement fields of both tracks are set to zero in the
overlapping area where no displacement is expected, to avoid the influence
of different displacement rates of the reference points used during the
independent estimations. A histogram of the difference between the estimates
of both tracks is plotted in Fig. 6.20(a). The variance of the difference is
∼1 mm²/y². Assuming equal variance for all points in both tracks, and
neglecting individual points that may have a deviating displacement behavior,
the variance of the estimated displacement rates is ∼0.5 mm²/y². This is in
reasonable agreement with the expected variance for this area, see Eq. (6.15)
and Eq. (6.16), the vc-matrix of the first and second track, respectively, which
have to be scaled with the estimated a posteriori variance factors for each point
(which is between two and five for most points).
Fig. 6.19: Estimated displacement rates for the city of Berlin using data from two
adjacent tracks. Estimates are converted to vertical displacement and plotted in the
range between –5 and 5 mm/y in UTM projection. The plotted area is ∼6 × 3 km2 .
A plus mark is used to plot 2187 selected estimates of track 437 and an ×-mark to
plot 2071 estimates of track 165.
Fig. 6.20: Histograms of estimated displacement rates using data from two adjacent
tracks. (a) shows the histogram of the difference between the estimated vertical
displacement rates. (b) shows the difference confronted with the estimated precision,
cf. Eq. (6.18). The dashed line shows the standard normal distribution.
To confront the estimated values with the estimated precision (i.e., to assess
the quality of the estimated precision), the following statistic is computed
for each difference
\[
w_x = \frac{\hat{\alpha}^{I}_{x,\mathrm{VERT}} - \hat{\alpha}^{II}_{x,\mathrm{VERT}}}{\sqrt{\hat{\sigma}^{2}_{x_{I}} + \hat{\sigma}^{2}_{x_{II}}}}, \qquad (6.18)
\]
where $\hat{\alpha}^{I}_{x,\mathrm{VERT}}$ is the estimated vertical displacement rate using data of track
437, and $\hat{\alpha}^{II}_{x,\mathrm{VERT}}$ that of the closest point in the second stack. The variance
$\hat{\sigma}^{2}_{x_{i}}$ of the estimated vertical displacement rate is given as
\[
\hat{\sigma}^{2}_{x_{i}} = \frac{1}{\cos\theta^{i}_{x,\mathrm{inc}}} \cdot \hat{\sigma}^{2}_{x} \cdot \hat{\sigma}^{2}_{\alpha}. \qquad (6.19)
\]
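A sketch of this cross-comparison: nearest-neighbour pairing, LOS-to-vertical mapping per Eq. (6.17), and the test value of Eq. (6.18). Straightforward variance propagation of Eq. (6.17) (a 1/cos² factor) is used here for the per-track variances, and all function and variable names are illustrative:

```python
import numpy as np
from scipy.spatial import cKDTree

def cross_compare(xy1, alpha1, var1, inc1, xy2, alpha2, var2, inc2):
    """Pair every PS point of track I with the nearest point of track II,
    map the LOS rates to vertical, cf. Eq. (6.17), and return the test
    values w_x, cf. Eq. (6.18).  xy*: N x 2 geo-referenced coordinates;
    alpha*: LOS rates; var*: their variances; inc*: incidence angles."""
    j = cKDTree(xy2).query(xy1)[1]            # index of nearest track-II point
    a1 = alpha1 / np.cos(inc1)                # LOS -> vertical, track I
    a2 = alpha2[j] / np.cos(inc2[j])          # LOS -> vertical, track II
    v1 = var1 / np.cos(inc1) ** 2             # propagated variances (my
    v2 = var2[j] / np.cos(inc2[j]) ** 2       # assumption: 1/cos^2 factor)
    return (a1 - a2) / np.sqrt(v1 + v2)       # ~ N(0,1) if precision is right
```

If the estimated precision is realistic, these values should approximately follow the standard normal distribution, cf. Fig. 6.20(b).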
6.2 Las Vegas

The second test site is Las Vegas city, located at 36°10′ northern latitude,
115°10′ western longitude. Las Vegas is one of the fastest growing metropoli-
tan areas in the United States of America. Between 1990 and 2000 the
population almost doubled. Approximately 1.4 million people live in the
metropolitan area (Las Vegas Metropolitan Statistical Area, see Evans et al., 2000). Currently,
the urbanized area is approximately 20 × 20 km². Las Vegas lies in a broad
desert valley in southern Nevada. Mountains surrounding the valley extend
to ∼3500 m above the valley floor, see also the DEM of the area shown in
Fig. 6.21. The average daily temperature is between 5◦ C in January and
30◦ C in June. The average annual precipitation varies significantly from year
to year but typically is between 5 and 20 cm (National Weather Service,
2004). The Las Vegas area undergoes large displacements dominantly linear
and locally seasonal of nature, see, e.g., (Amelung et al., 1999; Bell et al.,
2002; Hoffmann et al., 2001; Pavelko, 2003). The local subsidence is primarily
related to groundwater withdrawal. Between 1948 and 1963 the center of the
valley subsided by ∼1.0 m, by 1980 by ∼1.5 m, and it still continues to
subside (Bell et al., 2002). First, section 6.2.2 describes a standard processing
with the STUN algorithm of the Las Vegas data set, estimating DEM errors
and linear displacement rates. In the next section the number of estimated
Fig. 6.21: DEM used for the Las Vegas test site (color shaded). Projection: UTM,
zone 11, WGS84 ellipsoid. The area covered by the interferograms is indicated by
the rectangle.
Table 6.4: Las Vegas test site experiments. Listed are the number of interferograms
used (K), the estimated parameters (b̂), and the goal of the experiment. The
estimated parameters are coded as H for estimated DEM error, V for linear
displacement rate, A for average atmosphere, D for Doppler dependent azimuth
sub-pixel position, S and C for sine and cosine terms of seasonal displacement, and
R for range sub-pixel position. The size of the area for all tests is ∼23 × 20 km². The
time range is from 1992 to 2000 for experiments I–III, and up to 2004 for experiment
IV.
# K b̂ Purpose
I 45 H, V Reference processing. This estimate is compared
with the reference PS technique.
IIa 45 H, V, A Additionally estimate average atmosphere
(demonstration that estimated H and V are not
very sensitive to this parameter, but that the
residual phase is reduced considerably).
IIb 45 H, V, D Additionally estimate azimuth sub-pixel position
(demonstration that this parameter should not be
estimated for this data set).
III 45 H, V, S, C, A Additionally estimate seasonal displacement
(demonstration of using trigonometric base
functions).
IV 55 H, V, D, R Include ERS–ENVISAT cross interferograms
(demonstration of continuation of the ERS phase
time series with ENVISAT data).
Fig. 6.22: Baseline distribution for available ERS data (45 acquisitions) for the Las
Vegas area (track 356, frame 2871). Note that the (earlier acquired) ERS–1 images
have a consistently larger Doppler centroid frequency than the ERS–2 images.
centroid frequencies for this area are centered around ∼125 Hz for the ERS–2,
and around ∼400 Hz for the ERS–1 acquisitions. This implies that the
interferometric phase of point scatterers contains a term induced by azimuth
sub-pixel position. One meter position (difference) causes φ_ξ ≈ 15° phase in
interferograms with an ERS–1 slave image, see Eq. (2.17). Note that there is
only a small temporal overlap between data of ERS–1 and ERS–2, which
implies that there is a large correlation between estimated azimuth sub-
pixel position and displacement that occurred around January 1st , 1996. The
amplitude of the processed area and fourteen differential interferograms are
shown in Fig. 6.23. The NS–EW street pattern, typical for American cities, can
be clearly seen in the (average) amplitude image, as well as highway 95 (upper
left to lower right), highway 15 (center to upper right), and the mountains
surrounding Las Vegas. The Las Vegas area appears very coherent, even for
interferograms with temporal baselines of more than five years. Furthermore,
significant atmospheric signal is visible in the interferograms. Fig. 6.24 shows
the selected pixels and the constructed reference network.
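The SCR-based point selection can be sketched as follows; the ring-window clutter estimate is a simplification of the estimator of section 4.2, and `mean_power` (a temporally averaged intensity image) is a hypothetical input:

```python
import numpy as np

def scr_map(power, half=5, guard=1):
    """Rough signal-to-clutter ratio per pixel: peak power over the mean
    clutter power in a surrounding window, excluding a small guard area
    around the pixel itself.  power: 2-D temporally averaged intensity."""
    power = np.asarray(power, dtype=float)
    scr = np.zeros_like(power)
    rows, cols = power.shape
    for r in range(half, rows - half):
        for c in range(half, cols - half):
            win = power[r - half:r + half + 1, c - half:c + half + 1].copy()
            win[half - guard:half + guard + 1,
                half - guard:half + guard + 1] = np.nan   # mask signal+guard
            scr[r, c] = power[r, c] / np.nanmean(win)
    return scr

# points = np.argwhere(scr_map(mean_power) > 2.0)   # the SCR > 2 selection
```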
During this first estimation, four parameters are considered: DEM error, linear
displacement rate, azimuth sub-pixel position, and average atmosphere. The
estimation strategy is to only model the DEM error and the displacement
rate for the estimation using the wrapped data, and to additionally estimate
the other parameters after phase unwrapping. Ignoring the azimuth sub-pixel
position is not expected to have a severe impact on the estimation using
wrapped data, because the phase induced by this parameter is relatively
small. Aside from this, it is correlated with linear displacement rate, which
can be observed from Fig. 6.22; the Doppler frequency is not random in
time. Since the azimuth sub-pixel position is not included in the functional
model during the initial estimation, the initially estimated linear displacement
rates can be slightly biased. However, this bias is expected to be small and
spatially uncorrelated, because the azimuth sub-pixel position of the PS
points is assumed to have a uniform distribution. Moreover, the azimuth
sub-pixel position cannot be estimated with a high precision due to the small
variation of the Doppler frequencies. Using a priori variance components (see
section 4.3), the vc-matrix of the parameters DEM error (meter), displacement
rate (mm/year), and azimuth sub-pixel position (meter) is
$$Q_{\hat{b}} = \begin{bmatrix} 0.114 & 0.016 & -0.050 \\ 0.016 & 0.100 & -0.318 \\ -0.050 & -0.318 & 2.049 \end{bmatrix}, \qquad (6.20)$$
which clearly shows that the azimuth sub-pixel position cannot be estimated
with the required precision (the standard deviation for this parameter is
∼1.4 m while the azimuth resolution for ERS is ∼4 m). The corresponding
correlation matrix is
Fig. 6.23: Some differential interferograms for the Las Vegas area. The inter-
ferograms are sorted from left to right according to the absolute value of the
perpendicular baseline, |B⊥| ∈ [2, 1098] m. The master image was acquired on June
13th, 1997. The bottom right image shows the mean intensity of the 45 images,
scaled to the interval [–20, 0] dB. The city area where the estimation is performed is
indicated by the rectangle. The images are in the radar coordinate system, i.e., in this
case, the images are roughly geo-referenced when they are mirrored in the vertical
axis.
Fig. 6.24: Selected points (red) and constructed reference network for the Las Vegas
test site. The images are coarsely geo-referenced by mirroring in the vertical axis. The
estimation is restricted to 100592 pixels with SCR > 2, see also section 4.2. The
points are selected in the city area of approximately 23 × 20 km² (∼220 points per
km²). The reference network contains 1084 points, selected using a sparsification
with a grid cell width of ∼500 m, see section 4.4.1. The number of arcs per point is
set to six, which resulted in 4475 arcs with an average length of 880 m.
$$\rho = \begin{bmatrix} 1.000 & 0.150 & -0.104 \\ 0.150 & 1.000 & -0.703 \\ -0.104 & -0.703 & 1.000 \end{bmatrix}. \qquad (6.21)$$
This clearly expresses the large correlation between estimates of the linear
displacement rate and the azimuth sub-pixel position. Therefore, during the
estimation using wrapped data only a DEM error and linear displacement
rate are estimated.
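The step from the vc-matrix of Eq. (6.20) to the correlation matrix of Eq. (6.21) is a simple normalization. A minimal sketch (Python/NumPy) reproducing the quoted values:

```python
import numpy as np

# vc-matrix of (DEM error [m], displacement rate [mm/y],
# azimuth sub-pixel position [m]) from Eq. (6.20)
Q = np.array([[ 0.114,  0.016, -0.050],
              [ 0.016,  0.100, -0.318],
              [-0.050, -0.318,  2.049]])

sigma = np.sqrt(np.diag(Q))          # standard deviations of the estimates
rho = Q / np.outer(sigma, sigma)     # correlation matrix, Eq. (6.21)
print(np.round(rho, 3))
```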
First, the variance components for each SLC image are estimated. The
1084 points of the reference network are used to perform 534 independent
estimations of the DEM error and the linear displacement rate differences,
using the a priori stochastic model. The mean distance between points is
521 m, with a standard deviation of 166 m. The variance components are
estimated at each arc separately, cf. Eq. (4.8). The means of the estimated
variance components over these 534 arcs are used to construct the stochastic
model, see Eq. (2.51). They are plotted as a function of perpendicular, temporal,
and Doppler baseline, see Fig. 6.25. The earliest ERS–1 images seem to
have a slightly worse precision, as do images with a large perpendicular or
Doppler baseline. This can be due to pixels that are not ideal point scatterers,
i.e., pixels that (slightly) decorrelate with these baselines, even though the
reference network points are selected based on their amplitude dispersion
index.
Fig. 6.25: Square roots of the estimated variance components for the Las Vegas
test site as a function of perpendicular, temporal, and Doppler baseline. A red asterisk
corresponds to an ERS–1, and a blue diamond to an ERS–2 acquisition.
Next, the DEM error and the linear displacement rate are estimated with
the ILS estimator at the arcs of the reference network. The vc-matrix of
the double-difference phase observations is constructed using the estimated
variance components. The standard deviation of the pseudo-observations used
for the regularization of the ILS estimator is set to 25 m for the DEM error and
25 mm/y for the linear displacement rate differences. The theoretical success
rate for the bootstrap estimator, cf. Eq. (3.20), is P(ẑ = z) = 0.990.
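The role of these pseudo-observations can be sketched for the float (real-valued) part of the solution. The following is only an illustration of the regularization, with hypothetical variable names; the actual STUN estimator solves an integer least-squares problem on top of this:

```python
import numpy as np

def regularized_float_solution(A, y, Qy, sd_pseudo):
    """Weighted least-squares float solution with zero-valued
    pseudo-observations on the parameters (illustration only).
    A         : K x n design matrix (e.g., DEM error and rate columns)
    y         : K-vector of phase observations
    Qy        : K x K vc-matrix of the observations
    sd_pseudo : n standard deviations, e.g. [25.0, 25.0] for 25 m / 25 mm/y
    """
    K, n = A.shape
    A_aug = np.vstack([A, np.eye(n)])            # append pseudo-observations
    y_aug = np.concatenate([y, np.zeros(n)])     # with expectation zero
    W_aug = np.zeros((K + n, K + n))
    W_aug[:K, :K] = np.linalg.inv(Qy)
    W_aug[K:, K:] = np.diag(1.0 / np.asarray(sd_pseudo, float) ** 2)
    N = A_aug.T @ W_aug @ A_aug                  # normal matrix
    return np.linalg.solve(N, A_aug.T @ W_aug @ y_aug)
```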
After these estimations, a pre-processing step is performed to speed up the alternative
hypothesis testing procedure during the network integration. Firstly, points
for which the connecting arcs have a mean estimated a posteriori variance
factor (cf. Eq. (4.26)) larger than three are iteratively removed. Secondly,
arcs with an a posteriori variance factor σ̂² > 3.0 are removed, as long as
each point remains connected by at least three arcs. In this case, in total
seventeen points and 244 arcs are removed by this procedure, leaving 1067
points and 4231 arcs in the reference network.
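A minimal sketch of this two-stage pruning (Python; hypothetical data layout, not the actual implementation):

```python
import numpy as np

def prune_network(arcs, var_factor, max_factor=3.0, min_arcs=3):
    """Two-stage pruning of the reference network (sketch).
    arcs       : list of (i, j) point-index pairs
    var_factor : a posteriori variance factor per arc"""
    arcs = [tuple(a) for a in arcs]
    vf = dict(zip(arcs, var_factor))
    removed = set()

    # 1) iteratively remove points whose arcs have a mean factor > threshold
    while True:
        points = {p for a in arcs if not (set(a) & removed) for p in a}
        bad = [p for p in points
               if np.mean([vf[a] for a in arcs
                           if p in a and not (set(a) & removed)]) > max_factor]
        if not bad:
            break
        removed.update(bad)
    live = [a for a in arcs if not (set(a) & removed)]

    # 2) drop bad arcs while every point keeps >= min_arcs connections
    deg = {}
    for i, j in live:
        deg[i] = deg.get(i, 0) + 1
        deg[j] = deg.get(j, 0) + 1
    kept = []
    for a in sorted(live, key=lambda a: -vf[a]):  # worst arcs first
        i, j = a
        if vf[a] > max_factor and deg[i] > min_arcs and deg[j] > min_arcs:
            deg[i] -= 1; deg[j] -= 1              # arc removed
        else:
            kept.append(a)
    return kept
```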
After this pre-processing step, the reference point is selected in an area that is known to be stable.
Then, the parameters are least-squares adjusted (integrated), as described in
section 4.4.2. The testing parameters that are used are γ0 = 0.90 and α0 = 0.05,
see also Appendix B. During these tests four more points and 43 arcs are
removed (∼1%). The residuals for the parameters at the arcs of the reference
network are shown in Fig. 6.26. Note that not all residuals are exactly equal to
zero, as they are expected to be. The maximum absolute residual is 2.09 m and
2.14 mm/y for the DEM error and the linear displacement rate, respectively.
The standard deviation of the residual is 0.15 m for the DEM error and
0.20 mm/y for the linear displacement rate. More arcs could be removed
until all residuals are zero, but this would hardly have any effect on the
estimated parameters at the points of the reference network, and no obvious
outlier could be detected anymore. A reason for these non-zero misclosures
could be the use of the a posteriori variance factor to create the vc-matrix
for the estimated parameters at the arcs, see Eq. (4.12). This is done to
down-weight arcs that are estimated with a large a posteriori variance factor
since this information would otherwise be lost. Next, the ∼100000 points,
initially selected based on their SCR, which are not part of the reference
network, are estimated relative to the established reference network.
Fig. 6.26: Least-squares residuals at arcs of reference network for the Las Vegas
test site. Plotted are 4188 residuals for the DEM error and the linear displacement
rate.
The CPU time required for these estimations is ∼0.03 s per point. The same
standard deviation for the pseudo-observations used to regularize the ILS
estimator and the same stochastic model are used for these estimations. 58888
points with an a posteriori variance factor σ̂² < 3.0 are selected as reliable
points. Furthermore, eighteen points are removed because the estimated DEM
error is outside the interval [−80, 80] m, or the estimated displacement rate
is outside the interval [−30, 30] mm/y. The residual phase at these points is
unwrapped in the interferograms using the sparse grid MCF algorithm, after
which the unwrapped interferometric phase is obtained by addition of the
unwrapped residual phase to the model phase, see also section 4.6. Using the
unwrapped data, the same or additional parameters can be estimated, see also
section 6.2.4, where seasonal displacement is estimated using these unwrapped
data. However, the estimated parameters using wrapped data can be inspected
already. Fig. 6.27 shows a plot of the estimated linear displacement rates at the
accepted points. Wrapped data is used to obtain these estimates. The point
density is approximately 130 points per km². Clearly visible is the overall
Fig. 6.27: Estimated linear displacement rates using wrapped data at ∼60000 points
for the Las Vegas test site. The area is approximately 23 × 20 km². 45 ERS acquisitions
are used, acquired between April 21st, 1992, and February 18th, 2000.
subsidence, occurring from the bottom of this image, to the center, and then
to the upper left corner. This displacement pattern is mainly caused by three
localized subsidence bowls (Northern, Central, and Southern bowl), which
were recognized before 1980 (Bell et al., 2002). Amelung et al. (1999) measured
a maximum subsidence of 190 mm between April 1992 and December 1997
(33 mm/y average) using four differential interferograms for the Northern
bowl, and 110 mm (19 mm/y average) for the Central bowl, which reasonably
agrees with these estimates. Furthermore, there seems to be a subsidence bowl
located roughly between the Northern and Central bowls, and some localized
uplift areas (lower right, slightly above the center, on the right of the major
subsidence bowl in the Northwest, and in the upper right corner). The two
uplift areas on the right were also identified by Bell et al. (2002).
The phase residuals at the points after estimation of the DEM error and
the linear displacement rate are shown for the first three interferograms
in Fig. 6.28. The spatial correlation of the phase residuals suggests that
interferometric atmospheric signal is contained in the residuals. A spatial
low-pass filter could be used to estimate this signal. However, spatially correlated
unmodeled displacement could also be present in the residuals.
Fig. 6.28: Residual interferometric phase after estimation of the DEM error and
the displacement rate for the first three interferograms of the Las Vegas data set,
orbits 12024, 22254 and 3216. The residuals are spatially correlated, which suggests
that atmospheric signal is contained in these residuals.
The average interferometric atmosphere is estimated using an additional constant base function,

$$p_2(T^k) = 1, \qquad k = 1, \ldots, K, \qquad (6.22)$$

which describes a contribution common to all interferograms, mainly the master atmospheric
phase, see also Eq. (2.29). Note that the base functions are normally used
to model displacement, cf. Eq. (2.14), and this parameter is not related to
displacement. However, the generic concept of defining base functions to
estimate parameters can be used for this parameter as well. This has the
advantage that the software does not need to be adapted for this choice, but
only the input.
There is a small correlation between the estimated parameter for the
average atmosphere and the other parameters in this case, which is evident
from the vc-matrix
$$Q_{\hat{b}} = \begin{bmatrix} 0.113 & 0.008 & -0.006 \\ 0.008 & 0.041 & -0.008 \\ -0.006 & -0.008 & 0.258 \end{bmatrix}. \qquad (6.23)$$
Fig. 6.29: Residual phase after estimation of DEM errors and displacement rates
(left), compared to additional estimation of average atmosphere (right). For the
first three interferograms of the Las Vegas stack, orbits 12024, 22254 and 3216, the
residual phase at the points of the reference network is shown as function of range
and azimuth coordinates. Two ambiguity levels are plotted for easier interpretation
(green and blue). The red line corresponds to the estimated trend. Clearly, the
residuals are much smaller if an additional parameter is estimated accounting for
the average interferometric atmosphere.
The DEM errors, linear displacement rates, and azimuth sub-pixel positions
are simultaneously estimated using the unwrapped phase data at the ∼60000
selected points. The vc-matrix for this choice of parameters is given in
Eq. (6.20). From this matrix, and the correlation matrix, Eq. (6.21), it is
clear that the azimuth sub-pixel position should not be estimated, due to the
relatively small variation and temporal correlation of the Doppler centroid
frequencies of the acquisitions, see also Fig. 6.22. Nonetheless, this estimation
is performed. The estimated DEM errors and linear displacement rates are
visually identical to the solutions obtained using a functional model that does
not include the azimuth sub-pixel position, see also Fig. 6.27. The mean of the
estimated DEM error at these points is –1.68 m, with a standard deviation
of 3.90 m. The mean and standard deviation of the estimated displacement
rates are –0.91 mm/y and 2.70 mm/y, respectively.
The estimated azimuth sub-pixel positions for these points are shown in
Fig. 6.31, in the range between –4 and 4 m, the azimuth resolution of ERS.
Clearly, unmodeled displacement is leaked to the estimated azimuth
sub-pixel position. The sub-pixel position is expected to be spatially uncorrelated,
but the estimates are not. This is caused by the distribution of the Doppler
centroid frequencies of this dataset. Consider Fig. 6.22 again. The estimated
azimuth sub-pixel position is a linear function of the Doppler centroid
Fig. 6.30: Average atmospheric delay estimates at ∼60000 points for the Las Vegas
test site. Note that some features of the average atmospheric phase are visible in the
interferometric residual phase images shown in Fig. 6.28, particularly at the center,
lower right, and upper right, for orbits 12024 and 3216.
frequency difference between master and slave image, see Eq. (2.22). Thus,
displacement that occurs according to the temporal pattern of the Doppler centroid
frequencies, i.e., around January–June 1996, is estimated as azimuth position.
Therefore, the azimuth sub-pixel position should not be estimated if data
with Doppler centroid frequencies that are strongly correlated in time are used.
Instead, the sub-pixel positions should be estimated using a point target
analysis, and the phase interpolated at these positions in the interferograms.
A Standard PS Analysis (SPSA) of the Las Vegas area was carried out by
Tele-Rilevamento Europa (TRE), a POLIMI spin-off company, independently
from the estimation with the STUN algorithm. In their terminology, an SPSA
is intended for large-scale applications, mainly to identify stable areas and to
highlight possible risk areas. The minimum size of the area is approximately 100 km².
Fig. 6.31: Estimated azimuth sub-pixel positions for the Las Vegas test site. Clearly,
unmodeled displacement is leaked to the estimated positions, which are expected to
be random. This is due to the small variation and temporal correlation of the Doppler
centroid frequencies of the acquisitions. For this dataset, azimuth sub-pixel position
should not be estimated. The azimuth direction corresponds to the vertical axis.
The reference point used in the SPSA is located in a presumably stable
area, whereas the reference point used by the STUN algorithm is located
in an area possibly undergoing slight subsidence. This different choice can
introduce a constant bias between the displacement rates that are estimated
by both algorithms. Moreover, it can cause differences in the (displacement)
phase time series, since these are relative to the selected reference point. The
parameters that are estimated using the SPSA are the DEM error, the line-
of-sight displacement rate, and the APS for each acquisition. No temporal
filtering is performed to compute the APS. The reference PS technique uses
the ensemble coherence maximization for the estimation of these parameters.
The estimated precision is better than 1 mm/y for the displacement rates, and
on the order of a few mm for the individual APS-corrected measurements, for
PS within ∼5 km from the reference point (Ferretti et al., 2000a, 2001).
Fig. 6.32 shows the estimated linear displacement rates and coherence
using the reference technique. Data are geo-referenced and superimposed on a
LandSat image. For comparison, the estimates using the STUN algorithm
are geo-referenced as well, see Fig. 6.33. The area processed by TRE is
clearly somewhat larger than the area processed using the STUN algorithm.
Plotted are 157018 points with a minimum estimated coherence |γ̂| = 0.63.
For visualization purposes, only the point with the highest coherence in each
100 × 100 m² cell is plotted by TRE. However, the density of the estimated
points using the SPSA seems larger, also outside the urbanized area. For
example, a large number of points is estimated with high coherence north
of the city. This suggests that pixels with a distributed scattering mechanism
are estimated as well, apparently with good results. Recall that the estimation
using the STUN algorithm is limited to an initially selected set of pixels, see
also section 4.2. However, the estimated displacement rates compare well to
each other. The same spatial displacement features and similar magnitudes
of the displacement rates are estimated by both processing techniques. Note
that the colorbar used by TRE contains more green in the middle and slightly
less intense red and blue at the edges.
Finally, the displacement time series of a few points are compared. The
position of the points is near the Northern subsidence bowl, see Fig. 6.33.
The estimated displacement rates and quality of these points are listed in
Table 6.5 for the SPSA and STUN algorithm. In general there is a good
agreement between the estimated displacement rates at the selected points. A
constant bias between the estimated displacement rate is acceptable, because
it can be due to relative movement between the reference points used during
both estimations. The mean difference between the estimated displacement
rates at these eight points is –1.36 mm/y (supporting the theory that the
reference point used by the STUN algorithm is subsiding slightly) and the
standard deviation of the difference is 0.77 mm/y. The difference between the
estimated displacement rates is the largest for the sixth point. If this point
is not considered, the mean and standard deviation are 1.13 and 0.47 mm/y,
respectively, which is within the estimated precision for the displacement rates
using the STUN algorithm. Furthermore, the estimated coherence (SPSA) is
Fig. 6.32: Linear displacement rates and ensemble coherence estimated using the
Standard PS Analysis (reference technique). Data © Tele-Rilevamento Europa.
Fig. 6.33: Estimated linear displacement rates using the STUN algorithm. Data
are geo-referenced (UTM projection, zone 11) and plotted on top of the temporally
averaged radar intensity map. The white rectangle indicates the zoom area shown in
(b). The area is ∼2 × 2 km². Displacement time series estimated using the reference
PS technique are available for the eight points marked with an ×.
Table 6.5: Line-of-sight displacement rates estimated with the STUN and reference
technique (SPSA estimates and geographical coordinates provided by TRE). Posi-
tion of the PS points is given in Fig. 6.33. Estimates using STUN are after
unwrapping; the estimation included a parameter for the average atmosphere.
The estimated precision σ̂_α using the STUN algorithm is relative to the
reference point at a distance of ∼10 km. A bias
of the difference between both estimates can be due to relative motion between the
different reference points used.
the highest for the eighth point while the estimated precision using the STUN
algorithm is the worst for this point. It is possible that point identification
errors were made, causing these differences.
Fig. 6.34: Displacement time series using the Standard PS Analysis (reference
technique). Data © Tele-Rilevamento Europa.
The displacement time series for these two interesting points are shown
in Fig. 6.34 (as provided by TRE). The results of the STUN algorithm at
these two points are shown in Fig. 6.35. The estimated mean displacement
rate is also plotted for the STUN and for the SPSA algorithm in this figure.
The time series of the STUN algorithm appear noisier. This is expected,
since the APS is estimated and removed during the SPSA using a spatial
low-pass filter, while this is not done for the estimation using the STUN
algorithm. Note that a possible phase unwrapping error would manifest itself as a
jump of ∼28 mm, i.e., half the wavelength used by ERS, in these plots. Such
errors are not apparent, but could explain the difference between the estimated
displacement rates.
The displacement of the Las Vegas test site is known to have a seasonal
component, see, for example, (Hoffmann et al., 2001). Therefore, one linear
and two trigonometric base functions are used to model the displacement
$$\begin{aligned} p_1(T^k) &= -\frac{4\pi}{\lambda}\, T^k, \\ p_2(T^k) &= -\frac{4\pi}{\lambda}\, \sin(2\pi T^k), \\ p_3(T^k) &= -\frac{4\pi}{\lambda}\, \big(\cos(2\pi T^k) - 1\big). \end{aligned} \qquad (6.25)$$
For easier interpretation, the coefficients of the trigonometric base functions
are transformed to a seasonal displacement function with a certain amplitude
A (in mm) and a certain temporal offset t0 (in years), cf. Eq. (6.26).
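Since Eq. (6.26) is not reproduced here, the following sketch (Python/NumPy) applies the standard amplitude-phase identity to the sine and cosine coefficients; the sign convention chosen for t0 is an assumption and may differ from the book's definition.

```python
import numpy as np

def seasonal_amplitude_offset(a_sin, a_cos):
    """Convert the estimated sine/cosine coefficients [mm] of the
    seasonal base functions of Eq. (6.25) to an amplitude A [mm] and a
    temporal offset t0 [y], using
        a_sin*sin(2*pi*T) + a_cos*cos(2*pi*T) = A*sin(2*pi*(T - t0)).
    The sign convention of t0 is an assumption, not taken from the book.
    (The constant -1 term in p3 only shifts the series; it does not
    affect A or t0.)"""
    A = np.hypot(a_sin, a_cos)
    t0 = -np.arctan2(a_cos, a_sin) / (2.0 * np.pi)
    return A, t0
```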
(a) Point 6; (b) Point 8. Vertical axes: displacement [mm].
Fig. 6.35: Displacement time series with respect to the reference point using the
STUN algorithm. The unwrapped phase data are plotted, corrected for the estimated
DEM error and the average atmospheric phase, at the two points that deviate the
most from estimates obtained using the reference technique. The error bars show the
a posteriori error on the interferometric double-difference phase (one-sigma level,
converted to mm). The estimated displacement rate is plotted as a red line. The
SPSA estimate obtained by TRE is plotted in blue (dotted line: original estimate;
dashed line: corrected by 1.13 mm/y to account for the mean difference between
STUN and SPSA).
Fig. 6.36: Relative displacement time series between nearby points using the STUN
algorithm. (a) shows the difference between the nearby points 6 and 8, see also
Fig. 6.35. The error bars are computed for the double-difference phase observations
using the estimated variance components (valid for nearby points). Note the large
deviations from the displacement model although atmospheric phase is not expected
to be present. The intensity (in dB) of the points in the slave images is given
above and below the displacement values for point 6 and point 8, respectively. The
intensity for these points varies, but it is clearly above the average for this area,
which is approximately –11 dB. (b) shows the difference between point 6 and a
bright point ∼500 m to the South, and (c) the difference with a point ∼1150 m to
the South.
Vertical axis: extension [mm]; horizontal axis: Jan. 1, 1994 to Jan. 1, 2002.
Fig. 6.37: Borehole extensometer data of the Lorenzi test site. Data courtesy USGS.
where the order of the parameters is DEM error (m), linear displacement
rate (mm/y), amplitude of the sinusoid term (mm), and cosine term (mm).
The maximum correlation coefficient is ρ₂,₃ = 0.23, i.e., there is no significant
correlation between the estimated parameters.
The displacement is modeled with the three base functions given in Eq. (6.25).
The parameters of these base functions are estimated simultaneously with the
DEM error at the arcs of the reference network. The a posteriori variance
components are used to construct the vc-matrix used by the ILS estimator.
The standard deviation for the pseudo-observations for the DEM error is set to
25 m, to 10 mm/y for the linear displacement, and to 10 mm for the sine and
cosine terms. Note that a smaller value for the linear displacement parameter
is used here than in the previous scenarios. This is done to prevent estimation
of unrealistically large coefficients for the linear and seasonal terms that,
when combined, could better fit the wrapped data. After the ILS estimation
at the arcs, the estimated differences are spatially integrated and tested, as
described in section 6.2.2. During a pre-processing step, 21 points and 330
arcs are removed from the reference network. One more point and eleven arcs
are removed during the following alternative hypothesis testing. The least-
squares residuals for the estimated parameters at the arcs after the alternative
hypothesis testing step are shown in Fig. 6.38. The mean residual for all terms
is 0.000, while the standard deviations are 0.16 m, 0.28 mm/y, 0.18 mm,
and 0.15 mm, respectively. After the integration of the estimated difference
parameters, the same four parameters are estimated at all other points with
respect to the network. Finally, 52242 points are selected with an estimated
a posteriori variance factor σ̂² < 3.0. Previously, when fewer parameters were
estimated, ∼60000 points were detected using the same threshold for the
a posteriori variance factor. The reason that more points are removed is
the smaller redundancy due to the two additional base functions. Points
that do not undergo seasonal displacement have identical least-squares phase
residuals, but a larger estimated variance factor. Consequently, fewer points are
Fig. 6.38: Least-squares residuals at arcs of reference network for the Las Vegas
test site. Plotted are 4216 residuals for the DEM error, linear displacement rate,
sine and cosine term of the seasonal displacement.
selected. Moreover, the estimated variance factors are now marginally larger,
which also implies that more points will be removed if the same threshold
for the a posteriori variance factor is used. The phase at the selected points
is unwrapped using the MCF sparse grid unwrapping algorithm. The final
estimation is performed after unwrapping of the data at the selected points.
The same three base functions are used to model the displacement as for
the estimation using wrapped data. Additionally, the average interferometric
atmosphere is estimated using the unwrapped data, see also Eq. (6.22).
The vc-matrix for this choice of estimated parameters (DEM error, linear
displacement, sine and cosine terms, and average interferometric phase) is given
as

$$Q_{\hat{b}} = \begin{bmatrix} 0.088 & 0.005 & 0.002 & -0.019 & -0.010 \\ 0.005 & 0.036 & 0.012 & 0.004 & 0.004 \\ 0.002 & 0.012 & 0.266 & 0.029 & 0.016 \\ -0.019 & 0.004 & 0.029 & 0.255 & 0.070 \\ -0.010 & 0.004 & 0.016 & 0.070 & 0.371 \end{bmatrix}. \qquad (6.28)$$
The estimated displacement parameters are shown in Fig. 6.39(a). Clearly,
the estimated amplitude of the seasonal displacement is significant for the
central subsidence bowl, i.e., ∼10 mm or ∼2.25 rad for ERS. Furthermore, the
estimated offset is spatially very consistent, while it is estimated independently
for each point. The average estimated offset is ∼0.5 year at the positions
with the largest amplitude. Since the master image was acquired on June 13th,
the maximum (relative uplift) of the seasonal term thus occurs around March
and the minimum (additional subsidence) around September.
Fig. 6.39: Las Vegas linear and seasonal displacement. (a) shows the estimated
linear displacement rate, the amplitude and the offset of the seasonal displacement,
cf. Eq. (6.26). (b) shows the same parameters after significance tests are performed.
Fig. 6.40: A posteriori variance factors for the Las Vegas test site. The asterisk
indicates the reference point. In general, the precision decreases with distance from
the reference point.
Significance tests
However, the parameters cannot be estimated significantly for all points. For
example, the estimated offsets of the seasonal term are probably meaningless
at the sides of the processed area where the amplitudes are practically zero,
see also Fig. 6.39(a). Therefore, a hypothesis testing procedure is followed,
to test the significance of the estimated displacement parameters. The least
relaxed model is used as the null-hypothesis, i.e., no displacement is assumed,
but only parameters for a DEM error and average interferometric atmosphere
are modeled. The null-hypothesis is thus given by
$$H_0:\; E\{y\} = \begin{bmatrix} \beta^1 & 1 \\ \beta^2 & 1 \\ \vdots & \vdots \\ \beta^K & 1 \end{bmatrix} \begin{bmatrix} \Delta h \\ \bar{S} \end{bmatrix}. \qquad (6.29)$$
The first alternative hypothesis extends the null-hypothesis using a linear
displacement model, i.e.,
$$H_A^1:\; E\{y\} = \begin{bmatrix} \beta^1 & 1 \\ \beta^2 & 1 \\ \vdots & \vdots \\ \beta^K & 1 \end{bmatrix} \begin{bmatrix} \Delta h \\ \bar{S} \end{bmatrix} + \begin{bmatrix} p_1(1) \\ p_1(2) \\ \vdots \\ p_1(K) \end{bmatrix} \alpha_1. \qquad (6.30)$$
The second alternative hypothesis extends the first alternative hypothesis
further to account for seasonal displacement
$$H_A^2:\; E\{y\} = \begin{bmatrix} \beta^1 & 1 \\ \beta^2 & 1 \\ \vdots & \vdots \\ \beta^K & 1 \end{bmatrix} \begin{bmatrix} \Delta h \\ \bar{S} \end{bmatrix} + \begin{bmatrix} p_1(1) & p_2(1) & p_3(1) \\ p_1(2) & p_2(2) & p_3(2) \\ \vdots & \vdots & \vdots \\ p_1(K) & p_2(K) & p_3(K) \end{bmatrix} \begin{bmatrix} \alpha_1 \\ \alpha_2 \\ \alpha_3 \end{bmatrix}. \qquad (6.31)$$
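A minimal sketch (Python/NumPy, hypothetical variable names) of the design matrices corresponding to these three hypotheses, using the base functions of Eq. (6.25) and the constant base function of Eq. (6.22) for the average atmosphere:

```python
import numpy as np

WAVELENGTH = 0.05656  # ERS wavelength [m]

def hypothesis_designs(beta, T):
    """Design matrices for H0, HA1 and HA2 of Eqs. (6.29)-(6.31).
    beta : K-vector of height-to-phase factors beta^k (DEM error column)
    T    : K-vector of temporal baselines [y]
    The column of ones models the average interferometric atmosphere,
    cf. Eq. (6.22); p1..p3 are the base functions of Eq. (6.25)."""
    c = -4.0 * np.pi / WAVELENGTH
    ones = np.ones_like(T)
    p1 = c * T
    p2 = c * np.sin(2.0 * np.pi * T)
    p3 = c * (np.cos(2.0 * np.pi * T) - 1.0)
    A0 = np.column_stack([beta, ones])              # H0: no displacement
    A1 = np.column_stack([beta, ones, p1])          # HA1: linear rate
    A2 = np.column_stack([beta, ones, p1, p2, p3])  # HA2: linear + seasonal
    return A0, A1, A2
```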
The procedure which is usually followed to perform these tests is to first set
the power for all tests and the level of significance for the one-dimensional
test. If the null-hypothesis is rejected, the test quotient is computed for the
specified alternative hypotheses. The alternative hypothesis with the largest
test quotient is selected as the most likely one, see Appendix B. The vc-matrix
of the observations is generally assumed to be known. (Otherwise, for
example, the null-hypothesis would almost never be rejected if the precision
of the observations were described very pessimistically.) However, the
vc-matrix is not known in this case, since the atmospheric signal (at lower
frequencies) is not accounted for in the estimated variance components.
Therefore, the following procedure is followed during the significance tests:
1. Perform the least-squares adjustment under the most relaxed model and
estimate a variance factor for the vc-matrix used during the estimation
with wrapped data.
For the Las Vegas test site, nine ENVISAT acquisitions are available, see
Appendix C. ENVISAT swath IS2 data with a comparable looking angle as
ERS are used. In this section it is demonstrated that these ENVISAT data
can be used for the PS points by generating cross interferograms with the
same ERS master as used before. The ENVISAT sensor does not use exactly
the same carrier frequency and sampling rates as the ERS satellites, see
Table 6.6. These slight differences cause three problems when data of these
sensors are combined.
Table 6.6: Sensor parameters for ERS and ENVISAT. Derived quantities are given
in parentheses. The ground-range pixel spacing is computed using a look angle θ = 21°
(i.e., swath IS2 of ENVISAT).

Sensor     Radar frequency (wavelength)    Pixel spacing: range (ground-range)    Pixel spacing: azimuth
ERS        5.300 GHz (5.656 cm)            7.90 m (22.04 m)                       4.00 m
ENVISAT    5.331 GHz (5.624 cm)            7.80 m (21.77 m)                       4.05 m
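The derived quantities in parentheses follow directly from the listed sensor parameters; a minimal check (Python):

```python
import math

C = 299_792_458.0            # speed of light [m/s]
THETA = math.radians(21.0)   # look angle of ENVISAT swath IS2

for name, f_ghz, rng_m in (("ERS", 5.300, 7.90), ("ENVISAT", 5.331, 7.80)):
    wavelength = C / (f_ghz * 1e9)           # 5.656 cm resp. 5.624 cm
    ground_range = rng_m / math.sin(THETA)   # 22.04 m resp. 21.77 m
    print(f"{name}: {100*wavelength:.3f} cm, {ground_range:.2f} m")
```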
Fig. 6.41: Square roots of the estimated variance components for the Las Vegas
test site as a function of perpendicular, temporal, and Doppler baseline. A red asterisk
corresponds to an ERS–1, a blue diamond corresponds to an ERS–2, and a green
square to an ENVISAT acquisition.
Each row of Fig. 6.42 shows a phase difference time
series between two nearby points. From the left to the right, the phase differences
are corrected for an additional component. The left column shows the phase
differences corrected for the estimated DEM error. The second column corrects
the phase also for the estimated linear displacement. The difference between
the second and third column is the correction for the estimated azimuth
sub-pixel position. Note that this has the largest effect in the interferogram
with the last ERS–2 acquisition as slave image, which has a Doppler centroid
frequency difference of ∼1000 Hz. The effect in (all) the interferograms with
ERS–1 slaves is approximately 25% of that, due to the smaller Doppler
centroid frequency differences, see also Fig. 6.22. Finally the range sub-pixel
position is corrected for, i.e., the last column shows the residual phase. The
range sub-pixel position only affects the ENVISAT acquisitions due to the
difference in wavelength of ERS and ENVISAT.
6.3 Conclusions
The STUN algorithm is successfully applied at two test sites using real data of
ERS–1 and ERS–2. For the Berlin test site a linear model is used to estimate
displacement over a time period from 1992 to 2000. A bowl-shaped uplift
area with a diameter of approximately 4 km and a maximum displacement
rate of ∼4 mm/y is identified to the west of the city of Berlin, as well as
some individual points with apparent displacement. Using 50 images, the
standard deviation of the displacement rate is estimated to be about 0.3 mm/y
between points less than one kilometer apart, and to be below 0.9 mm/y for all
Fig. 6.42: Time series for estimation including ENVISAT for the Las Vegas test
site. Plotted are interferometric phase difference data between three pairs (rows)
of nearby points. The most left panel shows the data corrected for the estimated
DEM error. The next panel additionally corrects for linear displacement rate. The
third panel corrects for the azimuth sub-pixel position and the last panel for range
sub-pixel position. The estimate shown in the first row has an a posteriori variance
factor σ̂ 2 = 0.3; second row σ̂ 2 = 1.1; third row σ̂ 2 = 2.1. A red asterisk corresponds
to an ERS–1, a blue diamond to an ERS–2, and a green square to an ENVISAT
acquisition.
points (with respect to the reference point). Different settings for the STUN
algorithm are described and experimented with. The estimated parameters
are influenced mainly by the amount of available acquisitions, although this
dependency is related to the actual signal and the bounds on the search space.
Results are relatively insensitive to the number of points and arcs in the
reference network. This is likely related to limited atmospheric signal for the
Berlin area and small displacement rates. The choice of the testing parameters
also has only a small effect on the finally estimated parameters. A cross-
comparison using data from two adjacent tracks confirms the presence of the
uplift area and the validity of the estimated precision.
For the Las Vegas test site, approximately 50 images are available, acquired
between 1992 and 2000. Using these data, experiments are performed
regarding the choice of estimated parameters. First, the displacement is modeled
using a linear rate. Three (known) subsidence bowls and various uplift areas
are identified. The maximum estimated displacement is ∼20 mm/y (Northern
bowl). It is demonstrated for this test site that estimation of an average
atmospheric interferometric phase significantly reduced the phase residuals,
but not the estimated displacement rates. Azimuth sub-pixel position could
not be estimated reliably, due to the small variation of the Doppler centroid
frequencies of the Las Vegas data that are used. Seasonal displacement
is modeled using trigonometric base functions. The seasonal displacement
mainly occurred in the area of the main subsidence bowls. The maximum
amplitude of this seasonal term is ∼10 mm. The precision of the estimated
displacement is estimated to be ∼0.2 mm/y for the linear component, and
∼1 mm for the amplitude of the seasonal component (standard deviation of
the difference between points less than ∼1 km apart). The use of significance
tests is demonstrated using the unwrapped data. The estimated parameters
using the STUN algorithm compared well with results using the reference PS
technique that is performed by TRE. Finally it is shown that ENVISAT data
can be used to extend the data stack by performing cross interferometry using
the same ERS master image.
7
Conclusions and Recommendations
In section 7.1 the conclusions of the research described in this work are given.
Section 7.2 presents recommendations for further research that are outside
the scope of this study.
7.1 Conclusions
The conclusions are given in this section, following the items already
identified in section 1.1:
Functional model. The functional relationship between the observed, wrap-
ped, double-difference phase values and unknown parameters is derived
and written in matrix notation using the model of observation equations,
as commonly used in geodesy. The model contains parameters for a DEM
error, displacement, average atmospheric phase, azimuth and range sub-
pixel position. The displacement is modeled as a function of temporal
baseline in a generic fashion using base functions. This allows for a broad
range of applications. Using synthetic data, an algebraic polynomial is
used to model the displacement. For the Berlin test site the displacement
is modeled using a linear rate, and for the Las Vegas test site a combination
of a linear rate and a seasonal model is successfully applied. The unknown
DEM error is practically a linear function of the perpendicular baseline,
while the azimuth sub-pixel position is a linear function of Doppler
centroid frequency difference. The range sub-pixel position is mainly a
linear function of the difference in radar frequency, i.e., this parameter
is only of importance when data of sensors with different frequencies are
used.
is also described with respect to this point. The precision deteriorates with
increasing distance from the reference point. This is due to
atmospheric signal, which is accounted for in the stochastic model. The
precision of the observed phase in the acquired images is estimated to
be between ∼15◦ –40◦ . The formal relative precision of estimated DEM
error and linear displacement rate between nearby points is typically
∼0.3 m and ∼0.2 mm/y (standard deviation) for the test sites Berlin
and Las Vegas, using approximately 50 ERS–1 and ERS–2 acquisitions
over a nine year time period. For points approximately 25 km apart,
these standard deviations are a factor three and five worse for Berlin
and Las Vegas, respectively. The correlation between these two estimated
parameters is small for these data stacks. The azimuth sub-pixel position could
not be estimated, for two reasons. First, the achievable precision would be too
poor compared to the azimuth resolution, due to the small variation of the
Doppler centroid frequency. Second, it is strongly correlated with the displacement,
which is largely caused by the lack of temporal overlap between the available
ERS–1 and ERS–2 data, which have systematically different Doppler centroid
frequencies.
Reliability. The Spatio-Temporal Unwrapping Network (STUN) algorithm is
a robust method for three-dimensional phase unwrapping. Due to numerical
constraints, estimations are first performed between selected points of
a reference network. The parameters are then obtained at the points by a
least-squares adjustment and testing procedure. During experiments with
synthetic data, all simulated incoherent points are detected and removed.
Also during experiments with real data all significant misclosures could be
handled by removing points and arcs. The main parameter that affects the
outcome of the STUN algorithm is the number of available interferograms,
although displacement for the Berlin test site could be estimated using
ten interferograms, albeit with a lower precision. The sensitivity to the
number of points and arcs in the reference network is small for the cases
presented in this study. Furthermore, the STUN algorithm is shown to be
insensitive to the choice of the testing parameters during the alternative
hypothesis testing.
7.2 Recommendations
• It is assumed that a resolution cell contains a single dominant scattering
point and that this point is visible during the entire time span of the observations. Both
assumptions can be relaxed. There could be more than one dominant point
in the resolution cell, and it could be that points, for example, on top of new
buildings, become “persistent” after a certain time. Relaxation of the first
assumption leads to the application of tomography, i.e., the estimation
of the position of multiple scatterers based on observations with a small
variation of the viewing angle. Concerning the second assumption, recently
the concepts of semi-PS and temporary-PS are introduced in the reference
PS technique, see (Basilico et al., 2004). These are PS points that are
only visible in a subset of the interferometric stack. In the reference PS
technique, such points are now identified based on sudden changes of the
amplitude, using the (reasonable) assumption that the phase stability is
directly related to the amplitude stability.
It should be studied how these issues are best dealt with in the STUN
algorithm. For example, a different weight for each point in each acquisition
could be introduced. However, this increases the numerical complexity
and processing time considerably. The recent concept of Integer Aperture
Estimation (see, e.g., Teunissen, 2003a,b, 2004) can possibly be used to
identify points that are not coherent during certain acquisitions. This
class of estimators provides an overall approach of integer estimation and
validation. Each estimated ambiguity can be integer or non-integer, though
it is known that this parameter is integer valued. This choice can be made
based on the distance of the float solution to the closest integer, i.e., if the
integer solution fits badly with the model, the float solution can be used
instead, or the observation can be ignored. Moreover, the fail rate can be
controlled using this estimator, i.e., the user can set a limit to the amount
of incorrectly fixed ambiguities.
• In the near future, a new class of 1–3 m high-resolution spaceborne
radar sensors will be launched, such as RADARSAT–2, TerraSAR–X,
and COSMO–SkyMed. To demonstrate the level of detail that will be
visible in such imagery, Fig. 7.1 shows a high-resolution radar acquisition.
Some interesting aspects can be derived from this image. First, it seems
that there must be many scatterers in the resolution cell of current day
sensors. Such high-resolution images could be used to study the physical
properties of PS points and the interaction of the radar signal with the
object, something not yet fully understood. Second, using high-resolution
data, each object may contain many PS points. This could allow for the
observation of stress increase in buildings, which could be used for civil
protection. However, suitable algorithms need to be developed to deal with
the large amount of data. The point selection used by the STUN algorithm
may prove to be a good way to achieve a considerable data reduction, while
not losing information. Finally, a recursive estimation scheme needs to be
developed to enable updating a current solution with newly acquired data.
• The phase caused by atmospheric heterogeneities is treated as a stochastic
signal, and the estimation of displacement parameters is focused upon.
Fig. 7.1: (Image courtesy of A. Brenner.) High-resolution radar image of the campus
of Karlsruhe University, Germany. The SAR data were acquired by the X-band,
airborne, PAMIR sensor, in August, 2002, and processed to 20 cm resolution, see
also (Brenner and Ender, 2004; Soergel et al., 2004).
A
Variance Component Estimation

This appendix gives the proof for Eq. (4.26) on page 64, following (Verhoef,
1997). The expectation and covariance of the quadratic form of normally
distributed observables are required for this proof, which are derived in
section A.1. The proof, given in section A.2, is based on the concept of
y R -variates. The definition of y R -variates is that they are either functionally
or stochastically related to another set of observables y (Teunissen, 2000a).
The expectation and covariance of quadratic forms of the observables are
now derived (M and N are symmetric matrices with dimension m×m). The
trace operator, defined as trace(A) = Σᵢ Aᵢᵢ, is used for this derivation, as
well as the following of its properties:
$$\begin{aligned} \operatorname{trace}(A) &= \operatorname{trace}(A^*) \\ \operatorname{trace}(A+B) &= \operatorname{trace}(A) + \operatorname{trace}(B) \\ \operatorname{trace}(CD) &= \operatorname{trace}(DC) \\ \operatorname{trace}(Abb^*) &= b^*Ab \\ E\{\operatorname{trace}(A)\} &= \operatorname{trace}(E\{A\}), \end{aligned} \qquad (A.3)$$

where A and B are square matrices with dimension m×m, C is m×n, D
is n×m, and b is m×1. Using these properties, together with E{e} = 0 and
D{e} = E{ee*} = Q_y, the expectation of a quadratic form is

$$E\{e^*Me\} = \operatorname{trace}(MQ_y). \qquad (A.4)$$
The first to fourth moments of the probability density function of the observables
are used to derive the dispersion of the quadratic form. A normal
distribution is assumed here. Using E{ee*} = Q_y and E{e*Me} = trace(MQ_y),
the covariance between two quadratic forms of normally distributed observables
can be derived as

$$C\{e^*Me,\, e^*Ne\} = 2\operatorname{trace}(MQ_yNQ_y). \qquad (A.10)$$
The m×1 vector of stochastic errors of the model defined in Eq. (A.1) is now
written as a linear combination of p groups of elementary errors as

$$e = \sum_{k=1}^{p} U_k\, \epsilon_k, \qquad (A.12)$$

where
e is the m×1 vector of stochastic errors,
U_k is the m×c_k transformation matrix describing the influence of the kth group of errors on the observations,
ε_k is the c_k×1 vector of stochastic errors of group k.
It is assumed that the groups of errors are not correlated and that the errors
within a group have equal variance and are uncorrelated, i.e.,

$$E\{\epsilon_k\} = 0, \qquad C\{\epsilon_k, \epsilon_l\} = 0 \;\,(l \neq k), \qquad D\{\epsilon_k\} = Q_{\epsilon_k} = \sigma_k^2 I_{c_k}, \qquad k = 1, \ldots, p. \qquad (A.13)$$
Using the propagation law of variances and covariances it follows for the vc-
matrix of the observations and the covariance matrix of the kth group of errors
with the observations that
$$Q_y = \sum_{k=1}^{p} \sigma_k^2\, U_k U_k^* = \sum_{k=1}^{p} \sigma_k^2\, Q_k, \qquad (A.14)$$

$$Q_{\epsilon_k, y} = \sigma_k^2\, U_k^*. \qquad (A.15)$$
The least-squares corrections of the y_R-variates follow from

$$\hat{e}_R = Q_{y_R, y}\, Q_y^{-1}\, \hat{e}, \qquad (A.16)$$

where ê_R and ê are the vectors of corrections of the y_R-variates and the
least-squares residuals, respectively, and Q_{y_R,y} is the matrix of covariance
between the y_R-variates and the y-variates. According to the theory of
y_R-variates, the least-squares estimator for the kth group of errors ε_k can thus
be computed from the least-squares vector of corrections ê = P_B^⊥ y, with
P_B^⊥ = I − B(B*Q_y^{-1}B)^{-1}B*Q_y^{-1}, as

$$\hat{\epsilon}_k = Q_{\epsilon_k, y}\, Q_y^{-1}\, \hat{e} = \sigma_k^2\, U_k^*\, Q_y^{-1} P_B^{\perp}\, y. \qquad (A.17)$$
With Eq. (A.4), the expectation of this quadratic form can be shown to be

$$E\{y^* Q_y^{-1} P_B^{\perp} Q_k\, Q_y^{-1} P_B^{\perp}\, y\} = \sum_{l=1}^{p} \operatorname{trace}\!\big(Q_y^{-1} P_B^{\perp} Q_k\, Q_y^{-1} P_B^{\perp} Q_l\big)\, \sigma_l^2, \qquad k = 1, \ldots, p. \qquad (A.22)$$

The covariance between the shifting variates of the kth and lth group of errors
follows analogously from Eq. (A.10).
Symbolically this system of equations can be written as
E{r} = N σ, (A.23)
where
$$r = \begin{bmatrix} r_1 \\ \vdots \\ r_k \\ \vdots \\ r_p \end{bmatrix}, \qquad N = \begin{bmatrix} N_{11} & \cdots & N_{1l} & \cdots & N_{1p} \\ \vdots & & \vdots & & \vdots \\ N_{k1} & \cdots & N_{kl} & \cdots & N_{kp} \\ \vdots & & \vdots & & \vdots \\ N_{p1} & \cdots & N_{pl} & \cdots & N_{pp} \end{bmatrix}, \qquad \sigma = \begin{bmatrix} \sigma_1^2 \\ \vdots \\ \sigma_k^2 \\ \vdots \\ \sigma_p^2 \end{bmatrix}, \qquad (A.24)$$
and

$$r_k = \hat{e}^*\, Q_y^{-1} Q_k\, Q_y^{-1}\, \hat{e}, \qquad N_{kl} = \operatorname{trace}\!\big(Q_y^{-1} P_B^{\perp} Q_k\, Q_y^{-1} P_B^{\perp} Q_l\big). \qquad (A.25)$$
The estimate of the variance components then follows as

$$\hat{\sigma} = N^{-1} r. \qquad (A.26)$$
Thus,

$$D\{r\} = 2N. \qquad (A.28)$$

The vc-matrix of the estimated components follows by application of the
propagation law of variances as

$$Q_{\hat{\sigma}} = N^{-1}\, D\{r\}\, N^{-1} = 2N^{-1}. \qquad (A.29)$$
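A minimal numerical sketch of this estimator (Python/NumPy; an illustration of Eqs. (A.25) and (A.26), not the software developed in this study). Because Q_y itself depends on the components, the estimate is iterated:

```python
import numpy as np

def estimate_variance_components(y, B, Qk, n_iter=5):
    """Iterative variance component estimation following Eqs. (A.25)
    and (A.26) (sketch).
    y  : m-vector of observations
    B  : m x n design matrix
    Qk : list of p cofactor matrices Q_k (m x m)"""
    p, m = len(Qk), len(y)
    sigma2 = np.ones(p)                      # initial components
    for _ in range(n_iter):                  # Qy depends on the components
        Qy = sum(s * Q for s, Q in zip(sigma2, Qk))
        Qyi = np.linalg.inv(Qy)
        P = np.eye(m) - B @ np.linalg.solve(B.T @ Qyi @ B, B.T @ Qyi)
        e = P @ y                            # least-squares residuals
        r = np.array([e @ Qyi @ Q @ Qyi @ e for Q in Qk])
        M = Qyi @ P
        N = np.array([[np.trace(M @ Qk[k] @ M @ Qk[l]) for l in range(p)]
                      for k in range(p)])
        sigma2 = np.linalg.solve(N, r)       # Eq. (A.26); may need clipping
    return sigma2                            # vc-matrix: 2*inv(N), cf. (A.29)
```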
B
Alternative Hypothesis Testing

An alternative hypothesis can be specified for, for example:
• A group of related observations. For example the point test, see Eq. (4.21).
• All observations. This is the overall model test, see Eq. (4.18).
• A specific model deviation. For example, significance of increasing the
degree of a polynomial displacement model.
The test statistic Tq can be used to decide whether the alternative hypothesis
is a significant extension of the null-hypothesis. It is given as (Teunissen,
2000b)
$$T_q = \hat{e}^*\, Q_y^{-1} C_q \big(C_q^*\, Q_y^{-1} Q_{\hat{e}}\, Q_y^{-1} C_q\big)^{-1} C_q^*\, Q_y^{-1}\, \hat{e}, \qquad (B.3)$$
where ê is the vector of least-squares residuals under the null-hypothesis. This
test statistic has a chi-squared distribution with q degrees of freedom under
the null-hypothesis. Among all possible tests possessing the same size of type-I
error, the test for which the type-II error is as small as possible should be
used. The test statistic T_q, cf. Eq. (B.3), is a consequence of this principle
(Teunissen, 2000b).
Note that only the size of the MDB can be computed, not its sign. To compare
the model errors that can be found by different alternative hypotheses, it is
required that all tests have the same power. Otherwise it would be difficult to
assess which alternative hypothesis to select. For example, consider the case
that a specific hypothesis is able to detect a model error of, say, 2 cm with a
probability γ1 = 60% while another hypothesis can detect another model error
of, say, 3 cm with γ2 = 90%. It cannot be decided which of these two
alternative hypotheses must be selected. Choosing the same power for all tests is the
essence of how tests of different dimensions are related to each other in the
B-method of testing, see (Baarda, 1968; Teunissen, 2000b). The non-centrality
parameter λ is the connection between the tests, enabling the computation of
appropriate testing parameters and the corresponding critical values. First,
the power γ0 for all tests and a value α1 for the level of significance for the
one-dimensional test is fixed, and the corresponding non-centrality parameter
λ0 is computed. Then, the level of significance for a test of dimension qi is
computed using the relation

$$\lambda(\alpha_{q_i},\, q = q_i,\, \gamma = \gamma_0) = \lambda(\alpha_1,\, q = 1,\, \gamma = \gamma_0) = \lambda_0.$$
Then, the corresponding critical value can be computed from the chi-squared
distribution using this level of significance. The use of equal values for the
non-centrality parameter λ = λ0 and power γ = γ0 in all tests implies that a
certain model error can be found with the same probability by all tests.
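A minimal sketch (Python/SciPy; not part of the developed software) of computing testing parameters according to the B-method: fix γ0 and α1, derive λ0, then the level of significance and critical value for each test dimension:

```python
from scipy.optimize import brentq
from scipy.stats import chi2, ncx2

def b_method(alpha1=0.05, gamma0=0.50, dims=(1, 2, 4)):
    """Testing parameters via the B-method (sketch). Fix the power
    gamma0 and the level of significance alpha1 of the one-dimensional
    test, derive the non-centrality lambda0, and from it the level of
    significance and critical value per dimension q."""
    k1 = chi2.ppf(1.0 - alpha1, df=1)
    # lambda0 such that the power of the 1-dim test equals gamma0
    lam0 = brentq(lambda lam: ncx2.sf(k1, 1, lam) - gamma0, 1e-6, 200.0)
    params = {}
    for q in dims:
        kq = ncx2.isf(gamma0, q, lam0)       # critical value with power gamma0
        params[q] = (chi2.sf(kq, df=q), kq)  # (alpha_q, critical value)
    return lam0, params
```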
For alternative hypotheses of equal dimension q, the most likely hypothesis is
the one with the largest test quotient, i.e., hypothesis k for which

$$\frac{T_q^k}{\chi_\alpha^2(q)} > \frac{T_q^l}{\chi_\alpha^2(q)}, \qquad \forall\, l \neq k. \qquad (B.12)$$
If the dimensions of the alternative hypotheses are not equal, then the most
likely alternative hypothesis does not necessarily have the largest test statistic
T_{q_i}^k, since the probability density functions of the test statistics, cf. Eq. (B.5),
are not the same. Therefore, to select the most likely alternative hypothesis,
the test quotient is a better criterion, confronting each test statistic with its
critical value. Test quotients with a value smaller than one do not have to
be considered, since the null-hypothesis is more likely for these alternative
hypotheses. For the remaining alternative hypotheses, it is assumed that the
most likely alternative hypothesis is the one rejected most strongly, i.e.,
that it has the largest test quotient. This assumption is only true when a
power γ0 ≤ 50% is used, as proven in (De Heus et al., 1994). The proof is
given below.
In the following, it is assumed that H_A^i is the arc test, and H_A^j is the point
test. The dimension of the alternative hypothesis H_A^j is thus larger than that
of H_A^i. Moreover, the point test is an extension of the arc test, i.e., the columns
of C_{q_i} are contained in C_{q_j}. Two cases can be distinguished:
1. H_A^i is the correct hypothesis (H_A^j is too relaxed).
   If the testing parameters are chosen according to the B-method of testing,
   i.e., λ(α_{q_i}, q = q_i, γ = γ0) = λ(α_{q_j}, q = q_j, γ = γ0), then both test statistics
   T_{q_i}^k and T_{q_j}^l are rejected with the same probability. Furthermore, if the choice
   is made for γ0 = 50%, then it is expected, when an error occurs of the
   size of the minimal detectable bias, that

$$\frac{T_{q_i}^k}{\chi_\alpha^2(q_i)} = \frac{T_{q_j}^l}{\chi_\alpha^2(q_j)} = 1. \qquad (B.13)$$
Since γ > γ0, this can only be the case if ᾱ_{q_j} > α_{q_j}, where ᾱ_{q_j} is the new
value for the level of significance. Therefore,

$$\chi_{\bar{\alpha}}^2(q_j) < \chi_{\alpha}^2(q_j). \qquad (B.15)$$
Since the test quotient is computed with the larger, original critical value χ²_α(q_j),
the test quotient of the lower dimensional alternative hypothesis is
expected to be larger than that of the higher dimensional one:

$$\frac{T_{q_i}^k}{\chi_\alpha^2(q_i)} > \frac{T_{q_j}^l}{\chi_\alpha^2(q_j)} > 1. \qquad (B.16)$$
The test quotient of an alternative hypothesis with a model error smaller
than the minimal detectable bias is always smaller than 1, which implies
that this alternative hypothesis should be rejected in favor of the null-
hypothesis.
2. H_A^j is the correct hypothesis (H_A^i is too narrow).
   Since the columns of C_{q_i} are contained in C_{q_j}, a model error ∇_j according to H_A^j implies that

$$\nabla_i^*\, C_{q_i}^* Q_y^{-1} Q_{\hat{e}} Q_y^{-1} C_{q_i}\, \nabla_i < \nabla_j^*\, C_{q_j}^* Q_y^{-1} Q_{\hat{e}} Q_y^{-1} C_{q_j}\, \nabla_j. \qquad (B.17)$$
If the model error has the size of the minimal detectable bias of H_A^j, i.e.,

$$\nabla_j^*\, C_{q_j}^* Q_y^{-1} Q_{\hat{e}} Q_y^{-1} C_{q_j}\, \nabla_j = \lambda(\alpha_{q_j},\, q = q_j,\, \gamma = 50\%), \qquad (B.18)$$

then

$$\frac{T_{q_j}}{\chi_\alpha^2(q_j)} = 1 > \frac{T_{q_i}}{\chi_\alpha^2(q_i)}. \qquad (B.19)$$
If the model error increases such that

$$\nabla_i^*\, C_{q_i}^* Q_y^{-1} Q_{\hat{e}} Q_y^{-1} C_{q_i}\, \nabla_i = \lambda(\alpha_{q_i},\, q = q_i,\, \gamma = 50\%) = \lambda(\alpha_{q_j},\, q = q_j,\, \gamma = 50\%), \qquad (B.20)$$

then

$$\frac{T_{q_j}}{\chi_\alpha^2(q_j)} > \frac{T_{q_i}}{\chi_\alpha^2(q_i)} = 1. \qquad (B.21)$$
If the model error ∇j increases more, the test quotient of the lower
dimensional test increases more than that of the hypothesis HAj . It could
even happen that the test quotient of HAi becomes larger than that of
HAj , depending on the values of the elements in the vector ∇j . This means
that it could happen that the wrong alternative hypothesis (the lower
dimensional) is selected, also when γ = 50%, particularly if some elements
of the model error are large. The chance that this occurs is reduced if the
dimension of HAj is only one larger than that of HAi . If the significance of
extension of a displacement model is tested, e.g., by increasing the degree
of an algebraic polynomial, the alternative hypotheses testing procedure
thus must be performed in small steps. The first alternative hypothesis
should specify an increase of the degree of the displacement model by
one, and if the null-hypothesis is rejected, this becomes the new null-
hypothesis, and a new alternative hypothesis is specified, again increasing
the degree of the displacement model by one.
Thus, for selection of the most likely alternative hypothesis among hypotheses
of unequal dimensions, it is mandatory that γ0 ≤ 50%. Since a larger power
enables detection of smaller errors, in (De Heus et al., 1994) the power is
chosen as γ0 = 50%. This choice implies that the minimal detectable bias of
an alternative hypothesis of dimension q can be interpreted as the error that
is just rejected (i.e., the expected test quotient is one).
However, if a larger value for γ0 is chosen, more often a higher dimensional
alternative hypothesis is selected, even when the lower dimensional alternative
hypothesis correctly specifies the model error. For the application of point tests
and arc tests this implies that points are removed from the reference network,
even when this would not be necessary. This is not a severe drawback, since it
is considered more important that the points that remain in the network are
correctly computed. In the developed software, the user can select whether to
do only arc tests, only point tests, or to do both. Only in the latter case is the
above of importance.
C
Used SAR Data
This appendix lists the ERS and ENVISAT data that are relevant to the
experiments performed during this study, see Chapter 6.
Table C.1: ERS data for the Berlin area (track 165, frame 2547). Parameters are
relative to the master acquisition, orbit 10039, acquired at 22–MAR–1997 10:03 am
(UTC). Data are sorted on acquisition time, except for the master image, which is
listed first.
Table C.2: ERS data for the Berlin area (track 437, frame 2547). Parameters are
relative to the master acquisition, orbit 18327, acquired at 22–OCT–1998 10:06 am
(UTC). Data are sorted on acquisition time, except for the master image, which is
listed first.
Table C.3: Data for the Las Vegas area (track 356, frame 2871). Parameters are
relative to the master acquisition, orbit 11232, acquired at 13–JUN–1997 18:22
(UTC). Data are sorted on acquisition time, except for the master image, which is
listed first.
D
Developed Software

Table D.1: Common keys to all programs. "prg" stands for the name of the
program.
D.1.2 Logging
Log messages are issued at several levels. If a higher minimum level is selected,
trace and debug information are suppressed. Typically, trace is used to
output the value of a variable, debug for more important variables and for
locating where something is executed in a program, info for user information
on the process and important parameters, warning for manageable unexpected
events, and alarm for fatal events. Furthermore, these levels are defined for
logging to the screen and to a file. The level for the screen is normally set equal
to or higher than that of the file in order to have more detailed information
available in the log file without disrupting the overview of the processing.
All data stored in binary files use an adapted form of the SUN raster file (SRF)
format. The SRF format is lossless and mainly used to store rectangular image
data. The data is written after a header as a raw binary stream in row major
order using big endian byte order. The header consists of eight 4 byte integers,
optionally followed by a color map. These integers specify the magic number,
width, height, depth, length, type, color map type, and color map length. The
width must be even. The standard is defined for a data depth of 1, 8, 24, and
32 bit. The type specifies the way the raster is stored.
The GENESIS format is an extension of the SRF format that enables
storing data of any type, including complex data, in big- and little-endian
byte order. The header contains additional information on the data type and
byte order, using bits that are not used in the SRF format. This format is
backward compatible with the SRF format for the old data types, except that
the data are allowed to be of odd width, which is correctly handled by most
image viewing software anyway.
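As an illustration, the header and raster can be read as follows (a minimal
Matlab sketch based on the description above; the file name is hypothetical
and the snippet assumes a color-mapped 8 bit file):

% Minimal sketch: read an SRF-style file as described above (the file
% name is hypothetical). The eight 4-byte header integers are read in
% big-endian byte order; the raster follows as a raw row-major stream.
fid = fopen('example.srf', 'r', 'ieee-be');
hdr = fread(fid, 8, 'int32');
magic   = hdr(1);  width   = hdr(2);  height  = hdr(3);  depth  = hdr(4);
datalen = hdr(5);  rastype = hdr(6);  maptype = hdr(7);  maplen = hdr(8);
cmap = fread(fid, maplen, 'uint8');          % optional color map
% read an 8 bit raster; transpose because fread fills column-wise while
% the file stores the raster in row-major order
data = reshape(fread(fid, width*height, 'uint8'), width, height).';
fclose(fid);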
D.1.4 Archiving
D.1.5 Implementation
Library functions handle argument parsing and i/o with the parameter pool
and binary files. Using these functions prevents programming mistakes and
guarantees conformity of the modules. Moreover, it decouples the modules
from the underlying realization: a change in a library function propagates to
all modules. For example, if the library function writes string variables to the
parameter pool without surrounding quotes, strings containing blanks are
awkward to handle. Updating the library definition of a string so that it is
quoted remedies this without the need to change the source code of every
individual module. The computing environment IDL is used at the DLR for algorithm
development. IDL is an interactive array-oriented language with numerous
routines for mathematical analysis and visualization. The possibility of inter-
active examination of the content of variables and graphical representation
allows for rapid prototyping. When a program is fully developed, the IDL
code is ported to C++ for performance reasons. Similar library functions for
argument parsing, parameter pool handling, and binary i/o are available in
IDL and C++.
Although IDL is an interactive language, it can be run in batch mode
from the UNIX prompt. This concept is used extensively to make the software
(quasi-)operational. Each program written in IDL has a (csh) wrapper script
that starts IDL and passes the command-line input to it. The syntax for
executing C++ and IDL code is therefore identical, and the user does not
notice a difference between the two, except in execution time.
[Figure: layout of the binary data storage, a matrix of K images by H points,
with an associated flag array for the points.]
The spare bits could be used in future versions to flag, e.g., points that do not
undergo significant displacement, interferograms that are corrected for phase
trends, etc.
D.2.2 Parallelization
The flag array for the points can be conveniently used to divide the workload
between several available processors. The first bit in the flag array is used
to signal that a point needs to be computed. Since the computations can
be performed independently of each other, each processor can be assigned its
own subset of the points.
Table D.2: Meaning of bits used in flag arrays. A flag array is a byte vector that
contains information on the usage of points or interferograms. A dash indicates that
a bit is undefined.
bit      1    2             3           4         5          6  7  8
point    use  ref. network  ref. point  accepted  unwrapped  –  –  –
image    use  –             –           –         –          –  –  –
Fig. D.2: High-level parallelization using the flag array for points. Example for two
CPUs. The points in the right half of the original flag array are flagged with a zero
(gray) in the flag array passed to the first CPU, and vice versa for the second CPU.
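The splitting of Fig. D.2 can be sketched in a few lines (a minimal Matlab
example; the bit layout follows Table D.2, and the array size is arbitrary):

% Minimal sketch of the splitting in Fig. D.2: bit 1 ("use", cf.
% Table D.2) is cleared for the points the other CPU will process.
flags = uint8(ones(1, 1000));                 % example: 1000 points, all "use"
half  = numel(flags)/2;
flags_cpu1 = flags;  flags_cpu1(half+1:end) = bitset(flags_cpu1(half+1:end), 1, 0);
flags_cpu2 = flags;  flags_cpu2(1:half)     = bitset(flags_cpu2(1:half),     1, 0);
todo_cpu1 = find(bitget(flags_cpu1, 1));      % indices computed by CPU 1
todo_cpu2 = find(bitget(flags_cpu2, 1));      % indices computed by CPU 2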
Estimation software
aps analyze Compute and plot covariance and structure functions.
aps blockfilt Spatial low-pass filter using an averaging kernel.
aps complexfilt Spatial low-pass complex filter.
aps low pass time Temporal low-pass filter.
baselineplot Plot the baseline distribution and select the master based on
the total coherence function.
calibrate analyze Plot histograms of amplitude of user selected region.
construct network Construction of the reference network.
correct dem Remove holes and invalid values from a DEM.
create base f General purpose utility to generate base functions.
estimate all points Estimate parameters at points with respect to the
reference network.
E
Software on the CDROM
E.1 Introduction
The software on the CDROM consists of the STUN Matlab Toolbox. The
fundamental capabilities of this toolbox are variance component estimation
and integer least-squares estimation. Furthermore, test data and demon-
stration scripts are provided that explain the key concepts and usage of
these functions. These functions are intended to demonstrate the practical
application of the theory described in this book. They serve as essential
building blocks in a persistent scatterer processing system. The ILS routines
are adapted from the original code distributed in the lambda toolbox, see
(Delft University of Technology, 2005). The changes make the code better
suited for PSI processing. Because the dimension of the problem is larger for
PSI (there are more ambiguities to fix than for GPS), the code is optimized
to handle frequent calls using the same mathematical model. Furthermore,
the input parsing is changed to be more consistent, and matrices that
previously were computed inside loops are now pre-computed. Optimization
for speed is also achieved by vectorization of loops and by assuming that, in
general, only the best candidate needs to be found during the ILS search.
Fig. E.1: Fractal atmospheric delay and topography generated by the phase
simulation program “simphi”. From within Matlab, type help simphi for more
information and ready-to-use examples.
Key Features
• Simulation scripts to generate data for a single master stack of differential
interferograms, according to the functional model given in section 2.2.1.
• Variance Component Estimation, described in section 4.3 and Appendix A,
using the stochastic model derived in section 2.2.2.
• Integer Least-Squares ambiguity resolution using the LAMBDA method
described in Chapter 3.
• Demonstration scripts that can be used to learn the basic call sequences
and can be customized for a specific implementation.
• Online help and back-references to this book to find more information.
Examples of the data that can be simulated with the program “simphi” are
shown in Fig. E.1. See Fig. E.2(a) for a screenshot of the demonstration
script “ilsdemo1db”. This script demonstrates the basics of ILS by simulating
a second-degree polynomial with random parameters and noise. The polynomial
coefficients are then estimated from the wrapped data, together with the
second-best fitting set of parameters. Fig. E.2(b) shows a screenshot of the
demonstration program that estimates the variance components using a stack
of simulated data. Help on these demonstration scripts can be obtained from
within Matlab.
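The idea behind the one-dimensional demonstrations can be illustrated with an
even simpler estimator (a Matlab sketch using a plain grid search over wrapped
data, i.e., the periodogram approach; this is not the toolbox ILS code):

% Conceptual sketch (grid search, not the toolbox ILS code): recover the
% slope of a line from wrapped, noisy phase observations by maximizing
% the ensemble coherence of the residual phase.
t     = (0:29).';                                        % sample positions
slope = 0.35;                                            % true slope [rad/sample]
phi   = mod(slope*t + 0.1*randn(30,1) + pi, 2*pi) - pi;  % wrapped observations
cand  = linspace(-pi, pi, 2001);                         % candidate slopes
coh   = arrayfun(@(a) abs(mean(exp(1i*(phi - a*t)))), cand);
[cmax, imax] = max(coh);
slope_hat = cand(imax)                                   % close to 0.35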
E.2 Installation
The program Matlab is required to run the routines and demonstrations. The
M-file scripts have been tested with Matlab version 5.3 and upward. To learn
more about Matlab, see (MathWorks, 2005).
To install the toolbox, only the directory stun on the CDROM needs to
be copied to your hard drive. After you have copied the toolbox, start Matlab.
Make sure that the toolbox can be found by adding the directory where you
copied it to the Matlab search path. For example, you can do this using the
command addpath. Type:
>> help addpath
for more information. After this, typing help stun should give:
>> help stun
Elementary Functions.
bs_success - Success rate of the bootstrap.
enscoh - Ensemble coherence.
plotarc - Plot arcs of network in color.
plotps - Plot PS points in color.
sparsify - Bin points in grid cells.
wrap - Wrap phase data.
Functional Model.
simacq - Simulate acquisition baselines.
simphi - Simulate ERS-like phase observations.
simpos - Simulate 2D positions.
Stochastic Model.
psivce - Variance Component Estimation for PSI.
Ambiguity Resolution.
ebs - Extended bootstrap fixed solution.
ils - Integer least-squares fixed solution.
ltdl - LTDL decomposition Q=L.’*D*L.
zt - Z-transformation (decorrelation).
Demonstrations.
ilsdemo1d - ILS estimation of the slope of wrapped line.
ilsdemo1db - ILS estimation of 2nd degree polynomial.
stundemo - Main demonstration script.
vcedemo - Variance Component Estimation.
Data Sets.
poly_good.mat - Example data set for ilsdemo1db.
poly_wrong.mat - Example data set for ilsdemo1db.
vcedemodat.mat - Reference results for vcedemo.
stundemodat1.mat - Fractal displacement example data set.
stundemodat2.mat - Example data with atmosphere.
After installation of the STUN toolbox, it is advised to first read the general
help as described in section E.2. To obtain the help of a specific function in
the STUN toolbox, type help followed by the M-file name, for example
>> help ils
gives a synopsis of the program “ils” and explains the input and output
variables. It also provides some short examples of how to run it.
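For example, the elementary functions wrap and enscoh implement standard
definitions, which can be sketched as follows (these one-liners are not the
toolbox implementations):

% Standard definitions behind two elementary functions listed above
% (a sketch, not the toolbox code).
wrapfun = @(phi) mod(phi + pi, 2*pi) - pi;     % wrap phase to [-pi, pi)
enscohf = @(phi) abs(mean(exp(1i*phi)));       % ensemble coherence
phi   = wrapfun(0.3*randn(100, 1));            % example: noisy residual phases
gamma = enscohf(phi)                           % close to 1 for small noise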
• People interested in variance component estimation are encouraged to first
run the program “vcedemo” and to inspect the code of this M-file.
• For those mainly interested in integer least-squares estimation, a good
starting point is to run the demonstration program “ilsdemo1d” and
follow the instructions on the screen.
The demonstration program “stundemo” combines these concepts, i.e., esti-
mation of variance components and estimation of DEM error and linear
displacement rate differences. Furthermore, this program creates a reference
network, and the estimated parameters at the arcs are integrated to obtain
them with respect to the reference point. The source code of this demonstra-
tion program can be extended for your own applications.
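The integration over the reference network can be sketched as follows (a
minimal Matlab example with a hypothetical three-point network and a single
parameter per arc):

% Minimal sketch: integrate estimated arc differences to point values
% relative to a reference point (hypothetical three-point network).
arcs = [1 2; 2 3; 1 3];            % from/to point indices; point 1 = reference
y    = [0.5; 0.2; 0.7];            % estimated differences along the arcs
A = zeros(size(arcs,1), 3);        % arc-point incidence matrix
for k = 1:size(arcs,1)
    A(k, arcs(k,1)) = -1;
    A(k, arcs(k,2)) = +1;
end
A(:,1) = [];                       % fix the reference point to zero
xhat = A \ y;                      % least-squares values of points 2 and 3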
References
Adam, N., Kampes, B. M. and Eineder, M.: 2004. The development of a sci-
entific persistent scatterer system: Modifications for mixed ERS/ENVISAT
time series. ENVISAT & ERS Symposium, Salzburg, Austria, 6–10 Septem-
ber, 2004. pp. 1–9 (cdrom).
Adam, N., Kampes, B. M., Eineder, M., Worawattanamateekul, J. and
Kircher, M.: 2003. The development of a scientific permanent scatterer
system. ISPRS Workshop High Resolution Mapping from Space, Hannover,
Germany, 2003. pp. 1–6 (cdrom).
Amelung, F., Galloway, D. L., Bell, J. W., Zebker, H. A. and Laczniak, R. J.
1999. Sensing the ups and downs of Las Vegas: InSAR reveals structural con-
trol of land subsidence and aquifer-system deformation. Geology 27(6), 483–
486.
Amelung, F., Jónsson, S., Zebker, H. and Segall, P. 2000. Widespread
uplift and ‘trapdoor’ faulting on Galápagos volcanoes observed with radar
interferometry. Nature 407(6807), 993–996.
Arnaud, A., Adam, N., Hanssen, R., Inglada, J., Duro, J., Closa, J. and
Eineder, M.: 2003. ASAR ERS interferometric phase continuity. Interna-
tional Geoscience and Remote Sensing Symposium, Toulouse, France, 21–25
July, 2003. pp. 1–3 (cdrom).
Arnaud, A., Closa, J., Hanssen, R., Adam, N., Eineder, M., Inglada, J.,
Fitoussi, G. and Kampes, B.: 2004. Development of algorithms for the
exploitation of ERS-Envisat using the stable points network. Technical
report. Altamira Information. Barcelona, Spain. European Space Agency
Study report ESA Contract Nr. 16702/02/I-LG.
Arrigoni, M., Colesanti, C., Ferretti, A., Perissin, D., Prati, C. and Rocca, F.:
2003. Identification of the location phase screen of ERS-ENVISAT perma-
nent scatterers. Third International Workshop on ERS SAR Interferometry
‘FRINGE03’, Frascati, Italy, 1–5 December, 2003. pp. 1–3 (cdrom).
Baarda, W.: 1968. A testing procedure for use in geodetic networks. Vol. 5 of
Publications on Geodesy. 2 edn. Netherlands Geodetic Commission. Delft.
Colesanti, C., Ferretti, A., Locatelli, R. and Savio, G.: 2003a. Multi-platform
permanent scatterers analysis: first results. Second GRSS/ISPRS Joint
Workshop on “Data Fusion and Remote Sensing over Urban Areas”, Berlin,
Germany, 22–23 May, 2003. pp. 52–56.
Colesanti, C., Ferretti, A., Novali, F., Prati, C. and Rocca, F. 2003b. SAR
monitoring of progressive and seasonal ground deformation using the
Permanent Scatterers Technique. IEEE Transactions on Geoscience and
Remote Sensing 41(7), 1685–1701.
Colesanti, C., Ferretti, A., Prati, C. and Rocca, F.: 2002. Full exploitation of
the ERS archive: Multi data set permanent scatterers analysis. International
Geoscience and Remote Sensing Symposium, Toronto, Canada, 24–28 June,
2002. pp. 1–3 (cdrom).
Colesanti, C., Ferretti, A., Prati, C. and Rocca, F. 2003c. Monitoring
landslides and tectonic motions with the Permanent Scatterers Technique.
Engineering Geology 68, 3–14.
Colesanti, C., Ferretti, A., Prati, C., Perissin, D. and Rocca, F.: 2003d. ERS-
ENVISAT Permanent Scatterers Interferometry. International Geoscience
and Remote Sensing Symposium, Toulouse, France, 21–25 July, 2003. Vol. 2.
pp. 1130–1132.
Costantini, M. 1998. A novel phase unwrapping method based on network
programming. IEEE Transactions on Geoscience and Remote Sensing
36(3), 813–821.
Costantini, M. and Rosen, P.: 1999. A generalized phase unwrapping approach
for sparse data. International Geoscience and Remote Sensing Symposium,
Hamburg, Germany, 28 June–2 July, 1999. pp. 1–3 (cdrom).
Cumming, I. and Wong, F.: 2005. Digital Processing of Synthetic Aperture
Radar Data: Algorithms and Implementation. Artech House Publishers.
New York. ISBN 1580530583.
Curlander, J. C. and McDonough, R. N.: 1991. Synthetic aperture radar:
systems and signal processing. John Wiley & Sons, Inc. New York.
De Heus, H. M., Joosten, P., Martens, M. H. F. and Verhoef, H. M. E.:
1994. Geodetische deformatie analyse: 1D-deformatieanalyse uit waterpas-
netwerken. Technical Report 5. Delft University of Technology, LGR Series.
Delft.
Delft University of Technology: 2005. Mathematical Geodesy and Positioning
web page. http://enterprise.lr.tudelft.nl/mgp/ (Accessed April, 2005).
Deutscher Wetterdienst: 2004. Deutscher Wetterdienst. http://www.dwd.de/
de/FundE/Klima/KLIS/daten/online/nat/ausgabe_monatswerte.htm
(Accessed March, 2004).
Eineder, M. 2003. Efficient simulation of SAR interferograms of large areas and
of rugged terrain. IEEE Transactions on Geoscience and Remote Sensing
41(6), 1415–1427.
Eineder, M. and Adam, N.: 1997. A flexible system for the generation of
interferometric SAR products. International Geoscience and Remote Sensing
Symposium, Singapore, 3–8 August, 1997.
Eineder, M. and Holzner, J.: 1999. Phase unwrapping of low coherence
differential interferograms. International Geoscience and Remote Sensing
Symposium, Hamburg, Germany, 28 June–2 July, 1999. pp. 1–4 (cdrom).
Elachi, C.: 1987. Introduction To The Physics and Techniques of Remote
Sensing. 2 edn. John Wiley & Sons. New York.
Evans, D. L., Price, J. L. and Barron, W. G.: 2000. Profiles of general de-
mographic characteristics; 2000 census of population and housing; Nevada.
Technical report. U.S. Census Bureau. http://www.census.gov/ (Accessed
March, 2004).
Farina, P.: 2003. Integration of permanent scatterers analysis and high
resolution optical images within landslide risk analysis. Third International
Workshop on ERS SAR Interferometry ‘FRINGE03’, Frascati, Italy, 1–5
December, 2003. pp. 1–3 (cdrom).
Fernandez, D. E., Meadows, P. J., Schaettler, B. and Mancini, P.: 1999. ERS
attitude errors and its impact on the processing of SAR data. CEOS SAR
Workshop, ESA-CNES, Toulouse, France, 26–29 October, 1999. pp. 1–9
(cdrom).
Ferretti, A., Prati, C. and Rocca, F. 1999a. Multibaseline InSAR DEM
reconstruction: The wavelet approach. IEEE Transactions on Geoscience
and Remote Sensing 37(2), 705–715.
Ferretti, A., Prati, C. and Rocca, F.: 1999b. Non-uniform motion monitoring
using the permanent scatterers technique. Second International Workshop
on ERS SAR Interferometry ‘FRINGE99’, Liège, Belgium, 10–12 Novem-
ber, 1999. ESA. pp. 1–6.
Ferretti, A., Prati, C. and Rocca, F.: 1999c. Permanent scatterers in SAR
interferometry. International Geoscience and Remote Sensing Symposium,
Hamburg, Germany, 28 June–2 July, 1999. pp. 1–3.
Ferretti, A., Prati, C. and Rocca, F. 2000a. Nonlinear subsidence rate
estimation using permanent scatterers in differential SAR interferometry.
IEEE Transactions on Geoscience and Remote Sensing 38(5), 2202–2212.
Ferretti, A., Prati, C. and Rocca, F.: 2000b. Process for radar measurements of
the movement of city areas and landsliding zones. International Application
Published under the Patent Cooperation Treaty (PCT).
Ferretti, A., Prati, C. and Rocca, F. 2001. Permanent scatterers in SAR
interferometry. IEEE Transactions on Geoscience and Remote Sensing
39(1), 8–20.
Freeman, A. 1992. SAR calibration: An overview. IEEE Transactions on
Geoscience and Remote Sensing 30(6), 1107–1121.
Gatelli, F., Monti Guarnieri, A., Parizzi, F., Pasquali, P., Prati, C. and Rocca,
F. 1994. The wavenumber shift in SAR interferometry. IEEE Transactions
on Geoscience and Remote Sensing 32(4), 855–865.
Usai, S. and Klees, R. 1999. SAR interferometry on very long time scale:
A study of the interferometric characteristics of man-made features. IEEE
Transactions on Geoscience and Remote Sensing 37(4), 2118–2123.
Van der Kooij, M. W. A.: 2003. Coherent target analysis. Third International
Workshop on ERS SAR Interferometry ‘FRINGE03’, Frascati, Italy, 1–5
December, 2003.
Van der Kooij, M. W. A. and Lambert, A.: 2002. Results of processing
and analysis of large volumes of repeat-pass InSAR data of Vancouver
and Mount Meager (B.C.). International Geoscience and Remote Sensing
Symposium, Toronto, Canada, 24–28 June, 2002.
Verhoef, H. M. E.: 1997. Geodetische deformatie analyse. Lecture notes, Delft
University of Technology, Faculty of Geodetic Engineering, in Dutch.
Walter, D., Hoffmann, J., Kampes, B. and Sroka, A.: 2004. Radar interfero-
metric analysis of mining induced surface subsidence using permanent scat-
terer. ENVISAT & ERS Symposium, Salzburg, Austria, 6–10 September,
2004. pp. 1–8 (cdrom).
Wegmüller, U.: 2003. Potential of interferometry point target analysis using
small data stacks. Third International Workshop on ERS SAR Interferom-
etry ‘FRINGE03’, Frascati, Italy, 1–5 December, 2003. pp. 1–3 (cdrom).
Werner, C., Wegmüller, U., Strozzi, T. and Wiesmann, A.: 2003. Interferomet-
ric point target analysis for deformation mapping. International Geoscience
and Remote Sensing Symposium, Toulouse, France, 21–25 July, 2003. pp. 1–3
(cdrom).
Yong, Y., Chao, W., Hong, Z., Zhi, L. and Xin, G.: 2002. A phase unwrapping
method based on minimum cost flows method in irregular network. Interna-
tional Geoscience and Remote Sensing Symposium, Toronto, Canada, 24–28
June, 2002.
Zebker, H. A. and Lu, Y. 1998. Phase unwrapping algorithms for radar
interferometry: residue-cut, least-squares, and synthesis algorithms. Journal
of the Optical Society of America A 15(3), 586–598.
Zebker, H. A. and Villasenor, J. 1992. Decorrelation in interferometric radar
echoes. IEEE Transactions on Geoscience and Remote Sensing 30(5), 950–
959.
Zebker, H. A., Rosen, P. A., Goldstein, R. M., Gabriel, A. and Werner, C. L.
1994. On the derivation of coseismic displacement fields using differential
radar interferometry: The Landers earthquake. Journal of Geophysical
Research 99(B10), 19617–19634.
About the Author
Bert Kampes (1974, The Netherlands) received the Master of Science
degree from the department of Geodetic Engineering at Delft University of
Technology in 1998, and the PhD degree from the department of Aerospace
Engineering in 2005. His MSc graduation work focused on physical geodesy,
particularly on different methods for the estimation of the spherical harmonic
coefficients that describe the gravity field of the earth. He worked as a
researcher at Delft University of Technology from 1998 to 2001. During this
time he developed the “Delft Object-oriented Radar Interferometric Software”
(Doris). This software is in the public domain and is used worldwide for the
generation of Digital Elevation Models and deformation maps. In 2001 he
started his PhD research on the application of the Permanent Scatterer
technique, with the aim of incorporating geodetic methodology into it in
order to increase its reliability. In
the same year he started working at the Remote Sensing Technology Institute
of the German Aerospace Center (DLR) as a project scientist to develop
algorithms for this technique, and to integrate them in the existing operational
radar interferometric processing chain. At present, he is with Vexcel Canada
Inc., located in Ottawa, Canada. His research interests are the application of
Persistent Scatterer techniques of radar interferometry, the integration with
optical data and Geographical Information Systems (GIS), and structured
software development.
Nomenclature
List of acronyms
am Ante meridiem
APS Atmospheric Phase Screen
APSA Advanced Permanent Scatterer Analysis
ASCII American Standard Code for Information Interchange
C Computer language
C++ Computer language
C-band Frequency band with wavelength ∼6 cm
COSMO–SkyMed Italian next generation radar satellite constellation
CPU Central Processor Unit
csh C-shell (a UNIX shell)
CVS Concurrent Versions System
cycle Normalized phase difference (φ / (2π))
D/A Digital/Analog
DEM Digital elevation model
DIA Detection, Identification and Adaptation (alternative
hypotheses testing procedure)
DLR Deutsches Zentrum für Luft- und Raumfahrt e.V.
(German Aerospace Center)
DLR-IMF DLR-Institut für Methodik der Fernerkundung
(Remote Sensing Technology Institute)
DSM Digital Surface Model
DTM Digital Terrain Model
ENVISAT Environmental Satellite (ESA)
ERS–1 First European Remote Sensing satellite (ESA)
ERS–2 Second European Remote Sensing satellite (ESA)
ESA European Space Agency
EW East-West
FORTRAN FORmula TRANslation (computer language)
fringe Phase difference of 2π
List of symbols
′ Minute
° Degree
∈ Element of
[a, b) Half-open interval {x | a ≤ x < b}
⊗ Kronecker tensor product
δ_{l,0} Kronecker symbol (δ_{l,m} = 1 for l = m, 0 otherwise)
ℑ{.} Imaginary part
ℜ{.} Real part
j Imaginary unit (j² = −1)
∠c Phase of complex number c
|.| Absolute value; Amplitude of complex number
‖.‖ Norm
{.}⁻¹ Inversion
{.}* Transposition
C{.} Covariance
D{.} Dispersion
E{.} Expectation
exp(p) Irrational number e (2.71828…) to the power p
ln(p) Natural logarithm log_e p
trace(.) Trace
W {.} Wrap
ξ Azimuth coordinate
η Range coordinate
λ Wavelength of carrier signal; Non-centrality
parameter
γ Complex coherence; Power of test
σ⁰ Normalized radar cross section
Δσ Radar cross section
θ_x^k Local look angle (viewing angle, off-nadir angle)
θ_{x,inc}^k Local incidence angle
ϑ_x^k Local squint angle
B_{⊥,x}^k Local perpendicular baseline
D_a Amplitude dispersion index
f_{dc,x}^k Local Doppler centroid frequency
H_amb Height ambiguity
k Sensor in orbit, corresponding position, and/or time
m Master sensor; Number of observations
r_x^k Geometric slant range from sensor k to point x
T^k Temporal baseline
v Sensor velocity
x Observed point
α_d Coefficient of displacement base function d
β_x^k Local height-to-phase conversion factor
Δh_x DEM error (height above reference surface)
Δr_x^{1,0} Line-of-sight deformation at x in the time interval
t_0 – t_1
d(T , x, y) Spatio-temporal displacement function
f (T ) Temporal displacement function
g(x, y) Spatial displacement function
H Number of PSC points
K Number of interferograms
p_d(t) Displacement base function
S_x^k Atmospheric delay
ϕ Phase in SLC image
φ Wrapped interferometric phase
Φ Unwrapped interferometric phase
φ_atmo Phase induced by atmospheric delay
φ_topo Interferometric phase caused by (uncompensated)
topography
φ_defo Interferometric phase caused by displacement
1. A. Stein, F. van der Meer and B. Gorte (eds.): Spatial Statistics for Remote Sensing.
1999 ISBN: 0-7923-5978-X
2. R.F. Hanssen: Radar Interferometry. Data Interpretation and Error Analysis. 2001
ISBN: 0-7923-6945-9
3. A.I. Kozlov, L.P. Ligthart and A.I. Logvin: Mathematical and Physical Modelling
of Microwave Scattering and Polarimetric Remote Sensing. Monitoring the Earth’s
Environment Using Polarimetric Radar: Formulation and Potential Applications.
2001 ISBN: 1-4020-0122-3
4. F. van der Meer and S.M. de Jong (eds.): Imaging Spectrometry. Basic Principles and
Prospective Applications. 2001 ISBN: 1-4020-0194-0
5. S.M. de Jong and F.D. van der Meer (eds.): Remote Sensing Image Analysis. Including
the Spatial Domain. 2004 ISBN: 1-4020-2559-9
6. G. Gutman, A.C. Janetos, C.O. Justice, E.F. Moran, J.F. Mustard, R.R. Rindfuss,
D. Skole, B.L. Turner II, M.A. Cochrane (eds.): Land Change Science. Observing,
Monitoring and Understanding Trajectories of Change on the Earth’s Surface. 2004
ISBN: 1-4020-2561-0
7. R.L. Miller, C.E. Del Castillo and B.A. McKee (eds.): Remote Sensing of Coastal
Aquatic Environments. Technologies, Techniques and Applications. 2005
ISBN: 1-4020-3099-1
8. J. Behari: Microwave Dielectric Behaviour of Wet Soils. 2005
ISBN 1-4020-3271-4
9. L.L. Richardson and E.F. LeDrew (eds.): Remote Sensing of Aquatic Coastal Ecosys-
tem Processes. Science and Management Applications. 2006 ISBN 1-4020-3967-0
10. To be published
11. To be published
12. B.M. Kampes: Radar Interferometry. Persistent Scatterer Technique. 2006
ISBN 1-4020-4576-X
springer.com