
Inverse problem

An inverse problem in science is the process of calculating from a set of observations the causal factors
that produced them: for example, calculating an image in X-ray computed tomography, source
reconstruction in acoustics, or calculating the density of the Earth from measurements of its gravity field. It
is called an inverse problem because it starts with the effects and then calculates the causes. It is the inverse
of a forward problem, which starts with the causes and then calculates the effects.

Inverse problems are among the most important mathematical problems in science because they tell us about
parameters that we cannot directly observe. They have wide application in
system identification, optics, radar, acoustics, communication theory, signal processing, medical imaging,
computer vision,[1][2] geophysics, oceanography, astronomy, remote sensing, natural language processing,
machine learning,[3] nondestructive testing, slope stability analysis[4] and many other fields.

History
Starting with the effects to discover the causes has concerned physicists for centuries. A historical example
is the calculation by Adams and Le Verrier which led to the discovery of Neptune from the perturbed
trajectory of Uranus. However, a formal study of inverse problems was not initiated until the 20th century.

One of the earliest examples of a solution to an inverse problem was discovered by Hermann Weyl and
published in 1911, describing the asymptotic behavior of eigenvalues of the Laplace–Beltrami operator.[5]
Today known as Weyl's law, it is perhaps most easily understood as an answer to the question of whether it
is possible to hear the shape of a drum. Weyl conjectured that the eigenfrequencies of a drum would be
related to the area and perimeter of the drum by a particular equation, a result improved upon by later
mathematicians.

The field of inverse problems was later touched on by the Soviet-Armenian physicist Viktor
Ambartsumian.[6][7]

While still a student, Ambartsumian thoroughly studied the theory of atomic structure, the formation of
energy levels, and the Schrödinger equation and its properties, and when he mastered the theory of
eigenvalues of differential equations, he pointed out the apparent analogy between discrete energy levels
and the eigenvalues of differential equations. He then asked: given a family of eigenvalues, is it possible to
find the form of the equations whose eigenvalues they are? Essentially Ambartsumian was examining the
inverse Sturm–Liouville problem, which dealt with determining the equations of a vibrating string. This
paper was published in 1929 in the German physics journal Zeitschrift für Physik and remained in obscurity
for a rather long time. Describing this situation after many decades, Ambartsumian said, "If an astronomer
publishes an article with a mathematical content in a physics journal, then the most likely thing that will
happen to it is oblivion."

Nonetheless, toward the end of the Second World War, this article, written by the 20-year-old
Ambartsumian, was found by Swedish mathematicians and formed the starting point for a whole area of
research on inverse problems, becoming the foundation of an entire discipline.

Important efforts were then devoted to a "direct solution" of the inverse scattering problem, especially
by Gelfand and Levitan in the Soviet Union.[8] They proposed an analytic constructive method for
determining the solution. When computers became available, some authors investigated the possibility
of applying their approach to similar problems such as the inverse problem in the 1D wave equation. But it
rapidly turned out that the inversion is an unstable process: noise and errors can be tremendously amplified,
making a direct solution hardly practicable. Then, around the 1970s, the least-squares and probabilistic
approaches came in and turned out to be very helpful for determining the parameters involved in
various physical systems. This approach met with considerable success. Nowadays inverse problems are also
investigated in fields outside physics, such as chemistry, economics, and computer science. Eventually, as
numerical models become prevalent in many parts of society, we may expect an inverse problem to be associated
with each of these numerical models.

Conceptual understanding
Since Newton, scientists have extensively attempted to model the world. In particular, when a mathematical
model is available (for instance, Newton's gravitational law or Coulomb's equation for electrostatics), we
can foresee, given some parameters that describe a physical system (such as a distribution of mass or a
distribution of electric charges), the behavior of the system. This approach is known as mathematical
modeling and the above-mentioned physical parameters are called the model parameters or simply the
model. To be precise, we introduce the notion of state of the physical system: it is the solution of the
mathematical model's equation. In optimal control theory, these equations are referred to as the state
equations. In many situations we are not truly interested in knowing the physical state but just its effects on
some objects (for instance, the effects the gravitational field has on a specific planet). Hence we have to
introduce another operator, called the observation operator, which converts the state of the physical
system (here the predicted gravitational field) into what we want to observe (here the movements of the
considered planet). We can now introduce the so-called forward problem, which consists of two steps:

determination of the state of the system from the physical parameters that describe it
application of the observation operator to the estimated state of the system so as to predict
the behavior of what we want to observe.

This leads to the introduction of another operator $F$ ($F$ stands for "forward"), which maps model parameters $p$ into
$F(p)$, the data that the model predicts as the result of this two-step procedure. The operator $F$ is called the
forward operator or the forward map. In this approach we basically attempt to predict the effects
knowing the causes.
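
As an illustration, the following Python sketch shows the two-step structure of the forward problem. The "state equation" (a simple smoothing), the observation locations and all function names are hypothetical and chosen only for concreteness.

```python
import numpy as np

def solve_state(p):
    """Toy 'state equation': the state is a smoothed version of the parameters."""
    kernel = np.array([0.25, 0.5, 0.25])
    return np.convolve(p, kernel, mode="same")

def observe(state, obs_indices):
    """Toy observation operator: sample the state at a few locations."""
    return state[obs_indices]

def forward(p, obs_indices):
    """Forward map F: model parameters -> predicted data (state solve + observation)."""
    return observe(solve_state(p), obs_indices)

p = np.array([0.0, 1.0, 2.0, 1.0, 0.0])   # hypothetical model parameters
d_pred = forward(p, obs_indices=[1, 3])   # data predicted by the model
print(d_pred)
```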

The table below shows, with the Earth considered as the physical system and for different physical
phenomena, the model parameters that describe the system, the physical quantity that describes the state of
the physical system, and the observations commonly made on that state.

Governing equations | Model parameters (input of the model) | State of the physical system | Common observations on the system
Newton's law of gravity | Distribution of mass | Gravitational field | Measurements made by gravimeters at different surface locations
Maxwell's equations | Distribution of magnetic susceptibility | Magnetic field | Magnetic field measured at different surface locations by magnetometers (case of a steady state)
Wave equation | Distribution of wave-speeds and densities | Wave-field caused by artificial or natural seismic sources | Particle velocity measured by seismometers placed at different surface locations
Diffusion equation | Distribution of the diffusion coefficient | Diffusing material concentration as a function of space and time | Monitoring of this concentration measured at different locations
In the inverse problem approach we, roughly speaking, try to know the causes given the effects.

General statement of the inverse problem


The inverse problem is the "inverse" of the forward problem: instead of determining the data produced by
particular model parameters, we want to determine the model parameters that produce the data that is
the observation we have recorded (the subscript obs stands for observed). Our goal, in other words, is to
determine the model parameters such that (at least approximately)

where is the forward map. We denote by the (possibly infinite) number of model parameters, and by
the number of recorded data. We introduce some useful concepts and the associated notations that will
be used below:

The space of models, denoted by $P$: the vector space spanned by the model parameters; it has $M$
dimensions;
The space of data, denoted by $D$: $D = \mathbb{R}^N$ if we organize the measured samples in a vector
with $N$ components (if our measurements consist of functions, $D$ is a vector space with
infinite dimensions);
$F(p)$: the response of model $p$; it consists of the data predicted by model $p$;
$F(P)$: the image of $P$ by the forward map; it is a subset of $D$ (but not a subspace unless $F$ is
linear) made of the responses of all models;
$d_\text{obs} - F(p)$: the data misfits (or residuals) associated with model $p$: they can be arranged
as a vector, an element of $D$.

The concept of residuals is very important: when searching for a model that matches the data, their
analysis reveals whether the considered model can be regarded as realistic. Systematic, unrealistic
discrepancies between the data and the model responses also reveal that the forward map is inadequate and
may give insights into an improved forward map.

When the operator $F$ is linear, the inverse problem is linear. Otherwise, as is most often the case, the inverse problem
is nonlinear. Also, models cannot always be described by a finite number of parameters. This is the case when
we look for distributed parameters (a distribution of wave-speeds for instance): in such cases the goal of the
inverse problem is to retrieve one or several functions. Such inverse problems are inverse problems of
infinite dimension.

Linear inverse problems


In the case of a linear forward map and when we deal with a finite number of model parameters, the
forward map can be written as a linear system

$$d = F p,$$

where $F$ is the matrix that characterizes the forward map.

An elementary example: Earth's gravitational field


Only a few physical systems are actually linear with respect to the model parameters. One such system
from geophysics is that of the Earth's gravitational field. The Earth's gravitational field is determined by the
density distribution of the Earth in the subsurface. Because the lithology of the Earth changes quite
significantly, we are able to observe minute differences in the Earth's gravitational field on the surface of the
Earth. From our understanding of gravity (Newton's Law of Gravitation), we know that the mathematical
expression for gravity is:

$$d = \frac{G\, p}{r^2},$$

where $d$ is a measure of the local gravitational acceleration, $G$ is the universal gravitational constant, $p$ is the
local mass (which is related to density) of the rock in the subsurface, and $r$ is the distance from the mass to
the observation point.

By discretizing the above expression, we are able to relate the discrete data observations on the surface of
the Earth to the discrete model parameters (density) in the subsurface that we wish to know more about. For
example, consider the case where we have measurements carried out at 5 locations on the surface of the
Earth. In this case, our data vector $d$ is a column vector of dimension (5×1): its $i$-th component is
associated with the $i$-th observation location. We also know that we only have five unknown masses $p_j$ in
the subsurface (unrealistic but used to demonstrate the concept) with known location: we denote by $r_{ij}$ the
distance between the $i$-th observation location and the $j$-th mass. Thus, we can construct the linear system
relating the five unknown masses to the five data points as follows:

$$d = F p, \qquad \text{with } F_{ij} = \frac{G}{r_{ij}^2}.$$

To solve for the model parameters that fit our data, we might be able to invert the matrix $F$ to directly
convert the measurements into our model parameters. For example:

$$p = F^{-1} d_\text{obs}.$$
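
The following Python sketch builds such a system for a hypothetical geometry (station positions, burial depth, masses and units are all made up) and inverts it directly; it is only meant to make the 5×5 construction concrete.

```python
import numpy as np

# Hypothetical geometry (arbitrary units): 5 surface stations and 5 buried point masses.
G = 6.674e-11                                              # gravitational constant
stations = np.array([[float(x), 0.0] for x in range(5)])   # station positions (x, depth 0)
masses_xy = np.array([[x + 0.5, 1.0] for x in range(5)])   # point masses buried at depth 1
true_masses = np.array([1.0, 2.0, 1.5, 0.5, 3.0])          # the unknown masses p (made up)

# Forward matrix F with entries F[i, j] = G / r_ij**2 (r_ij: station i to mass j).
r = np.linalg.norm(stations[:, None, :] - masses_xy[None, :, :], axis=2)
F = G / r**2

d_obs = F @ true_masses              # synthetic "observed" accelerations
p_rec = np.linalg.solve(F, d_obs)    # direct inversion, only possible because F is 5x5
print(np.round(p_rec, 6))            # recovers the five masses
```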

A system with five equations and five unknowns is a very specific situation: our example was designed to
end up with this specificity. In general, the numbers of data and unknowns are different, so that matrix $F$ is
not square.

However, even a square matrix can have no inverse: matrix $F$ can be rank deficient (i.e. it has zero
eigenvalues) and the solution of the system $F p = d_\text{obs}$ is then not unique. The solution of the inverse
problem will then be undetermined. This is a first difficulty. Over-determined systems (more equations than
unknowns) have other issues. Also, noise may corrupt our observations, making $d_\text{obs}$ possibly outside the space
$F(P)$ of possible responses to model parameters, so that a solution of the system $F p = d_\text{obs}$ may not
exist. This is another difficulty.

Tools to overcome the first difficulty

The first difficulty reflects a crucial problem: Our observations do not contain enough information and
additional data are required. Additional data can come from physical prior information on the parameter
values, on their spatial distribution or, more generally, on their mutual dependence. It can also come from
other experiments: For instance, we may think of integrating data recorded by gravimeters and
seismographs for a better estimation of densities. The integration of this additional information is basically a
problem of statistics. This discipline is the one that can answer the question: How to mix quantities of
different nature? We will be more precise in the section "Bayesian approach" below.

Concerning distributed parameters, prior information about their spatial distribution often consists of
information about some derivatives of these distributed parameters. Also, it is common practice, although
somewhat artificial, to look for the "simplest" model that reasonably matches the data. This is usually
achieved by penalizing the $L^1$ norm of the gradient (or the total variation) of the parameters (this approach
is also referred to as the maximization of the entropy). One can also make the model simple through a
parametrization that introduces degrees of freedom only when necessary.

Additional information may also be integrated through inequality constraints on the model parameters or
some functions of them. Such constraints are important to avoid unrealistic values for the parameters
(negative values for instance). In this case, the space spanned by the model parameters will no longer be a
vector space but a subset of admissible models, denoted by $P_\text{adm}$ in the sequel.

Tools to overcome the second difficulty

As mentioned above, noise may be such that our measurements are not the image of any model, so that we
cannot look for a model that produces the data but rather look for the best (or optimal) model: that is, the
one that best matches the data. This leads us to minimize an objective function, namely a functional that
quantifies how big the residuals are or how far the predicted data are from the observed data. Of course,
when we have perfect data (i.e. no noise) then the recovered model should fit the observed data perfectly. A
standard objective function $\varphi$ is of the form

$$\varphi(p) = \| d_\text{obs} - F(p) \|^2,$$

where $\|\cdot\|$ is the Euclidean norm (it
will be the $L^2$ norm when the measurements are functions instead of samples) of the residuals. This
approach amounts to making use of ordinary least squares, an approach widely used in statistics. However,
the Euclidean norm is known to be very sensitive to outliers: to avoid this difficulty we may think of using
other distances, for instance the $L^1$ norm, in place of the $L^2$ norm.
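
A tiny numerical illustration of this sensitivity, with made-up residual values:

```python
import numpy as np

# How a single outlier affects the two misfits (numbers are purely illustrative).
residuals = np.array([0.1, -0.2, 0.05, 10.0])    # the last datum is an outlier

l2_misfit = np.sum(residuals**2)       # ordinary least squares: dominated by the outlier
l1_misfit = np.sum(np.abs(residuals))  # L1 misfit: much less sensitive to it
print(l2_misfit, l1_misfit)            # ~100.05 vs 10.35
```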

Bayesian approach

Very similar to the least-squares approach is the probabilistic approach: If we know the statistics of the
noise that contaminates the data, we can think of seeking the most likely model $p$, which is the model that
matches the maximum likelihood criterion. If the noise is Gaussian, the maximum likelihood criterion
appears as a least-squares criterion, the Euclidean scalar product in data space being replaced by a scalar
product involving the covariance of the noise. Also, should prior information on model parameters be
available, we could think of using Bayesian inference to formulate the solution of the inverse problem. This
approach is described in detail in Tarantola's book.[9]
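
A minimal sketch of the Gaussian-noise case, under assumed sizes and noise levels: maximum likelihood then reduces to a weighted least-squares problem in which the inverse noise covariance weights the data.

```python
import numpy as np

# Gaussian noise with known covariance C_d: maximum likelihood minimizes
#   (d_obs - F p)^T C_d^{-1} (d_obs - F p),
# whose solution satisfies F^T C_d^{-1} F p = F^T C_d^{-1} d_obs.
rng = np.random.default_rng(0)
F = rng.normal(size=(20, 3))                    # hypothetical linear forward map
p_true = np.array([1.0, -2.0, 0.5])
sigma = np.array([0.1] * 10 + [1.0] * 10)       # half of the data are much noisier
C_d = np.diag(sigma**2)
d_obs = F @ p_true + rng.normal(scale=sigma)

W = np.linalg.inv(C_d)                          # data weights (inverse covariance)
p_ml = np.linalg.solve(F.T @ W @ F, F.T @ W @ d_obs)
print(p_ml)                                     # close to p_true; noisy data are downweighted
```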

Numerical solution of our elementary example

Here we make use of the Euclidean norm to quantify the data misfits. As we deal with a linear inverse
problem, the objective function is quadratic. For its minimization, it is classical to compute its gradient using
the same rationale as we would to minimize a function of only one variable. At the optimal model $p_\text{opt}$,
this gradient vanishes, which can be written as:

$$\nabla\varphi(p_\text{opt}) = 2\, F^{T} \left( F p_\text{opt} - d_\text{obs} \right) = 0,$$

where $F^{T}$ denotes the matrix transpose of $F$. This equation simplifies to:

$$F^{T} F\, p_\text{opt} = F^{T} d_\text{obs}.$$

This expression is known as the normal equation (https://en.wikipedia.org/?title=Normal_equations&redirect=no) and gives us a possible solution to the inverse problem. In our example the matrix $F^{T} F$ turns out to be
generally full rank, so that the equation above makes sense and determines the model parameters uniquely:
we do not need to integrate additional information to end up with a unique solution.
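
A minimal numerical sketch of the normal equations, with an assumed overdetermined random forward matrix (8 data, 5 unknowns); np.linalg.lstsq is shown only as a numerically safer cross-check.

```python
import numpy as np

rng = np.random.default_rng(1)
F = rng.normal(size=(8, 5))                      # hypothetical forward matrix
p_true = rng.normal(size=5)
d_obs = F @ p_true + 0.01 * rng.normal(size=8)   # slightly noisy data

p_normal = np.linalg.solve(F.T @ F, F.T @ d_obs)      # normal equations F^T F p = F^T d_obs
p_lstsq, *_ = np.linalg.lstsq(F, d_obs, rcond=None)   # equivalent least-squares solver
print(np.allclose(p_normal, p_lstsq))                 # True: both give the least-squares solution
```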

Mathematical and computational aspects

Inverse problems are typically ill-posed, as opposed to the well-posed problems usually met in
mathematical modeling. Of the three conditions for a well-posed problem suggested by Jacques Hadamard
(existence, uniqueness, and stability of the solution or solutions) the condition of stability is most often
violated. In the sense of functional analysis, the inverse problem is represented by a mapping between
metric spaces. While inverse problems are often formulated in infinite dimensional spaces, limitations to a
finite number of measurements, and the practical consideration of recovering only a finite number of
unknown parameters, may lead to the problems being recast in discrete form. In this case the inverse
problem will typically be ill-conditioned. In these cases, regularization may be used to introduce mild
assumptions on the solution and prevent overfitting. Many instances of regularized inverse problems can be
interpreted as special cases of Bayesian inference.[10]
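
A two-parameter toy example (entirely made up) of such ill-conditioning: a small singular value of the forward matrix turns a small data perturbation into a large model error, which a mild regularization term damps.

```python
import numpy as np

# Toy forward matrix with condition number 1e6 (all numbers are illustrative).
F = np.array([[1.0, 0.0],
              [0.0, 1e-6]])
p_true = np.array([1.0, 1.0])
noise = np.array([0.0, 1e-4])          # small perturbation of the data

d_obs = F @ p_true + noise
p_naive = np.linalg.solve(F, d_obs)    # the noise along the small singular value explodes
print(p_naive)                         # [1., 101.]: an error of 100 from a 1e-4 perturbation

alpha = 1e-6                           # regularization weight (an arbitrary choice)
p_reg = np.linalg.solve(F.T @ F + alpha * np.eye(2), F.T @ d_obs)
print(p_reg)                           # second component damped toward 0 instead of blowing up
```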

Numerical solution of the optimization problem

Some inverse problems have a very simple solution, for instance, when one has a set of unisolvent
functions, meaning a set of functions such that evaluating them at distinct points yields a set of linearly
independent vectors. This means that given a linear combination of these functions, the coefficients can be
computed by arranging the vectors as the columns of a matrix and then inverting this matrix. The simplest
example of unisolvent functions is polynomials constructed, using the unisolvence theorem, so as to be
unisolvent. Concretely, this is done by inverting the Vandermonde matrix. But this is a very specific situation.

In general, the solution of an inverse problem requires sophisticated optimization algorithms. When the
model is described by a large number of parameters (the number of unknowns involved in some diffraction
tomography applications can reach one billion), solving the linear system associated with the normal
equations can be cumbersome. The numerical method to be used for solving the optimization problem
depends in particular on the cost required for computing the solution of the forward problem. Once the
appropriate algorithm for solving the forward problem has been chosen (a straightforward matrix-vector
multiplication may not be adequate when matrix $F$ is huge), the appropriate algorithm for carrying out the
minimization can be found in textbooks dealing with numerical methods for the solution of linear systems
and for the minimization of quadratic functions (see for instance Ciarlet[11] or Nocedal[12]).

Also, the user may wish to add physical constraints to the models: In this case, they have to be familiar with
constrained optimization methods, a subject in itself. In all cases, computing the gradient of the objective
function often is a key element for the solution of the optimization problem. As mentioned above,
information about the spatial distribution of a distributed parameter can be introduced through the
parametrization. One can also think of adapting this parametrization during the optimization.[13]

Should the objective function be based on a norm other than the Euclidean norm, we have to leave the area
of quadratic optimization. As a result, the optimization problem becomes more difficult. In particular, when
the $L^1$ norm is used for quantifying the data misfit the objective function is no longer differentiable: its
gradient does not make sense any longer. Dedicated methods (see for instance Lemaréchal[14]) from
nondifferentiable optimization are then required.

Once the optimal model is computed we have to address the question: "Can we trust this model?" The
question can be formulated as follows: How large is the set of models that match the data "nearly as well"
as this model? In the case of quadratic objective functions, this set is contained in a hyper-ellipsoid, a subset
of $\mathbb{R}^M$ ($M$ is the number of unknowns), whose size depends on what we mean by "nearly as well", that
is, on the noise level. The direction of the largest axis of this ellipsoid (eigenvector associated with the
smallest eigenvalue of matrix $F^T F$) is the direction of poorly determined components: if we follow this
direction, we can bring a strong perturbation to the model without changing significantly the value of the
objective function and thus end up with a significantly different quasi-optimal model. We clearly see that
the answer to the question "can we trust this model" is governed by the noise level and by the eigenvalues
of the Hessian of the objective function or, equivalently, in the case where no regularization has been
integrated, by the singular values of matrix $F$. Of course, the use of regularization (or other kinds of prior
information) reduces the size of the set of almost optimal solutions and, in turn, increases the confidence we
can put in the computed solution.

Stability, regularization and model discretization in infinite dimension

We focus here on the recovery of a distributed parameter. When looking for distributed parameters we have
to discretize these unknown functions. Doing so, we reduce the dimension of the problem to something
finite. But now, the question is: is there any link between the solution we compute and that of the initial
problem? Then another question: what do we mean by the solution of the initial problem? Since a finite
number of data does not allow the determination of an infinity of unknowns, the original data misfit
functional has to be regularized to ensure the uniqueness of the solution. Many times, reducing the
unknowns to a finite-dimensional space will provide an adequate regularization: the computed solution will
look like a discrete version of the solution we were looking for. For example, a naive discretization will
often work for solving the deconvolution problem: it will work as long as we do not allow missing
frequencies to show up in the numerical solution. But many times, regularization has to be integrated
explicitly in the objective function.

In order to understand what may happen, we have to keep in mind that solving such a linear inverse
problem amounts to solving a Fredholm integral equation of the first kind:

$$d(x) = \int_{\Omega} K(x, y)\, p(y)\, dy,$$

where $K$ is the kernel, $x$ and $y$ are vectors of $\mathbb{R}^2$, and $\Omega$ is a domain in $\mathbb{R}^2$. This holds for a 2D
application. For a 3D application, we consider $x, y \in \mathbb{R}^3$. Note that here the model parameters $p$ consist of
a function and that the response of a model also consists of a function, denoted by $d(x)$. This equation is an
extension to infinite dimension of the matrix equation $d = F p$ given in the case of discrete problems.

For sufficiently smooth kernels $K$, the operator defined above is compact on reasonable Banach spaces such as the
$L^2$ space. F. Riesz theory states that the set of singular values of such an operator contains zero (hence the
existence of a null-space), is finite or at most countable, and, in the latter case, the singular values constitute a sequence
that goes to zero. In the case of a symmetric kernel, we have an infinity of eigenvalues and the associated
eigenvectors constitute a Hilbert basis of $L^2$. Thus any solution of this equation is determined up to an
additive function in the null-space and, in the case of infinitely many singular values, the solution (which
involves the reciprocal of arbitrarily small eigenvalues) is unstable: two ingredients that make the solution of
this integral equation a typical ill-posed problem! However, we can define a solution through the pseudo-
inverse of the forward map (again up to an arbitrary additive function). When the forward map is compact,
the classical Tikhonov regularization will work if we use it to integrate prior information stating that the
$L^2$ norm of the solution should be as small as possible: this will make the inverse problem well-posed. Yet,
as in the finite dimension case, we have to question the confidence we can put in the computed solution.
Again, basically, the information lies in the eigenvalues of the Hessian operator. Should subspaces
containing eigenvectors associated with small eigenvalues be explored for computing the solution, then the
solution can hardly be trusted: some of its components will be poorly determined. The smallest eigenvalue
is equal to the weight introduced in Tikhonov regularization.
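
A sketch of this behavior on a discretized smoothing kernel (the kernel width, noise level and regularization weight below are arbitrary choices): the unregularized inversion amplifies the noise enormously, while Tikhonov regularization keeps the reconstruction under control.

```python
import numpy as np

# Discretize a smoothing Fredholm kernel K(x, y) = exp(-(x - y)^2 / (2 w^2)) on [0, 1].
n = 100
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]
w = 0.05
K = h * np.exp(-(x[:, None] - x[None, :])**2 / (2 * w**2))

p_true = np.sin(2 * np.pi * x)                    # unknown function to recover
rng = np.random.default_rng(2)
d_obs = K @ p_true + 1e-3 * rng.normal(size=n)    # slightly noisy data

p_naive, *_ = np.linalg.lstsq(K, d_obs, rcond=None)   # unregularized: tiny singular values amplify the noise
alpha = 1e-4
p_tikh = np.linalg.solve(K.T @ K + alpha * np.eye(n), K.T @ d_obs)  # Tikhonov-regularized

print(np.linalg.norm(p_naive - p_true))           # typically enormous
print(np.linalg.norm(p_tikh - p_true))            # much smaller: a slightly smoothed p_true
```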

Irregular kernels may yield a forward map which is not compact and even unbounded if we naively equip
the space of models with the $L^2$ norm. In such cases, the Hessian is not a bounded operator and the notion
of eigenvalue does not make sense any longer. A mathematical analysis is required to make it a bounded
operator and design a well-posed problem: an illustration can be found in Delprat-Jannaud and Lailly.[15] Again, we have to question
the confidence we can put in the computed solution and we have to generalize the notion of eigenvalue to
get the answer.[16]

Analysis of the spectrum of the Hessian operator is thus a key element to determine how reliable the
computed solution is. However, such an analysis is usually a very heavy task. This has led several authors
to investigate alternative approaches in the case where we are not interested in all the components of the
unknown function but only in sub-unknowns that are the images of the unknown function by a linear
operator. These approaches are referred to as the Backus–Gilbert method,[17] Lions's sentinels
approach,[18] and the SOLA method:[19] these approaches turned out to be strongly related to one
another, as explained in Chavent.[20] Finally, the concept of limited resolution, often invoked by physicists,
is nothing but a specific view of the fact that some poorly determined components may corrupt the solution.
But, generally speaking, these poorly determined components of the model are not necessarily associated
with high frequencies.

Some classical linear inverse problems for the recovery of distributed parameters

The problems mentioned below correspond to different versions of the Fredholm integral: each of these is
associated with a specific kernel $K$.

Deconvolution
The goal of deconvolution is to reconstruct the original image or signal $p$, which appears as noisy and
blurred in the data $d$.[21] From a mathematical point of view, the kernel $K(x, y)$ here only depends on
the difference between $x$ and $y$.
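
A minimal sketch of a discretized 1-D deconvolution problem (the Gaussian blur, spike signal, noise level and regularization weight are all assumed): because the kernel depends only on the index difference, the discretized forward operator is a Toeplitz (convolution) matrix.

```python
import numpy as np

n = 64
idx = np.arange(n)
blur = np.exp(-(idx**2) / (2 * 3.0**2))            # blur of width ~3 samples
K = blur[np.abs(idx[:, None] - idx[None, :])]      # Toeplitz matrix: K[i, j] = k(i - j)

signal = np.zeros(n)
signal[[15, 40]] = 1.0                             # two spikes to recover
rng = np.random.default_rng(3)
d_obs = K @ signal + 1e-3 * rng.normal(size=n)     # blurred, slightly noisy data

alpha = 1e-2                                       # Tikhonov weight (tuned by hand)
rec = np.linalg.solve(K.T @ K + alpha * np.eye(n), K.T @ d_obs)
print(sorted(np.argsort(rec)[-2:]))                # two largest entries, expected at or next to 15 and 40
```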

Tomographic methods

In these methods we attempt to recover a distributed parameter, the observation consisting of the
measurement of the integrals of this parameter along a family of lines. We denote by $\Gamma_x$ the line
in this family associated with measurement point $x$. The observation at $x$ can thus be written as:

$$d(x) = \int_{\Gamma_x} w(x, y)\, p(y)\, ds(y),$$

where $s$ is the arc-length along $\Gamma_x$ and $w(x, y)$ a known weighting function. Comparing this equation with
the Fredholm integral above, we notice that the kernel $K(x, y)$ is a kind of delta function that peaks on line
$\Gamma_x$. With such a kernel, the forward map is not compact.

Computed tomography

In X-ray computed tomography the lines on which the parameter is integrated are straight lines: the
tomographic reconstruction of the parameter distribution is based on the inversion of the Radon transform.
Although from a theoretical point of view many linear inverse problems are well understood, problems
involving the Radon transform and its generalisations still present many theoretical challenges with
questions of sufficiency of data still unresolved. Such problems include incomplete data for the x-ray
transform in three dimensions and problems involving the generalisation of the x-ray transform to tensor
fields. Solutions explored include Algebraic Reconstruction Technique, filtered backprojection, and as
computing power has increased, iterative reconstruction methods such as iterative Sparse Asymptotic
Minimum Variance.[22]
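
A minimal sketch of the Algebraic Reconstruction Technique (Kaczmarz iterations) on a toy geometry where the "rays" are simply the rows and columns of a small image; the reconstruction is consistent with the data but, the system being underdetermined, not equal to the original image.

```python
import numpy as np

n = 4
image = np.arange(1.0, n * n + 1).reshape(n, n)   # hypothetical "true" image

rows = []
for i in range(n):                                # horizontal rays: sum of each image row
    mask = np.zeros((n, n))
    mask[i, :] = 1.0
    rows.append(mask.ravel())
for j in range(n):                                # vertical rays: sum of each image column
    mask = np.zeros((n, n))
    mask[:, j] = 1.0
    rows.append(mask.ravel())
A = np.array(rows)
d_obs = A @ image.ravel()                         # the ray sums (a crude "sinogram")

x = np.zeros(n * n)                               # start from a blank image
for sweep in range(200):                          # ART: project onto one equation at a time
    for a_i, d_i in zip(A, d_obs):
        x += (d_i - a_i @ x) / (a_i @ a_i) * a_i

print(np.allclose(A @ x, d_obs))                  # True: x reproduces every ray sum
print(x.reshape(n, n).round(2))                   # a consistent image, generally not the original
```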

Diffraction tomography

Diffraction tomography is a classical linear inverse problem in exploration seismology: the amplitude
recorded at one time for a given source-receiver pair is the sum of contributions arising from points such
that the sum of the distances, measured in traveltimes, from the source and the receiver, respectively, is
equal to the corresponding recording time. In 3D the parameter is not integrated along lines but over
surfaces. Should the propagation velocity be constant, such points are distributed on an ellipsoid. The
inverse problem consists in retrieving the distribution of diffracting points from the seismograms recorded
along the survey, the velocity distribution being known. A direct solution was originally proposed by
Beylkin (http://amath.colorado.edu/~beylkin/papers/BEYLKI-1983a.pdf) and Lambaré et al.:[23] these
works were the starting points of approaches known as amplitude preserved migration (see Beylkin[24][25]
and Bleistein[26]). Should geometrical optics techniques (i.e. rays (https://www.encyclopediaofmath.org/index.php/Ray_method)) be used for solving the wave equation, these methods turn out to be closely
related to the so-called least-squares migration methods[27] derived from the least-squares approach (see
Lailly,[28] Tarantola[29]).

Doppler tomography (astrophysics)


If we consider a rotating stellar object, the spectral lines we can observe on a spectral profile will be shifted
due to Doppler effect. Doppler tomography aims at converting the information contained in spectral
monitoring of the object into a 2D image of the emission (as a function of the radial velocity and of the
phase in the periodic rotation movement) of the stellar atmosphere. As explained by Tom Marsh,[30] this
linear inverse problem is tomography-like: we have to recover a distributed parameter which has been
integrated along lines to produce its effects in the recordings.

Inverse heat conduction

Early publications on inverse heat conduction arose from determining surface heat flux during atmospheric
re-entry from buried temperature sensors.[31][32] Other applications where surface heat flux is needed but
surface sensors are not practical include: inside reciprocating engines, inside rocket engines; and, testing of
nuclear reactor components.[33] A variety of numerical techniques have been developed to address the ill-
posedness and sensitivity to measurement error caused by damping and lagging in the temperature
signal.[34][35][36]

Non-linear inverse problems


Non-linear inverse problems constitute an inherently more difficult family of inverse problems. Here the
forward map is a non-linear operator. Modeling of physical phenomena often relies on the solution of a
partial differential equation (see table above except for gravity law): although these partial differential
equations are often linear, the state of the system, and therefore the observations we make on it, depend in a
non-linear way on the physical parameters that appear in these equations.

Some classical non-linear inverse problems

Inverse scattering problems

Whereas linear inverse problems were completely solved from the theoretical point of view at the end of the
nineteenth century, only one class of nonlinear inverse problems was so before 1970, that of inverse
spectral and (one space dimension) inverse scattering problems, after the seminal work of the Russian
mathematical school (Krein, Gelfand, Levitan, Marchenko). A large review of the results has been given by
Chadan and Sabatier in their book "Inverse Problems of Quantum Scattering Theory" (two editions in
English, one in Russian).

In this kind of problem, data are properties of the spectrum of a linear operator which describe the
scattering. The spectrum is made of eigenvalues and eigenfunctions, forming together the "discrete
spectrum", and generalizations, called the continuous spectrum. The very remarkable physical point is that
scattering experiments give information only on the continuous spectrum, and that knowing its full
spectrum is both necessary and sufficient for recovering the scattering operator. Hence we have invisible
parameters, much more interesting than the null space, which has a similar property in linear inverse
problems. In addition, there are physical motions in which the spectrum of such an operator is conserved as
a consequence of such motion. This phenomenon is governed by special nonlinear partial differential
evolution equations, for example the Korteweg–de Vries equation. If the spectrum of the operator is
reduced to one single eigenvalue, its corresponding motion is that of a single bump that propagates at
constant velocity and without deformation, a solitary wave called a "soliton".
A perfect signal and its generalizations for the Korteweg–de Vries equation or other integrable nonlinear
partial differential equations are of great interest, with many possible applications. This area has been
studied as a branch of mathematical physics since the 1970s. Nonlinear inverse problems are also currently
studied in many fields of applied science (acoustics, mechanics, quantum mechanics, electromagnetic
scattering - in particular radar soundings, seismic soundings, and nearly all imaging modalities).

A final example related to the Riemann hypothesis was given by Wu and Sprung; the idea is that in the
semiclassical old quantum theory the inverse of the potential inside the Hamiltonian is proportional to the
half-derivative of the eigenvalues (energies) counting function n(x).

Permeability matching in oil and gas reservoirs

The goal is to recover the diffusion coefficient in the parabolic partial differential equation that models
single phase fluid flows in porous media. This problem has been the object of many studies since a
pioneering work carried out in the early seventies.[37] Concerning two-phase flows an important problem is
to estimate the relative permeabilities and the capillary pressures.[38]

Inverse problems in the wave equations

The goal is to recover the wave-speeds (P and S waves) and the density distributions from seismograms.
Such inverse problems are of prime interest in seismology and exploration geophysics. We can basically
consider two mathematical models:

The acoustic wave equation (in which S waves are ignored when the space dimensions are
2 or 3)
The elastodynamics equation in which the P and S wave velocities can be derived from the
Lamé parameters and the density.

These basic hyperbolic equations can be upgraded by incorporating attenuation, anisotropy, ...

The solution of the inverse problem in the 1D wave equation has been the object of many studies. It is one
of the very few non-linear inverse problems for which we can prove the uniqueness of the solution.[8] The
analysis of the stability of the solution was another challenge.[39] Practical applications, using the least-
squares approach, were developed.[39][40] Extensions to 2D or 3D problems and to the elastodynamics
equations have been attempted since the 1980s but turned out to be very difficult. This problem, often referred to as
Full Waveform Inversion (FWI), is not yet completely solved: among the main difficulties are the presence
of non-Gaussian noise in the seismograms, cycle-skipping issues (also known as phase ambiguity), and
the chaotic behavior of the data misfit function.[41] Some authors have investigated the possibility of
reformulating the inverse problem so as to make the objective function less chaotic than the data misfit
function.[42][43]

Travel-time tomography

Realizing how difficult the inverse problem in the wave equation is, seismologists investigated a simplified
approach making use of geometrical optics. In particular, they aimed at inverting for the propagation
velocity distribution, knowing the arrival times of wave-fronts observed on seismograms. These wave-
fronts can be associated with direct arrivals or with reflections associated with reflectors whose geometry is
to be determined, jointly with the velocity distribution.
The arrival time distribution $\tau(x)$ ($x$ is a point in physical space) of a wave-front issued from a point
source satisfies the eikonal equation:

$$\| \nabla \tau(x) \| = s(x),$$

where $s(x)$ denotes the slowness (reciprocal of the velocity) distribution. The presence of $\| \nabla \tau \|$ makes this
equation nonlinear. It is classically solved by shooting rays (trajectories about which the arrival time is
stationary) from the point source.

This problem is tomography-like: the measured arrival times are the integral of the slowness along the
ray-path. But this tomography-like problem is nonlinear, mainly because the unknown ray-path geometry
depends upon the velocity (or slowness) distribution. In spite of its nonlinear character, travel-time
tomography turned out to be very effective for determining the propagation velocity in the Earth or in the
subsurface, the latter aspect being a key element for seismic imaging, in particular using methods mentioned
in Section "Diffraction tomography".

Mathematical aspects: Hadamard's questions

The questions concern well-posedness: Does the least-squares problem have a unique solution which
depends continuously on the data (stability problem)? This is the first question, but it is also a difficult one
because of the non-linearity of $F$. In order to see where the difficulties arise, Chavent[44] proposed to
conceptually split the minimization of the data misfit function into two consecutive steps ($P_\text{adm}$ is the
subset of admissible models):

projection step: given $d_\text{obs}$, find a projection on $F(P_\text{adm})$ (nearest point on $F(P_\text{adm})$ according
to the distance involved in the definition of the objective function);
given this projection, find one pre-image, that is, a model whose image by operator $F$ is this
projection.

Difficulties can - and usually will - arise in both steps:

1. operator $F$ is not likely to be one-to-one, therefore there can be more than one pre-image,
2. even when $F$ is one-to-one, its inverse may not be continuous over $F(P)$,
3. the projection on $F(P_\text{adm})$ may not exist, should this set not be closed,
4. the projection on $F(P_\text{adm})$ can be non-unique and not continuous, as $F(P_\text{adm})$ can be non-convex
due to the non-linearity of $F$.

We refer to Chavent[44] for a mathematical analysis of these points.

Computational aspects

A non-convex data misfit function

The forward map being nonlinear, the data misfit function is likely to be non-convex, making local
minimization techniques inefficient. Several approaches have been investigated to overcome this difficulty:

use of global optimization techniques such as sampling of the posterior density function and the
Metropolis algorithm in the probabilistic framework for inverse problems[45] (a minimal sketch
is given after this list), genetic algorithms (alone or in combination with the Metropolis algorithm:
see[46] for an application to the determination of permeabilities that match the existing
permeability data), neural networks, and regularization techniques including multiscale analysis;
reformulation of the least-squares objective function so as to make it smoother (see[42][43] for
the inverse problem in the wave equations.)
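
A minimal sketch of the first idea (toy one-parameter problem, all settings made up): a random-walk Metropolis sampler of the posterior can hop between basins of a multimodal misfit where a purely local descent started at the wrong point would stay trapped.

```python
import numpy as np

# Toy nonlinear forward map F(p) = sin(p * t) with Gaussian noise of known sigma.
rng = np.random.default_rng(4)
t = np.linspace(0.0, 10.0, 50)
p_true, sigma = 1.3, 0.1
d_obs = np.sin(p_true * t) + sigma * rng.normal(size=t.size)

def misfit(p):
    return np.sum((d_obs - np.sin(p * t))**2)

p, samples = 0.5, []                        # deliberately start in a wrong basin
for _ in range(20000):
    p_new = p + 0.4 * rng.normal()          # random-walk proposal (width chosen by hand)
    log_accept = (misfit(p) - misfit(p_new)) / (2 * sigma**2)
    if log_accept > 0 or rng.uniform() < np.exp(log_accept):
        p = p_new
    samples.append(p)

print(np.median(samples[10000:]))           # typically concentrates near p_true = 1.3
```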

Computation of the gradient of the objective function

Inverse problems, especially in infinite dimension, may be of large size, thus requiring substantial computing
time. When the forward map is nonlinear, the computational difficulties increase and minimizing the
objective function can be difficult. Contrary to the linear situation, an explicit use of the Hessian matrix for
solving the normal equations does not make sense here: the Hessian matrix varies with the model. Much more
effective is the evaluation of the gradient of the objective function for some models. Important
computational effort can be saved when we can avoid the very heavy computation of the Jacobian (often
called "Fréchet derivatives"): the adjoint state method, proposed by Chavent and Lions,[47] is aimed at
avoiding this very heavy computation. It is now very widely used.[48]
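
A sketch of this idea on a linear, matrix-free toy forward map (a small convolution, with made-up data): the gradient of the least-squares misfit is obtained from one forward and one adjoint evaluation, with no Jacobian matrix ever formed, and is cross-checked against finite differences.

```python
import numpy as np

kernel = np.array([0.5, 1.0, 0.5])

def forward(p):                         # F: convolution with the kernel
    return np.convolve(p, kernel, mode="same")

def adjoint(r):                         # F^T: correlation, i.e. convolution with the reversed kernel
    return np.convolve(r, kernel[::-1], mode="same")

rng = np.random.default_rng(5)
p = rng.normal(size=8)
d_obs = rng.normal(size=8)

# Gradient of 0.5 * ||F(p) - d_obs||^2: apply the adjoint to the residual.
grad_adj = adjoint(forward(p) - d_obs)

# Cross-check with central finite differences (8 extra forward solves, far costlier in general).
eps = 1e-6
grad_fd = np.array([
    (0.5 * np.sum((forward(p + eps * e) - d_obs)**2)
     - 0.5 * np.sum((forward(p - eps * e) - d_obs)**2)) / (2 * eps)
    for e in np.eye(p.size)
])
print(np.max(np.abs(grad_adj - grad_fd)))   # ~1e-9: the two gradients agree
```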

Applications
Inverse problem theory is used extensively in weather predictions, oceanography, hydrology, and petroleum
engineering.[49][50][51]

Inverse problems are also found in the field of heat transfer, where a surface heat flux[52] is estimated
from temperature data measured inside a rigid body, and in understanding the controls on plant-
matter decay.[53] The linear inverse problem is also fundamental to spectral estimation and direction-of-
arrival (DOA) estimation in signal processing.

Inverse lithography is used in photomask design for semiconductor device fabrication.

See also
Atmospheric sounding
Backus–Gilbert method
Computed tomography
Algebraic reconstruction technique
Filtered backprojection
Iterative reconstruction
Data assimilation
Engineering optimization
Grey box model
Mathematical geophysics
Optimal estimation
Seismic inversion
Tikhonov regularization
Compressed sensing

Academic journals

Four main academic journals cover inverse problems in general:

Inverse Problems
Journal of Inverse and Ill-posed Problems[54]
Inverse Problems in Science and Engineering[55]
Inverse Problems and Imaging[56]

Many journals on medical imaging, geophysics, non-destructive testing, etc. are dominated by inverse
problems in those areas.

References
1. Mohamad-Djafari, Ali (2013-01-29). Inverse Problems in Vision and 3D Tomography (https://
books.google.com/books?id=ef8DREm_9OMC&q=%22inverse+problem%22). John Wiley
& Sons. ISBN 978-1-118-60046-7.
2. Pizlo, Zygmunt. "Perception viewed as an inverse problem (https://www.sciencedirect.com/s
cience/article/pii/S0042698901001730)." Vision research 41.24 (2001): 3145-3161.
3. Vito, Ernesto De, et al. "Learning from examples as an inverse problem (http://www.jmlr.org/p
apers/volume6/devito05a/devito05a.pdf)." Journal of Machine Learning Research 6.May
(2005): 883-904.
4. Cardenas, IC (2019). "On the use of Bayesian networks as a meta-modeling approach to
analyse uncertainties in slope stability analysis". Georisk: Assessment and Management of
Risk for Engineered Systems and Geohazards. 13 (1): 53–65.
doi:10.1080/17499518.2018.1498524 (https://doi.org/10.1080%2F17499518.2018.149852
4). S2CID 216590427 (https://api.semanticscholar.org/CorpusID:216590427).
5. Weyl, Hermann (1911). "Über die asymptotische Verteilung der Eigenwerte" (https://web.arc
hive.org/web/20130801090504/http://gdz.sub.uni-goettingen.de/dms/load/img/?IDDOC=630
48). Nachrichten der Königlichen Gesellschaft der Wissenschaften zu Göttingen: 110–117.
Archived from the original (http://gdz.sub.uni-goettingen.de/dms/load/img/?IDDOC=63048)
on 2013-08-01. Retrieved 2018-05-14.
6. "Epilogue — Ambartsumian's paper". Viktor Ambartsumian (http://ambartsumian.ru/en/papers/epilogue-ambartsumian’-s-paper/)
7. Ambartsumian, Rouben V. (1998). "A life in astrophysics. Selected papers of Viktor A.
Ambartsumian". Astrophysics. 41 (4): 328–330. doi:10.1007/BF02894658 (https://doi.org/10.
1007%2FBF02894658). S2CID 118952753 (https://api.semanticscholar.org/CorpusID:1189
52753).
8. Burridge, Robert (1980). "The Gelfand-Levitan, the Marchenko, and the Gopinath-Sondhi
integral equations of inverse scattering theory, regarded in the context of inverse impulse-
response problems". Wave Motion. 2 (4): 305–323. doi:10.1016/0165-2125(80)90011-6 (http
s://doi.org/10.1016%2F0165-2125%2880%2990011-6).
9. Tarantola, Albert (1987). Inverse problem theory (https://archive.org/details/inverseproblemth
0000tara) (1st ed.). Elsevier. ISBN 9780444599674.
10. Tarantola, Albert (2005). "Front Matter" (http://www.ipgp.fr/~tarantola/Files/Professional/Book
s/InverseProblemTheory.pdf) (PDF). Inverse Problem Theory and Methods for Model
Parameter Estimation. SIAM. pp. i–xii. doi:10.1137/1.9780898717921.fm (https://doi.org/10.1
137%2F1.9780898717921.fm). ISBN 978-0-89871-572-9.
11. Ciarlet, Philippe (1994). Introduction à l'analyse numérique matricielle et à l'optimisation.
Paris: Masson. ISBN 9782225688935.
12. Nocedal, Jorge (2006). Numerical optimization. Springer.
13. Ben Ameur, Hend; Chavent, Guy; Jaffré, Jérôme (2002). "Refinement and coarsening
indicators for adaptive parametrization: application to the estimation of hydraulic
transmissivities" (https://hal.inria.fr/docs/00/07/22/95/PDF/RR-4292.pdf) (PDF). Inverse
Problems. 18 (3): 775–794. Bibcode:2002InvPr..18..775B (https://ui.adsabs.harvard.edu/abs/
2002InvPr..18..775B). doi:10.1088/0266-5611/18/3/317 (https://doi.org/10.1088%2F0266-56
11%2F18%2F3%2F317). S2CID 250892174 (https://api.semanticscholar.org/CorpusID:250
892174).
14. Lemaréchal, Claude (1989). Optimization, Handbooks in Operations Research and
Management Science. Elsevier. pp. 529–572.
15. Delprat-Jannaud, Florence; Lailly, Patrick (1993). "Ill‐posed and well‐posed formulations of
the reflection travel time tomography problem". Journal of Geophysical Research. 98 (B4):
6589–6605. Bibcode:1993JGR....98.6589D (https://ui.adsabs.harvard.edu/abs/1993JGR....9
8.6589D). doi:10.1029/92JB02441 (https://doi.org/10.1029%2F92JB02441).
16. Delprat-Jannaud, Florence; Lailly, Patrick (1992). "What information on the Earth model do
reflection traveltimes provide". Journal of Geophysical Research. 98 (B13): 827–844.
Bibcode:1992JGR....9719827D (https://ui.adsabs.harvard.edu/abs/1992JGR....9719827D).
doi:10.1029/92JB01739 (https://doi.org/10.1029%2F92JB01739).
17. Backus, George; Gilbert, Freeman (1968). "The Resolving Power of Gross Earth Data" (http
s://doi.org/10.1111%2Fj.1365-246X.1968.tb00216.x). Geophysical Journal of the Royal
Astronomical Society. 16 (10): 169–205. Bibcode:1968GeoJ...16..169B (https://ui.adsabs.har
vard.edu/abs/1968GeoJ...16..169B). doi:10.1111/j.1365-246X.1968.tb00216.x (https://doi.or
g/10.1111%2Fj.1365-246X.1968.tb00216.x).
18. Lions, Jacques Louis (1988). "Sur les sentinelles des systèmes distribués". C. R. Acad. Sci.
Paris. I Math: 819–823.
19. Pijpers, Frank; Thompson, Michael (1993). "The SOLA method for helioseismic inversion".
Astronomy and Astrophysics. 281 (12): 231–240. Bibcode:1994A&A...281..231P (https://ui.a
dsabs.harvard.edu/abs/1994A&A...281..231P).
20. Chavent, Guy (1998). Least-Squares, Sentinels and Substractive Optimally Localized
Average in Equations aux dérivées partielles et applications (https://hal.inria.fr/inria-000733
57/document). Paris: Gauthier Villars. pp. 345–356.
21. Kaipio, J., & Somersalo, E. (2010). Statistical and computational inverse problems. New
York, NY: Springer.
22. Abeida, Habti; Zhang, Qilin; Li, Jian; Merabtine, Nadjim (2013). "Iterative Sparse Asymptotic
Minimum Variance Based Approaches for Array Processing" (https://qilin-zhang.github.io/_p
ages/pdfs/SAMVpaper.pdf) (PDF). IEEE Transactions on Signal Processing. 61 (4): 933–
944. arXiv:1802.03070 (https://arxiv.org/abs/1802.03070). Bibcode:2013ITSP...61..933A (http
s://ui.adsabs.harvard.edu/abs/2013ITSP...61..933A). doi:10.1109/tsp.2012.2231676 (https://d
oi.org/10.1109%2Ftsp.2012.2231676). ISSN 1053-587X (https://www.worldcat.org/issn/1053
-587X). S2CID 16276001 (https://api.semanticscholar.org/CorpusID:16276001).
23. Lambaré, Gilles; Virieux, Jean; Madariaga, Raul; Jin, Side (1992). "Iterative asymptotic
inversion in the acoustic approximation" (https://semanticscholar.org/paper/3f9a47ceda54e1
2325ac2eb7ff9df3e2b7d780ea). Geophysics. 57 (9): 1138–1154.
Bibcode:1992Geop...57.1138L (https://ui.adsabs.harvard.edu/abs/1992Geop...57.1138L).
doi:10.1190/1.1443328 (https://doi.org/10.1190%2F1.1443328). S2CID 55836067 (https://ap
i.semanticscholar.org/CorpusID:55836067).
24. Beylkin, Gregory (1984). "The inversion problem and applications of The generalized Radon
transform" (http://amath.colorado.edu/faculty/beylkin/papers/BEYLKI-1984.pdf) (PDF).
Communications on Pure and Applied Mathematics. XXXVII (5): 579–599.
doi:10.1002/cpa.3160370503 (https://doi.org/10.1002%2Fcpa.3160370503).
25. Beylkin, Gregory (1985). "Imaging of discontinuities in the inverse scaterring problem by
inversion of a causal generalized Radon transform". J. Math. Phys. 26 (1): 99–108.
Bibcode:1985JMP....26...99B (https://ui.adsabs.harvard.edu/abs/1985JMP....26...99B).
doi:10.1063/1.526755 (https://doi.org/10.1063%2F1.526755).
26. Bleistein, Norman (1987). "On the imaging of reflectors in the earth" (https://semanticscholar.
org/paper/f2b8da29167e31e4a46ce370b683dfa3edb04aa8). Geophysics. 52 (7): 931–942.
Bibcode:1987Geop...52..931B (https://ui.adsabs.harvard.edu/abs/1987Geop...52..931B).
doi:10.1190/1.1442363 (https://doi.org/10.1190%2F1.1442363). S2CID 5095133 (https://api.
semanticscholar.org/CorpusID:5095133).
27. Nemeth, Tamas; Wu, Chengjun; Schuster, Gerard (1999). "Least‐squares migration of
incomplete reflection data" (https://csim.kaust.edu.sa/web/FWI&LSM_papers/LSM/Geophysi
cs1999Nemeth.pdf) (PDF). Geophysics. 64 (1): 208–221. Bibcode:1999Geop...64..208N (htt
ps://ui.adsabs.harvard.edu/abs/1999Geop...64..208N). doi:10.1190/1.1444517 (https://doi.or
g/10.1190%2F1.1444517).
28. Lailly, Patrick (1983). The seismic inverse problem as a sequence of before stack
migrations. Philadelphia: SIAM. pp. 206–220. ISBN 0-89871-190-8.
29. Tarantola, Albert (1984). "Inversion of seismic reflection data in the acoustic approximation"
(https://semanticscholar.org/paper/e51c0ba606e40c99b2a87f32728ba1e18c183540).
Geophysics. 49 (8): 1259–1266. Bibcode:1984Geop...49.1259T (https://ui.adsabs.harvard.e
du/abs/1984Geop...49.1259T). doi:10.1190/1.1441754 (https://doi.org/10.1190%2F1.144175
4). S2CID 7596552 (https://api.semanticscholar.org/CorpusID:7596552).
30. Marsh, Tom (2005). "Doppler tomography". Astrophysics and Space Science. 296 (1–4):
403–415. arXiv:astro-ph/0011020 (https://arxiv.org/abs/astro-ph/0011020).
Bibcode:2005Ap&SS.296..403M (https://ui.adsabs.harvard.edu/abs/2005Ap&SS.296..403
M). doi:10.1007/s10509-005-4859-3 (https://doi.org/10.1007%2Fs10509-005-4859-3).
S2CID 15334110 (https://api.semanticscholar.org/CorpusID:15334110).
31. Shumakov, N. V. (1957). "A method for the experimental study of the process of heating a
solid body". Soviet Physics –Technical Physics (Translated by American Institute of
Physics). 2: 771.
32. Stolz, G., Jr. (1960). "Numerical solutions to an inverse problem of heat conduction for
simple shapes". Journal of Heat Transfer. 82: 20–26. doi:10.1115/1.3679871 (https://doi.org/
10.1115%2F1.3679871).
33. Beck, J. V.; Blackwell, B.; St. Clair, C. R., Jr. (1985). Inverse Heat Conduction. Ill‐Posed
Problems. New York: J. Wiley & Sons. ISBN 0471083194.
34. Beck, J. V.; Blackwell, B.; Haji-Sheikh, B. (1996). "Comparison of some inverse heat
conduction methods using experimental data". International Journal of Heat and Mass
Transfer. 39 (17): 3649–3657. doi:10.1016/0017-9310(96)00034-8 (https://doi.org/10.1016%
2F0017-9310%2896%2900034-8).
35. Ozisik, M. N.; Orlande, H. R. B. (2021). Inverse Heat Transfer, Fundamentals and
Applications (2nd ed.). CRC Press. ISBN 9780367820671.
36. Inverse Engineering Handbook, edited by K. A. Woodbury. CRC Press. 2002.
ISBN 9780849308611.
37. Chavent, Guy; Lemonnier, Patrick; Dupuy, Michel (1975). "History Matching by Use of
Optimal Control Theory". Society of Petroleum Engineers Journal. 15 (2): 74–86.
doi:10.2118/4627-PA (https://doi.org/10.2118%2F4627-PA).
38. Chavent, Guy; Cohen, Gary; Espy, M. (1980). "Determination of relative permeabilities and
capillary pressures by an automatic adjustment method". Society of Petroleum Engineers
(January). doi:10.2118/9237-MS (https://doi.org/10.2118%2F9237-MS).
39. Bamberger, Alain; Chavent, Guy; Lailly, Patrick (1979). "About the stability of the inverse
problem in the 1D wave equation, application to the interpretation of the seismic profiles".
Journal of Applied Mathematics and Optimization. 5: 1–47. doi:10.1007/bf01442542 (https://
doi.org/10.1007%2Fbf01442542). S2CID 122428594 (https://api.semanticscholar.org/Corpu
sID:122428594).
40. Macé, Danièle; Lailly, Patrick (1986). "Solution of the VSP one dimensional inverse
problem". Geophysical Prospecting. 34 (7): 1002–1021. Bibcode:1986GeopP..34.1002M (htt
ps://ui.adsabs.harvard.edu/abs/1986GeopP..34.1002M). doi:10.1111/j.1365-
2478.1986.tb00510.x (https://doi.org/10.1111%2Fj.1365-2478.1986.tb00510.x).
OSTI 6901651 (https://www.osti.gov/biblio/6901651).
41. Virieux, Jean; Operto, Stéphane (2009). "An overview of full-waveform inversion in
exploration geophysics" (https://www.researchgate.net/publication/228078264). Geophysics.
74 (6): WCC1–WCC26. doi:10.1190/1.3238367 (https://doi.org/10.1190%2F1.3238367).
42. Clément, François; Chavent, Guy; Gomez, Suzana (2001). "Migration-based traveltime
waveform inversion of 2-D simple structures: A synthetic example". Geophysics. 66 (3): 845–
860. Bibcode:2001Geop...66..845C (https://ui.adsabs.harvard.edu/abs/2001Geop...66..845
C). doi:10.1190/1.1444974 (https://doi.org/10.1190%2F1.1444974).
43. Symes, William; Carrazone, Jim (1991). "Velocity inversion by Differential semblance
optimization". Geophysics. 56 (5): 654–663. Bibcode:1991Geop...56..654S (https://ui.adsab
s.harvard.edu/abs/1991Geop...56..654S). doi:10.1190/1.1443082 (https://doi.org/10.1190%2
F1.1443082).
44. Chavent, Guy (2010). Nonlinear Least Squares for Inverse problems. Springer. ISBN 978-
90-481-2785-6.
45. Koren, Zvi; Mosegaard, Klaus; Landa, Evgeny; Thore, Pierre; Tarantola, Albert (1991).
"Monte Carlo Estimation and Resolution Analysis of Seismic Background Velocities".
Journal of Geophysical Research. 96 (B12): 20289–20299. Bibcode:1991JGR....9620289K
(https://ui.adsabs.harvard.edu/abs/1991JGR....9620289K). doi:10.1029/91JB02278 (https://d
oi.org/10.1029%2F91JB02278).
46. Tahmasebi, Pejman; Javadpour, Farzam; Sahimi, Muhammad (August 2016). "Stochastic
shale permeability matching: Three-dimensional characterization and modeling" (https://ww
w.researchgate.net/publication/307626119). International Journal of Coal Geology. 165:
231–242. doi:10.1016/j.coal.2016.08.024 (https://doi.org/10.1016%2Fj.coal.2016.08.024).
47. Chavent, Guy (1971). Identification de coefficients répartis dans les équations aux dérivées
partielles. Université Paris 6: Thèse d'Etat.
48. Plessix, René (2006). "A review of the adjoint-state method for computing the gradient of a
functional with geophysical applications" (https://doi.org/10.1111%2Fj.1365-246X.2006.029
78.x). Geophysical Journal International. 167 (2): 495–503. Bibcode:2006GeoJI.167..495P
(https://ui.adsabs.harvard.edu/abs/2006GeoJI.167..495P). doi:10.1111/j.1365-
246X.2006.02978.x (https://doi.org/10.1111%2Fj.1365-246X.2006.02978.x).
49. Carl Wunsch (13 June 1996). The Ocean Circulation Inverse Problem (https://books.google.
com/books?id=ugHsLF1RNacC&pg=PR9). Cambridge University Press. pp. 9–. ISBN 978-
0-521-48090-1.
50. Tahmasebi, Pejman; Javadpour, Farzam; Sahimi, Muhammad (August 2016). "Stochastic
shale permeability matching: Three-dimensional characterization and modeling".
International Journal of Coal Geology. 165: 231–242. doi:10.1016/j.coal.2016.08.024 (http
s://doi.org/10.1016%2Fj.coal.2016.08.024).
51. Knighton, James; Singh, Kanishka; Evaristo, Jaivime (2020). "Understanding Catchment-
Scale Forest Root Water Uptake Strategies Across the Continental United States Through
Inverse Ecohydrological Modeling" (https://agupubs.onlinelibrary.wiley.com/doi/abs/10.1029/
2019GL085937). Geophysical Research Letters. 47 (1): e2019GL085937.
Bibcode:2020GeoRL..4785937K (https://ui.adsabs.harvard.edu/abs/2020GeoRL..4785937
K). doi:10.1029/2019GL085937 (https://doi.org/10.1029%2F2019GL085937). ISSN 1944-
8007 (https://www.worldcat.org/issn/1944-8007). S2CID 213914582 (https://api.semanticsch
olar.org/CorpusID:213914582).
52. Patric Figueiredo (December 2014). Development Of An Iterative Method For Solving
Multidimensional Inverse Heat Conduction Problems (https://www.academia.edu/9823088).
Lehrstuhl für Wärme- und Stoffübertragung RWTH Aachen.
53. Forney, David C.; Rothman, Daniel H. (2012-09-07). "Common structure in the heterogeneity
of plant-matter decay" (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3405759). Journal of
the Royal Society Interface. 9 (74): 2255–2267. doi:10.1098/rsif.2012.0122 (https://doi.org/1
0.1098%2Frsif.2012.0122). PMC 3405759 (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3
405759). PMID 22535699 (https://pubmed.ncbi.nlm.nih.gov/22535699).
54. "Journal of Inverse and Ill-posed Problems" (https://archive.today/20130201045242/http://w
ww.reference-global.com/loi/jiip). Archived from the original (http://www.reference-global.co
m/loi/jiip) on February 1, 2013.
55. "Inverse Problems in Science and Engineering: Vol 25, No 4" (http://www.tandf.co.uk/journal
s/titles/17415977.asp).
56. "IPI" (https://web.archive.org/web/20061011090005/http://aimsciences.org/journals/ipi/ipi_on
line.jsp). Archived from the original (http://aimsciences.org/journals/ipi/ipi_online.jsp) on 11
October 2006.

References
Chadan, Khosrow & Sabatier, Pierre Célestin (1977). Inverse Problems in Quantum
Scattering Theory. Springer-Verlag. ISBN 0-387-08092-9
Aster, Richard; Borchers, Brian; Thurber, Clifford (2018). Parameter Estimation and
Inverse Problems, Third Edition, Elsevier. ISBN 9780128134238
Press, WH; Teukolsky, SA; Vetterling, WT; Flannery, BP (2007). "Section 19.4. Inverse
Problems and the Use of A Priori Information" (http://apps.nrbook.com/empanel/index.html#p
g=1001). Numerical Recipes: The Art of Scientific Computing (3rd ed.). New York:
Cambridge University Press. ISBN 978-0-521-88068-8.

Further reading
Bunge, Mario. 2006. “From Z to A: Inverse Problems," in Mario Bunge, Chasing Reality:
Strife over Realism (Toronto: University of Toronto Press).
Bunge, Mario. 2019. “Inverse Problems.” Foundations of Science 24(3): 483-525.
C. W. Groetsch (1999). Inverse Problems: Activities for Undergraduates. Cambridge
University Press. ISBN 978-0-88385-716-8.

External links
Inverse Problems International Association (http://www.inverse-problems.net/)
Eurasian Association on Inverse Problems (http://www.eurasianip.org)
Finnish Inverse Problems Society (http://fips.fi/)
Inverse Problems Network (https://web.archive.org/web/20070925210904/http://www.mth.ms
u.edu/ipnet/)
Albert Tarantola's website (http://www.ipgp.jussieu.fr/~tarantola/), includes a free PDF
version of his Inverse Problem Theory book, and some online articles on Inverse Problems
Inverse Problems page at the University of Alabama (http://www.me.ua.edu/inverse/)
Archived (https://web.archive.org/web/20140405095149/http://www.me.ua.edu/inverse/)
2014-04-05 at the Wayback Machine
Inverse Problems and Geostatistics Project (http://imgp.nbi.ku.dk/), Niels Bohr Institute,
University of Copenhagen
Andy Ganse's Geophysical Inverse Theory Resources Page (http://research.ganse.org/lectu
res/invtheory)
Finnish Centre of Excellence in Inverse Problems Research (http://math.tkk.fi/inverse-coe)

