Abstract
The conventional approach to the data-driven inversion framework is based on Gaussian statistics, which presents serious difficulties, especially in the presence of outliers in the measurements. In this work, we present maximum likelihood estimators associated with generalized Gaussian distributions in the context of Rényi, Tsallis and Kaniadakis statistics. In this regard, we analytically examine the outlier-resistance of each proposal through the so-called influence function. We then formulate inverse problems by constructing objective functions linked to the maximum likelihood estimators. To demonstrate the robustness of the generalized methodologies, we consider an important geophysical inverse problem with highly noisy data corrupted by spikes. The results reveal that the best data inversion performance occurs when the entropic index of each generalized statistic is associated with objective functions whose influence function is proportional to the inverse of the error amplitude. We argue that in this limit the three approaches are resistant to outliers and are also equivalent, which suggests a lower computational cost for the inversion process due to the reduction of numerical simulations to be performed and the fast convergence of the optimization process.
Citation: dos Santos Lima GZ, de Lima JVT, de Araújo JM, Corso G, da Silva SLEF (2023) Generalized statistics: Applications to data inverse problems with outlier-resistance. PLoS ONE 18(3): e0282578. https://doi.org/10.1371/journal.pone.0282578
Editor: Ning Cai, Beijing University of Posts and Telecommunications, CHINA
Received: May 12, 2022; Accepted: February 21, 2023; Published: March 30, 2023
Copyright: © 2023 dos Santos Lima et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Data Availability: All data used in this study are available from Figshare (https://doi.org/10.6084/m9.figshare.21878394.v1).
Funding: This work was supported by the Fundação Norte-Rio-Grandense de Pesquisa e Cultura - FUNPEC [grant number 512019 - FUNPEC/UFRN/PETROBRAS/INVERSÃO MULTIMETRICA]; J.M. de Araújo received financial support from the Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq) [grant no. 313431/2018-3]; and G. Corso received financial support from the CNPq [grant number 307907/2019-8].
Competing interests: The authors have declared that no competing interests exist.
1 Introduction
The estimation of physical model parameters from observed data is a frequent problem in many areas, such as machine learning [1, 2], geophysics [3, 4], biology [5, 6] and physics [7, 8], among others [9–11]. Such a task is solved through the so-called inverse problem, which consists of identifying physical parameters that cannot be directly measured from the observations [12]. This is a process in which one can use stable or unstable distributions. In the case of α-stable distributions, the focus of the inverse problem is on finding efficient estimators of an α-stable distribution that represents the data set, since the joint distribution has a density function similar to the marginal one [13, 14]. In contrast, unstable distributions have been used in inverse problems to determine efficient estimators from observed data by assuming that the parameters of the statistical distribution are known and included in the inverse problem as constraints. From a practical viewpoint, in the inverse problem, physical model parameters are estimated by matching the estimates to the observed data through the optimization of an objective function [15]. The objective function in the least-squares sense is widely used, which is based on the assumption that errors are independent and identically distributed (iid) according to a standard Gaussian probability distribution [12]. Although this approach is quite popular, the least-squares-based estimator is biased if the errors are non-Gaussian, violating the Gauss-Markov theorem [16, 17]. Indeed, just a few outliers are enough to make the least-squares criterion inappropriate [18].
In this way, many non-Gaussian criteria have been proposed to mitigate the sensitivity of the inverse problem to aberrant measurements (outliers). The most common criterion for dealing with non-Gaussian errors is based on the L1-norm of the difference between the estimated and the observed data, in which the errors are assumed to be iid according to a double exponential distribution (Laplace distribution) [19]. Although this approach is known for being outlier-insensitive, the criterion is singular in the case where the difference between the estimates and the observed data is zero (or very close to zero). Thus, it is necessary to assume that the absolute error is greater than zero according to the machine precision used, which generates an indeterminacy problem from a computational point of view. To avoid the singularity of this approach, hybrid criteria which combine the least-squares distance with the least-absolute-values criterion (L1-norm) have been proposed and successfully applied to robust parameter estimation [20–22]. However, hybrid approaches require the determination of a threshold parameter, which demands tedious trial-and-error investigations and increases the computational cost [23].
Indeed, objective functions based on heavy-tailed probability distributions, such as the Cauchy-Lorentz distribution [24] and the Student's t-distribution [25], have demonstrated robust properties for unbiased data inversion. However, both approaches assume a fixed probability distribution of errors, not adapting to the particularities of the model or data at hand. In this sense, objective functions based on generalized distributions are interesting because they can be adapted to the specificity of the erratic data by selecting an adequate free parameter. In fact, several generalized approaches have been proposed to deal with erratic data [26–30]. Thus, generalized distributions based on the Rényi, Tsallis and Kaniadakis statistics have generated objective functions robust to erratic noise [31].
In this work, we present a review of robust approaches based on statistical physics to estimate physical parameters from a data set polluted by outliers. In particular, we consider deformed Gaussian distributions associated with generalized statistical mechanics in the sense of Rényi (η-statistics) [32], Tsallis (q-statistics) [33] and Kaniadakis (κ-statistics) [34]. In this context, we place objective functions based on the η-, q- and κ-generalizations in the broad context of Gauss' law of error [35–37]; see Refs. [31, 38, 39]. The three deformed Gaussian distributions mentioned above have already demonstrated robust properties in many applications [31, 38–44]. However, the entropic index of each of these approaches that makes the inversion process most robust requires thorough investigation. Furthermore, we analyze and compare the generalized objective functions from a statistical and numerical point of view in order to obtain the optimum value of the entropic index. In Fig 1 we summarize the workflow of the experiments employed in this work as a flowchart representation of an inverse problem. We call attention to the fact that, in the generalized frameworks, the generalized statistics define the norm employed in the solution of the inversion problem.
We have organized this article as follows. In Section 2 we present a brief review of the solution of inverse problems using the maximum likelihood method in the conventional framework, as well as in the frameworks of Rényi, Tsallis and Kaniadakis. In Section 3.1 we discuss the similarities among the generalized objective functions by considering a numerical test whose purpose is to estimate the parameters of a line that represents the observations; and finally, Section 3.2 applies the methodology presented in the article to a classic geophysical problem that consists of estimating the acoustic impedance model from post-stack seismic data contaminated with spike noise.
2 A brief review of generalized statistics in inverse theory: Maximum likelihood methods
An inverse problem is formulated as an optimization task, in which an objective function describes how well the estimated model m generates a modeled dataset (or estimates) that matches the measurements [12]. In this regard, the estimates dest are obtained by the following operation [12, 15]: dest = G(m) (1), where G represents the forward operator. In the linear case, the latter equation is usually represented by the direct relationship dest = Gm. The forward operator makes the connection between the model space and the measurement (data) space through an adequate physical law. To illustrate the nature of a forward operator, let us consider the mass distribution (model m) of the surface of a solid plate, inferred from measurements taken at different plate locations (observed data dobs) recorded by gravimeters. In this case, the forward operator may be represented by Newton's law of universal gravitation. So, the inverse problem consists of obtaining a model m by matching the estimates dest computed by Eq (1) with the observed data dobs by means of an objective function, as we will discuss further.
In the conventional approach, model parameters are estimated by maximizing a likelihood whose error distribution is derived from the maximization of the Boltzmann-Gibbs-Shannon (BGS) entropy: SBGS = −∫ p(x) ln p(x) dx (2), subject to the following constraints: ∫ p(x) dx = 1 (3) and ∫ x² p(x) dx = σ² (4), where p is a probability function and x = dobs − dest = {x1, x2, …, xN} represents the difference between the observed data and the estimates. As is well known in the literature, the probability distribution determined from the optimization of the BGS entropy functional subject to the constraints in Eqs (3) and (4), with unit variance, corresponds to the standard Gaussian distribution (see, for instance, Section 2 of Ref. [42]): p(x) = exp(−x²/2)/√(2π) (5).
In other words, inverse problems in the conventional framework are solved based on the premise that the errors are independent and identically distributed (iid) according to a standard Gaussian likelihood function, which leads to the following optimization problem: max over m of ∏i p(xi) (6).
A likelihood function is a useful tool to estimate physical parameters from the empirical data. In practice, inverse problems based on Gauss' law of error are formulated in a least-squares sense. To see this, we notice that maximizing the likelihood function in (6) is equivalent to minimizing the negative of the log-likelihood, which, up to an additive constant, reads: φ(m) = (1/2) ∑i xi² (7).
However, it is worth emphasizing that in several problems the errors are non-Gaussian and, therefore, in such contexts the conventional approach becomes inefficient, especially in the presence of aberrant measures (outliers) [45].
For an objective function φ to admit a minimum, it must satisfy the condition ∂φ/∂mk = ∑i ci ψ(xi) = 0 (8), where mk is the k-th element of the parameter model vector m.
In Eq (8) the ci are arbitrary constants and ψ(xi) = ∂φ/∂xi is the so-called influence function [42, 46, 47]. The influence function in Eq (8) informs us about the sensitivity of the objective function to outliers. In this sense, if ψ(xi) grows without bound for xi → ±∞, the objective function is sensitive to outliers and, therefore, it is not a robust estimator. A robust estimator, however, satisfies ψ(xi) → 0 for xi → ±∞. In the conventional approach, however, the conventional objective function, Eq (7), is linearly influenced by outliers: ψ(xi) = xi (9).
To simplify the notation we restrict ourselves to the linear case: dest = Gm. In this case, the classical influence function is given by: ψ(xi) = diobs − ∑j Gij mj (10).
Note that when there are outliers (dobs → ∞), the classical influence function diverges (ψ → ±∞), which confirms the non-robustness of the classical framework.
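The linear growth of the classical influence function can be verified numerically. The following illustrative sketch (not from the paper) approximates the derivative of the least-squares objective by finite differences and confirms that the influence equals the residual itself, so it grows without bound:

```python
import numpy as np

# Least-squares objective for a single residual x: phi(x) = x**2 / 2.
# Its influence function is the derivative d(phi)/dx = x, which grows
# without bound as the residual grows: the signature of non-robustness.

def phi_ls(x):
    return 0.5 * x**2

def influence_ls(x, h=1e-6):
    # central finite-difference approximation of d(phi)/dx
    return (phi_ls(x + h) - phi_ls(x - h)) / (2.0 * h)

for x in [1.0, 10.0, 1000.0]:
    # the influence equals the residual itself: psi(x) = x
    assert abs(influence_ls(x) - x) < 1e-3
```

A single aberrant measurement therefore contributes to the gradient in proportion to its own amplitude, which is why one spike can dominate a least-squares fit.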
The conventional theory of inverse problems, based on Gaussian statistics, fails with errors outside the Gaussian domain. In this sense, we look for alternatives to generalize Gaussian statistics in order to find more robust methods to deal with outliers.
2.1 Rényi’s framework
Based on information theory, A. Rényi [32, 48] introduced a general information entropy as a one-parameter generalization of the BGS entropy. The Rényi entropy (η-entropy) functional is expressed by: Sη = (1/(1−η)) ln ∫ pη(x) dx (11), where η is the entropic index. Furthermore, the entropy in Eq (11) shares many of the properties of the BGS entropy, Eq (2): it is non-negative, it is additive, and it has an extremum in the case of equiprobability [32]. Rényi's entropy recovers Shannon's entropy in the limit η → 1. Applications of Rényi entropy can be found in several fields [49–51].
Taking into account the constraints in Eqs (3) and (4), the η-entropy is maximized by the η-generalized Gaussian distribution (η-Gaussian), which is expressed in the form [35, 52, 53]: pη(x) = Aη [1 + (1−η)x²/(3η−1)]+^(1/(η−1)) (12), where [u]+ = u if u ≥ 0 and [u]+ = 0 otherwise. In addition, Aη is the normalizing constant, which for 1/3 < η < 1 reads: Aη = √[(1−η)/(π(3η−1))] Γ(1/(1−η))/Γ((1+η)/(2(1−η))) (13), with Γ(⋅) representing the gamma function (the η > 1 branch is analogous). In the limit η → 1, the ordinary Gaussian probability distribution, Eq (5), is recovered. Fig 2 shows some curves of the η-Gaussian probability distribution. In particular, we note that in the limit η → 1/3 the probability distribution approaches a strongly peaked function.
The black dashed line represents the conventional curves.
Following the same path discussed in the previous Section, that is, using the maximum likelihood method, we find a generalized objective function [31]: φη(m) = (1/(1−η)) ∑i ln[1 + (1−η)xi²/(3η−1)] (14).
This function will be called the η-objective function, and in the limit η → 1 the conventional objective function, Eq (7), is recovered. To investigate the behavior of the η-objective function regarding outliers, we compute the influence function, as defined in Eq (8), named the η-influence function: ψη(xi) = 2xi/[(3η−1) + (1−η)xi²] (15).
A couple of illustrative curves of the influence function are shown in Fig 3; we draw attention to the limit case η → 1/3. In this region the influence of the outliers is minimized, ψη(xi) → 0 for xi → ±∞; in contrast, the conventional objective function, Eq (7), is strongly influenced by outliers since ψ(xi) = xi → ±∞.
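To make the two limits concrete, the following minimal Python sketch implements a per-residual η-objective of the deformed-logarithm form φη(x) = ln[1 + (1−η)x²/(3η−1)]/(1−η) and its analytic derivative. This form is an assumption consistent with the limits quoted in the text (η → 1 recovers the quadratic misfit, and near η → 1/3 the influence of a large residual decays as 1/x); the paper's exact normalization may differ.

```python
import numpy as np

def phi_eta(x, eta):
    # per-residual eta-objective: a deformed logarithm of the squared
    # residual; eta -> 1 recovers the least-squares term x**2 / 2
    return np.log1p((1.0 - eta) * x**2 / (3.0 * eta - 1.0)) / (1.0 - eta)

def psi_eta(x, eta):
    # eta-influence function: analytic derivative of phi_eta w.r.t. x
    return 2.0 * x / ((3.0 * eta - 1.0) + (1.0 - eta) * x**2)

x = 2.0
# eta -> 1: quadratic (least-squares) limit, compare with x**2 / 2
assert abs(phi_eta(x, 0.999999) - 0.5 * x**2) < 1e-3
# near eta = 1/3 a huge residual has a tiny influence (~3/x)
assert abs(psi_eta(1e4, 0.3334)) < 1e-2
```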
The black dashed line represents the conventional curves.
2.2 Tsallis’s framework
Based on multifractal quantities and long-range interactions, C. Tsallis postulated an alternative form for the entropy to generalize standard statistical mechanics [33]. Since then, a wide variety of applications have been performed based on Tsallis q-statistics [54–59]. The Tsallis approach is based on the q-entropy, defined as follows: Sq = (1/(q−1)) [1 − ∫ pq(x) dx] (16), where q is the entropic index (also known as the nonextensive parameter). The choice of the entropic index q assigns new properties to the entropy functional, and in the limit case q → 1 it recovers the conventional BGS entropy.
By considering the maximum entropy principle for the q-entropy, a q-generalization of Gauss' law of error was formulated in Ref. [36] assuming that the errors x follow a probability distribution. In this regard, the probability function is computed by maximizing the q-entropy constrained to the normalization condition, Eq (3), and the q-variance [60, 61] given by: σq² = ∫ x² Pq(x) dx (17), in which Pq(x) = pq(x)/∫ pq(x′) dx′ is the escort probability function [61–63]. The probability distribution resulting from the aforementioned optimization problem is known as the q-Gaussian distribution: pq(x) = Aq [1 − (1−q)x²/(3−q)]+^(1/(1−q)) (18), where the normalization constant, for 1 < q < 3, is given by [64]: Aq = √[(q−1)/(π(3−q))] Γ(1/(q−1))/Γ((3−q)/(2(q−1))) (19).
A comparison between the conventional entropy, Eq (2), and the Tsallis approach, Eq (16), reveals that the most probable events gain greater weight in the entropy calculation when q ≠ 1. In this sense, the usual average is replaced by an average that depends on the choice of the index, and the higher the value of this index [65], the higher the weights the most likely events receive. Fig 4 shows illustrative curves of the q-Gaussian probability distribution. It is important to note that in the limit q → 3 we have a behavior that reminds us of the Rényi distribution at η → 1/3: in this limit both distributions display a peaked behavior.
The black dashed line represents the conventional curves.
Applying the probabilistic maximum-likelihood method to the q-Gaussian distribution, we obtain the following objective function [38]: φq(m) = (1/(q−1)) ∑i ln[1 + (q−1)xi²/(3−q)] (20).
In order to check the influence of outliers on our objective function, we calculate the influence function [42]: ψq(xi) = 2xi/[(3−q) + (q−1)xi²] (21).
Eq (21) reveals that the Tsallis framework also yields a robust objective function that is resistant to outliers. Fig 5 displays a couple of influence function curves. In the limit xi → ±∞, the choice of an index q < 1 leaves the estimator undefined, because the term inside the brackets of Eq (18) will always result in a negative quantity: 1 − (1−q)xi²/(3−q) < 0. On the other hand, for q > 1 the term inside the brackets turns into a large positive number, leading to ψq(xi) → 0.
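The same check can be sketched for the Tsallis case, assuming the standard unit-variance q-Gaussian likelihood, so that the per-residual objective is ln[1 + (q−1)x²/(3−q)]/(q−1). The snippet verifies that q → 1 recovers the quadratic misfit and that near q → 3 the influence of a large residual behaves as 1/x:

```python
import numpy as np

def phi_q(x, q):
    # per-residual Tsallis (q) objective from the q-Gaussian likelihood;
    # q -> 1 recovers the least-squares term x**2 / 2
    return np.log1p((q - 1.0) * x**2 / (3.0 - q)) / (q - 1.0)

def psi_q(x, q):
    # q-influence function: derivative of phi_q with respect to x
    return 2.0 * x / ((3.0 - q) + (q - 1.0) * x**2)

x = 2.0
assert abs(phi_q(x, 1.000001) - 0.5 * x**2) < 1e-3
# q -> 3: the influence behaves like 1/x and vanishes for outliers
assert abs(psi_q(1e4, 2.9999) - 1.0 / 1e4) < 1e-6
```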
The black dashed line represents the conventional curves.
2.3 Kaniadakis’s framework
G. Kaniadakis proposed a new way to calculate the entropy based on the kinetic interaction principle [34, 66]. This κ-entropy, which generalizes the BGS statistics, is given by: Sκ = −∫ p(x) lnκ[p(x)] dx = −∫ {[p(x)]^(1+κ) − [p(x)]^(1−κ)}/(2κ) dx (22).
Kaniadakis statistics has been applied in different contexts [67–69]. The conventional entropy (BGS) is recovered in the limit of the entropic index κ → 0. Kaniadakis’ framework not only includes conventional BGS statistics, but it is related to other statistics, such as the famous quantum statistics of Fermi-Dirac and Bose-Einstein, as well as the Tsallis [34, 70].
Based on the κ-Gaussian statistics, Refs. [71, 72] present an error law that can be applied to a variety of problems and that, because it has a heavy-tailed distribution, is able to handle outliers satisfactorily. The κ-Gaussian distribution is given by: pκ(x) = Aκ expκ(−βκ x²) (23), where expκ(u) = (√(1+κ²u²) + κu)^(1/κ) is the κ-exponential function, Aκ is the normalization constant (24), and βκ > 0 is a scale parameter that depends on the κ index (25).
Some curves of the κ-Gaussian distribution are shown in Fig 6. We notice that the distribution has heavy tails when choosing κ → 2/3 and at this limit, as well as in the η-Gaussian and q-Gaussian distributions, the distribution shows a peaked behavior that resembles the Dirac delta distribution.
The black dashed line represents the conventional curves.
In this scenario, the inverse problem is formulated as the optimization of the κ-objective function derived from the principle of maximum likelihood: φκ(m) = (1/κ) ∑i arcsinh(κ βκ xi²) (26), which recovers the conventional objective function, Eq (7), in the limit κ → 0 (where βκ → 1/2). The robustness of this objective function can be analyzed through the κ-influence function, which is given by: ψκ(xi) = 2βκ xi/√(1 + κ²βκ²xi⁴) (27).
The curves of the κ-objective and κ-influence functions are shown in Fig 7. In Fig 7(c) and 7(d) we notice that as we increase the index κ the values of the influence function far from the vicinity of x = 0 become negligible. In particular, in the limit κ → 2/3 we observe the curve that indicates the least influence of x → ±∞ errors. Looking at Eq (27), we notice that for xi → ±∞ we have ψκ(xi) → 0, decaying as 2/(κxi).
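A minimal sketch of the κ case follows, using the identity ln expκ(u) = arcsinh(κu)/κ and taking βκ = 1/2 for illustration so that κ → 0 reduces to the least-squares term (the paper's exact βκ depends on κ and is not reproduced here). The snippet checks the κ → 0 limit and the 2/(κx) decay of the influence for large residuals:

```python
import numpy as np

def phi_kappa(x, kappa, beta=0.5):
    # per-residual Kaniadakis (kappa) objective: minus the log of the
    # kappa-exponential likelihood, using ln exp_k(u) = arcsinh(k*u)/k;
    # kappa -> 0 recovers beta * x**2 (least squares for beta = 1/2)
    return np.arcsinh(kappa * beta * x**2) / kappa

def psi_kappa(x, kappa, beta=0.5):
    # kappa-influence function: derivative of phi_kappa w.r.t. x
    return 2.0 * beta * x / np.sqrt(1.0 + (kappa * beta * x**2) ** 2)

x = 2.0
assert abs(phi_kappa(x, 1e-8) - 0.5 * x**2) < 1e-6
# for large residuals the influence decays as 2/(kappa * x)
k = 0.6666
assert abs(psi_kappa(1e4, k) - 2.0 / (k * 1e4)) < 1e-8
```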
The black dashed line represents the conventional curves.
3 Numerical experiments
3.1 Comparing the performance of objective functions
In this section, we present a simple numerical experiment to quantitatively analyze the robustness of objective functions based on generalized statistics. The experiment consists of estimating the coefficients m = {m1, m2} of a linear polynomial, dest = m1x + m2, from observed data dobs contaminated by outliers. In this regard, we consider the independent variable within the range [−1, 1] to generate 50 numbers obeying a linear polynomial with coefficients m1 = 1 and m2 = 2. Then we contaminate the generated numbers with Gaussian noise with zero mean and variance σ² = 0.2. In addition, dobs is contaminated with outliers in the region 0.4 ≤ x < 0.9, where the spike amplitudes are proportional to f, with f a Gaussian random variable with zero mean and unit standard deviation.
Conventionally, this problem is treated by minimizing the square of the residuals based on Gaussian statistics [15, 73]. However, the Gaussian estimate is not appropriate for problems with discrepant values. In this sense, we propose to estimate the coefficients m using the objective functions of Rényi (Eq (14)), Tsallis (Eq (20)) and Kaniadakis (Eq (26)). The entropic indices were taken in the ranges 0.3334 ≤ η ≤ 1, 1 ≤ q ≤ 2.9999 and 0 ≤ κ ≤ 0.6666; inside each interval, 200 uniformly spaced values were used. Fig 8 shows the fit between the observed data and the lines estimated with the generalized objective functions.
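A hedged sketch of this kind of experiment is given below. It is not the authors' code: the spike amplitudes, the random seed, the one-sided spike model, and the coarse grid search (in place of a proper optimizer) are illustrative assumptions. It contrasts a closed-form least-squares fit with a fit under the deformed-log q-objective for q close to 3:

```python
import numpy as np

rng = np.random.default_rng(0)

# synthetic line d = 1*x + 2 with Gaussian noise and one-sided spikes
x = np.linspace(-1.0, 1.0, 50)
d = 1.0 * x + 2.0 + rng.normal(0.0, np.sqrt(0.2), x.size)
mask = (x >= 0.4) & (x < 0.9)                    # outlier region
d[mask] += 40.0 + 10.0 * np.abs(rng.standard_normal(mask.sum()))

# conventional least-squares fit (closed form)
m1_ls, m2_ls = np.polyfit(x, d, 1)

# Tsallis q-objective minimized by a simple grid search over (m1, m2)
q = 2.9
def phi_q(m1, m2):
    r = d - (m1 * x + m2)
    return np.sum(np.log1p((q - 1.0) * r**2 / (3.0 - q))) / (q - 1.0)

grid1 = np.linspace(0.0, 2.0, 81)
grid2 = np.linspace(1.0, 3.0, 81)
vals = np.array([[phi_q(a, b) for b in grid2] for a in grid1])
i, j = np.unravel_index(np.argmin(vals), vals.shape)
m1_q, m2_q = grid1[i], grid2[j]

err_ls = max(abs(m1_ls - 1.0), abs(m2_ls - 2.0))
err_q = max(abs(m1_q - 1.0), abs(m2_q - 2.0))
assert err_ls > 1.0   # least squares is wrecked by the spikes
assert err_q < 0.75   # the q-objective stays near (m1, m2) = (1, 2)
assert err_q < err_ls
```

The robust objective effectively saturates the contribution of the spiked samples, so the minimum remains close to the true coefficients while least squares is dragged toward the outlier cluster.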
The dashed line indicates an ideal line, and the points highlighted in red indicate the inserted outliers.
The estimated models, mest, are compared with the ideal model m = {1, 2} to find, for each objective function, the entropic index that achieves an optimal fit. Fig 9 correlates the estimated values of the intercept, m2est, and the slope, m1est, with the employed entropic index. We can see that as we move away from the Gaussian limit, we find a better fit.
The zoom-in windows in (a) and (b) emphasize the regions 1/3 < η ≤ 0.5, 2.5 ≤ q < 3 and 0.5 ≤ κ < 2/3.
We compared the lines obtained with the estimated parameters by calculating the Mean Absolute Error, MAE = (1/N)∑i∣miest − mi∣, and the symmetric Mean Absolute Percentage Error, sMAPE = (2/N)∑i∣miest − mi∣/(∣miest∣ + ∣mi∣). The obtained results are: MAEη = 0.0136 with index η = 0.3635 and sMAPEη = 0.0861 with index η = 0.3602; MAEq = 0.0134 with q = 2.7587 and sMAPEq = 0.0853 with index q = 2.7487; and MAEκ = 0.0127 with κ = 0.6532 and sMAPEκ = 0.0806 with index κ = 0.6566. The MAE represents the average deviation between predicted and reference values, and the best result is achieved for MAE close to zero. In contrast, the MAE calculated with the conventional objective function was 1.2000, while the sMAPE was 0.5791.
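The two error metrics follow directly from the formulas above; a minimal implementation is:

```python
import numpy as np

def mae(m_est, m_true):
    # mean absolute error between estimated and reference parameters
    return np.mean(np.abs(np.asarray(m_est, float) - np.asarray(m_true, float)))

def smape(m_est, m_true):
    # symmetric mean absolute percentage error
    a = np.asarray(m_est, float)
    b = np.asarray(m_true, float)
    return np.mean(2.0 * np.abs(a - b) / (np.abs(a) + np.abs(b)))

assert mae([1.0, 2.0], [1.0, 2.0]) == 0.0
assert abs(mae([1.1, 2.2], [1.0, 2.0]) - 0.15) < 1e-12
assert smape([2.0, 2.0], [2.0, 2.0]) == 0.0
```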
With this test, we observe that the results improve as we move away from the Gaussian limit. In particular, we notice that in Fig 9 the green curve corresponding to the Tsallis result shows the steepest decrease after the Gaussian limit (q = 1), while the red curve (Kaniadakis) presents a curve that slowly decreases after the Gaussian limit.
An analysis of the influence functions, Eqs (15), (21) and (27), reveals that they vanish for xi → ±∞, regardless of the choice of the entropic index (obviously, disregarding the conventional limit). In addition, we notice that in the limits η → 1/3, q → 3 and κ → 2/3 the influence functions reduce to functions proportional to the inverse of xi. Thus, the three objective functions, Eqs (14), (20) and (26), are resistant to outliers and share an entropic index limit in which they are equivalent.
3.2 Post-Stack inversion
In this section, we present numerical experiments to demonstrate the outlier-resistance of the data-inversion methods based on generalized statistics by considering an important problem from geophysics, whose purpose is to obtain estimates of subsurface properties. In particular, we address a seismic inversion problem known as Post-Stack Inversion (PSI) [74]. The goal of PSI is to estimate, from the observed seismic data, the acoustic impedance, which is a property of the rock defined as the product of the density of the rock and the speed of the acoustic wave in the subsurface [75].
The forward problem is formulated through the following convolutional model relationship [75]: dest(t) = ω(t) ∗ (1/2) d[ln Z(t)]/dt (28).
The latter equation states that the seismic data are given by the convolution between a source wavelet and the reflectivity series derived from the acoustic impedance of the subsurface medium. Eq (28) can be written in compact form as dest = Gm. Here, dest represents the seismic data calculated from the parameters m = ln(Z), which are used to estimate the acoustic impedance Z. The operators W and D are described by [76] (29), where W is the wavelet operator, which computes the convolution between the seismic wavelet and the reflectivity, and D represents the (1/2-scaled) first-order derivative operator. In addition, we can group these two matrices into a single operator G = WD. In this way, we compute the residuals x = dobs − Gm between the observed data and the estimated data.
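As an illustrative sketch of this convolutional forward operator (assuming, for concreteness, the standard post-stack parameterization m = ln Z with a 1/2-scaled forward-difference derivative, and a hypothetical three-sample wavelet), the two matrices can be built and composed as follows. A constant impedance has zero reflectivity, so the modeled data must vanish:

```python
import numpy as np

def first_derivative(n):
    # first-order difference operator D (forward differences)
    D = np.zeros((n, n))
    for i in range(n - 1):
        D[i, i], D[i, i + 1] = -1.0, 1.0
    return D

def wavelet_matrix(w, n):
    # W applies a short wavelet w, centered, to a length-n signal
    W = np.zeros((n, n))
    half = len(w) // 2
    for i in range(n):
        for k, wk in enumerate(w):
            j = i + k - half
            if 0 <= j < n:
                W[i, j] = wk
    return W

n = 8
Z = np.full(n, 2.0)             # constant impedance => zero reflectivity
m = np.log(Z)                   # model parameters m = ln(Z)
G = wavelet_matrix([0.2, 1.0, 0.2], n) @ (0.5 * first_derivative(n))
d_est = G @ m                   # d_est = W D m
assert np.allclose(d_est, 0.0)  # no impedance contrast, no reflections
```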
To analyze the outlier-resistance of the objective functions presented in Section 2, we consider a portion of the synthetic geological Marmousi2 model [77, 78] as a benchmark (true model). In particular, we take into account the acoustic impedance model that consists of 5 km of depth and 1 km of distance, as depicted in Fig 10. The seismic source used was a Ricker wavelet [79, 80] with peak frequency νp = 55 Hz (the most energetic frequency). For a general peak frequency νp, the Ricker wavelet ω(t) is defined as: ω(t) = [1 − 2π²νp²t²] exp(−π²νp²t²) (30).
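The Ricker wavelet of Eq (30) is straightforward to evaluate; the sketch below checks its unit amplitude and maximum at t = 0:

```python
import numpy as np

def ricker(t, nu_p):
    # Ricker wavelet with peak frequency nu_p (Hz):
    # w(t) = (1 - 2*pi^2*nu_p^2*t^2) * exp(-pi^2*nu_p^2*t^2)
    a = (np.pi * nu_p * t) ** 2
    return (1.0 - 2.0 * a) * np.exp(-a)

t = np.linspace(-0.05, 0.05, 1001)   # 100 ms window
w = ricker(t, 55.0)
assert abs(w[500] - 1.0) < 1e-9      # unit amplitude at t = 0
assert w[500] == w.max()             # the peak sits at the origin
```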
In (a) the synthetic acoustic impedance model called Marmousi2. In (b) we show the initial model employed in the inversion methodology.
We test the robustness of the generalized objective functions using seismic data contaminated with white Gaussian noise of low intensity (taking a signal-to-noise ratio equal to 80 dB in all scenarios) and spikes (outliers) with different intensities, as shown in Fig 11. The spikes were added by randomly choosing positions in the seismic data and adding peaks with intensities between 5f and 15f times the original amplitude, where f is a Gaussian random variable with zero mean and unit standard deviation. We considered 161 noise scenarios, which differ in the percentage of samples contaminated by the outliers. In this regard, the number of contaminated samples was chosen from 0% to 80% of the data samples, in steps of 0.5%.
The noiseless seismic data in (a). The same data contaminated with white noise (signal-to-noise ratio SNR = 80 dB) and spike noise with (b) 0.5%, (c) 5%, and (d) 80% contamination. The black line in panel (e) represents a single seismic trace from the middle of panel (a). The same trace contaminated by noise and spikes (25%) is shown in (f).
For each spiky-noise scenario, we carried out data inversions employing the η-, q- and κ-objective functions, Eqs (14), (20) and (26), respectively, with 1/3 < η ≤ 1, 1 ≤ q < 3 and 0 ≤ κ ≤ 2/3, using 200 uniformly spaced values for each of these parameters. In total, for each objective function, we obtain 32,200 results (161 scenarios × 200 entropic-index values). To minimize the objective functions, we employ the conjugate gradient method [81, 82], defining a maximum of 10 iterations and a tolerance ε = 10⁻¹².
We consider Pearson's correlation coefficient [83] as the statistical metric to compare the PSI results, which is defined as: R = ∑i Δmitrue Δmiest / √[∑i (Δmitrue)² ∑i (Δmiest)²] (31), where Δmtrue = mtrue − μtrue (with mtrue = m) and Δmest = mest − μest are the deviations of the true and recovered models from their respective averages μ. The coefficient R assumes values between −1 and +1. A value of R close to zero implies the absence of correlation, while the correlation is strong when it approaches one. Figs 12 and 13 show the recovered acoustic impedance for the conventional and generalized objective functions. We notice that the PSI results obtained with the conventional objective function are severely affected by the presence of outliers. On the other hand, at the limits η → 1/3, q → 3 and κ → 2/3 the influence of the spikes is minimized, and good estimates are obtained.
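The model-comparison metric of Eq (31) amounts to the centered cross-correlation of the flattened model vectors; a minimal implementation is:

```python
import numpy as np

def pearson_r(a, b):
    # Pearson correlation between (flattened) model vectors
    a = np.asarray(a, float).ravel()
    b = np.asarray(b, float).ravel()
    a = a - a.mean()
    b = b - b.mean()
    return float(a @ b / np.sqrt((a @ a) * (b @ b)))

x = np.array([1.0, 2.0, 3.0, 4.0])
assert abs(pearson_r(x, 2.0 * x + 1.0) - 1.0) < 1e-12   # perfect correlation
assert abs(pearson_r(x, -x) + 1.0) < 1e-12              # perfect anticorrelation
```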
We summarize the PSI results for all numerical simulations performed in the present work in Fig 14. In this figure, we remark that the objective functions present satisfactory results in the η → 1/3, q → 3 and κ → 2/3 limit cases, as predicted by the numerical experiment presented in Section 3.1. Indeed, in these limit cases, the generalized objective functions are robust tools capable of mitigating the influence of outliers, which leads to good results even for high spike contamination. In addition, it should be noted that the PSI results associated with the reddish regions of the heat map in Fig 14 show a strong correlation regardless of the contamination rate employed.
The white markings indicate points such that R = 0.9 (strong correlation).
We also calculated the MAE and sMAPE distances between the reconstructed models of each numerical experiment and the true model. In Table 1, we present the smallest errors for each statistic for some values of the percentage of spikes (%Sp). Analyzing these results, we observe that the conventional approach is superior to those based on generalized statistics when the noise is purely Gaussian (%Sp = 0). However, when non-Gaussian errors are considered, the generalized approaches are superior to the conventional approach, especially in the case of the Kaniadakis statistics, which presents the smallest errors.
%Sp indicates the percentage of spikes, while ζ represents a normalization constant used to present the MAE on a convenient scale. The maximum MAE value is ζ = 4,020.91, which occurred in the numerical test with %Sp = 80 using the conventional objective function.
4 Conclusions
In this work we explore robust methods based on the Rényi, Tsallis and Kaniadakis generalized statistics. Since the solution of the data-inversion strongly depends on the employed objective function, the generalized objective functions are indeed valuable for this purpose. In fact, given a proper choice of the entropic index, it is possible to get a robust objective function that handles errors that do not obey the Gaussian statistics.
The manuscript studied three statistics: Rényi, Tsallis and Kaniadakis. Each of them is associated with a parameter: η, q and κ, respectively. The performance of the statistics depends on the values of these parameters, as indicated in Fig 14. However, the parameters span distinct ranges: 1/3 < η < 1, 1 < q < 3 and 0 < κ < 2/3, which makes the comparison tricky. In Fig 14 we equally sampled 200 points along the x and y axes in order to compare the relative performance of the statistics. The white curves in the figure indicate the points corresponding to a correlation coefficient equal to 0.9. Considering the number of points below this curve (correlation coefficients above 0.9), a visual inspection shows that the Tsallis statistics has the best inversion performance for this particular seismic problem. However, the error metrics (MAE and sMAPE) indicate that the reconstructed models using the Kaniadakis κ-statistics are quantitatively better and closer to the true model in the limit κ → 2/3.
In particular, we investigate a special example of non-Gaussian errors: outliers. In this scenario, we seek to answer some basic questions: (i) what is the most appropriate choice for entropic indexes? (ii) Which of the proposed methods is more resistant to outliers? For the first question, we find that there is a limit for every method in which the objective functions are able to ignore aberrant values without compromising the results. In addition, we note that the properties of the η-, q- and κ-generalized distributions and the respective objective functions have similar characteristics at the limit: (η, q, κ)→(1/3, 3, 2/3). These findings are relevant not only in the inverse problem theory, but also in a broad statistical context [12, 84, 85].
To conclude, it is worth emphasizing that although these methodologies have been successfully employed in geophysical applications, our proposals are easily adaptable to a wide variety of parameter estimation problems. In this regard, we hope that the methodologies proposed in this work are of great value for the modelling of complex systems with numerous unknown variables, as the generalized objective functions are able to reduce the computational cost by accelerating the convergence of the process of optimization, as shown by analyzing the influence function.
Acknowledgments
J.V.T. de Lima, J.M. de Araújo, G. Corso and G.Z. dos Santos Lima gratefully acknowledge support from Petrobras through the project Statistical physics inversion for multi-parameters in reservoir characterization at Federal University of Rio Grande do Norte. J.M. de Araújo and G. Corso acknowledge Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq) for support through productivity fellowships. Gustavo Z. dos Santos Lima acknowledges CNPq for support through a productivity fellowship PQ (309440/2022-0).
References
- 1. Raissi M, Perdikaris P, Karniadakis GE. Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations. J Comput Phys. 2019;378:686–707.
- 2. Harlim J, Jiang SW, Liang S, Yang H. Machine learning for prediction with missing dynamics. J Comput Phys. 2021;428:109922.
- 3. da Silva SLEF, Julià J, Bezerra FHR. Deviatoric Moment Tensor Solutions from Spectral Amplitudes in Surface Network Recordings: Case Study in São Caetano, Pernambuco, Brazil. Bull Seism Soc Am. 2017;107:1495–1511.
- 4. da Silva SLEF, Carvalho PTC, da Costa CAN, de Araújo JM, Corso G. An objective function for full-waveform inversion based on frequency-dependent offset-preconditioning. PLoS One. 2020;15:e0240999. pmid:33112904
- 5. Engl HW, Flamm C, Kügler P, Lu J, Müller S, Schuster P. Inverse problems in systems biology. Inverse Probl. 2009;25:123014.
- 6. Clermont G, Zenker S. The inverse problem in mathematical biology. Math Biosci. 2015;260:11–15. pmid:25445734
- 7. Ba Y, Jiang L, Ou N. A two-stage ensemble Kalman filter based on multiscale model reduction for inverse problems in time fractional diffusion-wave equations. J Comput Phys. 2018;374:300–330.
- 8. Razavy M. An Introduction to Inverse Problems in Physics. World Scientific Publishing Company, New Jersey; 2012.
- 9. Kuramshina G. Inverse Problems of Molecular Spectra Data Processing. In: Wang Y, Yang C, Yagola AG, editors. Optimization and Regularization for Computational Inverse Problems and Applications. Springer, Berlin, Heidelberg; 2010.
- 10. de Freitas Silva FW, da Silva SLEF, Henriques MVC, Corso G. Using fish lateral line sensing to improve seismic acquisition and processing. PLoS ONE. 2019;14:e0213847.
- 11. Heidel G, Khoromskaia V, Khoromskij BN, Schulz V. Tensor product method for fast solution of optimal control problems with fractional multidimensional Laplacian in constraints. J Comput Phys. 2021;424:109865.
- 12. Tarantola A. Inverse Problem Theory and Methods for Model Parameter Estimation. SIAM; 2005.
- 13. Liang Y, Chen W. A Survey on Computing Lévy Stable Distributions and a New MATLAB Toolbox. Signal Process. 2013;10:242.
- 14. Nolan JP. Univariate Stable Distributions. Springer; 2020.
- 15. Menke W. Geophysical Data Analysis: Discrete Inverse Theory. Academic Press; 2012.
- 16. Kendall M, Stuart A. The Advanced Theory of Statistics: Inference and Relationship. Hodder Arnold, London; 1979.
- 17. Thomson RE, Emery WJ. Chapter 3—Statistical Methods and Error Handling. In: Data Analysis Methods in Physical Oceanography. 3rd ed. Boston: Elsevier; 2014. p. 219–311.
- 18. Scales JA, Gersztenkorn A. Robust methods in inverse theory. Inverse Probl. 1988;4:1071–1091.
- 19. Tarantola A. Inverse problem theory: Methods for data fitting and model parameter estimation. Elsevier Scientific Publ. Co.; 1987.
- 20. Huber PJ, et al. Robust Regression: Asymptotics, Conjectures and Monte Carlo. Ann Stat. 1973;1(5):799–821.
- 21. Guitton A, Symes WW. Robust inversion of seismic data using the Huber norm. Geophysics. 2003;68(4):1310–1319.
- 22. Bube KP, Langan RT. Hybrid l1/l2 minimization with applications to tomography. Geophysics. 1997;62:1183–1195.
- 23. Brossier R, Operto S, Virieux J. Which data residual norm for robust elastic frequency-domain full waveform inversion? Geophysics. 2010;75(3):R37–R46.
- 24. Zhou P, Lv Y, Wang H, Chai T. Data-Driven Robust RVFLNs Modeling of a Blast Furnace Iron-Making Process Using Cauchy Distribution Weighted M-Estimation. IEEE Trans Ind Electron. 2017;64:7141–7151.
- 25. Ubaidillah A, Notodiputro KA, Kurnia A, Fitrianto A, Mangku IW. A robustness study of student-t distributions in regression models with application to infant birth weight data in Indonesia. IOP Conf Ser: Earth Environ Sci. 2017;58:012013.
- 26. Li Z, Liu Z, Song C, Hu G, Zhang J. Generalized Gaussian distribution based adaptive mixed-norm inversion for non-Gaussian noise. Society of Exploration Geophysicists; 2015. p. 3926–3930.
- 27. Carozzi F, Sacchi MD. Making seismic reconstruction more robust via a generalized loss function. Society of Exploration Geophysicists; 2020. p. 3149–3153.
- 28. da Silva SLEF, Kaniadakis G. Robust parameter estimation based on the generalized log-likelihood in the context of Sharma-Taneja-Mittal measure. Phys Rev E. 2021;104:024107. pmid:34525653
- 29. Silva SA, da Silva SLEF, de Souza RF, Marinho AA, de Araújo JM, Bezerra CG. Improving Seismic Inversion Robustness via Deformed Jackson Gaussian. Entropy. 2021;23:1081. pmid:34441220
- 30. da Silva SLEF, de Araújo JM, Corso G. Full-waveform Inversion Based on q-Laplace Distribution. Pure Appl Geophys. 2021;178:3415.
- 31. da Silva SLEF, dos Santos Lima GZ, de Araújo JM, Corso G. Extensive and nonextensive statistics in seismic inversion. Physica A: Statistical Mechanics and its Applications. 2021;563:125496.
- 32. Rényi A. On the Foundations of Information Theory. Rev Inst Int Stat. 1965;33(1):1–14.
- 33. Tsallis C. Possible generalization of Boltzmann-Gibbs statistics. J Stat Phys. 1988;52:479–487.
- 34. Kaniadakis G. Non-linear kinetics underlying generalized statistics. Physica A. 2001;296(3-4):405–425.
- 35. Tanaka HA, Nakagawa M, Oohama Y. A Direct Link between Rényi–Tsallis Entropy and Hölder’s Inequality—Yet Another Proof of Rényi–Tsallis Entropy Maximization. Entropy. 2019;21(6):549. pmid:33267263
- 36. Suyari H, Tsukada M. Law of error in Tsallis statistics. IEEE Trans Inf Theory. 2005;51(2):753–757.
- 37. Wada T, Suyari H. κ-generalization of Gauss’ law of error. Phys Lett A. 2006;348(3):89–93.
- 38. da Silva SLEF, da Costa CA, Carvalho PTC, de Araújo JM, dos Santos Lucena L, Corso G. Robust full-waveform inversion using q-statistics. Physica A. 2020;548:124473.
- 39. da Silva SLEF, Carvalho PTC, de Araújo JM, Corso G. Full-waveform inversion based on Kaniadakis statistics. Phys Rev E. 2020;101:053311. pmid:32575242
- 40. de Lima IP, da Silva SLEF, Corso G, de Araújo JM. Tsallis Entropy, Likelihood, and the Robust Seismic Inversion. Entropy. 2020;22:464. pmid:33286238
- 41. Da Silva S, Da Costa C, Carvalho P, Araújo J, Lucena L, Corso G. An objective function based on q-Gaussian distribution for full-waveform inversion. In: 82nd EAGE Annual Conference & Exhibition. European Association of Geoscientists & Engineers; 2020. p. 1–5.
- 42. de Lima JVT, da Silva SLEF, de Araújo JM, Corso G, dos Santos Lima GZ. Nonextensive statistical mechanics for robust physical parameter estimation: the role of entropic index. Eur Phys J Plus. 2021;136(3).
- 43. da Silva SLEF, dos Santos Lima GZ, Volpe EV, et al. Robust approaches for inverse problems based on Tsallis and Kaniadakis generalised statistics. Eur Phys J Plus. 2021;136:518.
- 44. da Silva SLEF, Lopez J, de Araújo JM, Corso G. Multi-scale q-FWI applied to circular shot OBN acquisition for accurate pre-salt velocity estimates. IMAGE Technical Program Expanded Abstracts. 2021; p. 712–716.
- 45. Claerbout JF, Muir F. Robust modeling with erratic data. Geophysics. 1973;38(5):826–844.
- 46. Hampel FR. The influence curve and its role in robust estimation. J Am Stat Assoc. 1974;69:383.
- 47. Hampel FR, Ronchetti EM, Rousseeuw PJ, Stahel WA. Robust Statistics: The Approach Based on Influence Functions. Wiley-Interscience; 2005.
- 48. Rényi A. On measures of entropy and information. In: Proceedings of the Fourth Berkeley Symposium on Mathematical Statistics and Probability, Volume 1: Contributions to the Theory of Statistics. The Regents of the University of California; 1961.
- 49. Wang SH, Muhammad K, Lv Y, Sui Y, Han L, Zhang YD. Identification of Alcoholism Based on Wavelet Renyi Entropy and Three-Segment Encoded Jaya Algorithm. Complexity. 2018;2018:1–13.
- 50. Sánchez-Moreno P, Dehesa JS, Manzano D, Yáñez RJ. Spreading lengths of Hermite polynomials. J Comput Appl Math. 2010;233(9):2136–2148.
- 51. Dong X. The gravity dual of Rényi entropy. Nat Commun. 2016;7(1):1–6. pmid:27515122
- 52. Costa J, Hero A, Vignat C. On solutions to multivariate maximum α-entropy problems. In: International Workshop on Energy Minimization Methods in Computer Vision and Pattern Recognition. Springer; 2003. p. 211–226.
- 53. Johnson O, Vignat C. Some results concerning maximum Rényi entropy distributions. Ann Inst H Poincare B Probab Stat. 2007;43(3):339–351.
- 54. Picoli S Jr, Mendes RS, Malacarne LC, Santos RPB. q-distributions in complex systems: a brief review. Braz J Phys. 2009;39:468.
- 55. da Silva SLEF. Newton’s cooling law in generalised statistical mechanics. Physica A. 2021;565:125539.
- 56. de la Barra E, Vega-Jorquera P. On q-pareto distribution: some properties and application to earthquakes. Eur Phys J B. 2021;94:32.
- 57. de la Barra E, Vega-Jorquera P, da Silva SLEF, Torres H. Hydraulic fracturing assessment on seismic hazard by Tsallis statistics. Eur Phys J B. 2021;95:92.
- 58. da Silva SLEF, Karsou A, de Souza A, Capuzzo F, Costa F, Moreira R, et al. A graph-space optimal transport objective function based on q-statistics to mitigate cycle-skipping issues in FWI. Geophys J Int. 2022;231(2):1363–1385.
- 59. da Silva SLEF, Corso G. Nonextensive Gutenberg-Richter law and the connection between earthquakes and marsquakes. Eur Phys J B. 2021;94:25.
- 60. Tsallis C, Mendes RS, Plastino AR. The role of constraints within generalized nonextensive statistics. Physica A. 1998;261:534.
- 61. Abe S, Bagci GB. Necessity of q-expectation value in nonextensive statistical mechanics. Phys Rev E. 2005;71:016139. pmid:15697690
- 62. Beck C, Schlögl F. Thermodynamics of chaotic systems: an introduction. Cambridge University Press; 1993.
- 63. Abe S. Remark on the Escort distribution representation of nonextensive statistical mechanics. Phys Lett A. 2000;275(4):250–253.
- 64. Prato D, Tsallis C. Nonextensive foundation of Lévy distributions. Phys Rev E. 1999;60(2):2398. pmid:11970038
- 65. Hasegawa Y, Arita M. Properties of the maximum q-likelihood estimator for independent random variables. Physica A. 2009;388(17):3399–3412.
- 66. Kaniadakis G. Statistical mechanics in the context of special relativity. Phys Rev E. 2002;66:056125. pmid:12513574
- 67. Kaniadakis G, Baldi MM, Deisboeck TS, Grisolia G, Hristopulos DT, Scarfone AM, et al. The κ-statistics approach to epidemiology. Sci Rep. 2020;10:19949. pmid:33203913
- 68. da Silva SLEF. κ-generalised Gutenberg–Richter law and the self-similarity of earthquakes. Chaos Solitons Fractals. 2021;143:110622.
- 69. Kaniadakis G. New power-law tailed distributions emerging in κ-statistics. EPL. 2021;133:10002.
- 70. Santos AP, Silva R, Alcaniz JS, Anselmo DHAL. Kaniadakis statistics and the quantum H-theorem. Phys Lett A. 2011;375(3):352–355.
- 71. da Silva SLEF, dos Santos Lima GZ, de Araújo JM, Corso G. An outlier-resistant κ-generalized approach for robust physical parameter estimation. Physica A. 2022;600:127554.
- 72. da Silva SLEF, Kaniadakis G. κ-statistics approach to optimal transport waveform inversion. Phys Rev E. 2022;106:034113. pmid:36266843
- 73. Weisberg S. Applied linear regression. vol. 528. John Wiley & Sons; 2005.
- 74. Russell B, Hampson D. Comparison of poststack seismic inversion methods. SEG Technical Program Expanded Abstracts; 1991. p. 876.
- 75. Sen MK. Seismic inversion. SPE; 2006.
- 76. Wu H, Li S, Chen Y, Peng Z. Seismic impedance inversion using second-order overlapping group sparsity with A-ADMM. J Geophys Eng. 2020;17(1):97–116.
- 77. Versteeg R. The Marmousi experience: Velocity model determination on a synthetic complex data set. Lead Edge. 1994;13:927–936.
- 78. Martin GS, Wiley R, Marfurt KJ. Marmousi2: An elastic upgrade for Marmousi. Lead Edge. 2006;25:156–166.
- 79. Ricker N. Further developments in the wavelet theory of seismogram structure. Bull Seismol Soc Am. 1943;3:197–228.
- 80. Ricker N. Wavelet functions and their polynomials. Geophysics. 1944;9:314–323.
- 81. Stiefel E. Methods of conjugate gradients for solving linear systems. J Res Nat Bur Standards. 1952;49:409–435.
- 82. Scales JA, Smith ML, Treitel S. Introductory geophysical inverse theory. Citeseer; 1994.
- 83. Evans JD. Straightforward statistics for the behavioral sciences. Thomson Brooks/Cole Publishing Co; 1996.
- 84. Cressie NAC. Statistics for Spatial Data. Wiley Series in Probability and Statistics; 1993.
- 85. Hristopulos DT. Random Fields for Spatial Data Modeling: A Primer for Scientists and Engineers. Springer Nature; 2020.