I develop low-rank tensor methods for solving multi-parametric partial differential equations and systems with uncertainties. This work is closely connected with inverse problems, Bayesian updating, data assimilation, optimal design of experiments, and optimal control. All of these tasks require a large amount of computational resources, so I am also interested in efficient parallel algorithms and implementations. Supervisors: David Keyes, Raul Tempone, Hermann G. Matthies, Wolfgang Hackbusch
In this article we introduce new methods for the analysis of high-dimensional data in tensor formats, where the underlying data come from a stochastic elliptic boundary value problem. After discretisation of the deterministic operator as well as of the random fields, represented via KLE and PCE, the resulting high-dimensional operator can be approximated by sums of elementary tensors. This tensor representation can be used effectively for computing various quantities of interest, such as the maximum norm, level sets, and the cumulative distribution function. The basic concept of data analysis in high dimensions is discussed for tensors represented in the canonical format; however, the approach carries over easily to other tensor formats. As an intermediate step we describe efficient iterative algorithms for computing the characteristic and sign functions as well as the pointwise inverse in the canonical tensor format. Since the representation rank grows during most algebraic operations as well as during the iteration steps, we use low-rank approximation and inexact recursive iteration schemes.
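The pointwise sign and characteristic functions mentioned above can be obtained with an inverse-free Newton-Schulz-type elementwise iteration. The sketch below is a minimal illustration on a plain NumPy array, not the paper's rank-truncated canonical-format algorithm; the scaling by the max norm and the iteration count are assumptions for this toy setting.

```python
import numpy as np

def pointwise_sign(x, iters=60):
    """Elementwise Newton-Schulz iteration y <- y * (3 - y^2) / 2.

    After scaling x into [-1, 1], each entry converges to sign(x_i).
    In the tensor setting the same iteration would be run on the
    compressed representation with rank truncation after each step;
    here we use a dense array for illustration.
    """
    y = x / np.max(np.abs(x))          # scale entries into [-1, 1]
    for _ in range(iters):
        y = 0.5 * y * (3.0 - y * y)    # inverse-free Newton step
    return y

def characteristic(x, a):
    """Indicator of the level set {x > a}: chi = (1 + sign(x - a)) / 2."""
    return 0.5 * (1.0 + pointwise_sign(x - a))

x = np.array([-0.9, -0.3, 0.2, 0.7, 1.5])
chi = characteristic(x, 0.0)
# empirical CDF value at 0: fraction of entries <= 0
cdf_at_0 = 1.0 - chi.mean()
```

From the characteristic function, level-set volumes and the cumulative distribution function follow by summation, which is exactly why a compressed-format sign iteration is useful in high dimensions.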
To approximate a random field with as few random variables as possible, while still retaining the essential information, the Karhunen-Loève expansion (KLE) becomes important. Often the random field is characterised by its covariance function. The KLE of a random field requires the solution of an eigenvalue problem for the integral operator that has the covariance function as its kernel. Usually this eigenvalue problem is solved by a Krylov subspace method with a sparse matrix approximation. We demonstrate the use of the data-sparse hierarchical matrix (H-matrix) technique, which has a log-linear computational cost for the matrix-vector product and a log-linear storage requirement.
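A minimal sketch of the discrete KLE described above: eigenpairs of a covariance matrix on a 1D grid, truncated to the dominant modes, then used to sample a realisation. The exponential kernel, grid, and correlation length are assumed example choices, and a dense eigensolver stands in for the Krylov/H-matrix machinery of the paper.

```python
import numpy as np

# 1D grid on [0, 1] and an (assumed) exponential covariance kernel
n = 200
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]
sigma2, corr_len = 1.0, 0.3
C = sigma2 * np.exp(-np.abs(x[:, None] - x[None, :]) / corr_len)

# Discrete KLE: eigenpairs of the covariance operator (Nystrom with
# uniform weights h).  The paper accelerates this step with Krylov
# methods on an H-matrix approximation; we use a dense solver here.
w, v = np.linalg.eigh(h * C)
idx = np.argsort(w)[::-1]
lam, phi = w[idx], v[:, idx] / np.sqrt(h)   # L2-normalised eigenfunctions

# Truncate: keep the m largest modes capturing ~99% of the variance
m = int(np.searchsorted(np.cumsum(lam) / lam.sum(), 0.99)) + 1

# One realisation: kappa(x) = sum_i sqrt(lam_i) * phi_i(x) * xi_i
rng = np.random.default_rng(0)
xi = rng.standard_normal(m)
field = phi[:, :m] @ (np.sqrt(lam[:m]) * xi)
```

The decay of `lam` determines how few random variables `xi` suffice, which is the point of the KLE truncation.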
The paper deals with the statistical pattern recognition problem for discrete characteristics. We study the behaviour of the minimal empirical error classifier on the set of arbitrary distributions and the corresponding samples.
In this work we investigate the propagation of uncertainties in parameters and airfoil geometry to the solution. Typical examples of uncertain parameters are the angle of attack and the Mach number. The discretisation techniques used here are the Karhunen-Loève and polynomial chaos expansions. To evaluate high-dimensional integrals in probabilistic space we use Monte Carlo simulations and collocation methods on sparse grids. To reduce the storage requirement and computing time, we demonstrate an algorithm for data compression based on a low-rank approximation of realisations of the random fields. This low-rank approximation enables efficient postprocessing (e.g. computation of the mean value, variance, etc.) with linear complexity and drastically reduced memory requirements. Finally, we demonstrate how to compute the Bayesian update of the a priori probability density function of the uncertain parameters. The Bayesian update is also used to incorporate measurements into the model.
Uncertainty quantification in aerodynamic simulations calls for efficient numerical methods to reduce the computational cost, especially for uncertainties caused by random geometry variations, which involve a large number of variables. This paper compares five methods: quasi-Monte Carlo quadrature, polynomial chaos with coefficients determined by sparse quadrature, and gradient-enhanced versions of kriging, radial basis functions, and point-collocation polynomial chaos. They are compared in their efficiency in estimating statistics of the aerodynamic performance under random perturbations of the airfoil geometry, which is parameterized by 9 independent Gaussian variables. The results show that the gradient-enhanced surrogate methods achieve better accuracy than the direct integration methods at the same computational cost.
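Point-collocation polynomial chaos, one of the compared methods, can be illustrated in one dimension: fit probabilists' Hermite coefficients of a quantity of interest by least squares at random collocation points, then read the mean and variance off the coefficients. The model QoI, degree, and oversampling factor are assumptions for this sketch, not the paper's 9-variable aerodynamic setting.

```python
import numpy as np
from numpy.polynomial import hermite_e as He
from math import factorial

rng = np.random.default_rng(2)

# Model quantity of interest of one standard Gaussian input (assumed example)
qoi = lambda xi: np.exp(0.3 * xi)

# Point collocation: oversampled least-squares fit of the PCE coefficients
p = 6                                   # polynomial degree
xi = rng.standard_normal(3 * (p + 1))   # random collocation points
V = He.hermevander(xi, p)               # probabilists' Hermite design matrix
c, *_ = np.linalg.lstsq(V, qoi(xi), rcond=None)

# PCE statistics: E[He_k He_l] = k! * delta_kl for a standard Gaussian input
mean_pce = c[0]
var_pce = sum(c[k] ** 2 * factorial(k) for k in range(1, p + 1))
```

The gradient-enhanced variants studied in the paper additionally fit derivative observations, which raises accuracy per simulation at the same cost.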
We apply the Tensor Train (TT) approximation to construct the Polynomial Chaos Expansion (PCE) of a random field, and solve the stochastic elliptic diffusion PDE with the stochastic Galerkin discretization. We compare two strategies for the polynomial chaos expansion: sparse and full polynomial (multi-index) sets. In the full set, the polynomial orders are chosen independently in each variable, which provides higher flexibility and accuracy. However, the total number of degrees of freedom grows exponentially with the number of stochastic coordinates. To cope with this curse of dimensionality, the data are kept compressed in the TT decomposition, a recurrent low-rank factorization. PCE computations on sparse grid sets have been studied extensively, but the TT representation for the PCE is a novel approach investigated in this paper. We outline how to deduce the PCE from the covariance matrix, assemble the Galerkin operator, and evaluate some post-processing quantities (mean, variance, Sobol indices).
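The TT decomposition named above is commonly computed by the TT-SVD algorithm: sequential truncated SVDs of matrix unfoldings. A minimal NumPy sketch on a small synthetic tensor, not the paper's Galerkin operator; the tolerance and the rank-1 test tensor are assumptions.

```python
import numpy as np

def tt_svd(T, eps=1e-10):
    """Decompose a d-dimensional array into TT cores G_k of shape
    (r_{k-1}, n_k, r_k) via sequential truncated SVDs (TT-SVD)."""
    dims = T.shape
    cores, r = [], 1
    M = T.reshape(r * dims[0], -1)
    for k in range(len(dims) - 1):
        U, s, Vt = np.linalg.svd(M, full_matrices=False)
        rk = max(1, int(np.sum(s > eps * s[0])))     # truncated TT rank
        cores.append(U[:, :rk].reshape(r, dims[k], rk))
        M = (s[:rk, None] * Vt[:rk]).reshape(rk * dims[k + 1], -1)
        r = rk
    cores.append(M.reshape(r, dims[-1], 1))
    return cores

def tt_full(cores):
    """Contract TT cores back to the full tensor (for verification)."""
    out = cores[0]
    for G in cores[1:]:
        out = np.tensordot(out, G, axes=([-1], [0]))
    return out.reshape(out.shape[1:-1])  # drop boundary ranks r_0 = r_d = 1

# Separable (rank-1) tensor: all TT ranks collapse to 1
x, y, z = np.arange(1, 5.0), np.arange(1, 4.0), np.arange(1, 6.0)
T = x[:, None, None] * y[None, :, None] * z[None, None, :]
cores = tt_svd(T)
```

Storage drops from the product of the mode sizes to a sum of small core sizes, which is what makes the full multi-index PCE set tractable in the TT format.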
Papers by Alexander Litvinenko