Coupled Systems Mechanics, Vol. 7, No. 2 (2018) 211-232
DOI: https://doi.org/10.12989/csm.2018.7.2.211
Stochastic upscaling via linear Bayesian updating

Sadiq M. Sarfaraz*1, Bojana V. Rosić1a, Hermann G. Matthies1b and Adnan Ibrahimbegović2c

1 Institute of Scientific Computing, Technische Universität Braunschweig, 38106 Braunschweig, Germany
2 Lab. de Mécanique Roberval / Centre de Recherche Royallieu, Université de Technologie de Compiègne, 60203 Compiègne, France

(Received August 23, 2017, Revised September 26, 2017, Accepted September 28, 2017)
Abstract. In this work we present an upscaling technique for multi-scale computations based on stochastic model calibration. We consider a coarse-scale continuum material model described in the framework of generalized standard materials. The model parameters are considered uncertain and are determined in a Bayesian framework from given fine-scale data in the form of stored energy and dissipation potential. The proposed stochastic upscaling approach is independent of the choice of models on the coarse and fine scales. Simple numerical examples demonstrate the ability of the proposed approach to calibrate coarse-scale elastic and inelastic material parameters.
Keywords: Upscaling; Bayesian updating; Gauss-Markov-Kalman filter; coupled plasticity-damage
1. Introduction
Many naturally existing or man-made materials such as rocks/soils, bones and concrete are known to be heterogeneous on spatial scales that are orders of magnitude smaller than the scales relevant for response predictions. Additionally, the micro- and macro-scale models may be of an entirely different mathematical nature, as the former can be discrete and the latter continuum models. In such a case the classical “homogenization” approaches are known to be insufficient for upscaling purposes, as discussed in Ibrahimbegović and Matthies (2012), Matthies and Ibrahimbegović (2014). To bridge the two scales in a more unified manner, independent of the class of the mathematical model or the type of heterogeneity, in this paper a stochastic upscaling procedure is considered (Arsigny et al. 2006, Clément et al. 2013, Ghanem and Das 2011, Demmie and Ostoja-Starzewski 2015, Gorguluarslan and Choi 2014, Brady et al. 2006, Ostoja-Starzewski 2008, Stefanou et al. 2015, Steven Greene et al. 2011).
Many researchers have contributed to this field; the common goal is to somehow capture fine-scale features in a stochastic setting.
*Corresponding author, E-mail: m.sarfaraz@tu-bs.de
a Ph.D., E-mail: bojana.rosic@tu-bs.de
b Professor, E-mail: wire@tu-bs.de
c Professor, E-mail: adnan.ibrahimbegovic@utc.fr
Stefanou et al. (2015) employed computational homogenization and XFEM to study the effect of uncertainty in material properties and geometrical features on the macro-scale. Clément et al. (2013) proposed a strategy to construct a stochastic energy functional from realizations of a random micro-structure. Brady et al. (2006) utilized a “moving window” approach to characterize micro-scale randomness; a similar idea is used by Demmie and Ostoja-Starzewski (2015) to infer meso-scale random fields of the material stiffness tensor from a bi-phased micro-scale. Another way to achieve the coupling between scales with possibly completely different descriptions is to use concepts of machine learning as in Koutsourelakis (2007), the theory of which is often, at least conceptually, grounded in Bayesian ideas.
In this paper a Bayesian approach (Kaipio and Somersalo 2004, Kennedy and O'Hagan 2001, Hawkins-Daarud et al. 2013) is taken directly in its computationally cheaper Gauss-Markov-Kalman filter form, a generalisation of classical Kalman filtering that allows direct estimation of non-Gaussian distributions without sampling (Pajonk et al. 2012, Rosić et al. 2012, Rosić et al. 2016). The general set-up we propose here is as follows: on the macro scale a continuum material model is derived which covers not only the mean (i.e., homogenised) behavior, but also the possible deviations from it. As the micro-scale mechanical behavior we have in mind involves both reversible (i.e., elastic) as well as irreversible (i.e., inelastic) behavior, this has to be reflected also in the constitutive models considered on the macro scale. Here the main goal is to show a proof-of-concept, so we limit ourselves to a simple but sufficiently representative case of inelastic behavior (Liu et al. 2013). For the sake of simplicity, we restrict ourselves to isothermal conditions and exclude strain-rate dependent behavior. Thus, for the inelastic or irreversible part we only consider ductile non-softening behavior, i.e., strain-rate independent plasticity and damage with hardening. However, one may also consider more complex structural/continuum models for upscaling, e.g., (Do and Ibrahimbegović 2015, Do et al. 2015, Ngo et al. 2014, Ngo et al. 2014).
As this is to be a model for possibly more complex behavior, we shall assume that the macro-scale continuum model can be described as a generalised standard material model (Halphen and Nguyen 1974, Halphen and Nguyen 1975, Nguyen 1977). This has the advantage that such materials are completely characterized by the specification of two scalar functions, the stored resp. Helmholtz free energy and the dissipation pseudo-potential. In this way the simple case chosen here can be generalized to very complex material behavior. In our view this description is also a simple illustration of the connection with the micro-scale behavior. No matter how the physical and mathematical/computational description on the micro scale has been chosen, in all cases where the description is based on physical principles it will be possible to define the stored (Helmholtz free) energy and the dissipation (entropy production). These two thermodynamic functions will thus be used as measurements in the Bayesian inference to identify the macro-scale model parameters given the micro-scale response energy.
In some more detail, the identification of the macro-scale generalized standard material constitutive model proceeds as follows: the micro-structure is exposed to some external action resp. stimulus; in the purely mechanical case considered here this is chosen as a large-scale homogeneous deformation. The response is measured through the change of the two thermodynamic functions alluded to: the stored resp. Helmholtz free energy and the dissipation resp. entropy production. The main goal is to show that this idea is computationally feasible for identifying the macro-model material parameters.
The outline of this paper is as follows: in Section 2 the problem is defined in an abstract sense to motivate the explanation of the proposed strategy for its solution in the following discussion. In Section 3 the stochastic upscaling is described employing the Bayesian identification resp. calibration ideas (Pajonk et al. 2012, Rosić et al. 2012, 2016). The coarse- and fine-scale models used in this paper are described in Sections 4 and 5, respectively. These theoretical concepts are numerically applied to several illustrative examples of non-linear inelastic behavior in Section 6. Conclusions are stated in Section 7.
2. Problem formulation
Let us assume to be given a symbolic mathematical description of the coarse/macro-scale computational model

𝐴𝑐(𝑢𝑐, 𝒒) = 𝑓𝑐,   (1)

in which the operator 𝐴𝑐 describes the system under consideration, 𝑢𝑐 ∈ 𝒰𝑐 stands for the system state living in a vector space 𝒰𝑐, 𝒒 = [𝑞1, …, 𝑞𝑛]ᵀ are parameters to calibrate the model, and 𝑓𝑐 describes the external influences: the loading, action, initial conditions or experimental set-up. Note that the description given in Eq. (1) is not necessarily stationary but may also cover time-evolution problems. Additionally, the set of parameters 𝒒 may depend on the state 𝑢𝑐 or parts of it, as well as on the initial conditions in the case of a time-evolution problem.

On the other hand, the same physical phenomena can be modelled quite differently when considered on the fine-scale (detail) level. In an abstract form the corresponding mathematical model reads

𝐴𝑓(𝑢𝑓) = 𝑓𝑓,   (2)

in which 𝐴𝑓 stands for the linear or non-linear operator describing the discrete or continuum, possibly time dependent, model, 𝑢𝑓 ∈ 𝒰𝑓 is the corresponding state and 𝑓𝑓 is the loading program identical to 𝑓𝑐.

Hence, in order to realistically describe physical phenomena, additional information outlining the finer resolution of the problem in Eq. (2) has to be incorporated into Eq. (1). To achieve this, one has to evaluate, at possibly very high cost, the response of the fine-scale model and to infer the parameters 𝒒 in Eq. (1) in such a way that the predictions of Eq. (1) match those of Eq. (2) as accurately as possible. However, as the two scales do not match each other (𝒰𝑐 ≠ 𝒰𝑓), the states 𝑢𝑐 and 𝑢𝑓 cannot be directly compared. Instead, the two models are to be compared by some observables or measurements

𝑧 = 𝑦𝑐 + 𝜖 = 𝑌𝑐(𝒒, 𝑢𝑐(𝒒, 𝑓𝑐)) + 𝜖,   (3)

𝑦𝑓 = 𝑌𝑓(𝑢𝑓(𝑓𝑓)),   (4)

on the coarse and fine scale, respectively. The goal of calibration is now to estimate 𝒒 such that 𝑦𝑐 and 𝑦𝑓, resp. 𝑧 and 𝑦𝑓, deviate as little as possible up to the error 𝜖 depicting the discrepancy between the coarse- and fine-scale models.
3. Bayesian stochastic upscaling
The set of parameters is not known and is to be estimated given the observables 𝑧 and 𝑦𝑓. A deterministic fit is not easy, as in general the mapping 𝒒 ↦ 𝑌𝑐(𝒒) is not invertible, i.e., 𝑧 does not contain enough information to uniquely determine 𝒒, or there is more than one instance of 𝒒 that gives a good fit. Therefore, the corresponding ill-posed problem has to be regularized. In a Bayesian view, see e.g., Tarantola (2005), the unknown resp. uncertain parameter 𝒒 is modelled as a random variable (RV) following the so-called prior distribution, which takes into account the modeler's limited knowledge. The prior information is seen as a regularization term and is corrected to the posterior one by gathering measurement data. Bayes's theorem then acts as a decision rule in the modelling of 𝒒, i.e., it decides whether the measurement data or the prior information is to be trusted more.
Since the parameters of the model to be estimated are uncertain, all relevant information may be obtained via their stochastic description. Formally, the set of parameters is defined as a mapping

𝒒 : 𝛺 → ℝⁿ,   (5)

i.e., as RVs on a probability space (𝛺, 𝔄, ℙ), in which Ω is the set of elementary events, 𝔄 is a 𝜎-algebra of measurable events, and ℙ is a probability measure. The expectation corresponding to ℙ is denoted by 𝔼(·), e.g., the expected value of 𝒒 is given by 𝒒̅ := 𝔼(𝒒) := ∫Ω 𝒒(𝜔) ℙ(𝑑𝜔). With 𝒒 formally RVs, the state 𝑢𝑐 and also the prediction of the “true” measurement 𝑦𝑐 in Eq. (3) are also RVs. Finally, by assuming that the error 𝜖(𝜔) is a RV, the total prediction of the observation or measurement in Eq. (3), 𝑧(𝜔) = 𝑦𝑐(𝜔) + 𝜖(𝜔), also becomes a RV. In other words, one deals with a probabilistic model of the observation, i.e., a prediction or forecast of the measurement.
3.1 The theorem of Bayes and conditional expectation
Once the fine-scale measurement data are available, the prior information can be updated via
Bayes’ theorem as formulated by Laplace, commonly accepted as a consistent way to incorporate
new knowledge into a probabilistic description (Tarantola 2005). The elementary textbook
statement of the theorem is
ℙ(ℐ𝒒 | ℳ𝑧) = [ℙ(ℳ𝑧 | ℐ𝒒) / ℙ(ℳ𝑧)] ℙ(ℐ𝒒),  if ℙ(ℳ𝑧) > 0,   (6)
in which ℐ𝒒 is some subset of possible 𝒒 on which one would like to gain some information, and ℳ𝑧 is the new information of non-vanishing measure provided by the measurement. The term ℙ(ℐ𝒒) depicts the prior, i.e., the expert's knowledge before the observation ℳ𝑧 is made, whereas the quantity ℙ(ℳ𝑧 | ℐ𝒒) stands for the likelihood, the conditional probability of observing ℳ𝑧 given ℐ𝒒. Finally, the term ℙ(ℳ𝑧) is the so-called evidence, the probability of observing ℳ𝑧 in the first place.
Instead of dealing with the conditional probabilities as in Eq. (6), one may look at the more
fundamental notion of Kolmogorov's conditional expectation from which conditional probabilities
may easily be recovered. The conditional expectation is defined w.r.t. sub-𝜎-algebras 𝔅 ⊂ 𝔄 of the
underlying 𝜎-algebra 𝔄. The 𝜎-algebra may be seen as the collection of subsets of Ω on which one
can make statements about their probability, or in simpler words, as the collection of subsets on
which one can learn something through the observation (Bobrowski 2005). By considering RVs
with finite variance, i.e., restricted to the Hilbert space

𝒮 := 𝐿₂(𝛺, 𝔄, ℙ) := {𝑟 : Ω → ℝ : 𝑟 measurable w.r.t. 𝔄, 𝔼(|𝑟|²) < ∞},

one may define, for a sub-𝜎-algebra 𝔅 ⊂ 𝔄, the closed subspace

𝒮𝔅 := 𝐿₂(𝛺, 𝔅, ℙ) := {𝑟 ∈ 𝒮 : 𝑟 measurable w.r.t. 𝔅}.

With 𝒮𝔅 being a closed subspace, there exists a well-defined continuous orthogonal projection 𝑃𝔅 : 𝒮 → 𝒮𝔅 such that the conditional expectation of a RV 𝑟 ∈ 𝒮 w.r.t. a sub-𝜎-algebra 𝔅 reads

𝔼(𝑟|𝔅) := 𝑃𝔅(𝑟) ∈ 𝒮𝔅.   (7)

Being an orthogonal projection, the conditional expectation can be obtained by minimizing the squared error

𝔼(|𝑟 − 𝔼(𝑟|𝔅)|²) = min {𝔼(|𝑟 − 𝑟̃|²) : 𝑟̃ ∈ 𝒮𝔅},   (8)

leading to the variational equation or orthogonality relation

∀𝑟̃ ∈ 𝒮𝔅 :  𝔼(𝑟̃ (𝑟 − 𝔼(𝑟|𝔅))) = 0.   (9)
In our case of an observation of a RV 𝑧 , the sub-𝜎-algebra 𝔅 is the one generated by the
observation 𝑧, i.e. 𝔅 = 𝜎(𝑧), and the corresponding conditional expectation is simply denoted as
𝔼(𝑟|𝑧) ∶= 𝔼(𝑟|𝜎(𝑧)). According to the Doob-Dynkin lemma (Bobrowski 2005), 𝒮𝜎(𝑧) is given by
functions of the observation
𝒮𝜎(𝑧) := {𝑟 ∈ 𝒮 : 𝑟(𝜔) = 𝜙(𝑧(𝜔)), 𝜙 measurable}.   (10)

This means intuitively that anything we learn from an observation is a function of the observation, and the subspace 𝒮𝜎(𝑧) ⊂ 𝒮 is where the information from the measurement lies. Therefore, following Eq. (7), a RV 𝑟 may be decomposed into its orthogonal components w.r.t. 𝒮𝜎(𝑧) by using

𝑟 = 𝑃𝜎(𝑧)(𝑟) + (𝐼𝒮 − 𝑃𝜎(𝑧))(𝑟),   (11)

in which (𝐼𝒮 − 𝑃𝜎(𝑧))(𝑟) ∈ 𝒮𝜎(𝑧)⊥, the orthogonal complement of 𝒮𝜎(𝑧). Obviously, 𝑃𝜎(𝑧)(𝑟) is the best estimator for 𝑟, measured in the squared error norm ‖𝑟 − 𝑃𝜎(𝑧)(𝑟)‖²𝒮, from the subspace 𝒮𝜎(𝑧). The orthogonal decomposition in Eq. (11) allows the construction of an identification filter by knowing that from a measurement 𝑧 one learns something about the component 𝑃𝜎(𝑧)(𝑟) in 𝒮𝜎(𝑧).
Hence, one simple approach is the least-squares approximation, which also underlies the Gauss-Markov theorem and its extensions (Luenberger 1969). If 𝒒𝑝 is our prior knowledge before the measurement, or the forecast, one thus defines the filtered, analyzed, or assimilated RV 𝒒𝑎, having the observation 𝑦̌, from Eq. (11) as

𝒒𝑎 = 𝔼(𝒒𝑝|𝑦̌) + (𝒒𝑝 − 𝔼(𝒒𝑝|𝑧)) = 𝒒𝑝 + (𝔼(𝒒𝑝|𝑦̌) − 𝔼(𝒒𝑝|𝑧)) = 𝒒𝑝 + 𝒒𝑖,   (12)

in which 𝒒𝑖 = 𝔼(𝒒𝑝|𝑦̌) − 𝔼(𝒒𝑝|𝑧) is called the innovation, and as 𝔼(𝒒𝑎|𝑦̌) = 𝔼(𝒒𝑝|𝑦̌), it
follows that 𝔼(𝒒𝑖|𝑦̌) = 0. Eq. (12) is the nonlinear conditional expectation filter (Matthies et al. 2016), but as 𝔼(𝒒𝑝|𝑧) can be a complicated function of 𝑧, it may be difficult to compute. A simpler version results if in Eq. (10) one takes only the affine functions, i.e., a smaller subspace

𝒮𝜎(𝑧),1 := {𝑟 ∈ 𝒮 : 𝑟(𝜔) = 𝐻(𝑧(𝜔)) + 𝑏, 𝐻 linear} ⊂ 𝒮,   (13)

and the minimization in Eq. (8) is performed over this smaller subspace, resulting in an optimal linear map 𝑲𝒒 (the so-called Kalman gain) (Matthies et al. 2016, Luenberger 1969). With this simplification, Eq. (12) becomes the Gauss-Markov-Kalman filter (GMKF)

𝒒𝑎 = 𝒒𝑝 + 𝑲𝒒 (𝑦̌ − 𝑧),   (14)

given in terms of the RVs 𝒒𝑝(𝜔) and 𝑧(𝜔), which for computational purposes have to be discretized.
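As an illustration of Eq. (14), the following minimal Python sketch performs the linear update with the Kalman gain estimated from an ensemble of prior samples and forecast measurements. All names and numbers are hypothetical, and the paper itself uses the functional approximation of Section 3.2 rather than this sampling variant.

```python
import numpy as np

# Sample-based sketch of the Gauss-Markov-Kalman update of Eq. (14).
# The covariances defining the Kalman gain are estimated from an ensemble
# of prior parameters q_p and of the corresponding forecast measurements z.

def gmkf_update(q_prior, z_pred, y_meas):
    """q_prior: (N, n) prior parameter samples,
       z_pred : (N, m) forecast measurements, one per sample,
       y_meas : (m,)   actual (fine-scale) measurement."""
    dq = q_prior - q_prior.mean(axis=0)
    dz = z_pred - z_pred.mean(axis=0)
    C_qz = dq.T @ dz / (len(q_prior) - 1)        # cross-covariance of q and z
    C_zz = dz.T @ dz / (len(q_prior) - 1)        # forecast covariance of z
    K = C_qz @ np.linalg.inv(C_zz)               # Kalman gain K_q
    return q_prior + (y_meas - z_pred) @ K.T     # assimilated samples q_a

# Purely synthetic usage example:
rng = np.random.default_rng(0)
q_p = rng.normal(1.0, 0.1, size=(2000, 2))               # prior samples of q
H = np.array([[1.0, 0.5], [0.2, 1.0]])                    # toy measurement map
z = q_p @ H.T + rng.normal(0.0, 0.05, size=(2000, 2))     # forecast z = Y_c(q) + eps
q_a = gmkf_update(q_p, z, y_meas=np.array([1.6, 1.3]))
print(q_a.mean(axis=0), q_a.std(axis=0))                  # posterior mean and spread
```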
3.2 Spectral or functional approximation
As Eq. (14) is a relation between RVs, it certainly also holds for samples of the RVs,
and this is the basis of the ensemble Kalman filter, the EnKF (Evensen 2009). The sampling points
are sometimes also denoted as particles, and the EnKF is a simple version of a particle filter.
However, here we want to pursue the more promising functional or spectral approximation
(Matthies et al. 2016, Matthies 2007) for all the RVs in Eq. (14). This means that all RVs, say
𝒒(𝜔), are described as functions of known RVs {𝜃1(𝜔), …, 𝜃𝑙(𝜔), …}. Often, when for example stochastic processes or random fields are involved, one has to deal here with infinitely many RVs, which for an actual computation have to be truncated to a finite number of significant RVs stored in a vector 𝜽(𝜔) = [𝜃1(𝜔), …, 𝜃𝑛(𝜔)]. We shall assume that these have been chosen as Gaussian and uncorrelated, thus they can be considered as independent. This allows the choice of a finite set of linearly independent functions {Ψ𝛼}𝛼∈𝒥𝑀 of the variables 𝜽(𝜔), where the index 𝛼 is a multi-index, and the set 𝒥𝑀 is a finite set of multi-indices with cardinality (size) 𝑀. Among different systems of functions that can be used, here the classical choice of multivariate polynomials is made, leading to the polynomial chaos expansion (PCE) (Matthies 2007). Thus, a RV 𝒒(𝜔) is
replaced by a functional approximation

𝒒̂(𝜔) = ∑𝛼∈𝒥𝑀 𝒒𝛼 Ψ𝛼(𝜽(𝜔)) = ∑𝛼∈𝒥𝑀 𝒒𝛼 Ψ𝛼(𝜽) = 𝒒̂(𝜽).   (15)
The argument 𝜔 will be omitted from here on, as the probability measure ℙ on Ω is transported to 𝚯 = Θ1 × … × Θ𝑛, the range of 𝜽, giving ℙ𝜃 = ℙ1 × … × ℙ𝑛 as a product measure, in which ℙ𝑙 = (𝜃𝑙)∗ℙ is the distribution measure of the RV 𝜃𝑙, as the RVs 𝜃𝑙 are independent. All computations following this stage are performed on 𝚯, typically some subset of ℝⁿ. Hence, 𝑛 is the dimension of the problem, and if 𝑛 is large, one faces a high-dimensional problem. The filter
Eq. (14) then reads (see Matthies et al. (2016) for more details)

𝒒̂𝑎(𝜽) = 𝒒̂𝑝(𝜽) + 𝑪𝒒̂𝑝𝑧̂ 𝑪𝑧̂⁻¹ (𝑦̌ − 𝑧̂(𝜽)) = 𝒒̂𝑝(𝜽) + 𝑲𝒒̂ (𝑦̌ − 𝑧̂(𝜽)).   (16)
If the approximating functions are polynomials, the last expression is known as the spectral Kalman filter (SPKF). Inserting the functional approximations into Eq. (16), one obtains an explicit and easy-to-evaluate expression for the assimilated or updated variable in terms of the input.
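To make Eq. (15) concrete, the short sketch below represents a single RV as a PCE in one standard Gaussian variable θ, using probabilists' Hermite polynomials; the coefficient values are invented purely for illustration, and the mean and variance follow directly from the coefficients thanks to the orthogonality of the basis.

```python
import math
import numpy as np
from numpy.polynomial.hermite_e import hermeval

# Sketch of a one-dimensional PCE q_hat(theta) = sum_alpha q_alpha He_alpha(theta),
# cf. Eq. (15), with probabilists' Hermite polynomials He_alpha and theta ~ N(0, 1).

coeffs = np.array([2.0, 0.3, 0.05])          # q_0, q_1, q_2 (illustrative values)

def q_hat(theta):
    """Evaluate the PCE surrogate at given values of theta."""
    return hermeval(theta, coeffs)

# Orthogonality of the He_alpha gives the moments directly from the coefficients:
# E[He_a] = 0 for a >= 1 and E[He_a He_b] = a! * delta_ab.
mean = coeffs[0]
var = sum(math.factorial(a) * coeffs[a] ** 2 for a in range(1, len(coeffs)))

# Monte Carlo cross-check by sampling theta:
theta = np.random.default_rng(1).standard_normal(200_000)
samples = q_hat(theta)
print(mean, samples.mean())   # should nearly coincide
print(var, samples.var())
```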
4. The coarse-scale model
For simplicity, the continuum model on the coarse scale in Eq. (1) of Section 2 is assumed to be of standard generalized type (Halphen and Nguyen 1974, Halphen and Nguyen 1975, Nguyen 1977), characterized by infinitesimal displacements/strains and spatially constant material properties. In particular, we consider pressure-sensitive materials such as concrete and rocks, described in a simplified manner using the Drucker-Prager yield criterion for plasticity and a damage criterion based on the spherical part of the stress tensor in compression, as proposed in Ibrahimbegović (2009). The behavior of such materials is completely characterized by two functions: the stored resp. Helmholtz free energy density 𝜓𝑐(𝜀, 𝒘, 𝒒) for the reversible part and the dissipation pseudo-potential density 𝜑𝑐(𝜀̇, 𝜀, 𝒘, 𝒘̇, 𝒒) for the irreversible part, together with the assumption of maximal dissipation. Here, 𝜀 is the strain, 𝒘 is a collection of internal phenomenological variables (the memory of the material), and 𝒒 is a collection of parameters specifying the detailed character of the functions 𝜓𝑐 and 𝜑𝑐.
4.1 Constitutive equations
The constitutive description of the coarse-scale material is assumed to follow an associated rate-independent law with linear hardening, described by the Helmholtz free energy

𝜓𝑐(𝑥, 𝜀, 𝒘, 𝒒) = ½ 𝜀𝑒 ⋅ 𝐶𝜀𝑒 + ½ 𝜎 ⋅ 𝐷𝜎 + ½ 𝐾𝑝𝜈𝑝² + ½ 𝐾𝑑𝜈𝑑²,   (17)

in which the vector 𝒘 contains the plastic {𝜀𝑝, 𝜈𝑝} and damage {𝜀𝑑, 𝜈𝑑} internal variables, whereas the parameter vector 𝒒 consists of the isotropic and homogeneous elastic constitutive tensor 𝐶, given as a function of the bulk 𝜅 and shear 𝐺 moduli, the plastic 𝐾𝑝 and damage 𝐾𝑑 isotropic hardening coefficients, and the damage compliance tensor 𝐷 relating the damage strain 𝜀𝑑 and 𝜎. Moreover, the dissipation functional is given by

𝜑𝑐(𝑥, 𝜀̇, 𝜀, 𝒘, 𝒘̇, 𝒒) = (𝜎 ⋅ 𝜀̇𝑝 + 𝜒𝑝𝜈̇𝑝) + (½ 𝜎 ⋅ 𝐷̇𝜎 + 𝜒𝑑𝜈̇𝑑),   (18)

in which the first bracket is the plastic contribution 𝜑𝑐ᵖ and the second the damage contribution 𝜑𝑐ᵈ, and 𝜒𝑝 and 𝜒𝑑 are the plastic and damage hardening forces related to the respective strain-like hardening variables 𝜈𝑝 and 𝜈𝑑. Finally, the evolution is driven by the prescribed admissible
stress domain represented by plastic and damage yield functions
𝑓𝑝(𝜎, 𝜒𝑝) = √(dev(𝜎) : dev(𝜎)) − (1/3) tr(𝜎) tan(𝛼) − √(2/3) (𝑐 − 𝜒𝑝),   (19)

𝑓𝑑(𝜎, 𝜒𝑑) = ⟨−tr(𝜎)⟩ − (𝜎𝑓 − 𝜒𝑑),   (20)
respectively. Here, Eq. (19) describes the Drucker-Prager yield function for plasticity, in which 𝑐 denotes the cohesion and 𝛼 is the friction angle, here modelled via the parameter 𝑐𝛼 = 𝑐/tan(𝛼). Similarly, Eq. (20) represents the damage yield function, in which 𝜎𝑓 signifies the failure stress. Gathering all model parameters into one vector, we have

𝒒 = [log 𝜅, log 𝐺, log 𝑐, log 𝐾𝑝, log 𝑐𝛼, log 𝜎𝑓, log 𝐾𝑑].   (21)

The goal is to infer 𝒒 given the fine-scale measurement data and their coarse-scale prediction as described in Eq. (3). The latter is formulated in the more concrete form

𝑌𝑐(𝒒) = [∫ 𝜓𝑐(𝑥, 𝜀, 𝒘, 𝒒) 𝑑𝑉, ∫ 𝜑𝑐(𝑥, 𝜀, 𝜀̇, 𝒘, 𝒘̇, 𝒒) 𝑑𝑉],   (22)

i.e., as the spatial averages of the stored and dissipated energies over the domain, here one quadrilateral element of the coarse-scale model.
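As a small illustration of Eqs. (19)-(20), the sketch below evaluates both yield functions for a given stress tensor. The placement of the 1/3 and √(2/3) factors and the relation c_α = c/tan(α) follow the reconstruction above and should be read as assumptions, not as the authors' reference implementation.

```python
import numpy as np

# Sketch of the plastic (Drucker-Prager) and damage yield functions of
# Eqs. (19)-(20) for a 3x3 stress tensor sigma; parameter names follow Eq. (21).
# The exact numerical factors are an assumption (see the lead-in text).

def f_plastic(sigma, chi_p, c, c_alpha):
    tr = np.trace(sigma)
    dev = sigma - tr / 3.0 * np.eye(3)                # deviatoric stress
    tan_alpha = c / c_alpha                           # assuming c_alpha = c / tan(alpha)
    return (np.sqrt(np.tensordot(dev, dev))           # ||dev(sigma)||
            - tr / 3.0 * tan_alpha
            - np.sqrt(2.0 / 3.0) * (c - chi_p))

def f_damage(sigma, chi_d, sigma_f):
    mac = max(-np.trace(sigma), 0.0)                  # Macaulay bracket <-tr(sigma)>
    return mac - (sigma_f - chi_d)

# Hydrostatic compression with the values of Table 2 (MPa): here the damage
# criterion is violated (positive) while the plastic one stays inactive.
sigma = -400.0 * np.eye(3)
print(f_plastic(sigma, 0.0, c=300.0, c_alpha=500.0))   # negative: no plastic yielding
print(f_damage(sigma, 0.0, sigma_f=300.0))             # positive: damage criterion active
```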
4.2 Variational formulation
For the sake of completeness with regard to the material description, in this section we briefly describe the details of the numerical implementation of the constitutive model under consideration, following Markovic and Ibrahimbegović (2006). By taking the displacement and stress variables as the unknown state, the mixed weak formulation of the problem is obtained from the Hellinger-Reissner principle given below
Π𝑐𝑜𝑚𝑝(𝑢, 𝜎) = ∫𝒢 (−𝜙𝑐𝑒(𝜎) − 𝜙𝑐𝑑(𝜎, 𝐷) + 𝜎 ⋅ (∇𝑢 − 𝜀𝑝)) 𝑑𝑉 − ∫Γ𝜎 𝑢 ⋅ 𝑡 𝑑𝐴,   (23)
where 𝜙𝑐𝑒(𝜎) and 𝜙𝑐𝑑(𝜎, 𝐷) are the complementary energy densities. By enforcing the stationarity condition on Eq. (23) with respect to the variation of the states 𝜎 and 𝑢 we get

𝛿Π𝑐𝑜𝑚𝑝 = (𝜕Π𝑐𝑜𝑚𝑝/𝜕𝑢) 𝛿𝑢 + (𝜕Π𝑐𝑜𝑚𝑝/𝜕𝜎) 𝛿𝜎 = 0,  ∀𝛿𝑢, 𝛿𝜎,   (24)

from which the equilibrium equation and the additive decomposition of the strain read

∫𝒢 𝜎 ⋅ ∇ˢ𝛿𝑢 𝑑𝑉 − ∫Γ𝜎 𝛿𝑢 ⋅ 𝑡 𝑑𝐴 = 0,  ∀𝛿𝑢,   (25)

∫𝒢 𝛿𝜎 ⋅ [∇ˢ𝑢 − 𝜀𝑝 − 𝒟𝜎 − 𝒞⁻¹𝜎] 𝑑𝑉 = 0,  ∀𝛿𝜎.   (26)
Note that this formulation directly leads to the hypothesis of the additive decomposition of strain, which is usually assumed a priori in the strain-based approach (Ibrahimbegović et al. 2003). Regarding the spatial discretisation of Eqs. (25)-(26), both displacement and stress fields are discretised in the finite element setting as proposed in Pian and Sumihara (1984).
In the deterministic setting the last two equations would represent the full discretisation of the
problem given in Eqs. (25) to (26). The estimation of the state in time sequence {𝑡𝑛 } follows the
computational algorithm consisting of three stages: global, elemental and the local (integration
point) one. On the global or node level, the discretised equilibrium in Eq. (25) (i.e., the non-linear residual equation) is solved by a Newton-like procedure for the increment of displacement Δ𝑢𝑛.
On the element level, the stress interpolation parameters are determined by solving the discrete
form of Eq. (26). The internal variables associated with damage/plasticity are evaluated using the
closest point projection scheme at the Gauss integration points.
Finally, we would like to remark that the stress-based approach is computationally more
efficient than the strain-based counterpart, because it does not need an additional iterative loop to
enforce equivalence of computed stress, required in the latter approach (Ibrahimbegović et al.
2003).
4.3 Stochastic formulation - forecast
The constitutive model as described in the previous section is deterministic, and is to be extended into its probabilistic counterpart in order to take into account the prior uncertainty of the material properties describing the expert's knowledge, as outlined in Section 3. As all the model parameters are positive, one actually models their logarithms, see Eq. (21), which are unconstrained and at the same time produce the proper metric. This allows 𝒒 to be a priori modelled as a vector of independent normally distributed random variables according to the maximum entropy principle, such that the original parameters follow a log-normal prior distribution.
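A minimal sketch of this prior construction is given below: for each positive parameter, a prescribed mean and coefficient of variation of the log-normal prior (here the values used later in Section 6.1 for κ) are converted into the Gaussian parameters of its logarithm. The helper name is hypothetical.

```python
import numpy as np

# Sketch of the log-normal prior of Section 4.3: the logarithm of each positive
# parameter is Gaussian, so samples are obtained by exponentiating normal draws.

def lognormal_prior_samples(mean, cov, size, rng):
    """Samples with prescribed mean and coefficient of variation (cov)."""
    sigma2 = np.log(1.0 + cov ** 2)          # variance of log(q)
    mu = np.log(mean) - 0.5 * sigma2         # mean of log(q)
    return np.exp(rng.normal(mu, np.sqrt(sigma2), size))

# Prior for kappa as in Section 6.1: 10% offset from the fine-scale value
# of Table 2 and a 10% coefficient of variation (illustrative usage).
rng = np.random.default_rng(2)
kappa_prior = lognormal_prior_samples(mean=1.1 * 204000.0, cov=0.10, size=10_000, rng=rng)
print(kappa_prior.mean(), kappa_prior.std() / kappa_prior.mean())   # ~224400, ~0.10
```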
In the Bayesian identification the uncertain material parameters are random variables, and hence the coarse-scale material is a stochastic one. In other words, the stored energy 𝜓𝑐 and the dissipation 𝜑𝑐 densities in Eqs. (17) and (18) become random variables. In addition to their spatial dependence on the location 𝑥, both densities are also formally functions of the variable 𝜔 ∈ Ω, i.e., of the elementary probability events as described in Section 3. The simulation of the coarse-scale model hence corresponds to solving a stochastic problem which has a similar form to the deterministic one (Ibrahimbegović and Matthies 2012, Matthies and Ibrahimbegović 2014). Computationally, the striking difference upon full discretization lies in the problem dimension. Namely, the state variable lives in a space obtained as the tensor product of the corresponding deterministic space and the space of random variables 𝒮. Thus, the stochastic problem requires a temporal, spatial, and stochastic discretization, largely increasing the problem dimension and thus the computational effort. For a full discussion of such computations, see Rosić and Matthies (2014).
In this paper the stochastic discretization is done in a functional approximation setting as already described in Section 3.2, i.e., for both the displacement and stress variables the polynomial chaos expansions are taken as ansatz, the coefficients of which are found by pseudo-spectral projection resp. regression (Rosić and Matthies 2014). Once the solution response is approximated, the prior predictive, i.e., the prediction of the measurement, is evaluated by computing the spatial averages of the energy densities

𝑌𝑐(𝒒̂) = [∫ 𝜓𝑐(𝑥, 𝜔, 𝜀̂, 𝒘̂, 𝒒̂) 𝑑𝑉, ∫ 𝜑𝑐(𝑥, 𝜔, 𝜀̂, 𝜀̂̇, 𝒘̂, 𝒘̂̇, 𝒒̂) 𝑑𝑉].   (27)

In contrast to Eq. (22), here the measurements are random variables, which can also be approximated by polynomial chaos expansions.
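The measurement functional in Eqs. (22) and (27) is just a pair of spatial integrals, so a possible implementation reduces to weighted sums over the integration points; the sketch below (with invented array names) also indicates how the same linear functional acts coefficient-wise on a PCE representation of the densities.

```python
import numpy as np

# Sketch of the energy "measurement" of Eqs. (22)/(27): spatially integrated
# stored energy and dissipation, given densities at the integration points.

def energy_measurement(psi, phi, weights):
    """psi, phi, weights: 1-D arrays over all integration points of the element."""
    return np.array([np.dot(psi, weights), np.dot(phi, weights)])

def energy_measurement_pce(psi_coeffs, phi_coeffs, weights):
    """Same functional applied to PCE coefficients of the densities.
       psi_coeffs, phi_coeffs: (M, n_points) arrays, one row per chaos index."""
    # Spatial integration is linear, so it can be applied coefficient by coefficient.
    return np.column_stack([psi_coeffs @ weights, phi_coeffs @ weights])

# Tiny usage example with made-up densities on four integration points:
w = np.full(4, 0.25)
print(energy_measurement(np.array([1.0, 1.2, 0.9, 1.1]),
                         np.array([0.1, 0.0, 0.2, 0.1]), w))
```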
5. The fine-scale model
To sketch the upscaling procedure, for simplicity the fine-scale or “micro-scale” model is taken to be a finely discretised version of the coarse-scale continuum model based on the standard generalized theory, see Section 4. However, any other kind of model which allows a “measurement” resp. computation of stored and dissipated energies could also be used.

The discretisation is refined such that the single quadrilateral element on the coarse scale is split into 2500 finer quadrilateral elements. Additionally, the material parameters 𝒒 are assumed to be isotropic and heterogeneous, their spatial dependence being modelled by one realization of normally distributed random fields described by isotropic stationary Gaussian correlation functions. Even though described by random fields, the spatially varying 𝒒 is ultimately deterministic and unknown to the coarse-scale model in the identification procedure. The information about the parameter values can only be observed indirectly via the spatially averaged fine-scale stored resp. Helmholtz free energy density 𝜓𝑓(𝑥, 𝜀𝑓, 𝒘𝑓) and the fine-scale dissipation pseudo-potential density 𝜑𝑓(𝑥, 𝜀𝑓, 𝜀̇𝑓, 𝒘𝑓, 𝒘̇𝑓), i.e.,

𝑌𝑓 = [∫ 𝜓𝑓(𝑥, 𝜀𝑓, 𝒘𝑓) 𝑑𝑉, ∫ 𝜑𝑓(𝑥, 𝜀𝑓, 𝜀̇𝑓, 𝒘𝑓, 𝒘̇𝑓) 𝑑𝑉].   (28)
Here, the variables 𝜀𝑓 and 𝒘𝑓 as well as their evolution rates have the same physical meaning
as in the coarse-scale model given in Section 4.
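For the heterogeneous examples of Section 6.2, one realization of such a field can be generated, for instance, by factorizing a Gaussian covariance matrix over the element centroids and exponentiating, as in the sketch below. Grid size, mean, coefficient of variation and correlation length mirror the values quoted there, while the covariance convention exp(−d²/ℓ²) and all names are assumptions.

```python
import numpy as np

# Sketch: one realization of a log-normal random field on an n x n grid of
# element centroids in the unit square, with a Gaussian (squared-exponential)
# covariance of the underlying Gaussian field.

def lognormal_field(n, mean, cov, corr_len, rng):
    x = (np.arange(n) + 0.5) / n                       # centroid coordinates
    X, Y = np.meshgrid(x, x)
    px, py = X.ravel(), Y.ravel()
    d2 = (px[:, None] - px[None, :]) ** 2 + (py[:, None] - py[None, :]) ** 2
    sigma2 = np.log(1.0 + cov ** 2)                    # variance of the log-field
    C = sigma2 * np.exp(-d2 / corr_len ** 2)           # Gaussian covariance matrix
    L = np.linalg.cholesky(C + 1e-8 * np.eye(n * n))   # factor (with small jitter)
    g = np.log(mean) - 0.5 * sigma2 + L @ rng.standard_normal(n * n)
    return np.exp(g).reshape(n, n)

# 50 x 50 = 2500 elements, 5% CoV, correlation length of 10 element sizes:
rng = np.random.default_rng(3)
kappa_field = lognormal_field(n=50, mean=204000.0, cov=0.05,
                              corr_len=10.0 / 50.0, rng=rng)
print(kappa_field.mean(), kappa_field.std() / kappa_field.mean())
```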
6. Numerical results
To illustrate the proposed strategy, the upscaling is tested on several numerical examples in
which the fine-scale configurations are taken in both homogeneous (one realisation of a random
variable) and heterogeneous (one realisation of a random field) forms. The former is of particular importance for validation purposes, whereas the latter explores more practical examples with spatially varying material properties. Additionally, different loading cases necessary
to trigger the identification of all the relevant material parameters are used. These correspond to
the boundary displacements enforced by specifying the respective displacement gradient given as
𝒖𝑏 = 𝑯𝒙𝑏,   (29)
in which 𝒖𝑏 and 𝒙𝑏 stand for the boundary displacements and nodal coordinates, respectively.
Here, only two modes of the deformation gradient 𝑯 are taken, as specified in Table 1. However, these are further linearly combined in various manners such that each combination defines one experiment. Moreover, the update of the coarse-scale parameters is performed in a sequential way, such that the energy measurements from the first experiment are used to obtain an intermediate posterior which then serves as the prior for the second experiment, etc.; a minimal sketch of this sequential loop is given after Table 1.
Table 1 Deformation matrix for different loading cases

𝐻𝐼 = [0.0  1.0; 1.0  0.0],   𝐻𝐼𝐼 = [1.0  0.0; 0.0  1.0]
6.1 The homogeneous case
In order to validate the identification procedure for the non-linear type of measurements used in this paper, such as the energy integrals, both the fine- and coarse-scale descriptions are taken as homogeneous, and the corresponding variational formulations are discretised by the same quadrilateral element. The coarse-scale parameters follow log-normal prior distributions characterized by their second-order statistics: the mean values have a 10% offset from their deterministic fine-scale counterparts (shown in Table 2) and the coefficient of variation is 10%.
Table 2 Fine scale truth and coarse scale prior statistics (in MPa)

Property:    𝜅        𝐺       𝑐     𝐾𝑝    𝑐𝛼    𝜎𝑓    𝐾𝑑
Fine truth:  204000   92000   300   450   500   300   450
Fig. 1 Load evolution on coarse scale to update material parameter using homogeneous fine-scale
measurements
The upscaling is considered for the elastic, plastic or damage parameters separately, i.e., for a given loading case, the parameters other than those being identified are kept known and deterministic. To trigger the different dissipation mechanisms under consideration, several loading histories are constructed as shown in Fig. 1. As results we show the percentile estimates of the updated parameters.

We first consider the update of the elastic parameters, for which the first two steps of Load I followed by the first two steps of Load III (see Fig. 1) are used. As apparent from Fig. 2(a)-(b), the coarse-scale elastic parameters are updated quite well, with the posterior distribution pivoted about the fine-scale truth and the prior uncertainty significantly reduced. As the first two loading steps characterize shear deformation, one may note that in the beginning only 𝐺 is updated, whereas the information on 𝜅 is contained in the energy measurements from the third step onwards. From this fine-scale measurement the posterior mean value shifts into the proximity of the true one and the variance reduces. This is expected, as the third and fourth load steps impart predominantly volumetric deformation characterizing 𝜅. In order to illustrate the effect of the loading on the identification, the same numerical setup as before is considered, only with the loading sequence reversed, i.e., the first two steps of Load III followed by the first two steps of Load I are applied. In this case, as expected, 𝜅 is identified before 𝐺, see Fig. 3.
For the plastic parameters, shown in Fig. 4(a)-(c) with Load I considered for the identification process, the prior distributions describing 𝑐 and 𝐾𝑝 are updated significantly around steps 4-6, indicating that the plastic deformation has kicked in, and the posterior converges uniformly to the true value in the subsequent steps. One thing to note is that it takes fewer steps for 𝑐 to update than for 𝐾𝑝. On the other hand, the update of 𝑐𝛼 seems less satisfactory, as the mean converges to the true value only around the sixth loading step.

Finally, the update of the damage parameters is shown in Fig. 5(a)-(b), with the compression Load II used for the identification procedure. The means of the posteriors describing 𝜎𝑓 and 𝐾𝑑 converge to the true value around steps 4 and 10, respectively. Moreover, the mean of 𝜎𝑓 shifts to the true value in fewer steps than that of 𝐾𝑑, which is akin to the trend observed in the update of the plastic parameters. This is understandable behaviour, as the update of the cohesion or the fracture stress (for the plastic and damage case, respectively) requires just the inception of the plastic and damage phenomena as information. However, the hardening parameters need significant changes in material hardening in order to provide sufficient information for the update.
Fig. 2 Updated elastic material parameters using homogeneous fine-scale measurements
Fig. 3 Updated elastic material parameters using homogeneous fine-scale measurements with loading sequence reversed
Fig. 4 Updated plastic material parameters using homogeneous fine-scale measurements
Fig. 5 Updated damage material parameters using homogeneous fine-scale measurements
6.2 The heterogeneous case
We now turn our attention to the more realistic case in which the material properties on the fine scale are assumed to be spatially varying. In this scenario, the fine scale is discretised using 2500 elements, and the spatial variability of the elastic {𝜅, 𝐺}, plastic {𝑐, 𝐾𝑝, 𝑐𝛼} and damage {𝜎𝑓, 𝐾𝑑} parameters is realized by taking them as a realization of a log-normal random field with prescribed second-order characteristics. To generate the realizations, the mean values of the parameters are taken to be the same as the “fine truth” of the homogeneous case (shown in Table 2). The coefficient of variation is taken as 5% and the correlation length is 10 times the characteristic length of a fine-scale element, with a Gaussian covariance function. The upscaling is performed for each phenomenon separately, meaning that on the coarse scale the parameters other than those being updated are kept deterministic. Moreover, two updating approaches are considered:
• Sequential: The parameters are updated sequentially, i.e., the information from the fine scale is added at each load step and the current posterior is taken as the new prior for the next update.
• Smoothing: The whole history of measurements is used in one go to update the coarse-scale parameters.
As results, we show the percentiles and updated distributions for the coarse-scale material parameters using the sequential and smoothing approaches, respectively. In addition, we also compare the percentiles of the different energy measures (used for the upscaling) computed from the prior and updated distributions of the coarse-scale values with the corresponding fine-scale ones. In this case, we denote the prior and posterior percentile estimates as 𝑝𝑟 and 𝑝𝑓, respectively.

Fig. 6 Load evolution on coarse scale to update material parameters using heterogeneous fine-scale measurements
6.2.1 Update of elastic parameters
For the update of the elastic parameters, the loading consists of three steps from Load I (bi-axial tension; notice that there will be some shear component in this case as we consider the plane strain case) followed by three steps from Load II (pure shear). The loading is graphically illustrated in Fig. 6. The loading cases are executed independently, i.e., each loading programme starts with an undeformed configuration.

Fig. 7 Updated elastic material parameters using heterogeneous fine-scale measurements from sequential (a), (c) and smoothing (b), (d) approaches

Fig. 8 Comparison of the percentile estimates of energy measures computed from the prior (𝑝𝑟5, 𝑝𝑟50 and 𝑝𝑟95) and the updated (𝑝𝑓5, 𝑝𝑓50 and 𝑝𝑓95) elastic parameters with the fine-scale values using sequential (a) and smoothing (b) approaches
Fig. 9 Updated plastic material parameters using heterogeneous fine-scale measurements from sequential (a), (c), (e) and smoothing (b), (d), (f) approaches

Fig. 10 Comparison of the percentile estimates of energy measures computed from the prior (𝑝𝑟5, 𝑝𝑟50 and 𝑝𝑟95) and the updated (𝑝𝑓5, 𝑝𝑓50 and 𝑝𝑓95) plastic parameters with the fine-scale values using sequential (a), (c), (e) and smoothing (b), (d), (f) approaches

The update using the sequential approach is shown in Fig. 7(a)-(c). During the first three loading steps the mean of 𝜅 starts moving, with reduced uncertainty illustrated by the 𝑝5 and 𝑝95 quantiles. For 𝐺, the reduction in uncertainty becomes noticeable only from the 4th step onwards, as these load steps are characterised by shear deformation. On the other hand, the update using the whole history is shown in Fig. 7(b)-(d). In this case the whole loading history for the two cases is
added sequentially, meaning that the update is performed by concatenating the two loading histories one after the other. Similar to the sequential case, 𝜅 is updated during the first step (i.e., by using the measurements from steps 1-3), whereas 𝐺 remains oblivious to the added information. The shear modulus only updates in the second step (i.e., by using the measurements from steps 4-6).
Fig. 11 Updated damage material parameters using heterogeneous fine-scale measurements from sequential (a), (c) and smoothing (b), (d) approaches

In order to check the validity of the update, the posterior predictive elastic energies, obtained by propagating the posterior 𝜅 and 𝐺 values through the coarse-scale model under the same
loading program, are compared to the prior predictive energies and the fine-scale counterpart in
Fig. 8(a)-(b). As the posterior 95% interval shrinks to the fine-scale truth for both sequential and
smoothing approaches, one may conclude that the upscaling procedure is successful.
6.2.2 Update of plastic parameters
The loading program for the update of the plastic parameters is shown as Load III (a combination of bi-axial tension and pure shear) in Fig. 6. The load steps are chosen such that one hits the yield surface and thus extracts information about the plastic phenomenon. The evolution of the sequential updates is shown in Fig. 9(a), (c) and (e), in which one can immediately observe that there is no convergence in the mean for 𝑐 and 𝐾𝑝. On the other hand, the mean of 𝑐𝛼 stabilizes after 13 steps. The results for the smoothing update are shown in Fig. 9(b), (d) and (f). To validate the so-obtained posterior distribution, the posterior predictive energies (the elastic energy, plastic dissipation and stored hardening energy), evaluated in a similar manner as in the elastic case, are depicted in Fig. 10. By comparing the respective 95% regions to the fine-scale measurement, one may conclude that the smoothing update performs better than the sequential one in terms of mimicking the energy response of the fine scale.
Fig. 12 Comparison of the percentile estimates of energy measures computed from the prior (𝑝𝑟5, 𝑝𝑟50 and 𝑝𝑟95) and the updated (𝑝𝑓5, 𝑝𝑓50 and 𝑝𝑓95) damage parameters with the fine-scale values using sequential (a), (c), (e) and smoothing (b), (d), (f) approaches
6.2.3 Update of damage parameters
The upscaling of the damage parameters 𝜎𝑓 and 𝐾𝑑 is shown in Fig. 11. The corresponding loading program, which triggers the damage phenomenon, is depicted as Load IV in Fig. 6. By validating the posterior predictive coarse-scale energies (elastic energy, damage dissipation and hardening) against the fine-scale counterpart in Fig. 12, one may conclude that the smoothing update performs comparatively better than the sequential approach, in a similar manner as in the plastic case.
7. Conclusions
In this paper, we have proposed a probabilistic approach to estimate unknown coarse-scale material parameters using fine-scale information. The material parameters on the coarse scale are considered random and are updated using fine-scale measurements in a Bayesian framework. To demonstrate the application of the proposed strategy, we considered the calibration of a coupled damage-plasticity model on the coarse scale. We used the stored energy and the dissipation to update the parameters governing the reversible and irreversible behaviour. Numerical examples were shown for homogeneous and heterogeneous fine scales. In the case of a homogeneous fine scale, the mean values of the coarse-scale parameters converge to their fine-scale counterparts with vanishing variation when loading experiments suitable for triggering the different elastic and inelastic mechanisms are performed. After validating the functioning of our approach and gaining knowledge about the loading cases conducive to identifying the material parameters, we turned to the more realistic case of a heterogeneous fine scale. In this case we performed the upscaling using the sequential and smoothing approaches. By comparing the different energy measures computed from the updated parameters with the fine-scale values, we can conclude that both approaches perform equally well in the elastic case. However, for the plasticity and damage cases, the sequential approach performs better than the smoothing approach in terms of matching the stored hardening energy, whereas the latter performs better in terms of matching the elastic energy and the dissipation. The deviation between the coarse- and fine-scale energy responses for the inelastic phenomena can be attributed to the localized nature of the irreversible behavior on the fine scale, which is understandably impossible to capture accurately with just one element on the coarse scale. This is substantiated by the severe jumps observed in the evolution of the updates of the inelastic parameters for the smoothing approach. Nevertheless, the proposed approach provides a promising outlook to investigate further the problems encountered in the heterogeneous case and to experiment with different fine- and coarse-scale models, e.g., (Do and Ibrahimbegović 2015, Do et al. 2015, Ngo et al. 2014).
Acknowledgments
This material is partially based upon work supported by the DFG (Deutsche Forschungsgemeinschaft), Germany, and the ANR (Agence Nationale de la Recherche), France, under the grant of the SELF-TUM project.
References
Arsigny, V., Fillard, P., Pennec, X. and Ayache, N. (2006), “Geometric means in a novel vector space
structure on symmetric positive-definite matrices”, SIAM J. Matr. Analy. Appl.
Asokan, B. and Zabaras, N. (2006), “A stochastic variational multiscale method for diffusion in
heterogeneous random media”, J. Comput. Phys., 654-676.
Bobrowski, A. (2005), Functional Analysis for Probability and Stochastic Processes, Cambridge University Press, Cambridge.
Brady, L., Arwade, S., Corr, D., Gutierrez, M., Breysse, D., Grigoriu, M. and Zabaras, N. (2006),
“Probability and materials: From nano-to macro-scale: A summary”, Probab. Eng. Mech., 193-199.
Clément, A., Soize, C. and Yvonnet, J. (2013), “Uncertainty quantification in computational stochastic multiscale analysis of nonlinear elastic materials”, Comput. Meth. Appl. Mech. Eng., 61-82.
Del Maso, G., De Simone, A. and Mora, M. (2006), “Quasistatic evolution problems for linearly elastic
perfectly plastic materials”, Arch. Rat. Mech. Analy., 237-291.
Demmie, P. and Ostoja-Starzewski, M. (2015), “Local and non-local material models, spatial randomness and impact loading”, Arch. Appl. Mech.
Do, X.N., Ibrahimbegović, A. and Brancherie, D. (2015), “Localized failure in damage dynamics”, Coupled Syst. Mech., 4, 211-235.
Do, X.N., Ibrahimbegović, A. and Brancherie, D. (2015), “Combined hardening and localized failure with
softening plasticity in dynamics”, Coupled Syst. Mech., 4, 115-136.
Evensen, G. (2009), Data Assimilation-The Ensemble Kalman Filter, Berlin, Springer.
Gelman, A., Carlin, J., Stern, H. and Rubin, D. (2014), Bayesian Data Analysis, Boca Raton, Taylor and
Francis.
Ghanem, R. and Das, S. (2011), Stochastic Upscaling for Inelastic Material Behavior from Limited
Experimental Data, Computational Methods for Microstructure-Property Relationships, Springer, Berlin.
Gorguluarslan, R. and Choi, S.K. (2014), “A simulation based upscaling technique for multiscale modeling of engineering systems under uncertainty”, J. Multisc. Comput. Eng., 549-566.
Halphen, B. and Nguyen, Q. (1974), “Plastic and visco-plastic materials with generalized potential”, Mech. Res. Commun., 43-47.
Halphen, B. and Nguyen, Q. (1975), “Sur les matériaux standard généralisés”, J. de Mécan., 39-63.
Han, W. and Daya Reddy, B. (2013), Plasticity, Mathematical Theory and Numerical Analysis, Springer
Verlag, New York, U.S.A.
Hawkins-Daarud, A., Prudhomme, S., Van der Zee, K. and Oden, J. (2013), “Bayesian calibration, validation and uncertainty quantification of diffuse interface models of tumor growth”, J. Math. Biol., 1457-1485.
Ibrahimbegović, A. (2009), Nonlinear Solid Mechanics, Springer, Berlin.
Ibrahimbegović, A. and Matthies, H.G. (2012), “Probabilistic multiscale analysis of inelastic localized
failure in solid mechanics”, Comput. Assist. Meth. Eng. Sci., 277-304.
Ibrahimbegović, A., Gharzeddine, F. and Chorfi, L. (1998), “Classical plasticity and viscoplasticity models
reformulated: theoretical basis and numerical implementation”, J. Numer. Meth. Eng., 1499-1535.
Ibrahimbegović, A., Markovic, D. and Gatuingt, F. (2003), “Constitutive model of coupled damage-plasticity and its finite element implementation”, Rev. Européenn. Des Elem., 381-405.
Kaipio, J. and Somersalo, E. (2004), Statistical and Computational Inverse Problems, Springer, Berlin.
Kennedy, M. and O'Hagan, A. (2001), “Bayesian calibration of computer models”, J. Roy. Stat. Soc. Ser. B, 425-464.
Koutsourelakis, P. (2007), “Stochastic upscaling in solid mechanics: An exercise in machine learning”, J. Comput. Phys., 301-325.
Liu, Y., Steven Greene, M., Chen, W., Dikin, D. and Liu, W. (2013), “Computational microstructure characterization and reconstruction for stochastic multiscale material design”, Comput. Aid. Des., 65-76.
Luenberger, D. (1969), Optimization by Vector Space Methods, John Wiley and Sons, Chichester.
Markovic, D. and Ibrahimbegović, A. (2006), “Complementary energy based FE modeling of coupled elasto-plastic and damage behavior for continuum microstructure computations”, Comput. Meth. Appl. Mech. Eng., 5077-5093.
Matthies, H.G. (1991), “Computation of constitutive response”, In P. Wriggers, and W. Wagner, Nonlinear
Computational Mechanics: State of the Art, Springer Verlag Berlin, Heidelberg.
Matthies, H.G. (2007), “Uncertainty quantification with stochastic finite elements”, Encyclop. Comput. Mech.
Matthies, H.G. and Ibrahimbegović, A. (2014), “Stochastic multiscale coupling of inelastic processes in solid mechanics”, In M. Papadrakakis and G. Stefanou (eds.), Multiscale Modelling and Uncertainty Quantification of Materials and Structures, Springer, Berlin.
Matthies, H.G., Zander, E., Rosić, B., Litvinenko, A. and Pajonk, O. (2016), “Inverse problems in a
Bayesian setting”, In A. Ibrahimbegović, Computational Methods for Solids and Fluids-Multiscale
Analysis, Probability Aspects, and Model Reduction, Springer, Berlin.
Ngo, V.M., Ibrahimbegović, A. and Brancherie, D. (2014), “Stress-resultant model and finite element
analysis of reinforced concrete frames under combined mechanical and thermal loads”, Coupled Syst.
Mech., 3, 111-144.
Ngo, V.M., Ibrahimbegović, A. and Hajdo, E. (2014), “Nonlinear instability problems including localized plastic failure and large deformations for extreme thermomechanical load”, Coupled Syst. Mech., 3, 89-110.
Nguyen, Q. (1977), “On the elastic plastic initial-boundary value problem and its numerical
implementation”, J. Numer. Meth. Eng., 817-832.
Pajonk, O., Rosić, B., Litvinenko, A. and Matthies, H.G. (2012), “A deterministic filter for non-Gaussian Bayesian estimation - applications to dynamical system estimation with noisy measurements”, Phys. D, 775-788.
Papoulis, A. (1991), Probability, Random Variables, and Stochastic Processes, McGraw-Hill, New York, U.S.A.
Pian, T. and Sumihara, K. (1984), “Rational approach for assumed stress finite elements”, J. Numer. Meth.
Eng., 1685-1695.
Rosić, B. and Matthies, H.G. (2014), “Variational theory and computations in stochastic plasticity”, Arch.
Comput. Meth. Eng., 457-509.
Rosić, B., Litvinenko, A., Pajonk, O. and Matthies, H.G. (2012), “Sampling-free linear Bayesian update of
polynomial chaos representation”, J. Comput. Phys., 5761-5787.
Rosić, B., Sýkora, J., Pajonk, O., Kučerová, A. and Matthies, H.G. (2016), “Comparison of numerical approaches to Bayesian updating”, In A. Ibrahimbegović, Computational Methods for Solids and Fluids - Multiscale Analysis, Probability Aspects and Model Reduction, Springer, Berlin.
Ostoja-Starzewski, M. (2008), Microstructural Randomness and Scaling in Mechanics of Materials, Chapman and Hall, Boca Raton.
Stefanou, G., Savvas, D. and Papadrakakis, M. (2015), “Stochastic finite element analysis of composite
structure based on material microstructure”, Compos. Struct., 384-392.
Steven Greene, M., Liu, Y., Chen, W. and Liu, W. (2011), “Computational uncertainty analysis in multiresolution materials via stochastic constitutive theory”, Comput. Meth. Appl. Mech. Eng., 309-325.
Suquet, P. and Lahellec, N. (2014), Elasto-Plasticity of Heterogeneous Materials at Different Scales,
Procedia IUTAM.
Tarantola, A. (2005), Inverse Problem Theory and Methods for Model Parameter Estimation, SIAM,
Philadelphia, U.S.A.
Yvonnet, J. and Bonnet, G. (2014), “A consistent nonlocal scheme based on filter for the homogenization of heterogeneous linear materials with non-separated scales”, J. Sol. Struct., 196-209.