Randomfield KL Cond
Moo K. Chung
mchung@stat.wisc.edu
February 1, 2007
\[
\{\epsilon(x, t) : (x, t) \in \mathcal{M} \otimes \mathbb{R}^+\}
\]
\[
R(x, y) = E\,\epsilon(x)\epsilon(y).
\]
The variance field of $\epsilon$ is given by $R(x, x)$. A mean zero Gaussian field is completely characterized by its covariance function. A Gaussian random vector field is defined similarly.
\[
e(x) = (e_1(x), \cdots, e_n(x))'
\]
is an $n$-dimensional Gaussian vector field if each $e_i$ is a Gaussian field. We may generalize this further to matrix and tensor fields.
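Since a mean zero Gaussian field is determined by its covariance function, it can be simulated directly from $R$. A minimal numerical sketch, assuming a squared-exponential covariance (an assumed example, not from the text) and a Cholesky factorization of the covariance matrix:

```python
import numpy as np

def sample_gaussian_field(x, cov, rng):
    """Sample a mean zero Gaussian field at grid points x with covariance cov(x, y)."""
    R = cov(x[:, None], x[None, :])                      # covariance matrix R(x_i, x_j)
    L = np.linalg.cholesky(R + 1e-10 * np.eye(len(x)))   # small jitter for stability
    return L @ rng.standard_normal(len(x))               # mean zero, covariance ~ R

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 50)
cov = lambda s, t: np.exp(-(s - t) ** 2 / 0.1)           # assumed covariance function
e = sample_gaussian_field(x, cov, rng)                   # one realization of the field
```

Averaging many such realizations, the empirical variance at each grid point approaches $R(x, x) = 1$.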
Two fields $e_1$ and $e_2$ are independent if $e_1(x)$ and $e_2(y)$ are independent for every $x$ and $y$. For mean zero Gaussian fields, $e_1$ and $e_2$ are independent if and only if the cross-covariance function
\[
R(x, y) = E\,e_1(x)e_2(y)
\]
vanishes for all $x$ and $y$.
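The vanishing cross-covariance criterion can be checked empirically. A minimal sketch, assuming two independently simulated white-noise Gaussian fields on a grid (an assumed example, not from the text):

```python
import numpy as np

# For independently simulated mean zero Gaussian fields, the empirical
# cross-covariance E[e1(x) e2(y)] should vanish up to Monte Carlo error.
rng = np.random.default_rng(1)
n_points, n_samples = 20, 20000
e1 = rng.standard_normal((n_samples, n_points))   # realizations of field e1
e2 = rng.standard_normal((n_samples, n_points))   # independent realizations of e2

cross_cov = e1.T @ e2 / n_samples    # estimates E[e1(x) e2(y)] for all pairs (x, y)
print(np.abs(cross_cov).max())       # small, shrinking as n_samples grows
```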
Let $\mathcal{G}$ be a collection of Gaussian random fields. Then $\mathcal{G}$ forms a vector space if $c_1 e_1 + c_2 e_2 \in \mathcal{G}$ for all $e_1, e_2 \in \mathcal{G}$ and all constants $c_1$ and $c_2$. It is trivial to see that $\mathcal{G}$ is an infinite-dimensional vector space. However, there exists a finite-dimensional vector space $\mathcal{G}_p$ that is the closest to $\mathcal{G}$ in the least-squares sense. For any linear operator $f$, $f(\mathcal{G}) \subset \mathcal{G}$. We show this for the derivatives of Gaussian fields.
2 Derivative Fields
The sequence of random fields $X_h(t)$ converges to $X(t)$ as $h \to 0$ in mean square if
\[
\lim_{h \to 0} E|X_h(t) - X(t)|^2 = 0.
\]
We denote it by limh→0 Xh (t) = X(t) if there is no ambiguity. The convergence
in mean square implies the convergence in mean. Note that
\[
E|X_h(t) - X(t)|^2 = \mathrm{Var}\big(X_h(t) - X(t)\big) + \big(E X_h(t) - E X(t)\big)^2.
\]
Now let $X_h \to X$ in mean square. The left-hand side converges to zero while each term on the right-hand side is nonnegative. Hence each term on the right-hand side must converge to zero, so in particular $E X_h(t) \to E X(t)$.
We define derivative in mean square as
dX(t) X(t + h) − X(t)
= lim .
dt h→0 h
Note that if $X(t)$ and $X(t+h)$ are Gaussian random fields, $X(t+h) - X(t)$ is again Gaussian and hence the limit on the right-hand side is again Gaussian. If $R$ is
the covariance function of the mean zero Gaussian field $X$, the covariance function of its derivative is given by
\[
E\Big[\frac{dX(t)}{dt}\frac{dX(s)}{ds}\Big] = \frac{\partial^2 R(t, s)}{\partial t\, \partial s}.
\]
Partial derivatives are defined similarly.
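This formula can be verified numerically. A sketch under assumptions not in the text: for a squared-exponential covariance $R(t,s) = \exp(-(t-s)^2/(2\ell^2))$, differentiating gives $\partial^2 R/\partial t\,\partial s\,\big|_{t=s} = 1/\ell^2$, which we compare against a finite-difference derivative of simulated fields:

```python
import numpy as np

# Assumed example: squared-exponential covariance with length scale l,
# so the derivative field has variance 1 / l^2 at every point.
rng = np.random.default_rng(2)
l = 0.5
t = np.linspace(0.0, 1.0, 200)
h = t[1] - t[0]
R = np.exp(-(t[:, None] - t[None, :]) ** 2 / (2 * l**2))
L = np.linalg.cholesky(R + 1e-8 * np.eye(len(t)))  # jitter for numerical stability

X = L @ rng.standard_normal((len(t), 4000))        # 4000 realizations of the field
dX = np.diff(X, axis=0) / h                        # finite-difference derivative
emp_var = dX.var(axis=1).mean()                    # empirical Var(dX/dt)
print(emp_var, 1 / l**2)                           # both near 4.0
```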
3 Integration of Fields
We define the integration of a random field as the limit of the Riemann sum of the field. Let $\cup_{i=1}^n \Omega_i$ be a partition of $\Omega \subset \mathbb{R}^n$, i.e. $\Omega = \cup_{i=1}^n \Omega_i$ and $\Omega_i \cap \Omega_j = \emptyset$ if $i \neq j$. Let $t_i \in \Omega_i$ and let $\mu(\Omega_i)$ be the volume of $\Omega_i$. Then we define the integration of a random field as
\[
\int_\Omega X(t)\, dt = \lim_{\text{all } \mu(\Omega_j) \to 0} \sum_{i=1}^n X(t_i)\mu(\Omega_i).
\]
Multiple integration is defined similarly. The integral of a Gaussian field is the limit of a linear combination of Gaussian random variables, so it is again Gaussian.
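As a numerical illustration (an assumed example, not from the text), the variance of the Riemann-sum integral of a mean zero Gaussian field should match the double sum of the covariance, since $\mathrm{Var}\big(\sum_i X(t_i)\mu(\Omega_i)\big) = \sum_{i,j} R(t_i, t_j)\mu(\Omega_i)\mu(\Omega_j)$:

```python
import numpy as np

# Assumed covariance R(t, s) = exp(-|t - s|) on a regular grid over [0, 1].
rng = np.random.default_rng(3)
t = np.linspace(0.0, 1.0, 100)
dt = t[1] - t[0]
R = np.exp(-np.abs(t[:, None] - t[None, :]))
L = np.linalg.cholesky(R + 1e-10 * np.eye(len(t)))

X = L @ rng.standard_normal((len(t), 20000))   # realizations of the field
I = X.sum(axis=0) * dt                         # Riemann sum, one integral per field
print(I.var(), R.sum() * dt**2)                # empirical vs exact variance
```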
Consider the following integral:
\[
Y(t) = \int K(t, s) X(s)\, ds,
\]
where $K$ is called the kernel of the integral. Define the convolution between the kernel $K$ and the random field $X$ as the above integral.
Suppose the kernel is an isotropic probability density, i.e. $K(t, s) = K(t - s)$ and $\int K(t)\, dt = 1$. Further we may assume $K$ to be unimodal with some parameter $\sigma$ such that
\[
\lim_{\sigma \to 0} K(t; \sigma) = \delta(t),
\]