
NORMAL RANDOM VARIABLES AND GAUSSIAN VECTORS

1. Normal Random Variables


Recall that a random variable $X$ with values in $\mathbb{R}$ is Normal (or Gaussian) with parameters $(\mu, \sigma^2)$, $\mu \in \mathbb{R}$, $\sigma^2 > 0$, if its density with respect to the Lebesgue measure is given by:
\[
f(x) = \frac{1}{\sqrt{2\pi}\,\sigma}\, e^{-(x-\mu)^2/(2\sigma^2)}, \quad -\infty < x < \infty.
\]
Such a distribution is usually denoted by $N(\mu, \sigma^2)$, and we write $X \sim N(\mu, \sigma^2)$. We can also include the case of a degenerate Normal random variable with distribution $N(\mu, 0)$: this is simply the Dirac distribution of a random variable which is almost surely equal to $\mu$.
The mean and the variance of a Normal random variable $X$ are given by:
\[
E[X] = \mu, \qquad \mathrm{Var}(X) = \sigma^2.
\]

The characteristic function of such a Normal random variable is:
\[
E\big[e^{itX}\big] = \exp\Big(it\mu - \frac{t^2\sigma^2}{2}\Big), \quad t \in \mathbb{R},
\]
and the Laplace transform is:
\[
E\big[e^{zX}\big] = \exp\Big(z\mu + \frac{z^2\sigma^2}{2}\Big), \quad z \in \mathbb{C}.
\]
Note that the characteristic function follows immediately from the Laplace transform: take $z = it$ with $t \in \mathbb{R}$.
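As a quick numerical sanity check of both formulas, here is a minimal Monte Carlo sketch assuming numpy; the parameter values are arbitrary examples, not from the text:

import numpy as np

rng = np.random.default_rng(42)

# Arbitrary example parameters.
mu, sigma = 1.5, 2.0
t, z = 0.7, 0.4

X = rng.normal(mu, sigma, size=1_000_000)

# Monte Carlo estimate of the characteristic function E[e^{itX}]
# against the closed form exp(it*mu - t^2 sigma^2 / 2).
print(np.mean(np.exp(1j * t * X)))
print(np.exp(1j * t * mu - t**2 * sigma**2 / 2))

# Same check for the Laplace transform E[e^{zX}] at a real point z.
print(np.mean(np.exp(z * X)))
print(np.exp(z * mu + z**2 * sigma**2 / 2))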
An important property of Normal random variables is that if $X \sim N(\mu_1, \sigma_1^2)$ and $Y \sim N(\mu_2, \sigma_2^2)$, and $X$ and $Y$ are independent, then $X + Y \sim N(\mu_1 + \mu_2, \sigma_1^2 + \sigma_2^2)$. This follows immediately from the form of the characteristic function.
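Spelled out, independence lets the expectation factor, and the characteristic functions multiply:
\[
E\big[e^{it(X+Y)}\big] = E\big[e^{itX}\big]\,E\big[e^{itY}\big] = \exp\Big(it(\mu_1+\mu_2) - \frac{t^2(\sigma_1^2+\sigma_2^2)}{2}\Big),
\]
which is precisely the characteristic function of $N(\mu_1+\mu_2, \sigma_1^2+\sigma_2^2)$.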
Proposition 1.1. Let $(X_n)$ be a sequence of Normal random variables, such that $X_n \sim N(\mu_n, \sigma_n^2)$ for all $n$. Suppose that $X_n$ converges in distribution to $X$. Then it follows that $X$ is also Normal, with mean $\mu = \lim_n \mu_n$ and variance $\sigma^2 = \lim_n \sigma_n^2$.
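One way to see this (a sketch): the characteristic functions $\exp(it\mu_n - t^2\sigma_n^2/2)$ converge pointwise to $\Phi_X(t)$. Taking absolute values gives $e^{-t^2\sigma_n^2/2} \to |\Phi_X(t)|$, which forces $\sigma_n^2 \to \sigma^2$ for some $\sigma^2 \geq 0$; the convergence of the remaining factors $e^{it\mu_n}$ then forces $\mu_n \to \mu$, and the limit $\exp(it\mu - t^2\sigma^2/2)$ is the characteristic function of $N(\mu, \sigma^2)$.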
2. Gaussian Vectors
Definition 2.1. An $\mathbb{R}^n$-valued random variable $X = (X_1, \ldots, X_n)$ is a Gaussian vector if for any $u \in \mathbb{R}^n$, $\langle u, X\rangle = \sum_{i=1}^n u_i X_i$ is a (one-dimensional) Normal random variable.
If $X$ is a Gaussian vector, there exist $\mu \in \mathbb{R}^n$ and a positive quadratic form $q_X$ on $\mathbb{R}^n$, such that for all $u \in \mathbb{R}^n$,
\[
E[\langle u, X\rangle] = \langle u, \mu\rangle, \qquad \mathrm{Var}(\langle u, X\rangle) = q_X(u).
\]
Date: September 27, 2009.

In fact, if $(e_1, \ldots, e_n)$ is the canonical orthonormal basis of $\mathbb{R}^n$, and we write $X = \sum_{i=1}^n X_i e_i$, we have $\mu = \sum_{i=1}^n E[X_i] e_i =: E[X]$, and
\[
q_X(u) = \sum_{i,j=1}^n u_i u_j \,\mathrm{Cov}(X_i, X_j),
\]
where $Q = (Q_{ij})$, with $Q_{ij} := \mathrm{Cov}(X_i, X_j) = E[(X_i - E[X_i])(X_j - E[X_j])]$, is the covariance matrix of $X$.
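As a quick numerical illustration of the identity $\mathrm{Var}(\langle u, X\rangle) = q_X(u) = u^T Q u$, here is a minimal sketch assuming numpy; the vectors and the matrix $Q$ below are arbitrary examples, not taken from the text:

import numpy as np

rng = np.random.default_rng(0)

# Arbitrary example parameters: mean vector mu and a positive definite Q.
mu = np.array([1.0, -2.0, 0.5])
Q = np.array([[2.0, 0.6, 0.3],
              [0.6, 1.0, 0.2],
              [0.3, 0.2, 0.5]])

# Samples of a Gaussian vector X with mean mu and covariance matrix Q.
X = rng.multivariate_normal(mu, Q, size=200_000)

u = np.array([0.5, -1.0, 2.0])

# Empirical variance of <u, X> versus the quadratic form u^T Q u.
print(np.var(X @ u))  # Monte Carlo estimate
print(u @ Q @ u)      # exact value of q_X(u)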


Since $\langle u, X\rangle \sim N(\langle u, \mu\rangle, q_X(u))$, it is now easy to deduce the characteristic function of $X$, by taking $t = 1$ in the one-dimensional formula:
\[
\Phi_X(u) = E[\exp(i\langle u, X\rangle)] = \exp\Big(i\langle u, \mu\rangle - \frac{1}{2}\, q_X(u)\Big), \quad u \in \mathbb{R}^n.
\]
Note that the characteristic function uniquely characterizes the distribution of $X$.
Proposition 2.2. Two components $X_l$ and $X_k$ of a Gaussian vector are independent if and only if they are uncorrelated (i.e. $Q_{lk} = Q_{kl} = 0$). In particular, the components $X_1, \ldots, X_n$ are independent if and only if the covariance matrix $Q$ is diagonal.
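A sketch of the argument: if $Q$ is diagonal, then $q_X(u) = \sum_{j=1}^n Q_{jj} u_j^2$, so the characteristic function factorizes as
\[
\Phi_X(u) = \prod_{j=1}^n \exp\Big(i u_j \mu_j - \frac{Q_{jj}\, u_j^2}{2}\Big),
\]
which is the characteristic function of a vector with independent $N(\mu_j, Q_{jj})$ components. The converse is immediate, since independent random variables are uncorrelated.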
The following theorem is also useful:
Theorem 2.3. Let $X$ be an $\mathbb{R}^n$-valued Gaussian vector and let $Y$ be an $\mathbb{R}^m$-valued Gaussian vector. Assume that $X$ and $Y$ are independent. Then $Z = (X, Y)$ is an $\mathbb{R}^{n+m}$-valued Gaussian vector.
The characteristic function of $Z$ is $\Phi_Z(u) = \Phi_X(w)\,\Phi_Y(v)$, where $u = (w, v)$, $w \in \mathbb{R}^n$ and $v \in \mathbb{R}^m$.
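Indeed, for $u = (w, v)$ we have $\langle u, Z\rangle = \langle w, X\rangle + \langle v, Y\rangle$, a sum of two independent one-dimensional Normal random variables, hence Normal; this shows $Z$ is a Gaussian vector, and the factorization of $\Phi_Z$ follows from independence.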
In fact, all Gaussian vectors arise as linear transformations of vectors of independent centered Normal random variables, as the next theorem shows:
Theorem 2.4. Let $X$ be an $\mathbb{R}^n$-valued Gaussian vector with mean vector $\mu$. Then there exist $n$ independent Normal random variables $Y_1, \ldots, Y_n$, with $Y_j \sim N(0, \lambda_j)$, $\lambda_j \geq 0$, and an orthogonal matrix $A$, such that $X = \mu + AY$.
Note that $A$ is such that $A^T Q A = \mathrm{diag}(\lambda_1, \ldots, \lambda_n)$. Hence, the $\lambda_j$'s are the eigenvalues of the covariance matrix of $X$. Also, it may be possible that some of the $\lambda_j$'s are zero, that is, $Y_j = 0$ almost surely. So, the number of independent Normal random variables needed to determine a Gaussian vector in $\mathbb{R}^n$ may be strictly less than $n$. In the case where all the $\lambda_j$'s are positive, it is possible to give the density of $X$ with respect to the Lebesgue measure on $\mathbb{R}^n$. Namely, the density is given by:


\[
f(x) = \frac{1}{(2\pi)^{n/2}\sqrt{\det(Q)}}\, \exp\Big(-\frac{1}{2}\,(x - \mu)^T Q^{-1} (x - \mu)\Big).
\]
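To make Theorem 2.4 concrete, here is a minimal numerical sketch (assuming numpy; $\mu$ and $Q$ below are arbitrary example values): diagonalize $Q$ orthogonally, build $Y$ with independent $N(0, \lambda_j)$ components, and check that $X = \mu + AY$ has covariance $Q$.

import numpy as np

rng = np.random.default_rng(1)

# Arbitrary example parameters.
mu = np.array([0.0, 1.0])
Q = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# Orthogonal diagonalization: A^T Q A = diag(lambda_1, ..., lambda_n),
# so the lambda_j are the eigenvalues of the covariance matrix Q.
lam, A = np.linalg.eigh(Q)  # columns of A are orthonormal eigenvectors

# Y has independent components Y_j ~ N(0, lambda_j).
Y = rng.normal(size=(100_000, 2)) * np.sqrt(lam)

# X = mu + A Y (applied row-wise), so Cov(X) = A diag(lam) A^T = Q.
X = mu + Y @ A.T
print(np.cov(X, rowvar=False))  # should be close to Q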
