Random Vectors 1
RANDOM VECTORS
Lecture 2 Review: Elementary Matrix Algebra
rank, trace, transpose, determinants, orthogonality,
linear independence, range (column) space, null space,
the spectral theorem/principal axis theorem,
idempotent matrices, projection matrices, positive definite and
positive semi-definite matrices, etc.
Definitions:
1. A random vector is a vector of random variables
\[
X = \begin{pmatrix} X_1 \\ \vdots \\ X_n \end{pmatrix}.
\]
2. The mean or expectation of $X$ is defined as
\[
E[X] = \begin{pmatrix} E[X_1] \\ \vdots \\ E[X_n] \end{pmatrix}.
\]
3. A random matrix is a matrix of random variables $Z = (Z_{ij})$. Its
expectation is given by $E[Z] = (E[Z_{ij}])$.
Properties:
1. A constant vector a and a constant matrix A satisfy E[a] = a and
E[A] = A. (Constant means non-random in this context.)
2. E[X + Y] = E[X] + E[Y].
3. E[AX] = AE[X] for a constant matrix A.
4. More generally (Seber & Lee Theorem 1.1):
E[AZB + C] = AE[Z]B + C
if A, B, C are constant matrices.
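Property 4 can be checked numerically. The sketch below (the matrices $A$, $B$, $C$ and the mean of $Z$ are illustrative choices, not from the lecture) compares a Monte Carlo average of $AZB + C$ with the exact value $AE[Z]B + C$ using numpy.

```python
import numpy as np

rng = np.random.default_rng(0)

# Constant (non-random) matrices, chosen arbitrarily for illustration.
A = np.array([[1.0, 2.0], [0.0, 1.0]])   # 2x2
B = np.array([[3.0], [1.0]])             # 2x1
C = np.array([[5.0], [-1.0]])            # 2x1

# Z is a 2x2 random matrix with independent entries and mean EZ,
# so linearity of expectation gives E[AZB + C] = A E[Z] B + C exactly.
EZ = np.array([[1.0, 0.5], [0.25, 2.0]])

# Draw many copies of Z and average AZB + C over the draws.
Zs = EZ + rng.standard_normal((200_000, 2, 2))
lhs = (A @ Zs @ B + C).mean(axis=0)   # Monte Carlo estimate of E[AZB + C]
rhs = A @ EZ @ B + C                  # exact value

print(lhs, rhs)  # the two should agree up to Monte Carlo error
```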
Definition: If X is a random vector, the covariance matrix of X is
defined as
\[
\operatorname{cov}(X) = [\operatorname{cov}(X_i, X_j)]
= \begin{pmatrix}
\operatorname{var}(X_1) & \operatorname{cov}(X_1, X_2) & \cdots & \operatorname{cov}(X_1, X_n) \\
\operatorname{cov}(X_2, X_1) & \operatorname{var}(X_2) & \cdots & \operatorname{cov}(X_2, X_n) \\
\vdots & \vdots & \ddots & \vdots \\
\operatorname{cov}(X_n, X_1) & \operatorname{cov}(X_n, X_2) & \cdots & \operatorname{var}(X_n)
\end{pmatrix}
\]
\[
= E\!\left[\begin{pmatrix} X_1 - E[X_1] \\ \vdots \\ X_n - E[X_n] \end{pmatrix}
\bigl(X_1 - E[X_1], \ldots, X_n - E[X_n]\bigr)\right]
= E[(X - E[X])(X - E[X])'].
\]
Example: (Independent random variables.) If $X_1, \ldots, X_n$ are independent, then $\operatorname{cov}(X) = \operatorname{diag}(\sigma_1^2, \ldots, \sigma_n^2)$.
If, in addition, the $X_i$ have common variance $\sigma^2$, then $\operatorname{cov}(X) = \sigma^2 I_n$.
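As a numerical sketch of this example (the variances below are illustrative), we can draw many independent samples of a random vector with independent coordinates and confirm that the empirical covariance matrix is approximately diagonal:

```python
import numpy as np

rng = np.random.default_rng(1)

# Independent coordinates with different standard deviations (illustrative).
sigmas = np.array([1.0, 2.0, 0.5])
n_samples = 500_000

# Each row of X is one draw of the random vector (X1, X2, X3).
X = rng.standard_normal((n_samples, 3)) * sigmas

# np.cov expects variables in rows, so pass the transpose.
C = np.cov(X.T)

print(np.round(C, 2))  # approximately diag(1.0, 4.0, 0.25)
```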
The correlation matrix of $X$ is
\[
\operatorname{corr}(X) = \begin{pmatrix}
1 & \operatorname{corr}(X_1, X_2) & \cdots & \operatorname{corr}(X_1, X_n) \\
\operatorname{corr}(X_2, X_1) & 1 & \cdots & \operatorname{corr}(X_2, X_n) \\
\vdots & \vdots & \ddots & \vdots \\
\operatorname{corr}(X_n, X_1) & \operatorname{corr}(X_n, X_2) & \cdots & 1
\end{pmatrix}.
\]
Example: If the $X_i$ have common variance $\sigma^2$ and common pairwise correlation $\rho$, then
\[
\operatorname{cov}(X) = \sigma^2 \begin{pmatrix}
1 & \rho & \cdots & \rho \\
\rho & 1 & \cdots & \rho \\
\vdots & \vdots & \ddots & \vdots \\
\rho & \rho & \cdots & 1
\end{pmatrix}.
\]
This is sometimes called an exchangeable covariance matrix.
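A short sketch of the exchangeable covariance matrix in numpy (the values of $n$, $\sigma^2$, and $\rho$ are illustrative). Writing it as $\sigma^2((1-\rho)I + \rho J)$ makes its eigenvalues easy to read off: $\sigma^2(1 + (n-1)\rho)$ once and $\sigma^2(1-\rho)$ with multiplicity $n-1$, so it is positive definite exactly when $-1/(n-1) < \rho < 1$.

```python
import numpy as np

n, sigma2, rho = 4, 2.0, 0.3   # illustrative values

# Exchangeable covariance: sigma^2 on the diagonal, rho*sigma^2 off it,
# i.e. sigma^2 * ((1 - rho) * I + rho * J) with J the all-ones matrix.
Sigma = sigma2 * ((1 - rho) * np.eye(n) + rho * np.ones((n, n)))

# Eigenvalues: sigma^2 (1 + (n-1) rho) once and sigma^2 (1 - rho)
# with multiplicity n-1; here 3.8 and 1.4 (three times).
eigvals = np.linalg.eigvalsh(Sigma)
print(np.round(np.sort(eigvals), 4))
```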
Definition: For random vectors $X$ (of length $m$) and $Y$ (of length $n$), the covariance matrix of $X$ and $Y$ is
\[
\operatorname{cov}(X, Y) = [\operatorname{cov}(X_i, Y_j)]
= \begin{pmatrix}
\operatorname{cov}(X_1, Y_1) & \cdots & \operatorname{cov}(X_1, Y_n) \\
\vdots & \ddots & \vdots \\
\operatorname{cov}(X_m, Y_1) & \cdots & \operatorname{cov}(X_m, Y_n)
\end{pmatrix}
\]
\[
= E\!\left[\begin{pmatrix} X_1 - E[X_1] \\ \vdots \\ X_m - E[X_m] \end{pmatrix}
\bigl(Y_1 - E[Y_1], \ldots, Y_n - E[Y_n]\bigr)\right]
= E[(X - E[X])(Y - E[Y])'].
\]
Note: The covariance matrix cov(X, Y) is defined for any m and n; in
particular, X and Y need not have the same length.
Theorem: If A and B are constant matrices,
\[
\operatorname{cov}(AX, BY) = A\operatorname{cov}(X, Y)B'.
\]
Proof: Similar to the proof of $\operatorname{cov}(AX) = A\operatorname{cov}(X)A'$.
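The theorem can be checked numerically. In the sketch below (all matrices are illustrative), $X = MZ$ and $Y = NZ$ are built from a common standard-normal vector $Z$, so $\operatorname{cov}(X, Y) = MN'$ is known exactly, and the empirical cross-covariance of $AX$ and $BY$ is compared with $A(MN')B'$:

```python
import numpy as np

rng = np.random.default_rng(2)

# X = M Z and Y = N Z with cov(Z) = I, so cov(X, Y) = M N'.
M = rng.standard_normal((2, 4))
N = rng.standard_normal((3, 4))
A = rng.standard_normal((2, 2))
B = rng.standard_normal((2, 3))

Z = rng.standard_normal((4, 500_000))   # columns are samples
X, Y = M @ Z, N @ Z

# Empirical cross-covariance of AX and BY (true means are zero here).
emp = (A @ X) @ (B @ Y).T / Z.shape[1]
exact = A @ (M @ N.T) @ B.T             # A cov(X, Y) B'

print(np.max(np.abs(emp - exact)))      # small Monte Carlo error
```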
Partitioned variance matrix: Let
\[
Z = \begin{pmatrix} X \\ Y \end{pmatrix}.
\]
Then
\[
\operatorname{cov}(Z) = \begin{pmatrix}
\operatorname{cov}(X) & \operatorname{cov}(X, Y) \\
\operatorname{cov}(Y, X) & \operatorname{cov}(Y)
\end{pmatrix}.
\]
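The block structure can be assembled directly with numpy's `np.block` (the block values below are illustrative); note that $\operatorname{cov}(Y, X) = \operatorname{cov}(X, Y)'$, which makes the assembled matrix symmetric:

```python
import numpy as np

# Illustrative blocks for Z = (X', Y')' with X and Y each of length 2.
cov_X  = np.array([[2.0, 0.3], [0.3, 1.0]])
cov_Y  = np.array([[1.5, 0.0], [0.0, 0.5]])
cov_XY = np.array([[0.2, 0.1], [0.0, 0.4]])

# cov(Z) assembled from the blocks; cov(Y, X) = cov(X, Y)'.
cov_Z = np.block([[cov_X, cov_XY], [cov_XY.T, cov_Y]])

print(cov_Z.shape)  # (4, 4), and cov_Z is symmetric
```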
\[
E[(X - \mu)'A(X - \mu)]
= E\Big[\sum_i \sum_j a_{ij}(X_i - \mu_i)(X_j - \mu_j)\Big]
= \sum_i \sum_j a_{ij}\operatorname{cov}(X_i, X_j)
= \operatorname{tr}(A\Sigma).
\]
Second proof (more clever; a scalar equals its own trace, and the trace is invariant under cyclic permutations):
\[
E[(X - \mu)'A(X - \mu)]
= E[\operatorname{tr}((X - \mu)'A(X - \mu))]
= E[\operatorname{tr}(A(X - \mu)(X - \mu)')]
\]
\[
= \operatorname{tr}(E[A(X - \mu)(X - \mu)'])
= \operatorname{tr}(A\,E[(X - \mu)(X - \mu)'])
= \operatorname{tr}(A\Sigma).
\]
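The identity $E[(X-\mu)'A(X-\mu)] = \operatorname{tr}(A\Sigma)$ can be checked by simulation. In the sketch below (the mean, covariance factor, and $A$ are illustrative), $\Sigma = LL'$ guarantees a valid covariance matrix:

```python
import numpy as np

rng = np.random.default_rng(3)

# Illustrative mean, covariance factor L, and matrix A.
mu = np.array([1.0, -2.0, 0.5])
L = np.array([[1.0, 0.0, 0.0], [0.5, 1.0, 0.0], [0.2, 0.3, 1.0]])
Sigma = L @ L.T                       # a valid covariance matrix
A = rng.standard_normal((3, 3))

# Sample X ~ N(mu, Sigma): rows of standard normals times L'.
X = mu + rng.standard_normal((200_000, 3)) @ L.T
D = X - mu

# Average the quadratic form (X - mu)' A (X - mu) over the samples.
mc = np.mean(np.einsum("ni,ij,nj->n", D, A, D))
exact = np.trace(A @ Sigma)

print(mc, exact)  # should agree up to Monte Carlo error
```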
Note that
\[
(n-1)s^2 = \sum_i (X_i - \bar X)^2 = X'AX,
\]
where $A = I_n - J_n/n$ is the centering matrix.
By the corollary,
\[
E[(n-1)s^2] = E[X'AX]
= \operatorname{tr}(A\,\sigma^2 I) + \mu^2\,\mathbf{1}'A\mathbf{1}
= \sigma^2\operatorname{tr}(A)
= (n-1)\sigma^2,
\]
because $A\mathbf{1} = 0$ and $\operatorname{tr}(A) = n-1$.
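A quick numerical sketch of these facts (with illustrative values of $n$ and the sampling distribution): the quadratic form $X'AX$ with $A = I - J/n$ reproduces $(n-1)s^2$ exactly, and $A\mathbf{1} = 0$:

```python
import numpy as np

rng = np.random.default_rng(4)

n = 10
A = np.eye(n) - np.ones((n, n)) / n    # centering matrix I - J/n

x = rng.standard_normal(n) * 3.0 + 5.0  # illustrative sample
lhs = x @ A @ x                         # quadratic form X'AX
rhs = (n - 1) * np.var(x, ddof=1)       # (n-1) s^2

print(lhs, rhs)                         # equal up to floating point
print(np.max(np.abs(A @ np.ones(n))))   # A 1 = 0
```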
ple mean $\bar X = \sum_{i=1}^n X_i/n$ and the sample variance $S^2$ are
independently distributed.
Let $x = (X_1, \ldots, X_n)'$, so that $x \sim N(\mu\mathbf{1}_n, \sigma^2 I_n)$. Then $S^2 = x'Ax$,
where
\[
A = \frac{I_n - J_n/n}{n-1},
\]
and $\bar X = Bx$, where $B = \mathbf{1}_n'/n$.
We now apply the theorem above:
\[
B(\sigma^2 I_n)A
= \frac{\mathbf{1}_n'}{n}\,(\sigma^2 I_n)\,\frac{I_n - J_n/n}{n-1}
= \frac{\sigma^2}{n(n-1)}\bigl(\mathbf{1}_n' - \mathbf{1}_n'J_n/n\bigr)
= \frac{\sigma^2}{n(n-1)}\bigl(\mathbf{1}_n' - \mathbf{1}_n'\bigr) = 0.
\]
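The condition $B(\sigma^2 I_n)A = 0$ can be verified directly in numpy (the values of $n$ and $\sigma^2$ are illustrative; the result holds for any of them, since the computation only uses $\mathbf{1}'J = n\mathbf{1}'$):

```python
import numpy as np

n, sigma2 = 6, 2.5                      # illustrative values

I, J = np.eye(n), np.ones((n, n))
A = (I - J / n) / (n - 1)               # S^2 = x' A x
B = np.ones((1, n)) / n                 # Xbar = B x

# Independence condition for the linear form Bx and the quadratic
# form x'Ax under N(mu 1, sigma^2 I): B (sigma^2 I) A = 0.
prod = B @ (sigma2 * I) @ A

print(np.max(np.abs(prod)))  # 0 up to floating point
```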