Expectations
As we discussed earlier, for two random variables X and Y and a function g(x, y) we can consider g(X, Y) as a random variable, and therefore in the discrete case

Eg(X, Y) = \sum_{x,y} g(x, y) p(x, y),

while in the jointly continuous case with joint density f(x, y),

Eg(X, Y) = \iint g(x, y) f(x, y)\, dx\, dy.

Taking g(x, y) = x + y gives

E(X + Y) = \iint (x + y) f(x, y)\, dx\, dy = \iint x f(x, y)\, dx\, dy + \iint y f(x, y)\, dx\, dy.

If we now set g(x, y) = x, we see the first integral on the right is EX, and similarly the second is EY. Therefore

E(X + Y) = EX + EY.
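The same argument with g(x, y) = ax + by shows that expectation is linear: for any constants a and b,

E(aX + bY) = a\,EX + b\,EY.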
Proposition 12.1
If X and Y are two jointly continuous independent random variables, then for functions
h and k we have
E[h(X)k(Y )] = Eh(X) · Ek(Y ).
In particular, E(XY ) = (EX)(EY ).
Proof. By the above with g(x, y) = h(x)k(y), and recalling that the joint density
function factors by independence of X and Y as we saw in (11.1), we have
E[h(X)k(Y)] = \iint h(x) k(y) f_{X,Y}(x, y)\, dx\, dy
            = \iint h(x) k(y) f_X(x) f_Y(y)\, dx\, dy
            = \int h(x) f_X(x) \left( \int k(y) f_Y(y)\, dy \right) dx
            = \int h(x) f_X(x) \, (Ek(Y))\, dx
            = Eh(X) \cdot Ek(Y).
Note that we can easily extend Proposition 12.1 to any number of independent random
variables.
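For instance, if X_1, \dots, X_n are independent, applying the proposition repeatedly gives

E[X_1 X_2 \cdots X_n] = (EX_1)(EX_2) \cdots (EX_n).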
Definition 12.1
The covariance of two random variables X and Y is defined by

Cov(X, Y) = E[(X − EX)(Y − EY)].

As with the variance, expanding the product shows that Cov(X, Y) = E(XY) − (EX)(EY); a short derivation is given below. It follows that if X and Y are independent, then E(XY) = (EX)(EY), and hence Cov(X, Y) = 0.
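To see this, expand the product and use linearity of expectation (note that EX and EY are constants):

Cov(X, Y) = E[XY − X(EY) − (EX)Y + (EX)(EY)]
          = E(XY) − (EX)(EY) − (EX)(EY) + (EX)(EY)
          = E(XY) − (EX)(EY).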
Proposition 12.2
Suppose X, Y and Z are random variables and a and c are constants. Then
(1) Cov (X, X) = Var (X).
(2) If X and Y are independent, then Cov (X, Y ) = 0.
(3) Cov (X, Y ) = Cov (Y, X).
(4) Cov (aX, Y ) = a Cov (X, Y ).
(5) Cov (X + c, Y ) = Cov (X, Y ).
(6) Cov (X + Y, Z) = Cov (X, Z) + Cov (Y, Z).
More generally,

Cov\left( \sum_{i=1}^{m} a_i X_i, \; \sum_{j=1}^{n} b_j Y_j \right) = \sum_{i=1}^{m} \sum_{j=1}^{n} a_i b_j \, Cov(X_i, Y_j).
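For example, combining properties (3), (4) and (5),

Cov(2X + 1, \, 3Y − 4) = Cov(2X, 3Y) = 6 \, Cov(X, Y).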
Note
Var(aX + bY) = E[((aX + bY) − E(aX + bY))^2]
             = E[(a(X − EX) + b(Y − EY))^2]
             = E[a^2 (X − EX)^2 + 2ab(X − EX)(Y − EY) + b^2 (Y − EY)^2]
             = a^2 \, Var X + 2ab \, Cov(X, Y) + b^2 \, Var Y.
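In particular, taking a = 1 and b = −1,

Var(X − Y) = Var X + Var Y − 2\,Cov(X, Y).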
We have the following corollary.
Proposition 12.3
If X and Y are independent, then
Var(X + Y ) = Var X + Var Y.
Proof. By the computation above with a = b = 1, and since Cov(X, Y) = 0 by independence, we have

Var(X + Y) = Var X + Var Y + 2 Cov(X, Y) = Var X + Var Y.
Example 12.1. Recall that a binomial random variable is the sum of n independent Bernoulli random variables with parameter p. Consider the sample mean

\bar{X} := \frac{1}{n} \sum_{i=1}^{n} X_i,

where X_1, \dots, X_n are independent and have the same Bernoulli distribution with parameter p. Then E\bar{X} = EX_1 = p and Var \bar{X} = Var X_1 / n = p(1 − p)/n.
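For instance, with n = 100 and p = 1/2,

E\bar{X} = 0.5, \qquad Var \bar{X} = \frac{(1/2)(1/2)}{100} = 0.0025.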
12.2. Conditional expectation

For jointly continuous random variables X and Y, the conditional expectation of X given Y = y is defined by

E[X | Y = y] = \int x \, f_{X|Y=y}(x | y)\, dx,

where the conditional density is defined by Equation (11.3) in Section 11.3. We can think of E[X | Y = y] as the mean value of X when Y is fixed at y. Note that, unlike the expectation EX, the conditional expectation E[X | Y = y] is a function of y.
Example 12.2. Suppose X and Y have the joint probability mass function given by the following table:

X\Y   0     1
 0   0.2   0.7
 1    0    0.1

Find E[XY].
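Only the cell with x = 1 and y = 1 contributes a nonzero term to \sum_{x,y} xy \, p(x, y), so

E[XY] = (1)(1)(0.1) = 0.1.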
Example 12.3. Suppose X, Y are independent exponential random variables with parameter λ = 1. Set up a double integral that represents E[X^2 Y].

Solution: Since X and Y are independent, the joint density factorizes as

f_{X,Y}(x, y) = f_X(x) f_Y(y) = e^{-x} e^{-y} = e^{-(x+y)}, \qquad x, y > 0.

Thus

E[X^2 Y] = \int_0^\infty \int_0^\infty x^2 y \, e^{-(x+y)}\, dy\, dx.
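Although the example only asks for the setup, Proposition 12.1 gives the value of this integral: since EX^2 = 2 and EY = 1 for an exponential random variable with λ = 1,

E[X^2 Y] = E[X^2] \, E[Y] = 2 \cdot 1 = 2.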
Example 12.4. Let X and Y be random variables with the joint PDF

f(x, y) = \begin{cases} 10xy^2 & 0 < x < y < 1, \\ 0 & \text{otherwise.} \end{cases}

Find E[XY].

Solution: We first draw the region (try it!) and then set up the integral

E[XY] = \int_0^1 \int_0^y xy \cdot 10xy^2 \, dx\, dy = 10 \int_0^1 \int_0^y x^2 y^3 \, dx\, dy = \frac{10}{3} \int_0^1 y^3 \cdot y^3 \, dy = \frac{10}{3} \cdot \frac{1}{7} = \frac{10}{21}.
Example 12.5. Suppose X, Y are random variables whose joint PDF is given by
f(x, y) = \begin{cases} \frac{1}{y} & 0 < y < 1, \; 0 < x < y, \\ 0 & \text{otherwise.} \end{cases}

Find (a) Cov(X, Y), (b) Var(X) and Var(Y), and (c) the correlation ρ(X, Y).
Solution:
(a) Recall that Cov (X, Y ) = EXY − EXEY . So
EXY = \int_0^1 \int_0^y xy \cdot \frac{1}{y}\, dx\, dy = \int_0^1 \frac{y^2}{2}\, dy = \frac{1}{6},

EX = \int_0^1 \int_0^y x \cdot \frac{1}{y}\, dx\, dy = \int_0^1 \frac{y}{2}\, dy = \frac{1}{4},

EY = \int_0^1 \int_0^y y \cdot \frac{1}{y}\, dx\, dy = \int_0^1 y\, dy = \frac{1}{2}.

Thus

Cov(X, Y) = EXY − (EX)(EY) = \frac{1}{6} − \frac{1}{4} \cdot \frac{1}{2} = \frac{1}{24}.
(b) We have

EX^2 = \int_0^1 \int_0^y x^2 \cdot \frac{1}{y}\, dx\, dy = \int_0^1 \frac{y^2}{3}\, dy = \frac{1}{9},

EY^2 = \int_0^1 \int_0^y y^2 \cdot \frac{1}{y}\, dx\, dy = \int_0^1 y^2\, dy = \frac{1}{3}.

Thus

Var(X) = EX^2 − (EX)^2 = \frac{1}{9} − \left(\frac{1}{4}\right)^2 = \frac{7}{144},

and

Var(Y) = EY^2 − (EY)^2 = \frac{1}{3} − \left(\frac{1}{2}\right)^2 = \frac{1}{12}.
(c)

ρ(X, Y) = \frac{Cov(X, Y)}{\sqrt{Var(X)\,Var(Y)}} = \frac{1/24}{\sqrt{\frac{7}{144} \cdot \frac{1}{12}}} \approx 0.6547.
Example 12.6. Let X and Y be random variables with the joint PDF

f_{X,Y}(x, y) = \begin{cases} \frac{1}{18} e^{-(x+y)/6} & \text{if } 0 < y < x, \\ 0 & \text{otherwise.} \end{cases}

Find Var(X | Y = 2).
In order to find Var(X | Y = 2), we need to compute the conditional PDF of X given Y = 2,
i.e.
f_{X|Y=2}(x | 2) = \frac{f_{X,Y}(x, 2)}{f_Y(2)}.
To this end, we first compute the marginal density of Y:

f_Y(y) = \int_y^\infty \frac{1}{18} e^{-(x+y)/6}\, dx = \frac{1}{3} e^{-y/6} \left[ -e^{-x/6} \right]_{x=y}^{\infty} = \frac{1}{3} e^{-y/3} \qquad \text{for } y > 0.
Then we have

f_{X|Y=2}(x | 2) = \begin{cases} \frac{1}{6} e^{(2−x)/6} & \text{if } x > 2, \\ 0 & \text{otherwise.} \end{cases}
Now it only remains to find E[X | Y = 2] and E[X^2 | Y = 2]. Applying integration by parts twice we have

E[X^2 | Y = 2] = \int_2^\infty \frac{x^2}{6} e^{(2−x)/6}\, dx = \left[ -x^2 e^{(2−x)/6} − 12x e^{(2−x)/6} − 72 e^{(2−x)/6} \right]_2^\infty = 4 + 24 + 72 = 100.
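To complete the example, one more integration by parts gives E[X | Y = 2], and with it the conditional variance:

E[X | Y = 2] = \int_2^\infty \frac{x}{6} e^{(2−x)/6}\, dx = \left[ -x e^{(2−x)/6} − 6 e^{(2−x)/6} \right]_2^\infty = 2 + 6 = 8,

so that

Var(X | Y = 2) = E[X^2 | Y = 2] − (E[X | Y = 2])^2 = 100 − 64 = 36.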
12.4. Exercises
Exercise 12.1. Suppose the joint distribution for X and Y is given by the joint probability mass function shown below:

Y\X   0     1
 0    0    0.3
 1   0.5   0.2

Find E[XY], EX and EY.
Exercise 12.2. Let X and Y be random variables whose joint probability density function
is given by

f(x, y) = \begin{cases} x + y & 0 < x < 1, \; 0 < y < 1, \\ 0 & \text{otherwise.} \end{cases}
Exercise 12.3. Let X be normally distributed with mean 1 and variance 9, and let Y be exponentially distributed with λ = 2. Suppose X and Y are independent. Find E[(X − 1)^2 Y]. (Hint: use properties of expectations.)
Exercise∗ 12.2. Show that if random variables X and Y are uncorrelated, then Var (X + Y ) =
Var (X) + Var (Y ). Note that this is a more general statement than Proposition 12.3 since
independent variables are uncorrelated.
Solution to Exercise 12.1: Adding the marginal probabilities to the table gives

Y\X    0     1
 0     0    0.3   0.3
 1    0.5   0.2   0.7
      0.5   0.5

Then

EXY = (0 · 0) · 0 + (0 · 1) · 0.5 + (1 · 0) · 0.3 + (1 · 1) · 0.2 = 0.2,
EX = 0 · 0.5 + 1 · 0.5 = 0.5,
EY = 0 · 0.3 + 1 · 0.7 = 0.7.