
Lecture 11


Example 1. Let X ∼ Bin(n, p). Find EX.

Solution.
\[
\begin{aligned}
EX &= \sum_{k=0}^{n} k f_X(k) \\
&= \sum_{k=1}^{n} k f_X(k) \\
&= \sum_{k=1}^{n} \frac{n!}{(k-1)!(n-k)!}\, p^k (1-p)^{n-k} \\
&= np \sum_{k=1}^{n} \frac{(n-1)!}{(k-1)!(n-k)!}\, p^{k-1} (1-p)^{n-k} \\
&= np \sum_{\ell=0}^{n-1} \binom{n-1}{\ell} p^{\ell} (1-p)^{n-1-\ell} \\
&= np\,(p + (1-p))^{n-1} \\
&= np.
\end{aligned}
\]
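As an illustrative check (not part of the lecture), one can verify EX = np numerically by summing k · P(X = k) directly; the values n = 10 and p = 0.3 below are arbitrary choices.

```python
# Numerical check of Example 1 (illustrative only; n and p are arbitrary choices).
# Sum k * P(X = k) for X ~ Bin(n, p) and compare with n * p.
from math import comb

n, p = 10, 0.3
EX = sum(k * comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n + 1))
print(EX, n * p)  # both are 3.0 up to floating-point rounding
```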

Definition 0.1. The expectation/mean of a continuous random variable X is


\[
EX = \int_{-\infty}^{\infty} x f_X(x)\, dx.
\]

Remark. Although we define the expectation separately for discrete and continuous random
variables, the two definitions can be unified as follows:
\[
EX = \int_{-\infty}^{\infty} x\, dF_X(x),
\]

where the above integral is a Riemann–Stieltjes integral. This makes sense even if X is
neither discrete nor continuous.
Example 2. Suppose that X ∼ Unif[a, b]. Then
\[
EX = \int_{-\infty}^{\infty} x f_X(x)\, dx = \int_a^b \frac{x}{b-a}\, dx = \frac{b^2 - a^2}{2(b-a)} = \frac{a+b}{2}.
\]
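A quick numerical sanity check (illustrative only; the values of a and b below are arbitrary choices): the sample mean of Unif[a, b] draws should be close to (a + b)/2.

```python
# Monte Carlo check of Example 2 (illustrative; a and b are arbitrary choices).
import random

a, b, N = 2.0, 5.0, 100_000
sample_mean = sum(random.uniform(a, b) for _ in range(N)) / N
print(sample_mean, (a + b) / 2)  # e.g. roughly 3.50 vs 3.5
```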
Example 3. Suppose that X ∼ Exp(λ). Find EX.
Solution.
\[
\begin{aligned}
EX &= \int_{-\infty}^{\infty} x f_X(x)\, dx \\
&= \int_0^{\infty} x \lambda e^{-\lambda x}\, dx \\
&= \left. -x e^{-\lambda x} \right|_0^{\infty} + \int_0^{\infty} e^{-\lambda x}\, dx \\
&= \left. -\frac{1}{\lambda} e^{-\lambda x} \right|_0^{\infty} = \frac{1}{\lambda}.
\end{aligned}
\]
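A similar Monte Carlo check for the exponential case (illustrative; λ = 2 is an arbitrary choice): the sample mean should be close to 1/λ.

```python
# Monte Carlo check of Example 3 (illustrative; lam = 2.0 is an arbitrary choice).
# The sample mean of Exp(lam) draws should be close to 1 / lam.
import random

lam, N = 2.0, 100_000
sample_mean = sum(random.expovariate(lam) for _ in range(N)) / N
print(sample_mean, 1 / lam)  # the two numbers should be close, e.g. ~0.50 vs 0.5
```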

0.1 Infinite and nonexistent expectations


Example 4. Consider the following game. You flip a fair coin. If it comes up heads, you
win 2 dollars and the game is over. If it comes up tails, your prize is doubled and then you
flip again. Continue in this same manner: every tails doubles the prize, and once you flip
the first heads the game is over and you collect the money. Let X be the prize you win. Find
EX.

Solution. We have $P(X = 2^n) = 2^{-n}$ for all positive integers n, because $X = 2^n$ means the
first (n − 1) flips are tails and the last one is heads. So

\[
EX = \sum_{n=1}^{\infty} 2^n \cdot 2^{-n} = \sum_{n=1}^{\infty} 1 = \infty.
\]

Remark. Recall that this game will end eventually with probability 1. In particular, with
probability 1, X is finite. However, EX is infinite. This example is also known as the
St. Petersburg paradox.
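A small simulation sketch of the game (not part of the lecture; the helper play_once is a hypothetical name) shows the running average of the winnings creeping upward as more rounds are played, consistent with EX = ∞.

```python
# Simulation of the St. Petersburg game (illustrative sketch).
# The running average of the winnings tends to keep growing with the number of rounds,
# reflecting the fact that EX is infinite.
import random

def play_once():
    prize = 2
    while random.random() < 0.5:  # tails with probability 1/2: double the prize and flip again
        prize *= 2
    return prize

for N in (10**2, 10**4, 10**6):
    avg = sum(play_once() for _ in range(N)) / N
    print(N, avg)  # the average tends to increase with N, though only slowly
```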

Example 5. Suppose that X has PDF


\[
f_X(x) = \begin{cases} x^{-2} & \text{if } x \geq 1, \\ 0 & \text{otherwise}. \end{cases}
\]

Find EX.

Solution.
\[
EX = \int_{-\infty}^{\infty} x f_X(x)\, dx = \int_1^{\infty} x \cdot x^{-2}\, dx = \int_1^{\infty} x^{-1}\, dx = \log x \,\Big|_1^{\infty} = \infty.
\]

In the next examples we cannot give the expectation any value whatsoever, finite or
infinite.

Definition 0.2. A random variable X is said to have a (standard) Cauchy distribution if it
has PDF
\[
f_X(x) = \frac{1}{\pi}\,\frac{1}{1 + x^2}, \qquad x \in \mathbb{R}.
\]
It is easy to see that if X is Cauchy, then EX is not defined: both $\int_0^\infty x f_X(x)\,dx$ and $\int_{-\infty}^0 (-x) f_X(x)\,dx$ are infinite, so the defining integral has no value, finite or infinite.
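As an illustration (not part of the lecture), one can sample from the Cauchy distribution via the inverse CDF, X = tan(π(U − 1/2)) with U ∼ Unif(0, 1), and observe that the sample means do not settle down, consistent with EX being undefined.

```python
# Illustration (not part of the lecture): sample means of Cauchy draws do not settle down.
# Because EX is undefined, the law of large numbers does not apply.
import math
import random

def cauchy_draw():
    # If U ~ Unif(0, 1), then tan(pi * (U - 1/2)) has the standard Cauchy distribution.
    return math.tan(math.pi * (random.random() - 0.5))

for N in (10**3, 10**4, 10**5, 10**6):
    print(N, sum(cauchy_draw() for _ in range(N)) / N)  # the averages typically keep fluctuating
```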

Example 6. A fair coin is flipped until we see the first heads. Let n denote the number of
flips needed. If n is odd, you pay me $2^n$ dollars. If n is even, I pay you $2^n$ dollars. Let X
denote my net reward. Can I calculate my expected net reward?
Solution. We have

\[
P(X = 2^n) = 2^{-n} \text{ for odd } n \geq 1 \quad\text{and}\quad P(X = -2^n) = 2^{-n} \text{ for even } n \geq 1.
\]

If we try to compute EX, we have
\[
EX = 2^1 \cdot 2^{-1} + (-2^2) \cdot 2^{-2} + 2^3 \cdot 2^{-3} + (-2^4) \cdot 2^{-4} + \cdots = 1 - 1 + 1 - 1 + \cdots,
\]
which is a divergent series, so EX does not exist.

0.2 Expectation of a function of a random variable


If Y = g(X), how can we find EY ?
Example 7. Suppose that we roll a fair die, and the winnings or loss W of a player is as
follows:
\[
W = \begin{cases} -1 & \text{if the roll is 1, 2 or 3,} \\ 1 & \text{if the roll is 4,} \\ 2 & \text{if the roll is 5 or 6.} \end{cases}
\]

What is EW ?
If X denotes the outcome of the die roll, then W = g(X), where g(1) = g(2) = g(3) = −1,
g(4) = 1, g(5) = g(6) = 2.
Solution.
\[
EW = -1 \cdot P(W = -1) + 1 \cdot P(W = 1) + 2 \cdot P(W = 2) = (-1) \cdot \frac{1}{2} + 1 \cdot \frac{1}{6} + 2 \cdot \frac{1}{3} = \frac{1}{3}.
\]
There is an alternative way to think about this expectation. We can compute P(W = w)
by finding all the values of x for which g(x) = w and adding up the probabilities:

\[
\begin{aligned}
P(W = -1) &= P(X = 1) + P(X = 2) + P(X = 3), \\
P(W = 1) &= P(X = 4), \\
P(W = 2) &= P(X = 5) + P(X = 6).
\end{aligned}
\]

By substituting these values in the previous computation of EW , we get

\[
\begin{aligned}
Eg(X) = EW &= -1 \cdot P(W = -1) + 1 \cdot P(W = 1) + 2 \cdot P(W = 2) \\
&= (-1) \cdot \bigl(P(X = 1) + P(X = 2) + P(X = 3)\bigr) + 1 \cdot P(X = 4) \\
&\quad + 2 \cdot \bigl(P(X = 5) + P(X = 6)\bigr) \\
&= g(1)P(X = 1) + g(2)P(X = 2) + g(3)P(X = 3) \\
&\quad + g(4)P(X = 4) + g(5)P(X = 5) + g(6)P(X = 6) \\
&= \frac{1}{3}.
\end{aligned}
\]
We see that
\[
Eg(X) = \sum_{x=1}^{6} g(x) P(X = x).
\]

This is true in general:


Proposition 0.3. If X is a discrete random variable and g : R → R, then
\[
Eg(X) = \sum_{x} g(x) f_X(x).
\]

Proof. We just repeat the same calculations:


\[
\begin{aligned}
EY &= \sum_{y} y f_Y(y) \\
&= \sum_{y} y\, P(Y = y) \\
&= \sum_{y} y \sum_{x : g(x) = y} P(X = x) \\
&= \sum_{y} \sum_{x : g(x) = y} g(x) P(X = x) \\
&= \sum_{x} g(x) P(X = x).
\end{aligned}
\]
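A numerical check of Proposition 0.3 on the die example above (illustrative only): computing EW directly from the PMF of W, and via the formula $\sum_x g(x) f_X(x)$, gives the same value, 1/3.

```python
# Numerical check of Proposition 0.3 using the die example above (illustrative only).
# Both sides of E g(X) = sum_x g(x) f_X(x) give 1/3.
g = {1: -1, 2: -1, 3: -1, 4: 1, 5: 2, 6: 2}
fX = {x: 1 / 6 for x in range(1, 7)}

# Right-hand side: sum over the values of X.
rhs = sum(g[x] * fX[x] for x in range(1, 7))

# Left-hand side: first find the PMF of W = g(X), then sum over the values of W.
fW = {}
for x in range(1, 7):
    fW[g[x]] = fW.get(g[x], 0) + fX[x]
lhs = sum(w * p for w, p in fW.items())

print(lhs, rhs)  # both 0.3333...
```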

For continuous random variables we cannot repeat the same computation (since fX (x) does
not represent a probability), but the analogous statement still holds.
Proposition 0.4. If X is a continuous random variable and g : R → R, then
\[
Eg(X) = \int_{-\infty}^{\infty} g(x) f_X(x)\, dx.
\]

To show this, we need the following lemma.


Lemma 0.5. If Y is a nonnegative random variable, then
\[
EY = \int_0^{\infty} P(Y > t)\, dt.
\]

Proof. We have
\[
\begin{aligned}
EY &= \int_{\mathbb{R}} y\, dF_Y(y) \\
&= \int_{\mathbb{R}} \int_0^{y} dt\, dF_Y(y) \\
&= \int_{\mathbb{R}} \int_0^{\infty} I_{\{t < y\}}\, dt\, dF_Y(y) \\
&= \int_0^{\infty} \int_{\mathbb{R}} I_{\{y > t\}}\, dF_Y(y)\, dt \\
&= \int_0^{\infty} \int_t^{\infty} dF_Y(y)\, dt \\
&= \int_0^{\infty} P(Y > t)\, dt.
\end{aligned}
\]
Using the same idea, one can show that for a general random variable Y such that EY is
defined, one has
\[
EY = \int_0^{\infty} P(Y > t)\, dt - \int_0^{\infty} P(Y < -t)\, dt.
\]
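An illustrative numerical check of Lemma 0.5 for Y ∼ Exp(1) (the step size and cutoff below are arbitrary choices): a crude Riemann sum of P(Y > t) = e^{−t} over t ≥ 0 gives approximately 1 = EY.

```python
# Numerical check of Lemma 0.5 for Y ~ Exp(1) (illustrative; a crude Riemann sum is used).
# EY should equal the integral of P(Y > t) = exp(-t) over t >= 0, which is 1.
import math

dt, T = 0.001, 50.0
integral = sum(math.exp(-k * dt) * dt for k in range(int(T / dt)))
print(integral)  # approximately 1.0 = EY
```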
We now prove the proposition.
Proof. First,
\[
Eg(X) = \int_0^{\infty} P(g(X) > t)\, dt - \int_0^{\infty} P(g(X) < -t)\, dt.
\]
Note that
\[
\begin{aligned}
\int_0^{\infty} P(g(X) > t)\, dt &= \int_0^{\infty} \int_{\{x : g(x) > t\}} f_X(x)\, dx\, dt \\
&= \int_{\{x : g(x) > 0\}} \int_0^{g(x)} f_X(x)\, dt\, dx \\
&= \int_{\{x : g(x) > 0\}} g(x) f_X(x)\, dx.
\end{aligned}
\]

Similarly,
\[
\int_0^{\infty} P(g(X) < -t)\, dt = -\int_{\{x : g(x) < 0\}} g(x) f_X(x)\, dx.
\]
Combining, we obtain the desired result.
Example 8. A stick of length L is broken at a uniformly chosen random location. What is
the expected length of the longer piece?
Solution. Let the interval [0, L] represent the stick and let X ∼ Unif[0, L] be the position
where the stick is broken. Let g(x) denote the length of the longer piece:
\[
g(x) = \begin{cases} L - x & \text{if } 0 \leq x \leq L/2, \\ x & \text{if } L/2 \leq x \leq L. \end{cases}
\]
Then
\[
Eg(X) = \int_{-\infty}^{\infty} g(x) f_X(x)\, dx = \int_0^{L/2} \frac{L - x}{L}\, dx + \int_{L/2}^{L} \frac{x}{L}\, dx = \frac{3L}{4}.
\]
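A Monte Carlo check of Example 8 (illustrative; L = 1 is an arbitrary choice): the average length of the longer piece should be close to 3L/4.

```python
# Monte Carlo check of Example 8 (illustrative; L = 1.0 is an arbitrary choice).
# The average length of the longer piece should be close to 3 * L / 4.
import random

L, N = 1.0, 100_000
avg = sum(max(x, L - x) for x in (random.uniform(0, L) for _ in range(N))) / N
print(avg, 3 * L / 4)  # e.g. roughly 0.750 vs 0.75
```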

0.3 Variance
Definition 0.6. Let X be a random variable with mean µ. The variance of X is defined by

\[
\mathrm{Var}(X) = E(X - \mu)^2.
\]

Another symbol: σ² = Var(X).


The square root of the variance is called the standard deviation, σ = √Var(X).
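As a small worked illustration of Definition 0.6 (not from the lecture): for a fair die roll, µ = 3.5 and Var(X) = 35/12 ≈ 2.917.

```python
# Illustration of Definition 0.6 (not from the lecture): variance of a fair die roll.
# mu = 3.5 and Var(X) = E(X - mu)^2 = 35/12 ~= 2.9167.
values = range(1, 7)
mu = sum(x / 6 for x in values)
var = sum((x - mu) ** 2 / 6 for x in values)
sigma = var ** 0.5
print(mu, var, sigma)  # 3.5, 2.9166..., 1.7078...
```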
