CH 6 Slides
Kuan Xu
University of Toronto
kuan.xu@utoronto.ca
1 Introduction
6 Order Statistics
U is the daily profit in $100s. With a price of $300 per ton of pure sugar and
a fixed overhead cost of $100 per day, the daily profit function is
U = 3Y − 1, where Y has density f(y) = 2y for 0 ≤ y ≤ 1. Find the probability density function of U.
Solution: From U = 3Y − 1, we get y = (u + 1)/3. Corresponding to
0 ≤ y ≤ 1, we have −1 ≤ u ≤ 2. For −1 ≤ u ≤ 2,

F_U(u) = P(U ≤ u) = P(3Y − 1 ≤ u) = P(Y ≤ (u + 1)/3).

If u < −1, then (u + 1)/3 < 0 and F_U(u) = P(3Y − 1 ≤ u) = 0.
If u > 2, then (u + 1)/3 > 1 and F_U(u) = P(3Y − 1 ≤ u) = 1.
The Method of Distribution Functions (3)
Solution (continued):
As we know f(y), we can integrate f(y) to get F_U(u):

F_U(u) = P(3Y − 1 ≤ u) = ∫_{−∞}^{(u+1)/3} f(y) dy = ∫_0^{(u+1)/3} 2y dy = [(u + 1)/3]^2.
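A quick Monte Carlo check of this CDF (an added sketch, not part of the original slides): since F_Y(y) = y^2 on [0, 1], Y can be simulated as the square root of a Uniform(0, 1) draw, and the empirical distribution of U = 3Y − 1 should match [(u + 1)/3]^2. The sample size and seed below are arbitrary.

import numpy as np

rng = np.random.default_rng(227)

# F_Y(y) = y^2 on [0, 1], so Y = sqrt(V) with V ~ Uniform(0, 1) has density 2y.
y = np.sqrt(rng.uniform(size=1_000_000))
u = 3 * y - 1                                # daily profit in $100s

for u0 in (-0.5, 0.0, 0.5, 1.0, 1.5):
    empirical = np.mean(u <= u0)             # Monte Carlo estimate of F_U(u0)
    exact = ((u0 + 1) / 3) ** 2              # CDF derived above
    print(f"u = {u0:+.1f}: empirical {empirical:.4f} vs exact {exact:.4f}")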
Example (U = h(Y1, Y2)): Revisit the oil tank example in Ch5. Y1 and Y2
have the joint density function

f(y1, y2) =
  3y1,  0 ≤ y2 ≤ y1 ≤ 1,
  0,    elsewhere.

Find the density function of U = Y1 − Y2.
Solution (continued): For 0 ≤ u ≤ 1, F_U(u) = P(Y1 − Y2 ≤ u) is the integral over the dark shaded region
above the line y1 − y2 = u. But it is easier to integrate over the lower triangular region. We write, for 0 ≤ u ≤ 1,

F_U(u) = P(U ≤ u) = 1 − P(U ≥ u) = 1 − ∫_u^1 ∫_0^{y1−u} 3y1 dy2 dy1

= 1 − ∫_u^1 3y1(y1 − u) dy1 = 1 − 3[y1^3/3 − u y1^2/2]_u^1 = 1 − 3[1/3 − u/2 − u^3/3 + u^3/2]

= 1 − [1 − 3u/2 + u^3/2] = (3u − u^3)/2.
Summarizing,

F_U(u) =
  0,             u < 0,
  (3u − u^3)/2,  0 ≤ u ≤ 1,
  1,             u > 1.

It follows that

f_U(u) = dF_U(u)/du =
  3(1 − u^2)/2,  0 ≤ u ≤ 1,
  0,             elsewhere.
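As an added sanity check (not from the slides): under f(y1, y2) = 3y1 on 0 ≤ y2 ≤ y1 ≤ 1, the marginal density of Y1 is 3y1^2, so Y1 can be simulated as the cube root of a Uniform(0, 1) draw, and given Y1 = y1, Y2 is uniform on (0, y1). Simulating U = Y1 − Y2 should then reproduce F_U(u) = (3u − u^3)/2; the seed and sample size are arbitrary.

import numpy as np

rng = np.random.default_rng(227)
n = 1_000_000

# Marginal of Y1 is 3*y1**2 on [0, 1]  =>  F(y1) = y1**3  =>  Y1 = V**(1/3).
y1 = rng.uniform(size=n) ** (1 / 3)
# Given Y1 = y1, the conditional density of Y2 is 1/y1 on [0, y1]: uniform.
y2 = y1 * rng.uniform(size=n)
u = y1 - y2

for u0 in (0.1, 0.3, 0.5, 0.7, 0.9):
    empirical = np.mean(u <= u0)
    exact = (3 * u0 - u0**3) / 2             # F_U(u) derived above
    print(f"u = {u0:.1f}: empirical {empirical:.4f} vs exact {exact:.4f}")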
Solution (continued): Find f_U(u). For u > 0, the transformation u = h(y) = y^2 is shown in the figure.

Figure: u = h(y) = y^2
Example: Let Y have an exponential density with mean β:

f_Y(y) =
  (1/β) e^{−y/β},  0 ≤ y < ∞,
  0,               elsewhere.

Also E(Y) = β and V(Y) = β^2. The exponential distribution is a special case of the gamma distribution (see Ch4).
For y < 0, F_Y(y) = 0. For 0 ≤ y < ∞,

F_Y(y) = P(Y ≤ y) = ∫_0^y f_Y(u) du = ∫_0^y (1/β) e^{−u/β} du = [−e^{−u/β}]_0^y = 1 − e^{−y/β}.

Hence,

F_Y(y) =
  1 − e^{−y/β},  0 ≤ y < ∞,
  0,             elsewhere.
Solution (continued): F_Y(y) is strictly increasing in y on the interval [0, ∞). Now look at U, which takes any
value u on the interval (0, 1). The link between U and Y can be described by F_Y(y) = u or, in this case, by

F_Y(y) = 1 − e^{−y/β} = u  ⇒  1 − u = e^{−y/β}  ⇒  ln(1 − u) = −y/β
⇒  y = −β ln(1 − u) = F_Y^{−1}(u).

Based on the above, consider the random variable F_Y^{−1}(U) = −β ln(1 − U). If y > 0,

P(F_Y^{−1}(U) ≤ y) = P[−β ln(1 − U) ≤ y]
= P[ln(1 − U) ≥ −y/β]
= P(U ≤ 1 − e^{−y/β})
= 1 − e^{−y/β}.
Solution (continued): The last equality in the above derivation can be explained by the following figure. As U
follows a uniform distribution on (0, 1), its cumulative distribution function is a straight line from the origin to (1, 1). Therefore,
P(U ≤ 1 − e^{−y/β}) = 1 − e^{−y/β}.
Figure: the CDF of U, a straight line from (0, 0) to (1, 1), evaluated at u = 1 − e^{−y/β} (0 ≤ y < ∞, 0 ≤ u ≤ 1).
Thus, F_Y^{−1}(U) = −β ln(1 − U) follows an exponential distribution with mean β > 0, as desired.
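This is the inverse-transform method for generating exponential draws from uniform random numbers. Below is a minimal sketch of that recipe (added here for illustration; the value β = 2 and the seed are arbitrary choices, not from the slides).

import numpy as np

rng = np.random.default_rng(227)
beta = 2.0                                   # illustrative mean, chosen arbitrarily

u = rng.uniform(size=1_000_000)              # U ~ Uniform(0, 1)
y = -beta * np.log(1 - u)                    # F_Y^{-1}(U)

print("sample mean    :", y.mean())          # should be close to beta
print("sample variance:", y.var())           # should be close to beta**2
print("P(Y <= beta)   :", np.mean(y <= beta), "vs theory", 1 - np.exp(-1.0))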
f_U(u) = f_Y(h^{−1}(u)) · d[h^{−1}(u)]/du,

or

F_U(u) = F_Y(h^{−1}(u)).

For the sugar example, U = h(Y) = 3Y − 1 and h^{−1}(u) = (u + 1)/3. Differentiate F_U(u) w.r.t. u:

d[h^{−1}(u)]/du = d[(u + 1)/3]/du = 1/3.

Also note that, corresponding to 0 ≤ y ≤ 1, −1 ≤ u ≤ 2. For −1 ≤ u ≤ 2,

f_U(u) = f_Y(h^{−1}(u)) · d[h^{−1}(u)]/du = 2[(u + 1)/3](1/3) = 2(u + 1)/9.
The Method of Transformations (5)
Solution (continued):
Therefore,

f_U(u) =
  2(u + 1)/9,  −1 ≤ u ≤ 2,
  0,           elsewhere.
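A small symbolic check of this transformation-method calculation (an addition, using sympy; h_inv and f_Y are just local names for this sketch):

import sympy as sp

u = sp.symbols("u")
h_inv = (u + 1) / 3                          # inverse of h(y) = 3y - 1
f_Y = lambda y: 2 * y                        # density of Y on [0, 1]

f_U = f_Y(h_inv) * sp.Abs(sp.diff(h_inv, u))
print(sp.simplify(f_U))                      # a form of 2*(u + 1)/9
print(sp.integrate(f_U, (u, -1, 2)))         # 1, so f_U is a proper density on [-1, 2]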
Remarks: In this case, we do not use the absolute value of d[h^{−1}(u)]/du because it
is always positive. This is not the case when u = h(y) is decreasing in y; then

f_U(u) = −f_Y(h^{−1}(u)) · d[h^{−1}(u)]/du.

Note that d[h^{−1}(u)]/du < 0, but the negative sign in front of f_Y(h^{−1}(u)) keeps the density nonnegative.
Combining the cases where u = h(y) is increasing and where it is decreasing in y:

f_U(u) = f_Y(h^{−1}(u)) |d[h^{−1}(u)]/du|,

and f_U(u) = 0 elsewhere.
Solution (continued):
Second, implement step 2:

f_U(u) = ∫_{−∞}^{∞} g_{Y1,U}(y1, u) dy1
=
  ∫_0^u e^{−u} dy1 = [y1 e^{−u}]_0^u = u e^{−u},  0 ≤ u,
  0,                                              elsewhere.
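The density u e^{−u} on [0, ∞) is the gamma density with shape 2 and scale 1. A short symbolic check (my addition) that it integrates to 1 and has mean 2 and variance 2:

import sympy as sp

u = sp.symbols("u", positive=True)
f_U = u * sp.exp(-u)                             # density derived above, on [0, oo)

print(sp.integrate(f_U, (u, 0, sp.oo)))          # 1: a proper density
print(sp.integrate(u * f_U, (u, 0, sp.oo)))      # E(U) = 2
print(sp.integrate(u**2 * f_U, (u, 0, sp.oo)))   # E(U^2) = 6, so V(U) = 6 - 4 = 2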
Remarks:
Suppose that X and Y have moment-generating functions m_X(t) and m_Y(t), respectively, and assume
that their probability density functions exist as well. Then

m_X(t) = E[e^{tX}] = ∫_R e^{tx} f_X(x) dx

and

m_Y(t) = E[e^{tY}] = ∫_R e^{ty} f_Y(y) dy,

where R denotes the set of all real numbers.
We rewrite the arguments:
The Method of Moment-Generating Functions (2)
We are given that (or can deduce from F_X(z) = F_Y(z)) X and Y have the
same probability density function (or the same probability function):
f_X(z) = f_Y(z).
Hence we can rewrite m_Y(t):

m_Y(t) = E[e^{tY}] = ∫_R e^{tz} f_X(z) dz.

Therefore, m_Y(t) = m_X(t).
Consider the density

[1/(√(2π) (1 − 2t)^{−1/2})] e^{−z^2 / (2[(1 − 2t)^{−1/2}]^2)},

which is a normal density with mean 0 and standard deviation (1 − 2t)^{−1/2}.
This density function integrated from −∞ to ∞ is equal to 1. Because we
have divided (1/√(2π)) e^{−z^2/(2(1 − 2t)^{−1})} by (1 − 2t)^{−1/2}, we must make an adjustment:

m_{Z^2}(t) = [1/(1 − 2t)^{1/2}](1) = 1/(1 − 2t)^{1/2},

if t < 1/2.

dm_{Z^2}(t)/dt |_{t=0} = [(1 − 2t)^{−1/2 − 1}](−1/2)(−2) |_{t=0} = [(1 − 2t)^{−3/2}]_{t=0} = 1,

so E(Z^2) = 1.
Theorem 6.2—If U = Σ_{i=1}^n Y_i, then m_U(t) = Π_{i=1}^n m_{Y_i}(t)
Let Y_1, Y_2, . . . , Y_n be independent random variables with
moment-generating functions m_{Y_1}(t), m_{Y_2}(t), . . . , m_{Y_n}(t), respectively. If
U = Y_1 + Y_2 + · · · + Y_n, then

m_U(t) = m_{Y_1}(t) × m_{Y_2}(t) × · · · × m_{Y_n}(t).
Remarks:
The Poisson probability function is

p(y) = λ^y e^{−λ}/y!,  y = 0, 1, 2, 3, . . . .
Now we define a new set of continuous random variables Y_1, Y_2, . . . , Y_n, each with an exponential
density with mean θ. For t < 1/θ,

m_{Y_i}(t) = ∫_0^∞ e^{t y_i} (1/θ) e^{−y_i/θ} dy_i = ∫_0^∞ (1/θ) e^{−y_i(1 − θt)/θ} dy_i
= [−(1 − θt)^{−1} e^{−y_i(1 − θt)/θ}]_0^∞
= 0 + (1 − θt)^{−1}(1)
= (1 − θt)^{−1}.
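A quick Monte Carlo confirmation of this MGF (an added check; θ = 2 and t = 0.1 are arbitrary values satisfying t < 1/θ):

import numpy as np

rng = np.random.default_rng(227)
theta, t = 2.0, 0.1                          # illustrative values with t < 1/theta

y = rng.exponential(scale=theta, size=2_000_000)
print("Monte Carlo E[e^{tY}]:", np.mean(np.exp(t * y)))
print("(1 - theta*t)**(-1)  :", 1 / (1 - theta * t))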
Theorem 6.3—m_U(t) for U = Σ_{i=1}^n a_i Y_i, where the Y_i ~ N(µ_i, σ_i^2) are independent
Let Y_1, Y_2, . . . , Y_n be independent normally distributed random variables
with E(Y_i) = µ_i and V(Y_i) = σ_i^2, for i = 1, 2, . . . , n, and let a_1, a_2, . . . , a_n
be constants. If U = Σ_{i=1}^n a_i Y_i, then U is a normally distributed random
variable with

E(U) = Σ_{i=1}^n a_i µ_i

and

V(U) = Σ_{i=1}^n a_i^2 σ_i^2.
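A simulation sketch of Theorem 6.3 (added here; the particular µ_i, σ_i, a_i and the seed are arbitrary illustrative choices):

import numpy as np

rng = np.random.default_rng(227)

mu = np.array([1.0, -2.0, 0.5])              # illustrative means
sigma = np.array([1.0, 2.0, 0.5])            # illustrative standard deviations
a = np.array([2.0, -1.0, 3.0])               # illustrative constants

y = rng.normal(loc=mu, scale=sigma, size=(1_000_000, 3))
u = y @ a                                    # U = sum_i a_i * Y_i

print("sample mean:", u.mean(), "theory:", a @ mu)
print("sample var :", u.var(),  "theory:", (a**2 * sigma**2).sum())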
d^2 m_Y(t)/dt^2 = d[(µ + tσ^2) e^{µt + t^2σ^2/2}]/dt.

Recall d(uv)/dx = u v′ + v u′.

d[(µ + tσ^2) e^{µt + t^2σ^2/2}]/dt |_{t=0} = [(µ + tσ^2)^2 e^{µt + t^2σ^2/2} + σ^2 e^{µt + t^2σ^2/2}]_{t=0}
= µ^2 + σ^2 = E(Y^2).

V(Y) = E(Y^2) − [E(Y)]^2 = µ^2 + σ^2 − µ^2 = σ^2.
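The same moment calculations can be checked symbolically by differentiating the normal MGF (an added sketch using sympy):

import sympy as sp

t, mu, sigma = sp.symbols("t mu sigma")
m_Y = sp.exp(mu * t + sp.Rational(1, 2) * t**2 * sigma**2)   # normal MGF

EY = sp.diff(m_Y, t).subs(t, 0)              # E(Y)
EY2 = sp.diff(m_Y, t, 2).subs(t, 0)          # E(Y^2)
print(EY)                                    # mu
print(sp.expand(EY2))                        # mu**2 + sigma**2
print(sp.simplify(EY2 - EY**2))              # sigma**2, the variance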
Theorem 6.4—U = Σ_{i=1}^n Z_i^2 ~ χ^2(n) if the Z_i are i.i.d. N(0, 1)
Let Y_i ~ N(µ_i, σ_i^2) independently for i = 1, 2, . . . , n. Then

Z_i = (Y_i − µ_i)/σ_i ~ i.i.d. N(0, 1)

for i = 1, 2, . . . , n, and

U = Σ_{i=1}^n Z_i^2 ~ χ^2(n).
Remarks:
It is known that for a χ^2-distributed random variable
U = Σ_{i=1}^n Z_i^2 ~ χ^2(n), E(U) = n and V(U) = 2n.
Apply Theorem 6.2 to U = Σ_{i=1}^n Z_i^2 to get

m_U(t) = E(e^{tU}) = E(e^{t Σ_{i=1}^n Z_i^2}) = [(1 − 2t)^{−1/2}]^n = (1 − 2t)^{−n/2}.

E(U) = dm_U(t)/dt |_{t=0} = n(1 − 2t)^{−(n+2)/2} |_{t=0} = n(1) = n.

E(U^2) = d^2 m_U(t)/dt^2 |_{t=0} = d[n(1 − 2t)^{−(n+2)/2}]/dt |_{t=0}
= (−2)[−(n + 2)/2] n(1 − 2t)^{−(n+4)/2} |_{t=0} = (n + 2)n(1) = n^2 + 2n.

V(U) = E(U^2) − [E(U)]^2 = n^2 + 2n − n^2 = 2n.
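A simulation check of these chi-square moments (added; n = 5, the sample size, and the seed are arbitrary):

import numpy as np

rng = np.random.default_rng(227)
n = 5                                        # illustrative degrees of freedom

z = rng.standard_normal(size=(1_000_000, n))
u = (z**2).sum(axis=1)                       # U = sum of n squared standard normals

print("sample mean:", u.mean(), "theory:", n)        # E(U) = n
print("sample var :", u.var(),  "theory:", 2 * n)    # V(U) = 2n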
Show how to get the density function for Y_(n). Because Y_(n) is the
maximum and Y_(n) ≤ y occurs only if all Y_i ≤ y (i = 1, 2, . . . , n) occur,

P(Y_(n) ≤ y) = P(Y_1 ≤ y, Y_2 ≤ y, . . . , Y_n ≤ y) = [F(y)]^n by independence.

Take the derivative of F_{Y_(n)}(y) = [F(y)]^n w.r.t. y to get the density function
of Y_(n), g_(n)(y):

g_(n)(y) = n[F(y)]^{n−1} f(y).
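A small simulation check of this formula (added, using Uniform(0, 1) samples so that F(y) = y and g_(n)(y) = n y^{n−1}; n = 4 and the seed are arbitrary):

import numpy as np

rng = np.random.default_rng(227)
n = 4                                        # illustrative sample size

y = rng.uniform(size=(1_000_000, n))
y_max = y.max(axis=1)                        # Y_(n), the sample maximum

# For Uniform(0, 1): F(y) = y, so F_{Y(n)}(y) = y**n and g_(n)(y) = n*y**(n-1).
for y0 in (0.25, 0.5, 0.75, 0.9):
    print(f"P(Y(n) <= {y0}): empirical {np.mean(y_max <= y0):.4f} vs exact {y0**n:.4f}")
print("sample mean of Y(n):", y_max.mean(), "theory:", n / (n + 1))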
For y_1 > y_2, the event {Y_(1) ≤ y_1, Y_(2) ≤ y_2} reduces to {Y_(2) ≤ y_2}, so

F_{Y_(1),Y_(2)}(y_1, y_2) = P(Y_(2) ≤ y_2) = [F(y_2)]^2.
To summarize the two cases, the joint distribution function of Y_(1) and
Y_(2) is

F_{Y_(1),Y_(2)}(y_1, y_2) =
  2F(y_1)F(y_2) − [F(y_1)]^2,  y_1 ≤ y_2,
  [F(y_2)]^2,                  y_1 > y_2.
Differentiating F_{Y_(1),Y_(2)}(y_1, y_2) w.r.t. y_2 first and y_1 second gives
the joint density of Y_(1) and Y_(2):

g_(1)(2)(y_1, y_2) =
  2f(y_1)f(y_2),  y_1 ≤ y_2,
  0,              y_1 > y_2.
Find the joint density function of Y_(1), Y_(2), . . . , Y_(n). We state the result
without proof:

g_(1)(2)···(n)(y_1, y_2, . . . , y_n) =
  n! f(y_1) f(y_2) · · · f(y_n),  y_1 ≤ y_2 ≤ · · · ≤ y_n,
  0,                              elsewhere.
Assume that two such parts operate independently in a system but the
system fails when either part fails. Let X be the length of life of the
system. Find the density function of X .
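Since the system fails as soon as either part fails, X = min(Y1, Y2) = Y_(1), whose density is g_(1)(y) = n[1 − F(y)]^{n−1} f(y), the analogue of the result for the maximum. The sketch below assumes, purely for illustration, that each part's lifetime is exponential with mean β = 2 (the actual part distribution comes from the earlier slides); under that assumption X is exponential with mean β/2.

import numpy as np

rng = np.random.default_rng(227)
beta = 2.0                                   # assumed part lifetime mean (illustration only)

parts = rng.exponential(scale=beta, size=(1_000_000, 2))
x = parts.min(axis=1)                        # X = Y_(1): system fails at the first part failure

# g_(1)(y) = 2[1 - F(y)] f(y) = (2/beta) e^{-2y/beta}: exponential with mean beta/2.
print("sample mean of X:", x.mean(), "theory:", beta / 2)
for y0 in (0.5, 1.0, 2.0):
    print(f"P(X <= {y0}): empirical {np.mean(x <= y0):.4f} vs exact {1 - np.exp(-2 * y0 / beta):.4f}")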