
ECO 227Y1 Foundations of Econometrics

Kuan Xu

University of Toronto
kuan.xu@utoronto.ca

February 28, 2024

Kuan Xu (UofT) ECO 227 February 28, 2024 1 / 65


Ch 6 Functions of Random Variables

1 Introduction

2 Finding the Probability Distribution of a Function of Random Variables

3 The Method of Distribution Functions

4 The Method of Transformations

5 The Method of Moment-Generating Functions

6 Order Statistics



Introduction (1)

The purpose of statistics is to study the distributions of functions of random variables.
So far we have not discussed how to do this.
Consider a simple example: we infer the population mean µ using the sample mean ȳ = (1/n) Σ_{i=1}^n y_i.
Typically, we are interested in whether the sample mean is unbiased and how accurate it is.
To learn this, we treat the sample mean Ȳ = (1/n) Σ_{i=1}^n Y_i as a random variable which is itself a function of the i.i.d. random variables Y_1, Y_2, . . . , Y_n.
We can study the distribution of Ȳ.
In this course and in the remainder of the textbook, we assume that Y_1, Y_2, . . . , Y_n are in fact independent of one another and that these variables share a common probability/density function.



Introduction (2)

More specifically, their joint probability function is

p(y1 , y2 , . . . , yn ) = p(y1 )p(y2 ) · · · p(yn )

if they are discrete random variables. Their joint density function is

f (y1 , y2 , . . . , yn ) = f (y1 )f (y2 ) · · · f (yn )

if they are continuous random variables.



Finding the Probability Distribution of a Function of
Random Variables (1)

The method of distribution functions


The method of transformations
The method of moment-generating functions
Multivariable transformations using Jacobians (not pursued in this
course)



The Method of Distribution Functions (1)

General steps of this method: Let U be a function of random variables


Y1 , Y2 , . . . , Yn .
Find the region U = u in the (y1 , y2 , . . . , yn ) space.
Find the region U ≤ u.
Find FU (u) = P(U ≤ u) by integrating f (y1 , y2 , . . . , yn ) over the
region U ≤ u.
Find the density function fU (u) by differentiating FU (u). Thus,
fU (u) = dFU (u)/du.



The Method of Distribution Functions (2)
We use some examples to explain the method.
Example (U = h(Y)): Y is the amount of pure sugar refined per day, up to 1 ton. It has density function

f(y) = 2y for 0 ≤ y ≤ 1, and f(y) = 0 elsewhere.

U is the daily profit in $100. With a price of $300 per ton of pure sugar and a fixed overhead cost of $100 per day, the daily profit function is U = 3Y − 1. Find the probability density function of U.
Solution: From U = 3Y − 1, we get y = (u + 1)/3. Corresponding to 0 ≤ y ≤ 1, −1 ≤ u ≤ 2. For −1 ≤ u ≤ 2,

FU(u) = P(U ≤ u) = P(3Y − 1 ≤ u) = P(Y ≤ (u + 1)/3).

If u < −1, then (u + 1)/3 < 0 and FU(u) = P(3Y − 1 ≤ u) = 0.
If u > 2, then (u + 1)/3 > 1 and FU(u) = P(3Y − 1 ≤ u) = 1.
The Method of Distribution Functions (3)
Solution (continued):
As we know f(y), we can integrate f(y) to get FU(u):

FU(u) = P(3Y − 1 ≤ u) = ∫_{−∞}^{(u+1)/3} f(y) dy = ∫_0^{(u+1)/3} 2y dy = ((u + 1)/3)².

Thus, the distribution function of U is

FU(u) = 0 for u < −1;  FU(u) = ((u + 1)/3)² for −1 ≤ u ≤ 2;  FU(u) = 1 for u > 2.

The density function of U is

fU(u) = dFU(u)/du = (2/9)(u + 1) for −1 ≤ u ≤ 2, and fU(u) = 0 elsewhere.
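As a quick numerical check (my addition, not part of the original slides), we can draw Y with density 2y by inverse-transform sampling (FY(y) = y², so Y = √V for V uniform on (0, 1)) and compare the empirical distribution of U = 3Y − 1 with the derived FU; the seed and sample size are arbitrary choices:

```python
import math
import random

random.seed(0)
N = 200_000

# F_Y(y) = y^2 on [0, 1], so Y = sqrt(V) for V ~ Uniform(0, 1)
u_samples = [3 * math.sqrt(random.random()) - 1 for _ in range(N)]

def F_U(u):
    """Derived distribution function of U = 3Y - 1."""
    if u < -1:
        return 0.0
    if u > 2:
        return 1.0
    return ((u + 1) / 3) ** 2

for u in (-0.5, 0.5, 1.5):
    empirical = sum(x <= u for x in u_samples) / N
    print(f"u = {u:+.1f}: empirical {empirical:.4f}, derived {F_U(u):.4f}")
```

The two columns should agree to roughly two decimal places at this sample size.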



The Method of Distribution Functions (4)

Example (U = h(Y1, Y2)): Revisit the oil tank example in Ch 5. Y1 and Y2 have the joint density function

f(y1, y2) = 3y1 for 0 ≤ y2 ≤ y1 ≤ 1, and f(y1, y2) = 0 elsewhere.

We are interested in U = Y1 − Y2 (a specific form of U = h(Y1, Y2)), where Y1 is the proportional amount of oil at the beginning of a week and Y2 is the proportional amount sold during the week. Find the density of U and E(U).



The Method of Distribution Functions (5)
Solution:

Fig. 6.1, p. 300

Figure: Region over which f (y1 , y2 ) is positive

Consider the line y1 − y2 = u for a value of u between 0 and 1. (The line is drawn as y2 = y1 − u.)
If u < 0, the line y1 − y2 = u has intercept −u > 0, so it lies above the support of f and
FU(u) = P(Y1 − Y2 ≤ u) = 0.
If u > 1, the line y1 − y2 = u has intercept −u < −1 and
FU(u) = P(Y1 − Y2 ≤ u) = 1.
The Method of Distribution Functions (6)

Solution (continued): For 0 ≤ u ≤ 1, FU(u) = P(Y1 − Y2 ≤ u) is the integral of f over the shaded region above the line y1 − y2 = u. But it is easier to integrate over the lower triangular region. We write, for 0 ≤ u ≤ 1,

FU(u) = P(U ≤ u) = 1 − P(U > u) = 1 − ∫_u^1 ∫_0^{y1−u} 3y1 dy2 dy1
= 1 − ∫_u^1 3y1(y1 − u) dy1 = 1 − 3[ y1³/3 − u y1²/2 ]_u^1 = 1 − 3( 1/3 − u/2 − u³/3 + u³/2 )
= 1 − ( 1 − 3u/2 + u³/2 ) = (3u − u³)/2.

Summarizing,

FU(u) = 0 for u < 0;  FU(u) = (3u − u³)/2 for 0 ≤ u ≤ 1;  FU(u) = 1 for u > 1.

It follows that

fU(u) = dFU(u)/du = 3(1 − u²)/2 for 0 ≤ u ≤ 1, and fU(u) = 0 elsewhere.

See the graphs for FU and fU on the next slide.



The Method of Distribution Functions (7)

Solution (continued):

Fig. 6.2, p. 301

Figure: Distribution and Density Functions



The Method of Distribution Functions (8)

Solution (continued): Find

E(U) = ∫_0^1 u · (3/2)(1 − u²) du = (3/2)[ u²/2 − u⁴/4 ]_0^1 = 3/8.
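A simulation sketch (my addition): the marginal density of Y1 is ∫_0^{y1} 3y1 dy2 = 3y1² on [0, 1], so Y1 = V^{1/3} for V uniform, and given Y1 = y1, Y2 is uniform on [0, y1]. The sample mean of U = Y1 − Y2 should then approach 3/8:

```python
import random

random.seed(1)
N = 400_000

total = 0.0
for _ in range(N):
    y1 = random.random() ** (1 / 3)  # marginal density of Y1 is 3*y1^2 on [0, 1]
    y2 = y1 * random.random()        # given Y1 = y1, Y2 is uniform on [0, y1]
    total += y1 - y2

print(total / N)  # should be close to E(U) = 3/8 = 0.375
```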



The Method of Distribution Functions (9)
Example (U = h(Y) = Y²): To reinforce the steps of the distribution function method given before, let U = h(Y) = Y², where Y is a continuous random variable with distribution function FY(y) and density function fY(y). For u ≤ 0,

FU(u) = P(U ≤ u) = P(Y² ≤ u) = 0.

Find fU(u).
Solution: For u > 0,

FU(u) = P(U ≤ u) = P(Y² ≤ u)
= P(−√u ≤ Y ≤ √u)
= ∫_{−√u}^{√u} fY(y) dy
= FY(√u) − FY(−√u).



The Method of Distribution Functions (10)

Solution (continued): Graphically, Y and U = Y 2 are related by


u = h(y ) = y 2 ; see

Fig. 6.7, p. 305

Figure: u = h(y ) = y 2



The Method of Distribution Functions (11)

Solution (continued): In general,

FU(u) = FY(√u) − FY(−√u) for u > 0, and FU(u) = 0 otherwise.

Applying the chain rule,

dFY(√u)/du = fY(√u) · 1/(2√u)  and  dFY(−√u)/du = −fY(−√u) · 1/(2√u).

Taking the derivative of FU(u) w.r.t. u,

fU(u) = (1/(2√u)) [ fY(√u) + fY(−√u) ] for u > 0, and fU(u) = 0 otherwise.



The Method of Distribution Functions (12)
Example: In the above example, we specified neither FY(y) nor fY(y). Now we specify

fY(y) = (y + 1)/2 for −1 ≤ y ≤ 1, and fY(y) = 0 elsewhere.

Find fU(u) for U = Y².
Solution: Recall

fU(u) = (1/(2√u)) [ fY(√u) + fY(−√u) ] for u > 0, and fU(u) = 0 otherwise.

For −1 ≤ y ≤ 1, fY(y) = (y + 1)/2 and y = ±√u, where 0 < u ≤ 1. Substituting y = ±√u into fY(y), we obtain

fU(u) = (1/(2√u)) [ (√u + 1)/2 + (−√u + 1)/2 ] = 1/(2√u) for 0 < u ≤ 1, and fU(u) = 0 elsewhere.
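A numerical check (my addition): here FY(y) = (y + 1)²/4 on [−1, 1], so Y = 2√V − 1 for V uniform, and the derived density 1/(2√u) integrates to FU(u) = √u on (0, 1]. The seed and sample size below are arbitrary:

```python
import math
import random

random.seed(2)
N = 200_000

# F_Y(y) = (y + 1)^2 / 4 on [-1, 1], so Y = 2*sqrt(V) - 1 for V ~ Uniform(0, 1)
u_samples = [(2 * math.sqrt(random.random()) - 1) ** 2 for _ in range(N)]

for u in (0.04, 0.25, 0.64):
    empirical = sum(x <= u for x in u_samples) / N
    print(f"P(U <= {u}): empirical {empirical:.4f}, derived sqrt(u) = {math.sqrt(u):.4f}")
```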



The Method of Distribution Functions (13)

Sometimes, to generate a random variable (say Y) with a specific distribution function (say, an exponential distribution function with mean β > 0 as shown below), we generate U with a uniform distribution on the interval (0, 1) and then transform U into that random variable Y.
Example (Example 6.5):
Let U be a uniform random variable on the interval (0, 1). Find a transformation Y = G(U) such that Y = G(U) possesses an exponential distribution with mean β > 0. Note that this transformation Y = G(U) imposes the restriction on the type of distribution that Y follows. This is unlike the previous case, where U = h(Y).
Solution:
Recall: For Y to be an exponentially distributed random variable with mean β > 0, it must have the density function

fY(y) = (1/β)e^{−y/β} for 0 ≤ y < ∞, and fY(y) = 0 elsewhere.

Also E(Y) = β and V(Y) = β². The exponential distribution is a special case of the gamma distribution (see Ch 4).



The Method of Distribution Functions (14)

Solution (continued): To find FY(y), recall that d(−e^{−y/β})/dy = (1/β)e^{−y/β}. For y < 0,

FY(y) = 0.

For 0 ≤ y < ∞,

FY(y) = P(Y ≤ y) = ∫_0^y fY(u) du = ∫_0^y (1/β)e^{−u/β} du = [−e^{−u/β}]_0^y = 1 − e^{−y/β}.

Hence,

FY(y) = 1 − e^{−y/β} for 0 ≤ y < ∞, and FY(y) = 0 elsewhere.



The Method of Distribution Functions (15)

Solution (continued): FY(y) is strictly increasing in y on the interval [0, ∞). Now look at U, which takes any value u on the interval (0, 1). The link between U and Y can be described by FY(y) = u or, in this case, by

FY(y) = 1 − e^{−y/β} = u ⇒ 1 − u = e^{−y/β} ⇒ ln(1 − u) = −y/β ⇒ y = −β ln(1 − u) = FY^{−1}(u).

Based on the above, consider the random variable FY^{−1}(U) = −β ln(1 − U). If y > 0,

P(FY^{−1}(U) ≤ y) = P[−β ln(1 − U) ≤ y]
= P[ln(1 − U) ≥ −y/β]
= P(U ≤ 1 − e^{−y/β})
= 1 − e^{−y/β}.



The Method of Distribution Functions (16)

Solution (continued): The last equality in the above derivation can be explained by the following figure. As U follows a uniform distribution on (0, 1), its cumulative distribution function is a straight line from the origin to (1, 1). Therefore, P(U ≤ 1 − e^{−y/β}) = 1 − e^{−y/β}.

Figure: the uniform CDF on (0, 1): P(U ≤ 1 − e^{−y/β}) = 1 − e^{−y/β}, for 0 ≤ y < ∞ and 0 ≤ u ≤ 1

Thus, FY−1 (U) = −β ln(1 − U) follows an exponential distribution with mean β > 0 as desired.
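The construction above is the inverse-transform method. A small sketch (my addition) generates exponential draws from uniforms exactly this way and checks the sample mean and variance against β and β²; the choices β = 3 and N = 200,000 are arbitrary:

```python
import math
import random

random.seed(3)
beta = 3.0
N = 200_000

# Inverse transform: Y = F_Y^{-1}(U) = -beta * ln(1 - U), U ~ Uniform(0, 1)
ys = [-beta * math.log(1 - random.random()) for _ in range(N)]

mean = sum(ys) / N
var = sum((y - mean) ** 2 for y in ys) / N
print(mean, var)  # should be close to beta = 3 and beta^2 = 9
```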



The Method of Transformations (1)

This method is an offshoot of the method of distribution functions.


Summary: Given Y with density function fY(y) and U = h(Y), find fU(u) in the following steps:
1. Find Y = h^{−1}(U) and y = h^{−1}(u);
2. Substitute y = h^{−1}(u) into fY(y);
3. Multiply fY(h^{−1}(u)) by |dh^{−1}(u)/du| to get

fU(u) = fY(h^{−1}(u)) |dh^{−1}(u)/du|.



The Method of Transformations (2)
It is important to note that u = h(y ) could be either increasing or
decreasing in y .
Suppose that u = h(y ) is increasing in y . See the following figure.

Fig. 6.8, p. 311

Figure: An Increasing Function



The Method of Transformations (3)

P(U ≤ u) = P(h(Y) ≤ u) = P[h^{−1}(h(Y)) ≤ h^{−1}(u)] = P[Y ≤ h^{−1}(u)]

or

FU(u) = FY(h^{−1}(u)).

Differentiate FU(u) w.r.t. u:

fU(u) = dFU(u)/du = dFY(h^{−1}(u))/du = fY(h^{−1}(u)) · d[h^{−1}(u)]/du.



The Method of Transformations (4)
Example:
Y (output) has a density function:
fY(y) = 2y for 0 ≤ y ≤ 1, and fY(y) = 0 elsewhere.

Let U = 3Y − 1 (profit). Find fU(u) by the transformation method.

Solution:
From u = 3y − 1, we know that u = h(y) = 3y − 1 is increasing in y. In addition, y = h^{−1}(u) = (u + 1)/3 is increasing in u. Note

dh^{−1}(u)/du = d[(u + 1)/3]/du = 1/3.

Also note that corresponding to 0 ≤ y ≤ 1, −1 ≤ u ≤ 2. For −1 ≤ u ≤ 2,

fU(u) = fY(h^{−1}(u)) · d[h^{−1}(u)]/du = 2((u + 1)/3)(1/3) = 2(u + 1)/9.
The Method of Transformations (5)

Solution (continued):
Therefore,

fU(u) = 2(u + 1)/9 for −1 ≤ u ≤ 2, and fU(u) = 0 elsewhere.

Remarks: In this case, we do not need the absolute value of d[h^{−1}(u)]/du because it is positive. This is not the case when u = h(y) is decreasing in y.



The Method of Transformations (6)
Let us consider the case where u = h(y ) is decreasing in y . See the
following figure.

Fig. 6.9, p. 312

Figure: A Decreasing Function



The Method of Transformations (7)

P(U ≤ u) = P[Y ≥ h^{−1}(u)]

or

FU(u) = 1 − FY[h^{−1}(u)].

Differentiate FU(u) w.r.t. u:

fU(u) = −fY(h^{−1}(u)) · d[h^{−1}(u)]/du.

Note that d[h^{−1}(u)]/du < 0, so the leading negative sign makes fU(u) positive.
Combining the cases where u = h(y) is increasing and decreasing in y:

fU(u) = fY(h^{−1}(u)) |dh^{−1}(u)/du|.



The Method of Transformations (8)
Example:
Let Y have the density function

fY(y) = 2y for 0 ≤ y ≤ 1, and fY(y) = 0 elsewhere.

Given U = h(Y) = −4Y + 3, find the density function fU(u).

Solution:
U = −4Y + 3 ⇒ U − 3 = −4Y ⇒ Y = (3 − U)/4. Therefore, Y = h^{−1}(U) = (3 − U)/4. Clearly, u = h(y) is decreasing in y and y = h^{−1}(u) is decreasing in u.
Note that, for 0 ≤ y ≤ 1, −1 ≤ u ≤ 3. Also note that

dh^{−1}(u)/du = d[(3 − u)/4]/du = −1/4.

For −1 ≤ u ≤ 3,

fU(u) = fY((3 − u)/4) · |−1/4| = 2((3 − u)/4)(1/4) = (3 − u)/8.
The Method of Transformations (9)

Solution (continued): For u outside [−1, 3],

fU (u) = 0.

Combining the above results:

fU(u) = (3 − u)/8 for −1 ≤ u ≤ 3, and fU(u) = 0 elsewhere.
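A simulation check (my addition) of the decreasing-transformation result: again drawing Y = √V so that fY(y) = 2y, the empirical CDF of U = −4Y + 3 should match the integral of (3 − t)/8 from −1 to u; seed and sample size are arbitrary:

```python
import math
import random

random.seed(4)
N = 200_000

# Y = sqrt(V) has density f_Y(y) = 2y on [0, 1]
u_samples = [-4 * math.sqrt(random.random()) + 3 for _ in range(N)]

def F_U(u):
    """Integral of (3 - t)/8 from -1 to u, valid for -1 <= u <= 3."""
    return (3 * u - u * u / 2 + 3.5) / 8

for u in (0.0, 1.0, 2.0):
    empirical = sum(x <= u for x in u_samples) / N
    print(f"u = {u}: empirical {empirical:.4f}, derived {F_U(u):.4f}")
```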



The Method of Transformations (10)
Example (a bivariate case): Y1 and Y2 have the joint density function

fY1,Y2(y1, y2) = e^{−(y1+y2)} for 0 ≤ y1, 0 ≤ y2, and fY1,Y2(y1, y2) = 0 elsewhere.

Given U = h(Y1, Y2) = Y1 + Y2, find the density function fU(u).
Solution:
Step 1: find the joint density function gY1,U(y1, u) from fY1,Y2(y1, y2);
Step 2: find the marginal density function fU(u) = ∫_{−∞}^{∞} gY1,U(y1, u) dy1.
First, implement step 1: Fix Y1 at y1 ≥ 0. Now U = y1 + Y2 = h(Y2) ⇒ Y2 = h^{−1}(U) = U − y1. For 0 ≤ y2, we have 0 ≤ u − y1.
Let gY1,U(y1, u) be the joint density for Y1 and U. For 0 ≤ y1 ≤ u,

gY1,U(y1, u) = fY1,Y2(y1, h^{−1}(u)) |dh^{−1}(u)/du| = e^{−(y1+u−y1)} (1) = e^{−u},

and gY1,U(y1, u) = 0 elsewhere.
The Method of Transformations (11)

Solution (continued):
Second, implement step 2:

fU(u) = ∫_{−∞}^{∞} gY1,U(y1, u) dy1 = ∫_0^u e^{−u} dy1 = [y1 e^{−u}]_0^u = u e^{−u} for 0 ≤ u,

and fU(u) = 0 elsewhere.
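As a check (my addition), the derived density u e^{−u} is the Gamma(α = 2, β = 1) density, whose CDF is 1 − e^{−u}(1 + u); simulating U = Y1 + Y2 for two independent unit exponentials should reproduce it:

```python
import math
import random

random.seed(5)
N = 200_000

# Sum of two independent exponential(mean 1) draws
u_samples = [random.expovariate(1.0) + random.expovariate(1.0) for _ in range(N)]

def F_U(u):
    """CDF obtained by integrating f_U(t) = t * exp(-t) from 0 to u."""
    return 1 - math.exp(-u) * (1 + u)

for u in (0.5, 1.0, 2.5):
    empirical = sum(x <= u for x in u_samples) / N
    print(f"u = {u}: empirical {empirical:.4f}, derived {F_U(u):.4f}")
```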



The Method of Moment-Generating Functions (1)
Theorem 6.1—X and Y have the Same Probability Distribution if
mX (t) = mY (t)
Let mX (t) and mY (t) denote the moment-generating functions of random
variables X and Y , respectively. If both moment-generating functions exist
and mX (t) = mY (t) for all values of t, then X and Y have the same
probability distribution.

Remarks:
Suppose that X and Y have moment-generating functions mX(t) and mY(t), respectively, and assume that their probability density functions exist as well. Then

mX(t) = E[e^{tX}] = ∫_R e^{tx} fX(x) dx

and

mY(t) = E[e^{tY}] = ∫_R e^{ty} fY(y) dy,

where R denotes the set of all real numbers.
We rewrite the arguments:
The Method of Moment-Generating Functions (2)

We are given that (or can deduce from FX(z) = FY(z)) X and Y have the same probability density function (or the same probability function):

fX(z) = fY(z).

Hence we can rewrite mY(t):

mY(t) = E[e^{tY}] = ∫_R e^{tz} fX(z) dz.

Therefore, mY(t) = mX(t).



The Method of Moment-Generating Functions (3)
Example:
Given that Y ∼ N(µ, σ²) and that Y − µ has moment-generating function mY−µ(t) = e^{t²σ²/2}, show that Z = (Y − µ)/σ ∼ N(0, 1).
Solution: The moment-generating function for Z is

mZ(t) = E(e^{tZ}) = E[e^{(t/σ)(Y−µ)}] = mY−µ(t/σ) = e^{(t/σ)²(σ²/2)} = e^{t²/2}.

Comparing mZ(t) = e^{t²/2} with the moment-generating function of the normally distributed random variable Y − µ, which equals e^{t²/2} when σ² = 1, we can conclude that Z ∼ N(0, 1).
Now find E(Z) and V(Z) using mZ(t):

dmZ(t)/dt |_{t=0} = t e^{t²/2} |_{t=0} = 0.

Recall, for f = uv, f′ = u′v + uv′. Then

d²mZ(t)/dt² |_{t=0} = d(t e^{t²/2})/dt |_{t=0} = (e^{t²/2} + t² e^{t²/2}) |_{t=0} = 1.
The Method of Moment-Generating Functions (4)

Alternatively, we can use a more direct approach. Z = (Y − µ)/σ is a linear transformation of Y ∼ N(µ, σ²). Therefore, Z ∼ N(?, ?).
The mean and variance of Z can be found using the expectation operations. More specifically,

E(Z) = (1/σ) E(Y − µ) = (1/σ)(µ − µ) = 0

and

V(Z) = (1/σ²) E(Y − µ)² = σ²/σ² = 1.

Therefore, Z is normally distributed with mean 0 and variance 1: Z ∼ N(0, 1).



The Method of Moment-Generating Functions (5)
Example:
Show that if Z ∼ N(0, 1), then Z² ∼ χ²(1).
The moment-generating function for Z² is

mZ²(t) = E(e^{tZ²}) = ∫_{−∞}^{∞} e^{tz²} (1/√(2π)) e^{−z²/2} dz = ∫_{−∞}^{∞} (1/√(2π)) e^{−(z²/2)(1−2t)} dz.

Assuming 1 − 2t > 0, or t < 1/2, we can treat (1 − 2t)^{−1} as the variance of a normal random variable with mean 0. Write the integrand as

(1/√(2π)) e^{−z²/(2(1−2t)^{−1})},

which is proportional to the density function of a normally distributed random variable. We must scale the above function into the density function by dividing it by the standard deviation (1 − 2t)^{−1/2}:



The Method of Moment-Generating Functions (6)

(1/(√(2π)(1 − 2t)^{−1/2})) e^{−z²/(2[(1−2t)^{−1/2}]²)}.

This density function, integrated from −∞ to ∞, is equal to 1. Because we have divided (1/√(2π)) e^{−z²/(2(1−2t)^{−1})} by (1 − 2t)^{−1/2}, we must make an adjustment by multiplying by the same quantity, so that we get

mZ²(t) = (1/(1 − 2t)^{1/2}) ∫_{−∞}^{∞} (1/(√(2π)(1 − 2t)^{−1/2})) e^{−z²/(2[(1−2t)^{−1/2}]²)} dz
= (1/(1 − 2t)^{1/2}) (1) = 1/(1 − 2t)^{1/2},

if t < 1/2.



The Method of Moment-Generating Functions (7)

It is known that if Z ∼ N(0, 1), then U = Z² ∼ χ²(1), the chi-square distribution with 1 degree of freedom, with E(U) = 1. Now check this statement using mZ²(t):

dmZ²(t)/dt |_{t=0} = (−1/2)(1 − 2t)^{−1/2−1}(−2) |_{t=0} = (1 − 2t)^{−3/2} |_{t=0} = 1.

It is known that the chi-square distribution is a special case of the gamma distribution with parameters α = v/2 and β = 2, where v is the degrees of freedom of the chi-square distribution.
Recall the density function of a gamma-distributed random variable Y (see Ch 4):

fY(y) = y^{α−1} e^{−y/β} / (β^α Γ(α)) for 0 ≤ y < ∞, and fY(y) = 0 elsewhere,

where Γ(α) = ∫_0^∞ y^{α−1} e^{−y} dy.



The Method of Moment-Generating Functions (8)

Substituting v = 1, α = v/2 = 1/2, and β = 2 into the above density function and replacing Y and y with U and u, respectively, we have

fU(u) = u^{−1/2} e^{−u/2} / (2^{1/2} Γ(1/2)) for 0 ≤ u < ∞, and fU(u) = 0 elsewhere.

Later, we will show that a chi-square-distributed random variable with v degrees of freedom can generally be constructed as U = Σ_{i=1}^v Zi², where the Zi are i.i.d. N(0, 1). The density function of U ∼ χ²(v) is

fU(u) = u^{v/2−1} e^{−u/2} / (2^{v/2} Γ(v/2)) for 0 ≤ u < ∞, and fU(u) = 0 elsewhere.

When v = 1, we have our special case U = Z² ∼ χ²(1).



The Method of Moment-Generating Functions (9)

Theorem 6.2—If U = Σ_{i=1}^n Yi, then mU(t) = Π_{i=1}^n mYi(t)
Let Y1, Y2, . . . , Yn be independent random variables with moment-generating functions mY1(t), mY2(t), . . . , mYn(t), respectively. If U = Y1 + Y2 + · · · + Yn, then

mU(t) = mY1(t) × mY2(t) × · · · × mYn(t).

Remarks: By independence,

mU(t) = E[e^{t(Y1+Y2+···+Yn)}] = E(e^{tY1} e^{tY2} · · · e^{tYn})
= E(e^{tY1}) × E(e^{tY2}) × · · · × E(e^{tYn}) = mY1(t) × mY2(t) × · · · × mYn(t).



The Method of Moment-Generating Functions (10)

Example: Let Y be a discrete random variable representing the number of customer arrivals at a checkout counter within a fixed time interval; it follows a Poisson probability distribution:

p(y) = λ^y e^{−λ} / y!, y = 0, 1, 2, 3, . . . .

Now we define a new set of continuous random variables Y1, Y2, . . . , Yn:

Y1 = the time until the 1st arrival;
Y2 = the time between the 1st and 2nd arrivals;
. . .
Yn = the time between the (n − 1)st and nth arrivals.



The Method of Moment-Generating Functions (11)
The Poisson distribution deals with the number of occurrences of rare events in a fixed time interval, while the exponential distribution deals with the time between occurrences of successive rare events as time flows continuously. Y1, Y2, . . . , Yn are time variables of this kind. These variables are continuous and follow the exponential density function:

fYi(yi) = (1/θ)e^{−yi/θ} for yi > 0, and fYi(yi) = 0 otherwise.

Recall that λ is the mean arrival rate per time interval. Let θ be the mean time between arrivals; it is given by 1 divided by the mean arrival rate, θ = 1/λ. Equivalently, the mean arrival rate per time interval is 1 divided by the mean time between arrivals, λ = 1/θ.
As shown in Ch 4, if Y follows an exponential distribution with the above density function, E(Y) = θ.
Find the probability density function for the waiting time from the opening of the checkout counter until the nth customer arrives. That is, we need to find the probability density function fU(u) for U = Σ_{i=1}^n Yi.
The Method of Moment-Generating Functions (12)
Solution:
First, find the moment-generating function for the exponentially distributed random variable Yi:

mYi(t) = E(e^{tYi}) = ∫_0^∞ e^{tyi} (1/θ)e^{−yi/θ} dyi = ∫_0^∞ (1/θ) e^{−yi(1−θt)/θ} dyi.

Recall that

∫ e^{ax} dx = (1/a) e^{ax}.

Hence

mYi(t) = ∫_0^∞ (1/θ) e^{−yi(1−θt)/θ} dyi = [ (−θ/(1 − θt)) (1/θ) e^{−yi(1−θt)/θ} ]_0^∞.



The Method of Moment-Generating Functions (13)

" # ∞
−1 1 −yi (1−θt)
mYi (t) = (1−θt) θ
e θ

θ 0

(1−θt)
h i
= −(1 − θt)−1 e −yi θ
0
= 0 + (1 − θt)−1 (1)
= (1 − θt)−1 .

Second, we apply Theorem 6.2 on U = ni=1 Yi to get


P

mU (t) = mY1 (t) × mY2 (t) × · · · × mYn (t)


= (1 − θt)−1 × (1 − θt)−1 × · · · × (1 − θt)−1
= (1 − θt)−n



The Method of Moment-Generating Functions (14)

This is the moment-generating function of a gamma-distributed random variable with α = n and β = θ.
Recall that the moment-generating function of a gamma-distributed random variable with parameters α and β is given in Ch 4:

mU(t) = 1/(1 − βt)^α for t < 1/β.
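A simulation sketch (my addition): with the arbitrary choices n = 4 and θ = 2, U = Σ Yi should have the Gamma(α = n, β = θ) moments E(U) = nθ and V(U) = nθ²:

```python
import random

random.seed(6)
n, theta, N = 4, 2.0, 100_000

# random.expovariate takes the rate 1/theta, giving exponential draws with mean theta
u_samples = [sum(random.expovariate(1 / theta) for _ in range(n)) for _ in range(N)]

mean = sum(u_samples) / N
var = sum((u - mean) ** 2 for u in u_samples) / N
print(mean, var)  # should be close to n*theta = 8 and n*theta^2 = 16
```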



The Method of Moment-Generating Functions (15)

Theorem 6.3—mU(t) for U = Σ_{i=1}^n ai Yi, where the Yi are independent N(µi, σi²)
Let Y1, Y2, . . . , Yn be independent normally distributed random variables with E(Yi) = µi and V(Yi) = σi², for i = 1, 2, . . . , n, and let a1, a2, . . . , an be constants. If U = Σ_{i=1}^n ai Yi, then U is a normally distributed random variable with

E(U) = Σ_{i=1}^n ai µi

and

V(U) = Σ_{i=1}^n ai² σi².



The Method of Moment-Generating Functions (16)
Remarks: To understand the above theorem, we take a few steps.
First, let Z ∼ N(0, 1). Find the moment-generating function of Z as follows:

mZ(t) = E(e^{Zt}) = ∫_{−∞}^{∞} e^{zt} (1/√(2π)) e^{−z²/2} dz
= ∫_{−∞}^{∞} (1/√(2π)) e^{zt − z²/2} dz = ∫_{−∞}^{∞} (1/√(2π)) e^{−z²/2 + zt − t²/2 + t²/2} dz
= ∫_{−∞}^{∞} (1/√(2π)) e^{−(z−t)²/2 + t²/2} dz = e^{t²/2} ∫_{−∞}^{∞} (1/√(2π)) e^{−(z−t)²/2} dz = e^{t²/2},

where the last integral equals 1 (it integrates a N(t, 1) density).
Second, let Y = a + bZ, where a and b are constants and Z ∼ N(0, 1). Find the moment-generating function for Y:

mY(t) = E(e^{tY}) = E(e^{t(a+bZ)}) = E(e^{ta} e^{tbZ}) = e^{ta} E(e^{tbZ}) = e^{ta} mZ(tb).
The Method of Moment-Generating Functions (17)

Let a = µ and b = σ. Then

mY(t) = e^{µt} mZ(tσ) = e^{µt} e^{(tσ)²/2} = e^{µt + t²σ²/2}.

This result can be verified: we can use mY(t) to show Y ∼ N(µ, σ²).

dmY(t)/dt |_{t=0} = d[e^{µt + t²σ²/2}]/dt |_{t=0} = (µ + tσ²) e^{µt + t²σ²/2} |_{t=0} = µ = E(Y).



The Method of Moment-Generating Functions (18)

d²mY(t)/dt² = d[(µ + tσ²) e^{µt + t²σ²/2}]/dt.

Recall d(uv)/dx = u v′ + v u′. Then

d[(µ + tσ²) e^{µt + t²σ²/2}]/dt |_{t=0} = [(µ + tσ²)² e^{µt + t²σ²/2} + σ² e^{µt + t²σ²/2}] |_{t=0} = µ² + σ² = E(Y²).

V(Y) = E(Y²) − [E(Y)]² = µ² + σ² − µ² = σ².



The Method of Moment-Generating Functions (19)
Third, let Yi ∼ N(µi, σi²) independently, for i = 1, 2, . . . , n. Let a1, a2, . . . , an be constants and let U = Σ_{i=1}^n ai Yi. Find the moment-generating function mU(t) for U and, through mU(t), find E(U) and V(U).
Recall mYi(t) = e^{µi t + t²σi²/2}. Further, mai Yi(t) = e^{ai µi t + ai²σi² t²/2}. Apply Theorem 6.2 to get

mU(t) = E(e^{tU}) = E(e^{t Σ_{i=1}^n ai Yi})
= ma1 Y1(t) × ma2 Y2(t) × · · · × man Yn(t)
= e^{a1 µ1 t + a1²σ1² t²/2} × e^{a2 µ2 t + a2²σ2² t²/2} × · · · × e^{an µn t + an²σn² t²/2}
= e^{t Σ_{i=1}^n ai µi + (t²/2) Σ_{i=1}^n ai²σi²}.

This is the moment-generating function of a normally distributed random variable: a linear combination of independent normally distributed variables is itself normally distributed. From it,

E(U) = dmU(t)/dt |_{t=0} = Σ_{i=1}^n ai µi  and  V(U) = d²mU(t)/dt² |_{t=0} − [E(U)]² = Σ_{i=1}^n ai²σi².
The Method of Moment-Generating Functions (20)

Theorem 6.4—U = Σ_{i=1}^n Zi² ∼ χ²(n) if the Zi are i.i.d. N(0, 1)
Let Yi ∼ N(µi, σi²) independently for i = 1, 2, . . . , n. Then

Zi = (Yi − µi)/σi ∼ N(0, 1), i.i.d. for i = 1, 2, . . . , n,

and

U = Σ_{i=1}^n Zi² ∼ χ²(n).

Remarks: It is known that for a χ²-distributed random variable U = Σ_{i=1}^n Zi² ∼ χ²(n), E(U) = n and V(U) = 2n.
In the following, we derive the moment-generating function mU(t) for U and use it to find E(U) and V(U).
The Method of Moment-Generating Functions (21)

Recall that if Zi ∼ N(0, 1), we have shown previously that

mZi²(t) = (1 − 2t)^{−1/2}.

Apply Theorem 6.2 to U = Σ_{i=1}^n Zi² to get

mU(t) = E(e^{tU}) = E(e^{t Σ_{i=1}^n Zi²})
= mZ1²(t) × mZ2²(t) × · · · × mZn²(t)
= [(1 − 2t)^{−1/2}]^n = (1 − 2t)^{−n/2}.

This is the moment-generating function of U ∼ χ²(n).



The Method of Moment-Generating Functions (22)

To verify the above conclusion, based on the moment-generating function, we find E(U) and E(U²) so that we can find V(U).

E(U) = dmU(t)/dt |_{t=0} = (−n/2)(1 − 2t)^{−n/2−1}(−2) |_{t=0} = n(1 − 2t)^{−(n+2)/2} |_{t=0} = n(1) = n.

E(U²) = d²mU(t)/dt² |_{t=0} = d[n(1 − 2t)^{−(n+2)/2}]/dt |_{t=0} = n(−(n + 2)/2)(1 − 2t)^{−(n+4)/2}(−2) |_{t=0} = n(n + 2)(1) = n² + 2n.

V(U) = E(U²) − [E(U)]² = n² + 2n − n² = 2n.
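The moments E(U) = n and V(U) = 2n can also be confirmed numerically (my addition) by simulating U = Σ Zi² directly; n = 5 and the seed are arbitrary choices:

```python
import random

random.seed(7)
n, N = 5, 100_000

# Each draw of U is a sum of n squared standard normals
u_samples = [sum(random.gauss(0, 1) ** 2 for _ in range(n)) for _ in range(N)]

mean = sum(u_samples) / N
var = sum((u - mean) ** 2 for u in u_samples) / N
print(mean, var)  # should be close to n = 5 and 2n = 10
```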



The Method of Moment-Generating Functions (23)

Summary of the Moment-Generating Function Method
Let U be a function of the random variables Y1, Y2, . . . , Yn.
1. Find the moment-generating function for U, mU(t).
2. Compare mU(t) with other well-known moment-generating functions. If mU(t) = mV(t) for all values of t, Theorem 6.1 implies that U and V have identical distributions.



Order Statistics (1)
Purposes:
(1) Introduce order statistics
(2) Find the probability distribution of order statistics via the method of distribution functions
Random variables that depend on the relative magnitudes of the observed variables are called order statistics.
Why are order statistics important?
(a) The median—half the median income is used as a poverty line
(b) The Gini index—a function of order statistics
Let Y1, Y2, . . . , Yn be i.i.d. random variables with distribution function F(y) and density function f(y). Order the Yi as Y(1), Y(2), . . . , Y(n) such that Y(1) ≤ Y(2) ≤ · · · ≤ Y(n). We use Y(i), i = 1, 2, . . . , n, to denote the order statistics. For example,

Y(1) = min(Y1, Y2, . . . , Yn) and Y(n) = max(Y1, Y2, . . . , Yn).



Order Statistics (2)

Show how to get the density function for Y(n). Because Y(n) is the maximum, Y(n) ≤ y occurs if and only if all of the events Yi ≤ y (i = 1, 2, . . . , n) occur, so

P(Y(n) ≤ y) = P(Y1 ≤ y, Y2 ≤ y, . . . , Yn ≤ y).

Because the Yi are independent with P(Yi ≤ y) = F(y) for i = 1, 2, . . . , n, the distribution function of Y(n) is

FY(n)(y) = P(Y(n) ≤ y) = P(Y1 ≤ y) P(Y2 ≤ y) · · · P(Yn ≤ y) = [F(y)]^n.

Take the derivative of FY(n)(y) = [F(y)]^n w.r.t. y to get the density function of Y(n), g(n)(y):

g(n)(y) = n[F(y)]^{n−1} f(y).



Order Statistics (3)
Show how to get the density function for Y(1). Because Y(1) is the minimum,

FY(1)(y) = P(Y(1) ≤ y) = 1 − P(Y(1) > y).

Since Y(1) is the minimum, Y(1) > y if and only if Yi > y for all i = 1, 2, . . . , n. Because the Yi are independent with P(Yi > y) = 1 − F(y) for each i,

FY(1)(y) = P(Y(1) ≤ y) = 1 − P(Y(1) > y)
= 1 − P(Y1 > y, Y2 > y, . . . , Yn > y)
= 1 − [P(Y1 > y) P(Y2 > y) · · · P(Yn > y)]
= 1 − [1 − F(y)]^n.

Take the derivative of FY(1)(y) = 1 − [1 − F(y)]^n w.r.t. y to get the density function of Y(1), g(1)(y):

g(1)(y) = n[1 − F(y)]^{n−1} f(y).



Order Statistics (4)
The following figure can assist students in understanding the above derivations of g(n)(y) and g(1)(y).

Figure: for Y1, Y2, . . . , Yn, the event {Y(n) ≤ y} = {Y1 ≤ y, Y2 ≤ y, . . . , Yn ≤ y} has probability [F(y)]^n, and P(Y(1) ≤ y) = 1 − P(Y1 > y, Y2 > y, . . . , Yn > y) = 1 − [1 − F(y)]^n



Order Statistics (5)
Show how to get the joint density function for Y(1) and Y(2) when n = 2. The event (Y(1) ≤ y1, Y(2) ≤ y2) means either (Y1 ≤ y1, Y2 ≤ y2) or (Y2 ≤ y1, Y1 ≤ y2). Therefore, for y1 ≤ y2,

P(Y(1) ≤ y1, Y(2) ≤ y2) = P[(Y1 ≤ y1, Y2 ≤ y2) ∪ (Y2 ≤ y1, Y1 ≤ y2)].

Noting that y1 ≤ y2 and applying the additive law of probability, we get

P(Y(1) ≤ y1, Y(2) ≤ y2) = P(Y1 ≤ y1, Y2 ≤ y2) + P(Y2 ≤ y1, Y1 ≤ y2) − P(Y1 ≤ y1, Y2 ≤ y1).

Noting the independence of Y1 and Y2 and P(Yi ≤ w) = F(w) for i = 1, 2, we have

P(Y(1) ≤ y1, Y(2) ≤ y2) = F(y1)F(y2) + F(y1)F(y2) − [F(y1)]² = 2F(y1)F(y2) − [F(y1)]².
Order Statistics (6)
Having considered the case y1 ≤ y2, now consider the case y1 > y2. In this case, we still have Y(1) ≤ Y(2), so the event {Y(2) ≤ y2} already implies {Y(1) ≤ y2 < y1}. Therefore,

P(Y(1) ≤ y1, Y(2) ≤ y2) = P(Y(2) ≤ y2) = P(Y1 ≤ y2, Y2 ≤ y2) = [F(y2)]².

To summarize the two cases, the joint distribution function of Y(1) and Y(2) is

FY(1),Y(2)(y1, y2) = 2F(y1)F(y2) − [F(y1)]² for y1 ≤ y2, and FY(1),Y(2)(y1, y2) = [F(y2)]² for y1 > y2.

Differentiating FY(1),Y(2)(y1, y2) w.r.t. y2 first and y1 second gives the joint density of Y(1) and Y(2):

g(1)(2)(y1, y2) = 2f(y1)f(y2) for y1 ≤ y2, and g(1)(2)(y1, y2) = 0 for y1 > y2.



Order Statistics (7)

Find the joint density function of Y(1), Y(2), . . . , Y(n). We state this without proof:

g(1)(2)···(n)(y1, y2, . . . , yn) = n! f(y1)f(y2) · · · f(yn) for y1 ≤ y2 ≤ · · · ≤ yn, and 0 elsewhere.

Example: Electronic parts of a certain type have a length of life Y, with the (exponential) probability density function

f(y) = (1/100)e^{−y/100} for y > 0, and f(y) = 0 elsewhere.

Assume that two such parts operate independently in a system but the system fails when either part fails. Let X be the length of life of the system. Find the density function of X.



Order Statistics (8)

Solution: X = min(Y1, Y2). Note n = 2 and

F(y) = ∫_0^y (1/100)e^{−w/100} dw = [−e^{−w/100}]_0^y = 1 − e^{−y/100}.

Therefore,

fX(y) = g(1)(y) = n[1 − F(y)]^{n−1} f(y)
= 2e^{−y/100} (1/100)e^{−y/100} = (1/50)e^{−y/50} for y > 0,

and fX(y) = 0 elsewhere.
Remarks: The minimum of two i.i.d. exponentially distributed random variables has an exponential distribution.
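A quick check (my addition): the derived density (1/50)e^{−y/50} has mean 50, so the sample mean of min(Y1, Y2), with each Yi exponential with mean 100, should be near 50:

```python
import random

random.seed(8)
N = 200_000

# Each part's life is exponential with mean 100 (rate 1/100)
xs = [min(random.expovariate(1 / 100), random.expovariate(1 / 100)) for _ in range(N)]

print(sum(xs) / N)  # should be close to 50, the mean of the derived density
```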



Order Statistics (9)
Example: Continue the previous example, but now the parts operate in parallel: the system fails only if both parts fail. Let X be the life of the system. Find the density function of X.
Solution: X = max(Y1, Y2). Note n = 2 and

F(y) = ∫_0^y (1/100)e^{−w/100} dw = [−e^{−w/100}]_0^y = 1 − e^{−y/100}.

Therefore,

fX(y) = g(2)(y) = n[F(y)]^{n−1} f(y)
= 2(1 − e^{−y/100})(1/100)e^{−y/100} for y > 0
= (1/50)(e^{−y/100} − e^{−y/50}) for y > 0,

and fX(y) = 0 elsewhere.
Remarks: The maximum of two exponentially distributed random variables does not have an exponential distribution.
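Likewise (my addition), the derived density for the maximum has mean ∫_0^∞ y (1/50)(e^{−y/100} − e^{−y/50}) dy = 200 − 50 = 150, which a simulation of max(Y1, Y2) should reproduce:

```python
import random

random.seed(9)
N = 200_000

# Each part's life is exponential with mean 100 (rate 1/100)
xs = [max(random.expovariate(1 / 100), random.expovariate(1 / 100)) for _ in range(N)]

print(sum(xs) / N)  # should be close to 150
```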
The End

