
Expectation: Moments of a Distribution


Chapter 3
Moments of a Distribution
Expectation
We develop the expectation operator in terms of the Lebesgue integral.
Recall that the Lebesgue measure $\lambda(A)$ for some set A gives the
length/area/volume of the set A. If A = (3; 7), then $\lambda(A) = 7 - 3 = 4$.
The Lebesgue integral of f on [a, b] is defined in terms of the sum
$$\sum_i y_i \, \lambda(A_i),$$
where $0 = y_1 \le y_2 \le \dots \le y_n$, $A_i = \{x : y_i \le f(x) < y_{i+1}\}$, and $\lambda(A_i)$ is the
Lebesgue measure of the set $A_i$.
The value of the Lebesgue integral is the limit as the $y_i$'s are pushed
closer together. That is, we break the y-axis into a grid using $\{y_n\}$ and
break the x-axis into the corresponding grid $\{A_n\}$, where
$$A_i = \{x : f(x) \in [y_i; y_{i+1})\}.$$
Taking expectations: Riemann vs Lebesgue
Riemann's approach
Partition the base. Measure the height of the function at the center of
each interval. Calculate the area of each interval. Add all intervals.
Lebesgue approach
Divide the range of the function. Measure the length of each
horizontal interval. Calculate the area of each interval. Add all
intervals.
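A small numeric sketch of the contrast (added here as an illustration, not part of the original slides; the choice f(x) = x² on [0, 1], whose integral is 1/3, and the grid sizes are arbitrary):

```python
import numpy as np

f = lambda x: x**2                     # integrand on [0, 1]; true value 1/3
a, b = 0.0, 1.0

# Riemann: partition the base (x-axis), take heights at interval midpoints.
n = 1000
edges = np.linspace(a, b, n + 1)
mids = 0.5 * (edges[:-1] + edges[1:])
riemann = np.sum(f(mids) * np.diff(edges))

# Lebesgue: partition the range (y-axis) and weight each level y_i by the
# measure of A_i = {x : y_i <= f(x) < y_{i+1}}, approximated on a fine grid.
x = np.linspace(a, b, 100_001)
vals = f(x)
y = np.linspace(0.0, 1.0, 201)
lebesgue = sum(
    lo * np.mean((vals >= lo) & (vals < hi)) * (b - a)
    for lo, hi in zip(y[:-1], y[1:])
)

print(riemann, lebesgue)               # both approach 1/3 as the grids shrink
```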
Taking expectations
A Borel function (RV) f is integrable if and only if |f| is integrable.
For convenience, we define the integral of a measurable function f
from $(\Omega, \mathcal{F}, \nu)$ to $(\bar{R}, \bar{\mathcal{B}})$, where $\bar{R} = R \cup \{-\infty, \infty\}$ and $\bar{\mathcal{B}} = \sigma\left( \mathcal{B} \cup \{\{-\infty\}, \{\infty\}\} \right)$.
Example: If $\Omega = R$ and $\nu$ is the Lebesgue measure, then the Lebesgue
integral of f over an interval [a, b] is written as
$$\int_{[a,b]} f(x)\, dx = \int_a^b f(x)\, dx,$$
which agrees with the Riemann integral when the latter is well defined.
However, there are functions for which the Lebesgue integral is
defined but not the Riemann integral.
If $\nu = P$, in statistics, $\int X\, dP = E[X]$ is called the expectation or
expected value of X.
Consider our probability space $(\Omega, \mathcal{F}, P)$. Take an event (a set A of $\omega$'s in
$\Omega$) and X, a RV, that assigns real numbers to each $\omega \in A$.
If we take an observation from A without knowing which $\omega \in A$ will
be drawn, we may want to know what value of $X(\omega)$ we should expect
to see.
Each of the $\omega \in A$ has been assigned a probability measure $P[\omega]$,
which induces P[x]. Then, we use this to weight the values $X(\omega)$.
P is a probability measure: The weights sum to 1. The weighted sum
provides us with a weighted average of $X(\omega)$. If P gives the "correct"
likelihood of $\omega$ being chosen, the weighted average of $X(\omega)$ --E[X]--
tells us what values of $X(\omega)$ are expected.
Expected Value
Now with the concept of the Lebesgue integral, we take the possible
values $\{x_i\}$ and construct a grid on the y-axis, which gives a
corresponding grid on the x-axis in A, where
$$A_i = \{\omega \in A : X(\omega) \in [x_i; x_{i+1})\}.$$
Let the elements in the x-axis grid be $A_i$. The weighted average is
$$\sum_{i=1}^{n} x_i\, P[A_i] = \sum_{i=1}^{n} x_i\, P[X = x_i] = \sum_{i=1}^{n} x_i\, f_X(x_i).$$
As we shrink the grid towards 0, $A_i$ becomes infinitesimal. Let $d\omega$ be
the infinitesimal set A. The Lebesgue integral becomes:
$$\lim_{n \to \infty} \sum_{i=1}^{n} x_i\, P[A_i] = \int x\, P[d\omega] = \int x\, P[X = x] = \int x\, f_X(x)\, dx.$$
Expected Value
Definition
Let X denote a discrete RV with probability function p(x) (probability
density function f(x) if X is continuous). Then the expected value of X,
E(X), is defined to be:
$$E(X) = \sum_i x_i\, p(x_i)$$
and if X is continuous with probability density function f(x),
$$E(X) = \int_{-\infty}^{\infty} x\, f(x)\, dx.$$
The Expectation of X: E(X)
The expectation operator defines the mean (or population average) of a
random variable or expression.
Sometimes we write E[.] as $E_X[.]$ to indicate that the expectation is being
taken over $f_X(x)\, dx$.
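As a quick illustration (added here; the fair-die pmf and the Uniform(2, 10) density are arbitrary choices, not from the slides), both forms of the definition can be evaluated directly:

```python
import numpy as np
from scipy.integrate import quad

# Discrete: E(X) = sum_i x_i p(x_i); a fair six-sided die.
x = np.arange(1, 7)
p = np.full(6, 1 / 6)
print(np.sum(x * p))                     # 3.5

# Continuous: E(X) = integral of x f(x) dx; Uniform(a, b) has f(x) = 1/(b-a).
a, b = 2.0, 10.0
mean, _ = quad(lambda x: x / (b - a), a, b)
print(mean)                              # (a + b) / 2 = 6.0
```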
[Figure: a discrete probability distribution with E(X) marked as its center of gravity]
Interpretation of E(X)
1. The expected value of X, E(X), is the center of gravity of the
probability distribution of X.
2. The expected value of X, E(X), is the long-run average value of X.
(To be discussed later: Law of Large Numbers)
Example: The Binomial distribution
Let X be a discrete random variable having the Binomial distribution
--i.e., X = the number of successes in n independent repetitions of a
Bernoulli trial. Find the expected value of X, E(X).
$$p(x) = \binom{n}{x} p^x (1-p)^{n-x}, \quad x = 0, 1, 2, 3, \dots, n$$
The expected value of X is:
$$E(X) = \sum_{x=0}^{n} x\, p(x) = \sum_{x=0}^{n} x \binom{n}{x} p^x (1-p)^{n-x}$$

Example: Solution
$$E(X) = \sum_{x=1}^{n} x\, \frac{n!}{x!\,(n-x)!}\, p^x (1-p)^{n-x} = \sum_{x=1}^{n} \frac{n!}{(x-1)!\,(n-x)!}\, p^x (1-p)^{n-x}$$
$$= \frac{n!}{0!\,(n-1)!}\, p\, (1-p)^{n-1} + \frac{n!}{1!\,(n-2)!}\, p^2 (1-p)^{n-2} + \dots + \frac{n!}{(n-2)!\,1!}\, p^{n-1} (1-p) + \frac{n!}{(n-1)!\,0!}\, p^n$$
$$= np \left[ \frac{(n-1)!}{0!\,(n-1)!}\, p^0 (1-p)^{n-1} + \frac{(n-1)!}{1!\,(n-2)!}\, p^1 (1-p)^{n-2} + \dots + \frac{(n-1)!}{(n-2)!\,1!}\, p^{n-2} (1-p) + \frac{(n-1)!}{(n-1)!\,0!}\, p^{n-1} \right]$$
$$= np \left[ \binom{n-1}{0} p^0 (1-p)^{n-1} + \binom{n-1}{1} p (1-p)^{n-2} + \dots + \binom{n-1}{n-2} p^{n-2} (1-p) + \binom{n-1}{n-1} p^{n-1} \right]$$
$$= np\, \left[ p + (1-p) \right]^{n-1} = np$$
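A numeric sanity check of E(X) = np (an added illustration; n = 12 and p = 0.3 are arbitrary choices):

```python
import numpy as np
from scipy.stats import binom

n, p = 12, 0.3
x = np.arange(n + 1)
print(np.sum(x * binom.pmf(x, n, p)))    # the defining sum: 3.6
print(n * p)                             # np: 3.6
```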
Example: Exponential Distribution
Let X have an exponential distribution with parameter $\lambda$. The
probability density function of X is:
$$f(x) = \begin{cases} \lambda e^{-\lambda x} & x \ge 0 \\ 0 & x < 0 \end{cases}$$
The expected value of X is:
$$E(X) = \int_{-\infty}^{\infty} x\, f(x)\, dx = \int_0^{\infty} x\, \lambda e^{-\lambda x}\, dx$$
We will determine $\int x\, \lambda e^{-\lambda x}\, dx$ using integration by parts:
$$\int u\, dv = uv - \int v\, du$$
In this case $u = x$ and $dv = \lambda e^{-\lambda x}\, dx$. Hence $du = dx$ and $v = -e^{-\lambda x}$.
Thus
$$\int x\, \lambda e^{-\lambda x}\, dx = -x e^{-\lambda x} + \int e^{-\lambda x}\, dx = -x e^{-\lambda x} - \frac{1}{\lambda} e^{-\lambda x}$$
$$E(X) = \int_0^{\infty} x\, \lambda e^{-\lambda x}\, dx = \left[ -x e^{-\lambda x} - \frac{1}{\lambda} e^{-\lambda x} \right]_0^{\infty} = (0 - 0) - \left( 0 - \frac{1}{\lambda} \right) = \frac{1}{\lambda}$$
Summary: If X has an exponential distribution with parameter $\lambda$, then:
$$E(X) = \frac{1}{\lambda}$$
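The integration-by-parts result can be checked numerically (an added sketch; the rate λ = 2.5 is an arbitrary choice):

```python
import numpy as np
from scipy.integrate import quad

lam = 2.5                                   # rate parameter lambda
mean, _ = quad(lambda x: x * lam * np.exp(-lam * x), 0, np.inf)
print(mean, 1 / lam)                        # both 0.4
```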
Example: The Uniform distribution
Suppose X has a uniform distribution from a to b. Then:
$$f(x) = \begin{cases} \frac{1}{b-a} & a \le x \le b \\ 0 & x < a,\ x > b \end{cases}$$
The expected value of X is:
$$E(X) = \int_{-\infty}^{\infty} x\, f(x)\, dx = \int_a^b x\, \frac{1}{b-a}\, dx = \left[ \frac{x^2}{2(b-a)} \right]_a^b = \frac{b^2 - a^2}{2(b-a)} = \frac{a + b}{2}$$
Example: The Normal distribution
Suppose X has a Normal distribution with parameters $\mu$ and $\sigma$. Then:
$$f(x) = \frac{1}{\sqrt{2\pi}\, \sigma}\, e^{-\frac{(x-\mu)^2}{2\sigma^2}}$$
The expected value of X is:
$$E(X) = \int_{-\infty}^{\infty} x\, f(x)\, dx = \int_{-\infty}^{\infty} x\, \frac{1}{\sqrt{2\pi}\, \sigma}\, e^{-\frac{(x-\mu)^2}{2\sigma^2}}\, dx$$
Make the substitution:
$$z = \frac{x - \mu}{\sigma}, \quad dz = \frac{1}{\sigma}\, dx \quad \text{and} \quad x = \mu + \sigma z$$
Hence
$$E(X) = \int_{-\infty}^{\infty} (\mu + \sigma z)\, \frac{1}{\sqrt{2\pi}}\, e^{-\frac{z^2}{2}}\, dz = \mu \int_{-\infty}^{\infty} \frac{1}{\sqrt{2\pi}}\, e^{-\frac{z^2}{2}}\, dz + \sigma \int_{-\infty}^{\infty} z\, \frac{1}{\sqrt{2\pi}}\, e^{-\frac{z^2}{2}}\, dz$$
Now
$$\int_{-\infty}^{\infty} \frac{1}{\sqrt{2\pi}}\, e^{-\frac{z^2}{2}}\, dz = 1 \quad \text{and} \quad \int_{-\infty}^{\infty} z\, \frac{1}{\sqrt{2\pi}}\, e^{-\frac{z^2}{2}}\, dz = 0$$
Thus $E(X) = \mu$.
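A numeric check that the integral indeed collapses to μ (an added illustration; μ and σ below are arbitrary):

```python
import numpy as np
from scipy.integrate import quad

mu, sigma = 1.7, 0.8
f = lambda x: np.exp(-((x - mu) ** 2) / (2 * sigma**2)) / (np.sqrt(2 * np.pi) * sigma)
mean, _ = quad(lambda x: x * f(x), -np.inf, np.inf)
print(mean)                                 # 1.7 = mu
```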
Example: The Gamma distribution
Suppose X has a Gamma distribution with parameters $\alpha$ and $\beta$. Then:
$$f(x) = \begin{cases} \frac{\beta^{\alpha}}{\Gamma(\alpha)}\, x^{\alpha-1} e^{-\beta x} & x \ge 0 \\ 0 & x < 0 \end{cases}$$
Note:
$$\int_{-\infty}^{\infty} f(x)\, dx = \frac{\beta^{\alpha}}{\Gamma(\alpha)} \int_0^{\infty} x^{\alpha-1} e^{-\beta x}\, dx = 1 \quad \text{if } \alpha > 0,\ \beta > 0.$$
This is a very useful formula when working with the Gamma distribution.
The expected value of X is:
$$E(X) = \int_{-\infty}^{\infty} x\, f(x)\, dx = \frac{\beta^{\alpha}}{\Gamma(\alpha)} \int_0^{\infty} x^{\alpha}\, e^{-\beta x}\, dx$$
$$= \frac{\beta^{\alpha}}{\Gamma(\alpha)} \cdot \frac{\Gamma(\alpha+1)}{\beta^{\alpha+1}} \int_0^{\infty} \frac{\beta^{\alpha+1}}{\Gamma(\alpha+1)}\, x^{(\alpha+1)-1} e^{-\beta x}\, dx \quad \text{(this last integral is now equal to 1)}$$
$$= \frac{\Gamma(\alpha+1)}{\Gamma(\alpha)\, \beta} = \frac{\alpha\, \Gamma(\alpha)}{\Gamma(\alpha)\, \beta} = \frac{\alpha}{\beta}$$
Thus, if X has a Gamma($\alpha$, $\beta$) distribution, the expected value of X is:
$$E(X) = \alpha / \beta$$
Special Cases: if X has a Gamma($\alpha$, $\beta$) distribution, then:
1. Exponential($\lambda$) distribution: $\alpha = 1$, $\beta = \lambda$ arbitrary:
$$E(X) = \frac{1}{\lambda}$$
2. Chi-square($\nu$) distribution: $\alpha = \nu/2$, $\beta = 1/2$:
$$E(X) = \frac{\nu/2}{1/2} = \nu$$
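The three means can be confirmed with scipy (an added check; note that scipy parameterizes the Gamma by shape a = α and scale = 1/β, so the mapping below is the detail to watch):

```python
from scipy.stats import gamma, expon, chi2

alpha, beta = 3.0, 2.0
print(gamma.mean(a=alpha, scale=1 / beta))  # Gamma: alpha / beta = 1.5

lam = 2.0
print(expon.mean(scale=1 / lam))            # Exponential: 1 / lambda = 0.5

nu = 5
print(chi2.mean(df=nu))                     # Chi-square: nu = 5.0
```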
[Figure: Gamma density, with $E(X) = \alpha/\beta$ marked]
[Figure: Exponential density, with $E(X) = 1/\lambda$ marked]
[Figure: Chi-square ($\chi_\nu^2$) density, with $E(X) = \nu$ marked]
Expectation of a function of a RV
Let X denote a discrete random variable with probability function
p(x) (probability density function f(x) if X is continuous). Then the
expected value of g(X), E[g(X)], is defined to be:
$$E[g(X)] = \sum_i g(x_i)\, p(x_i)$$
and if X is continuous with probability density function f(x),
$$E[g(X)] = \int_{-\infty}^{\infty} g(x)\, f(x)\, dx.$$
Example: Function of a Uniformly distributed RV
Suppose X has a uniform distribution from 0 to b. Then:
$$f(x) = \begin{cases} \frac{1}{b} & 0 \le x \le b \\ 0 & x < 0,\ x > b \end{cases}$$
Find the expected value of $A = X^2$.
If X is the length of a side of a square (chosen at random from 0 to b),
then A is the area of the square:
$$E(X^2) = \int_{-\infty}^{\infty} x^2 f(x)\, dx = \int_0^b x^2\, \frac{1}{b}\, dx = \left[ \frac{x^3}{3b} \right]_0^b = \frac{b^3}{3b} = \frac{b^2}{3}$$
= 1/3 the maximum area of the square
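Checking E(X²) = b²/3 numerically (an added sketch; b = 4 is an arbitrary choice):

```python
from scipy.integrate import quad

b = 4.0
expected_area, _ = quad(lambda x: x**2 * (1 / b), 0, b)
print(expected_area, b**2 / 3)              # both 16/3, i.e., 1/3 of b^2
```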
Median: An alternative central measure
A median is described as the numeric value separating the higher half
of a sample, a population, or a probability distribution, from the lower
half.
Definition: Median
The median of a random variable X is the unique number m that satisfies
the following inequalities:
$$P(X \le m) \ge \tfrac{1}{2} \quad \text{and} \quad P(X \ge m) \ge \tfrac{1}{2}.$$
For a continuous distribution, we have that m solves:
$$\int_{-\infty}^{m} f_X(x)\, dx = \int_{m}^{\infty} f_X(x)\, dx = 1/2.$$
Calculation of medians is a popular technique in summary statistics,
since the median is simple to understand and easy to calculate, while
also being a measure that is more robust in the presence of outlier
values than the mean.
An optimality property
A median is also a central point that minimizes the average of the
absolute deviations. That is, a value of c that minimizes
$$E(|X - c|)$$
is the median of the probability distribution of the random variable X.
Example I: Median of the Exponential Distribution
Let X have an exponential distribution with parameter $\lambda$. The
probability density function of X is:
$$f(x) = \begin{cases} \lambda e^{-\lambda x} & x \ge 0 \\ 0 & x < 0 \end{cases}$$
The median m solves the following integral of X:
$$\int_{-\infty}^{m} f_X(x)\, dx = 1/2:$$
$$\int_0^m \lambda e^{-\lambda x}\, dx = \left[ -e^{-\lambda x} \right]_0^m = 1 - e^{-\lambda m} = 1/2$$
That is, m = ln(2)/$\lambda$.
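A small check of m = ln(2)/λ, together with the optimality property stated above: over a large simulated sample, the c that minimizes the average of |X − c| sits at the sample median (an added illustration; λ, the seed, and the search bounds are arbitrary):

```python
import numpy as np
from scipy.stats import expon
from scipy.optimize import minimize_scalar

lam = 1.5
print(expon.ppf(0.5, scale=1 / lam), np.log(2) / lam)    # both ~0.4621

rng = np.random.default_rng(0)
sample = rng.exponential(scale=1 / lam, size=100_000)
res = minimize_scalar(lambda c: np.mean(np.abs(sample - c)),
                      bounds=(0.0, 10.0), method='bounded')
print(res.x, np.median(sample))                          # nearly equal
```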
Example II: Median of the Pareto Distribution
Let X follow a Pareto distribution with parameters $\alpha$ (shape) and $x_m$
(scale). The pdf of X is:
$$f(x) = \begin{cases} \alpha\, \dfrac{x_m^{\alpha}}{x^{\alpha+1}} & x \ge x_m \\ 0 & x < x_m \end{cases}$$
The median m solves the following integral of X:
$$\int_m^{\infty} f_X(x)\, dx = 1/2:$$
$$\int_m^{\infty} \alpha\, x_m^{\alpha}\, x^{-(\alpha+1)}\, dx = \alpha\, x_m^{\alpha} \left[ \frac{x^{-\alpha}}{-\alpha} \right]_m^{\infty} = \left( \frac{x_m}{m} \right)^{\alpha} = 1/2$$
That is, $m = x_m\, 2^{1/\alpha}$.
Note: The Pareto distribution is used to describe the distribution of
wealth.
Moments of Random Variables
The moments of a random variable X are used to describe the behavior
of the RV (discrete or continuous).
Definition: kth Moment
Let X be a RV (discrete or continuous). Then the kth moment of X is:
$$\mu_k = E(X^k) = \begin{cases} \sum_x x^k\, p(x) & \text{if X is discrete} \\ \int_{-\infty}^{\infty} x^k\, f(x)\, dx & \text{if X is continuous} \end{cases}$$
The first moment of X, $\mu = \mu_1 = E(X)$, is the center of gravity of
the distribution of X.
The higher moments give different information regarding the shape
of the distribution of X.
Moments of a Random Variable
Definition: Central Moments
Let X be a RV (discrete or continuous). Then, the kth central moment of
X is defined to be:
$$\mu_k^0 = E\left[ (X - \mu)^k \right] = \begin{cases} \sum_x (x - \mu)^k\, p(x) & \text{if X is discrete} \\ \int_{-\infty}^{\infty} (x - \mu)^k\, f(x)\, dx & \text{if X is continuous} \end{cases}$$
where $\mu = \mu_1 = E(X)$ = the first moment of X.
The central moments describe how the probability distribution is
distributed about the centre of gravity, $\mu$.
Moments of a Random Variable - 1st and 2nd
The first central moment is given by:
$$\mu_1^0 = E[X - \mu] = 0$$
The second central moment depends on the spread of the
probability distribution of X about $\mu$. It is called the variance of X
and is denoted by the symbol var(X):
$$\text{var}(X) = \sigma^2 = \mu_2^0 = E\left[ (X - \mu)^2 \right]$$
$$\sigma = \sqrt{\mu_2^0} = \sqrt{E\left[ (X - \mu)^2 \right]}$$
is called the standard deviation of X and is denoted by the symbol $\sigma$.
Moments of a Random Variable - Skewness
The third central moment
$$\mu_3^0 = E\left[ (X - \mu)^3 \right]$$
contains information about the skewness of a distribution.
A popular measure of skewness:
$$\gamma_1 = \frac{\mu_3^0}{\sigma^3} = \frac{\mu_3^0}{(\mu_2^0)^{3/2}}$$
Distributions according to skewness:
1) Symmetric distribution: $\mu_3^0 = 0$, $\gamma_1 = 0$
[Figure: a symmetric density]
2) Positively skewed distribution: $\mu_3^0 > 0$, $\gamma_1 > 0$
[Figure: a right-skewed density]
3) Negatively skewed distribution: $\mu_3^0 < 0$, $\gamma_1 < 0$
[Figure: a left-skewed density]
Skewness and Economics
- Zero skew means symmetrical gains and losses.
- Positive skew suggests many small losses and a few rich returns.
- Negative skew indicates lots of minor wins offset by rare major losses.
In financial markets, stock returns at the firm level show positive
skewness, but stock returns at the aggregate (index) level show
negative skewness.
Evidence from horse race betting and from U.S. state lotteries supports
the contention that gamblers are not necessarily risk-lovers
but skewness-lovers: long shots are overbet (positive skewness is loved!).
Moments of a Random Variable - Kurtosis
The fourth central moment
$$\mu_4^0 = E\left[ (X - \mu)^4 \right]$$
contains information about the shape of a distribution. The
property of shape that is measured by this moment is called kurtosis.
The measure of (excess) kurtosis:
$$\gamma_2 = \frac{\mu_4^0}{\sigma^4} - 3 = \frac{\mu_4^0}{(\mu_2^0)^2} - 3$$
Distributions:
1) Mesokurtic distribution: $\gamma_2 = 0$, $\mu_4^0$ moderate in size
[Figure: a mesokurtic density]
2) Platykurtic distribution: $\gamma_2 < 0$, $\mu_4^0$ small in size
[Figure: a platykurtic density]
3) Leptokurtic distribution: $\gamma_2 > 0$, $\mu_4^0$ large in size
[Figure: a leptokurtic density]
Example: The uniform distribution from 0 to 1
$$f(x) = \begin{cases} 1 & 0 \le x \le 1 \\ 0 & x < 0,\ x > 1 \end{cases}$$
Finding the moments:
$$\mu_k = \int_{-\infty}^{\infty} x^k f(x)\, dx = \int_0^1 x^k\, dx = \left[ \frac{x^{k+1}}{k+1} \right]_0^1 = \frac{1}{k+1}$$
Finding the central moments:
$$\mu_k^0 = \int_{-\infty}^{\infty} (x - \mu)^k f(x)\, dx = \int_0^1 \left( x - \tfrac{1}{2} \right)^k dx$$
Making the substitution $w = x - \tfrac{1}{2}$:
$$\mu_k^0 = \int_{-1/2}^{1/2} w^k\, dw = \left[ \frac{w^{k+1}}{k+1} \right]_{-1/2}^{1/2} = \frac{\left( \tfrac{1}{2} \right)^{k+1} - \left( -\tfrac{1}{2} \right)^{k+1}}{k+1} = \begin{cases} \dfrac{1}{2^k (k+1)} & \text{if } k \text{ even} \\ 0 & \text{if } k \text{ odd} \end{cases}$$
Thus,
$$\mu_2^0 = \frac{1}{2^2 \cdot 3} = \frac{1}{12}, \quad \mu_3^0 = 0, \quad \mu_4^0 = \frac{1}{2^4 \cdot 5} = \frac{1}{80}$$
Hence
$$\text{var}(X) = \mu_2^0 = \frac{1}{12}$$
The standard deviation:
$$\sigma = \sqrt{\text{var}(X)} = \sqrt{\tfrac{1}{12}}$$
The measure of skewness:
$$\gamma_1 = \frac{\mu_3^0}{\sigma^3} = 0$$
The measure of kurtosis:
$$\gamma_2 = \frac{\mu_4^0}{(\mu_2^0)^2} - 3 = \frac{1/80}{(1/12)^2} - 3 = 1.8 - 3 = -1.2$$
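These four values can be read off directly from scipy (an added check):

```python
from scipy.stats import uniform

# mean, variance, skewness, excess kurtosis of the Uniform(0, 1) distribution
m, v, s, k = uniform.stats(moments='mvsk')
print(m, v, s, k)    # 0.5, 0.0833... (= 1/12), 0.0, -1.2
```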
Alternative measures of dispersion
When the median is used as a central measure for a distribution, there are
several choices for a measure of variability:
- The range: the length of the smallest interval containing the data.
- The interquartile range: the difference between the 3rd and 1st quartiles.
- The mean absolute deviation: $(1/n) \sum_i |x_i - \text{central measure}(X)|$.
- The median absolute deviation (MAD): $\text{MAD} = \text{median}_i\left( |x_i - m(X)| \right)$.
These measures are more robust (to outliers) estimators of scale than the
sample variance or standard deviation. They behave especially well with
distributions that lack a mean or variance, such as the Cauchy distribution.
Rules for Expectations
Rules:
$$E[g(X)] = \begin{cases} \sum_x g(x)\, p(x) & \text{if X is discrete} \\ \int_{-\infty}^{\infty} g(x)\, f(x)\, dx & \text{if X is continuous} \end{cases}$$
1. $E[c] = c$, where c is a constant.
Proof: if $g(X) = c$, then
$$E[g(X)] = E[c] = \int_{-\infty}^{\infty} c\, f(x)\, dx = c \int_{-\infty}^{\infty} f(x)\, dx = c.$$
The proof for discrete random variables is similar.
2. $E[aX + b] = aE[X] + b$, where a, b are constants.
Proof: if $g(X) = aX + b$, then
$$E[aX + b] = \int_{-\infty}^{\infty} (ax + b)\, f(x)\, dx = a \int_{-\infty}^{\infty} x\, f(x)\, dx + b \int_{-\infty}^{\infty} f(x)\, dx = aE(X) + b.$$
The proof for discrete random variables is similar.
3. $\text{var}(X) = \sigma^2 = \mu_2^0 = E\left[ (X - \mu)^2 \right] = E(X^2) - \mu^2 = \mu_2 - \mu_1^2$.
Proof:
$$\text{var}(X) = E\left[ (X - \mu)^2 \right] = \int_{-\infty}^{\infty} (x - \mu)^2 f(x)\, dx = \int_{-\infty}^{\infty} (x^2 - 2\mu x + \mu^2)\, f(x)\, dx$$
$$= \int_{-\infty}^{\infty} x^2 f(x)\, dx - 2\mu \int_{-\infty}^{\infty} x f(x)\, dx + \mu^2 \int_{-\infty}^{\infty} f(x)\, dx$$
$$= E(X^2) - 2\mu^2 + \mu^2 = E(X^2) - \mu^2 = \mu_2 - \mu_1^2.$$
The proof for discrete random variables is similar.
4. $\text{var}(aX + b) = a^2\, \text{var}(X)$.
Proof: first,
$$\mu_{aX+b} = E[aX + b] = aE[X] + b = a\mu + b.$$
Then
$$\text{var}(aX + b) = E\left[ \left( aX + b - \mu_{aX+b} \right)^2 \right] = E\left[ \left( aX + b - (a\mu + b) \right)^2 \right]$$
$$= E\left[ a^2 (X - \mu)^2 \right] = a^2\, E\left[ (X - \mu)^2 \right] = a^2\, \text{var}(X).$$
Moment generating functions
Definition: Moment Generating Function (MGF)
Recall that the expectation of a function g(X) is given by:
$$E[g(X)] = \begin{cases} \sum_x g(x)\, p(x) & \text{if X is discrete} \\ \int_{-\infty}^{\infty} g(x)\, f(x)\, dx & \text{if X is continuous} \end{cases}$$
Let X denote a random variable. Then the moment generating function of X,
$m_X(t)$, is defined by:
$$m_X(t) = E\left[ e^{tX} \right] = \begin{cases} \sum_x e^{tx}\, p(x) & \text{if X is discrete} \\ \int_{-\infty}^{\infty} e^{tx}\, f(x)\, dx & \text{if X is continuous} \end{cases}$$
MGF: Examples
1. The Binomial distribution (parameters p, n)
$$p(x) = \binom{n}{x} p^x (1-p)^{n-x}, \quad x = 0, 1, 2, \dots, n$$
The MGF of X, $m_X(t)$, is:
$$m_X(t) = E\left[ e^{tX} \right] = \sum_x e^{tx}\, p(x) = \sum_{x=0}^{n} e^{tx} \binom{n}{x} p^x (1-p)^{n-x}$$
$$= \sum_{x=0}^{n} \binom{n}{x} \left( p e^t \right)^x (1-p)^{n-x} = \left( p e^t + 1 - p \right)^n,$$
using $\sum_{x=0}^{n} \binom{n}{x} a^x b^{n-x} = (a + b)^n$ with $a = p e^t$ and $b = 1 - p$.
2. The Poisson distribution (parameter $\lambda$)
$$p(x) = \frac{\lambda^x}{x!}\, e^{-\lambda}, \quad x = 0, 1, 2, \dots$$
The MGF of X, $m_X(t)$, is:
$$m_X(t) = E\left[ e^{tX} \right] = \sum_x e^{tx}\, p(x) = \sum_{x=0}^{\infty} e^{tx}\, \frac{\lambda^x}{x!}\, e^{-\lambda}$$
$$= e^{-\lambda} \sum_{x=0}^{\infty} \frac{\left( \lambda e^t \right)^x}{x!} = e^{-\lambda}\, e^{\lambda e^t} = e^{\lambda (e^t - 1)},$$
using $\sum_{x=0}^{\infty} \frac{u^x}{x!} = e^u$ with $u = \lambda e^t$.
3. The Exponential distribution (parameter $\lambda$)
$$f(x) = \begin{cases} \lambda e^{-\lambda x} & x \ge 0 \\ 0 & x < 0 \end{cases}$$
The MGF of X, $m_X(t)$, is:
$$m_X(t) = E\left[ e^{tX} \right] = \int_{-\infty}^{\infty} e^{tx} f(x)\, dx = \int_0^{\infty} e^{tx}\, \lambda e^{-\lambda x}\, dx$$
$$= \lambda \int_0^{\infty} e^{(t - \lambda)x}\, dx = \lambda \left[ \frac{e^{(t - \lambda)x}}{t - \lambda} \right]_0^{\infty} = \begin{cases} \dfrac{\lambda}{\lambda - t} & t < \lambda \\ \text{undefined} & t \ge \lambda \end{cases}$$
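The closed form λ/(λ − t) can be compared against the defining integral E[e^{tX}] (an added sketch; λ = 2 and t = 0.7 are arbitrary choices with t < λ):

```python
import numpy as np
from scipy.integrate import quad

lam, t = 2.0, 0.7
mgf, _ = quad(lambda x: np.exp(t * x) * lam * np.exp(-lam * x), 0, np.inf)
print(mgf, lam / (lam - t))    # both ~1.5385
```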
4. The Standard Normal distribution ($\mu$ = 0, $\sigma$ = 1)
$$f(x) = \frac{1}{\sqrt{2\pi}}\, e^{-\frac{x^2}{2}}$$
The MGF of X, $m_X(t)$, is:
$$m_X(t) = E\left[ e^{tX} \right] = \int_{-\infty}^{\infty} e^{tx} f(x)\, dx = \int_{-\infty}^{\infty} \frac{1}{\sqrt{2\pi}}\, e^{-\frac{x^2}{2} + tx}\, dx$$
$$= \int_{-\infty}^{\infty} \frac{1}{\sqrt{2\pi}}\, e^{-\frac{x^2 - 2tx + t^2}{2}}\, e^{\frac{t^2}{2}}\, dx \quad \text{(we have completed the square)}$$
$$= e^{\frac{t^2}{2}} \int_{-\infty}^{\infty} \frac{1}{\sqrt{2\pi}}\, e^{-\frac{(x - t)^2}{2}}\, dx = e^{\frac{t^2}{2}},$$
where the last integral is 1, using the fact that
$$\int_{-\infty}^{\infty} \frac{1}{\sqrt{2\pi}\, a}\, e^{-\frac{(x - b)^2}{2a^2}}\, dx = 1 \quad \text{for all } a > 0,\ b.$$
5. The Gamma distribution (parameters $\alpha$, $\beta$)
$$f(x) = \begin{cases} \frac{\beta^{\alpha}}{\Gamma(\alpha)}\, x^{\alpha-1} e^{-\beta x} & x \ge 0 \\ 0 & x < 0 \end{cases}$$
The MGF of X, $m_X(t)$, is:
$$m_X(t) = E\left[ e^{tX} \right] = \int_{-\infty}^{\infty} e^{tx} f(x)\, dx = \int_0^{\infty} e^{tx}\, \frac{\beta^{\alpha}}{\Gamma(\alpha)}\, x^{\alpha-1} e^{-\beta x}\, dx = \frac{\beta^{\alpha}}{\Gamma(\alpha)} \int_0^{\infty} x^{\alpha-1} e^{-(\beta - t)x}\, dx$$
We use the fact that
$$\int_0^{\infty} \frac{b^a}{\Gamma(a)}\, x^{a-1} e^{-bx}\, dx = 1 \quad \text{for all } a > 0,\ b > 0.$$
Then
$$m_X(t) = \left( \frac{\beta}{\beta - t} \right)^{\alpha} \int_0^{\infty} \frac{(\beta - t)^{\alpha}}{\Gamma(\alpha)}\, x^{\alpha-1} e^{-(\beta - t)x}\, dx = \left( \frac{\beta}{\beta - t} \right)^{\alpha} \quad \text{for } t < \beta,$$
since the remaining integral is equal to 1.
6. The Chi-square distribution with $\nu$ degrees of freedom ($\alpha = \nu/2$, $\beta = 1/2$):
$$m_X(t) = (1 - 2t)^{-\nu/2}$$
Properties of Moment Generating Functions
1. $m_X(0) = 1$.
$$m_X(t) = E\left[ e^{tX} \right], \quad \text{hence} \quad m_X(0) = E\left[ e^{0 \cdot X} \right] = E[1] = 1.$$
Note: The MGFs of the following distributions satisfy the property
$m_X(0) = 1$:
i) Binomial Dist'n: $m_X(t) = \left( p e^t + 1 - p \right)^n$
ii) Poisson Dist'n: $m_X(t) = e^{\lambda (e^t - 1)}$
iii) Exponential Dist'n: $m_X(t) = \dfrac{\lambda}{\lambda - t}$
iv) Std Normal Dist'n: $m_X(t) = e^{t^2/2}$
v) Gamma Dist'n: $m_X(t) = \left( \dfrac{\beta}{\beta - t} \right)^{\alpha}$
2. $$m_X(t) = 1 + \mu_1 t + \mu_2 \frac{t^2}{2!} + \mu_3 \frac{t^3}{3!} + \dots + \mu_k \frac{t^k}{k!} + \dots$$
We use the expansion of the exponential function:
$$e^u = 1 + u + \frac{u^2}{2!} + \frac{u^3}{3!} + \dots + \frac{u^k}{k!} + \dots$$
$$m_X(t) = E\left[ e^{tX} \right] = E\left[ 1 + tX + \frac{t^2}{2!} X^2 + \frac{t^3}{3!} X^3 + \dots + \frac{t^k}{k!} X^k + \dots \right]$$
$$= 1 + t\, E(X) + \frac{t^2}{2!}\, E(X^2) + \frac{t^3}{3!}\, E(X^3) + \dots + \frac{t^k}{k!}\, E(X^k) + \dots$$
$$= 1 + \mu_1 t + \mu_2 \frac{t^2}{2!} + \mu_3 \frac{t^3}{3!} + \dots + \mu_k \frac{t^k}{k!} + \dots$$
3. $$m_X^{(k)}(0) = \left. \frac{d^k}{dt^k}\, m_X(t) \right|_{t=0} = \mu_k$$
Now
$$m_X(t) = 1 + \mu_1 t + \mu_2 \frac{t^2}{2!} + \mu_3 \frac{t^3}{3!} + \dots + \mu_k \frac{t^k}{k!} + \dots$$
$$m_X'(t) = \mu_1 + \mu_2 t + \mu_3 \frac{t^2}{2!} + \dots + \mu_k \frac{t^{k-1}}{(k-1)!} + \dots \quad \text{and} \quad m_X'(0) = \mu_1$$
$$m_X''(t) = \mu_2 + \mu_3 t + \dots + \mu_k \frac{t^{k-2}}{(k-2)!} + \dots \quad \text{and} \quad m_X''(0) = \mu_2$$
Continuing, we find $m_X^{(k)}(0) = \mu_k$.
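Property 3 can be illustrated with finite differences (an added sketch using the exponential MGF derived earlier; λ and the step h are arbitrary choices):

```python
lam, h = 2.0, 1e-4
m = lambda t: lam / (lam - t)                 # MGF of Exponential(lambda)
m1 = (m(h) - m(-h)) / (2 * h)                 # ~ m'(0)  = mu_1 = 1/lam   = 0.5
m2 = (m(h) - 2 * m(0) + m(-h)) / h**2         # ~ m''(0) = mu_2 = 2/lam^2 = 0.5
print(m1, m2)
```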
MGF: Applying Property 3 - Binomial
Property 3 is very useful in determining the moments of a RV X.
Examples:
i) Binomial Dist'n: $m_X(t) = \left( p e^t + 1 - p \right)^n$
$$m_X'(t) = n \left( p e^t + 1 - p \right)^{n-1} p e^t$$
$$\mu_1 = m_X'(0) = n \left( p + 1 - p \right)^{n-1} p = np$$
$$m_X''(t) = n(n-1) \left( p e^t + 1 - p \right)^{n-2} \left( p e^t \right)^2 + n \left( p e^t + 1 - p \right)^{n-1} p e^t$$
$$= n p e^t \left( p e^t + 1 - p \right)^{n-2} \left[ (n-1)\, p e^t + p e^t + 1 - p \right]$$
$$\mu_2 = m_X''(0) = np\, [1 + np - p] = np\, [np + q] = n^2 p^2 + npq$$
ii) Poisson Dist'n: $m_X(t) = e^{\lambda (e^t - 1)}$
$$m_X'(t) = e^{\lambda (e^t - 1)}\, \lambda e^t = \lambda\, e^{\lambda (e^t - 1) + t}$$
$$m_X''(t) = \lambda\, e^{\lambda (e^t - 1) + t} \left( \lambda e^t + 1 \right) = \lambda^2 e^{\lambda (e^t - 1) + 2t} + \lambda\, e^{\lambda (e^t - 1) + t}$$
$$m_X'''(t) = \lambda^2 e^{\lambda (e^t - 1) + 2t} \left( \lambda e^t + 2 \right) + \lambda\, e^{\lambda (e^t - 1) + t} \left( \lambda e^t + 1 \right)$$
$$= \lambda^3 e^{\lambda (e^t - 1) + 3t} + 3 \lambda^2 e^{\lambda (e^t - 1) + 2t} + \lambda\, e^{\lambda (e^t - 1) + t}$$
To find the moments we set t = 0:
$$\mu_1 = m_X'(0) = \lambda\, e^{\lambda (e^0 - 1) + 0} = \lambda$$
$$\mu_2 = m_X''(0) = \lambda^2 + \lambda$$
$$\mu_3 = m_X'''(0) = \lambda^3 + 3\lambda^2 + \lambda$$
iii) Exponential Dist'n: $m_X(t) = \dfrac{\lambda}{\lambda - t} = \lambda (\lambda - t)^{-1}$
$$m_X'(t) = \frac{d}{dt}\, \lambda (\lambda - t)^{-1} = \lambda (\lambda - t)^{-2}$$
$$m_X''(t) = (2)\, \lambda (\lambda - t)^{-3}$$
$$m_X'''(t) = (2)(3)\, \lambda (\lambda - t)^{-4} = 3!\, \lambda (\lambda - t)^{-4}$$
$$m_X''''(t) = (2)(3)(4)\, \lambda (\lambda - t)^{-5} = 4!\, \lambda (\lambda - t)^{-5}$$
In general:
$$m_X^{(k)}(t) = k!\, \lambda (\lambda - t)^{-(k+1)}$$
Thus
$$\mu_1 = m_X'(0) = \lambda\, \lambda^{-2} = \frac{1}{\lambda}$$
$$\mu_2 = m_X''(0) = 2\lambda\, \lambda^{-3} = \frac{2}{\lambda^2}$$
$$\mu_k = m_X^{(k)}(0) = k!\, \lambda\, \lambda^{-(k+1)} = \frac{k!}{\lambda^k}$$
Note: the moments for the exponential distribution can be calculated
in an alternative way. This is done by expanding $m_X(t)$ in powers of t
and equating the coefficients of $t^k$ to the coefficients in:
$$m_X(t) = 1 + \mu_1 t + \mu_2 \frac{t^2}{2!} + \mu_3 \frac{t^3}{3!} + \dots + \mu_k \frac{t^k}{k!} + \dots$$
Here
$$m_X(t) = \frac{\lambda}{\lambda - t} = \frac{1}{1 - t/\lambda} = \frac{1}{1 - u} = 1 + u + u^2 + u^3 + \dots \quad (u = t/\lambda)$$
$$= 1 + \frac{t}{\lambda} + \frac{t^2}{\lambda^2} + \frac{t^3}{\lambda^3} + \dots$$
Equating the coefficients of $t^k$ we get:
$$\frac{\mu_k}{k!} = \frac{1}{\lambda^k} \quad \text{or} \quad \mu_k = \frac{k!}{\lambda^k}$$
iv) Standard normal distribution: $m_X(t) = e^{t^2/2}$
We use the expansion of $e^u$:
$$e^u = \sum_{k=0}^{\infty} \frac{u^k}{k!} = 1 + u + \frac{u^2}{2!} + \frac{u^3}{3!} + \dots + \frac{u^k}{k!} + \dots$$
$$m_X(t) = e^{t^2/2} = 1 + \frac{t^2}{2} + \frac{\left( t^2/2 \right)^2}{2!} + \frac{\left( t^2/2 \right)^3}{3!} + \dots + \frac{\left( t^2/2 \right)^k}{k!} + \dots$$
$$= 1 + \frac{1}{2}\, t^2 + \frac{1}{2^2\, 2!}\, t^4 + \frac{1}{2^3\, 3!}\, t^6 + \dots + \frac{1}{2^k\, k!}\, t^{2k} + \dots$$
We now equate the coefficients of $t^k$ in:
$$m_X(t) = 1 + \mu_1 t + \mu_2 \frac{t^2}{2!} + \dots + \mu_k \frac{t^k}{k!} + \dots$$
If k is odd: $\mu_k = 0$.
For even 2k:
$$\frac{\mu_{2k}}{(2k)!} = \frac{1}{2^k\, k!} \quad \text{or} \quad \mu_{2k} = \frac{(2k)!}{2^k\, k!}$$
Thus
$$\mu_1 = 0, \quad \mu_2 = 1, \quad \mu_3 = 0, \quad \mu_4 = \frac{4!}{2^2\, 2!} = 3$$
The Log of Moment Generating Functions
Let $l_X(t) = \ln m_X(t)$ = the log of the MGF. Then
$$l_X(0) = \ln m_X(0) = \ln 1 = 0$$
$$l_X'(t) = \frac{m_X'(t)}{m_X(t)} \quad \Rightarrow \quad l_X'(0) = \frac{m_X'(0)}{m_X(0)} = \frac{\mu_1}{1} = \mu$$
$$l_X''(t) = \frac{m_X''(t)\, m_X(t) - \left[ m_X'(t) \right]^2}{\left[ m_X(t) \right]^2} \quad \Rightarrow \quad l_X''(0) = \frac{m_X''(0)\, m_X(0) - \left[ m_X'(0) \right]^2}{\left[ m_X(0) \right]^2} = \mu_2 - \mu^2 = \sigma^2$$
Thus $l_X(t) = \ln m_X(t)$ is very useful for calculating the mean and
variance of a random variable:
1. $l_X'(0) = \mu$
2. $l_X''(0) = \sigma^2$
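The two properties can be checked by differentiating $l_X(t)$ numerically (an added sketch using the binomial log-MGF derived below; n, p, and the step h are arbitrary choices):

```python
import numpy as np

n, p, h = 10, 0.3, 1e-4
l = lambda t: n * np.log(p * np.exp(t) + (1 - p))   # l_X(t) for Binomial(n, p)
print((l(h) - l(-h)) / (2 * h))                     # ~ l'(0)  = np  = 3.0
print((l(h) - 2 * l(0) + l(-h)) / h**2)             # ~ l''(0) = npq = 2.1
```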
Log of MGF: Examples - Binomial
1. The Binomial distribution (parameters p, n):
$$m_X(t) = \left( e^t p + 1 - p \right)^n = \left( e^t p + q \right)^n, \quad q = 1 - p$$
$$l_X(t) = \ln m_X(t) = n \ln \left( e^t p + q \right)$$
$$l_X'(t) = n\, \frac{e^t p}{e^t p + q} \quad \Rightarrow \quad \mu = l_X'(0) = n\, \frac{p}{p + q} = np$$
$$l_X''(t) = n\, \frac{e^t p \left( e^t p + q \right) - e^t p\, e^t p}{\left( e^t p + q \right)^2} \quad \Rightarrow \quad \sigma^2 = l_X''(0) = n\, \frac{p (p + q) - p^2}{(p + q)^2} = npq$$
Log of MGF: Examples - Poisson
2. The Poisson distribution (parameter $\lambda$):
$$m_X(t) = e^{\lambda (e^t - 1)}$$
$$l_X(t) = \ln m_X(t) = \lambda \left( e^t - 1 \right)$$
$$l_X'(t) = \lambda e^t, \quad l_X''(t) = \lambda e^t$$
$$\mu = l_X'(0) = \lambda \quad \text{and} \quad \sigma^2 = l_X''(0) = \lambda$$
Log of MGF: Examples - Exponential
3. The Exponential distribution (parameter $\lambda$):
$$m_X(t) = \begin{cases} \dfrac{\lambda}{\lambda - t} & t < \lambda \\ \text{undefined} & t \ge \lambda \end{cases}$$
$$l_X(t) = \ln m_X(t) = \ln \lambda - \ln (\lambda - t) \quad \text{if } t < \lambda$$
$$l_X'(t) = \frac{1}{\lambda - t}, \quad l_X''(t) = \frac{1}{(\lambda - t)^2}$$
Thus
$$\mu = l_X'(0) = \frac{1}{\lambda} \quad \text{and} \quad \sigma^2 = l_X''(0) = \frac{1}{\lambda^2}$$
Log of MGF: Examples - Normal
4. The Standard Normal distribution ($\mu$ = 0, $\sigma$ = 1):
$$m_X(t) = e^{t^2/2}, \quad l_X(t) = \ln m_X(t) = \frac{t^2}{2}$$
$$l_X'(t) = t, \quad l_X''(t) = 1$$
Thus
$$\mu = l_X'(0) = 0 \quad \text{and} \quad \sigma^2 = l_X''(0) = 1$$
Log of MGF: Examples - Gamma
5. The Gamma distribution (parameters $\alpha$, $\beta$):
$$m_X(t) = \left( \frac{\beta}{\beta - t} \right)^{\alpha}$$
$$l_X(t) = \ln m_X(t) = \alpha \left[ \ln \beta - \ln (\beta - t) \right]$$
$$l_X'(t) = \frac{\alpha}{\beta - t}, \quad l_X''(t) = \frac{\alpha}{(\beta - t)^2}$$
Hence
$$\mu = l_X'(0) = \frac{\alpha}{\beta} \quad \text{and} \quad \sigma^2 = l_X''(0) = \frac{\alpha}{\beta^2}$$
Log of MGF: Examples - Chi-squared
6. The Chi-square distribution (degrees of freedom $\nu$):
$$m_X(t) = (1 - 2t)^{-\nu/2}$$
$$l_X(t) = \ln m_X(t) = -\frac{\nu}{2} \ln (1 - 2t)$$
$$l_X'(t) = \frac{\nu}{2} \cdot \frac{2}{1 - 2t} = \frac{\nu}{1 - 2t}, \quad l_X''(t) = \frac{2\nu}{(1 - 2t)^2}$$
Hence
$$\mu = l_X'(0) = \nu \quad \text{and} \quad \sigma^2 = l_X''(0) = 2\nu$$
Characteristic functions
Definition: Characteristic Function
Let X denote a random variable. Then, the characteristic function of X,
$\varphi_X(t)$, is defined by:
$$\varphi_X(t) = E\left[ e^{itX} \right]$$
Since $e^{itx} = \cos(xt) + i \sin(xt)$ and $|e^{itx}| \le 1$, $\varphi_X(t)$ is defined for all
t. Thus, the characteristic function always exists, but the MGF need not
exist.
Relation to the MGF: $\varphi_X(t) = m_{iX}(t) = m_X(it)$.
Calculation of moments:
$$\mu_k = \frac{1}{i^k} \left. \frac{d^k \varphi_X(t)}{dt^k} \right|_{t=0}$$
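As a closing illustration (added here; the standard normal and t = 1.3 are arbitrary choices), the defining integral of $\varphi_X(t)$ matches the known closed form $e^{-t^2/2}$:

```python
import numpy as np
from scipy.integrate import quad

t = 1.3
pdf = lambda x: np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)   # standard normal pdf
re, _ = quad(lambda x: np.cos(t * x) * pdf(x), -np.inf, np.inf)
im, _ = quad(lambda x: np.sin(t * x) * pdf(x), -np.inf, np.inf)
print(re + 1j * im, np.exp(-t**2 / 2))                   # both ~0.4296
```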