
Topic 6: Convergence and Limit Theorems

Outline:
- Sum of random variables
- Laws of large numbers
- Central limit theorem
- Convergence of sequences of RVs


Sum of random variables

Let X_1, X_2, ..., X_n be a sequence of random variables. Define S_n as

    S_n = X_1 + X_2 + \cdots + X_n

The mean and variance of S_n become

    E[S_n] = E[X_1] + E[X_2] + \cdots + E[X_n]

    var(S_n) = \sum_{k=1}^{n} var(X_k) + \sum_{j=1}^{n} \sum_{k \ne j} cov(X_j, X_k)
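A minimal numerical check of these two formulas (the distribution and the parameter values below are our own illustrative choices, not from the notes):

    import numpy as np

    rng = np.random.default_rng(0)

    # Correlated (X1, X2, X3) drawn from a multivariate normal, so that the
    # covariance terms in var(S_n) actually contribute.
    mean = np.array([1.0, -2.0, 0.5])
    cov = np.array([[1.0, 0.5, 0.2],
                    [0.5, 2.0, 0.3],
                    [0.2, 0.3, 1.5]])
    X = rng.multivariate_normal(mean, cov, size=200_000)  # shape (trials, 3)

    S = X.sum(axis=1)  # S_3 = X1 + X2 + X3 in each trial

    # E[S_n] = sum of the means; var(S_n) = sum of all entries of the
    # covariance matrix (diagonal = var terms, off-diagonal = cov terms).
    print(S.mean(), mean.sum())  # both ~ -0.5
    print(S.var(), cov.sum())    # both ~ 6.5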

If X_1, X_2, ..., X_n are independent random variables, then

    var(S_n) = \sum_{k=1}^{n} var(X_k)

For independent X_i, the characteristic function can be used to calculate the pdf of S_n:

    \Phi_{S_n}(\omega) = E[e^{j\omega S_n}] = \Phi_{X_1}(\omega) \cdots \Phi_{X_n}(\omega)

    f_{S_n}(x) = \mathcal{F}^{-1}\{\Phi_{X_1}(\omega) \cdots \Phi_{X_n}(\omega)\}
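Equivalently, the pdf of a sum of independent RVs is the convolution of the individual pdfs. A minimal sketch checking this numerically, assuming two iid Exp(1) variables (whose sum has the known Erlang-2 pdf x e^{-x}):

    import numpy as np

    # Two iid Exp(1) pdfs on a grid; the pdf of X1 + X2 is their convolution,
    # which corresponds to multiplying the characteristic functions.
    dx = 0.001
    x = np.arange(0, 20, dx)
    f = np.exp(-x)  # Exp(1) pdf sampled on the grid

    f_sum = np.convolve(f, f)[:len(x)] * dx  # numerical convolution

    f_exact = x * np.exp(-x)  # Erlang-2 pdf: closed form for the sum
    print(np.max(np.abs(f_sum - f_exact)))  # small discretization error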

Sum of a random number of independent RVs

Consider the sum of N i.i.d. RVs X_k with finite mean and variance,

    S_N = \sum_{k=1}^{N} X_k

where N is a random variable independent of the X_k.

Using conditional expectation, the mean and variance of S_N are

    E[S_N] = E[E[S_N | N]] = E[N E[X]] = E[N] E[X]

    var(S_N) = var(N) E[X]^2 + E[N] var(X)

The characteristic function of S_N is

    \Phi_{S_N}(\omega) = E[E[e^{j\omega S_N} | N]]
                       = E[z^N] |_{z = \Phi_X(\omega)}
                       = E[\Phi_X(\omega)^N]
                       = G_N(\Phi_X(\omega))

which is the generating function of N evaluated at z = \Phi_X(\omega).
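A Monte Carlo check of the mean and variance formulas (the choices N ~ Poisson(4) and X_k ~ Uniform(0, 2) are our own illustration):

    import numpy as np

    rng = np.random.default_rng(1)
    trials = 100_000

    # N ~ Poisson(4); X_k ~ Uniform(0, 2), so E[X] = 1 and var(X) = 1/3.
    N = rng.poisson(4.0, trials)
    S = np.array([rng.uniform(0, 2, n).sum() for n in N])

    EX, varX = 1.0, 1.0 / 3.0
    EN, varN = 4.0, 4.0  # Poisson: mean = variance

    print(S.mean(), EN * EX)                  # E[S_N] = E[N] E[X] = 4
    print(S.var(), varN * EX**2 + EN * varX)  # var(S_N) = 4 + 4/3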


Example:
- The number of jobs N submitted to the CPU is a geometric RV with
  parameter p.
- The execution time of each job is an exponential RV with mean \alpha.
- Find the pdf of the total execution time. (A simulation sketch follows.)
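A minimal simulation sketch, assuming N is geometric on {1, 2, ...} and picking p = 0.25, \alpha = 2 as illustrative values. Under these assumptions, evaluating G_N(\Phi_X(\omega)) gives \Phi_{S_N}(\omega) = 1/(1 - j\omega\alpha/p), i.e. an exponential total time with mean \alpha/p, which the simulation can be checked against:

    import numpy as np

    rng = np.random.default_rng(2)
    p, alpha = 0.25, 2.0  # illustrative values

    # N geometric on {1, 2, ...}; each job time exponential with mean alpha.
    N = rng.geometric(p, size=100_000)
    T = np.array([rng.exponential(alpha, n).sum() for n in N])

    # Exponential with mean alpha/p = 8: check the mean and the median.
    print(T.mean(), alpha / p)                    # both ~ 8
    print(np.median(T), (alpha / p) * np.log(2))  # both ~ 5.5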


Laws of large numbers

Let X_1, X_2, ..., X_n be independent, identically distributed (iid) random
variables with mean E[X_j] = \mu (\mu < \infty) and variance \sigma^2 < \infty.

The sample mean of the sequence is defined as

    M_n = \frac{1}{n} \sum_{j=1}^{n} X_j

For large n, M_n can be used to estimate \mu since

    E[M_n] = \frac{1}{n} \sum_{j=1}^{n} E[X_j] = \mu

    var(M_n) = \frac{1}{n^2} var(S_n) = \frac{n\sigma^2}{n^2} = \frac{\sigma^2}{n}

From the Chebyshev inequality,

    P[|M_n - \mu| \ge \epsilon] \le \frac{\sigma^2}{n\epsilon^2}    or    P[|M_n - \mu| < \epsilon] \ge 1 - \frac{\sigma^2}{n\epsilon^2}

As n \to \infty, we have var(M_n) \to 0 and \sigma^2/(n\epsilon^2) \to 0.



The Weak Law of Large Numbers (WLLN):

    \lim_{n \to \infty} P[|M_n - \mu| < \epsilon] = 1    for any \epsilon > 0

The WLLN implies that for a large (fixed) value of n, the sample mean
will be within \epsilon of the true mean with high probability.

The Strong Law of Large Numbers (SLLN):

    P[\lim_{n \to \infty} M_n = \mu] = 1

The SLLN implies that, with probability 1, every sequence of sample
means will approach and stay close to the true mean.
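A minimal sketch of the running sample mean settling near \mu (the choice of Exp(1) samples, so \mu = 1, is our own illustration):

    import numpy as np

    rng = np.random.default_rng(3)

    # One long iid Exp(1) sequence (mu = 1) and its running sample means M_n.
    x = rng.exponential(1.0, 100_000)
    M = np.cumsum(x) / np.arange(1, len(x) + 1)

    for n in (10, 100, 1_000, 10_000, 100_000):
        print(n, M[n - 1])  # drifts toward mu = 1 as n grows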
Example: Given an event A, we can estimate p = P[A] by
- performing a sequence of N Bernoulli trials
- observing the relative frequency of A occurring, f_A(N)

How large should N be to have

    P[|f_A(N) - p| \le 0.01] \ge 0.95,

i.e., a 0.95 chance that the relative frequency is within 0.01 of P[A]?
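One standard way to answer this (a worked step we add here, using the Chebyshev bound above together with var(f_A(N)) = p(1-p)/N \le 1/(4N)):

    P[|f_A(N) - p| \ge 0.01] \le \frac{p(1-p)}{N(0.01)^2} \le \frac{1}{4N(0.01)^2} \le 0.05
    \quad \Rightarrow \quad N \ge \frac{1}{4(0.01)^2(0.05)} = 50{,}000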

The Central Limit Theorem

Let X_1, X_2, ..., X_n be i.i.d. RVs with finite mean and variance:

    E[X_i] = \mu < \infty,    var(X_i) = \sigma^2 < \infty

Let S_n = \sum_{i=1}^{n} X_i, and define Z_n as

    Z_n = \frac{S_n - n\mu}{\sigma\sqrt{n}}

Z_n has zero mean and unit variance.

As n \to \infty, Z_n \to \mathcal{N}(0, 1). That is,

    \lim_{n \to \infty} P[Z_n \le z] = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{z} e^{-x^2/2} \, dx

Convergence applies to any distribution of X with finite mean and finite
variance. This is the Central Limit Theorem (CLT) and is widely used in EE.
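A minimal sketch of the CLT in action, assuming Uniform(0, 1) summands (\mu = 1/2, \sigma^2 = 1/12; choices ours):

    import numpy as np
    from math import erf, sqrt

    rng = np.random.default_rng(4)

    # Z_n for sums of n iid Uniform(0, 1) RVs.
    n, trials = 30, 200_000
    X = rng.uniform(0, 1, (trials, n))
    Z = (X.sum(axis=1) - n * 0.5) / np.sqrt(n / 12.0)

    # Compare empirical probabilities with the standard normal cdf.
    Phi = lambda z: 0.5 * (1 + erf(z / sqrt(2)))
    for z in (-2.0, -1.0, 0.0, 1.0, 2.0):
        print(z, (Z <= z).mean(), Phi(z))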

Examples:
1. Suppose that cell-phone call durations are iid RVs with \mu = 8 and
   \sigma = 2 (minutes). (A numerical sketch follows below.)
   - Estimate the probability of 100 calls taking over 840 minutes.
   - After how many calls can we be 90% sure that the total time used
     is more than 1000 minutes?
2. Does the CLT apply to Cauchy random variables?
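A sketch of how Example 1 could be worked numerically (assuming SciPy is available; both parts use the CLT approximation S_n \approx \mathcal{N}(n\mu, n\sigma^2)):

    import numpy as np
    from scipy.stats import norm  # assumed available

    mu, sigma = 8.0, 2.0

    # (a) P(S_100 > 840): S_100 is approx. normal, mean 800, std 2*sqrt(100) = 20,
    #     so P(S_100 > 840) ~ P(Z > 2).
    print(1 - norm.cdf((840 - 100 * mu) / (sigma * np.sqrt(100))))  # ~ 0.023

    # (b) Smallest n with P(S_n > 1000) >= 0.9, i.e.
    #     (1000 - n*mu) / (sigma*sqrt(n)) <= norm.ppf(0.1); scan n directly.
    for n in range(100, 200):
        if (1000 - n * mu) / (sigma * np.sqrt(n)) <= norm.ppf(0.1):
            print(n)  # first n that works (~129)
            break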


Gaussian approximation for binomial probabilities

A Binomial random variable is a sum of iid Bernoulli RVs:

    X = \sum_{i=1}^{n} Z_i,    Z_i \sim Bern(p) i.i.d.,

then X \sim Binomial(n, p).

By the CLT, the Binomial cdf F_X(x) approaches a Gaussian cdf, and correspondingly

    P[X = k] \approx \frac{1}{\sqrt{2\pi np(1-p)}} \exp\left(-\frac{(k - np)^2}{2np(1-p)}\right)

The approximation is best for k near np.

Example:
- A digital communication link has bit-error probability p.
- Estimate the probability that an n-bit received message has at least
  k bits in error. (A numerical sketch follows.)
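A sketch of the bit-error example, with illustrative values of n, p, k (our own) and the standard continuity correction added for a better tail estimate:

    import numpy as np
    from scipy.stats import binom, norm  # assumed available

    n, p, k = 1000, 0.01, 15  # illustrative values

    mean, std = n * p, np.sqrt(n * p * (1 - p))

    exact = 1 - binom.cdf(k - 1, n, p)             # P[X >= k], exact binomial tail
    approx = 1 - norm.cdf((k - 0.5 - mean) / std)  # Gaussian tail, continuity-corrected

    print(exact, approx)  # the two should be close for k near np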


Convergence of sequences of RVs

Given a sequence of RVs {X_n(\zeta)}:
- {X_n(\zeta)} can be viewed as a sequence of functions of the outcome \zeta.
- For each \zeta, {X_n(\zeta)} is a sequence of numbers {x_1, x_2, x_3, ...}.

A sequence {x_n} is said to converge to x if for any \epsilon > 0, there
exists N such that

    |x_n - x| < \epsilon    for all n > N.

We write x_n \to x.

In what sense does {X_n(\zeta)} converge to a random variable X(\zeta) as
n \to \infty?

Types of convergence for a sequence of RVs:

Sure convergence: {X_n(\zeta)} converges surely to X(\zeta) if

    X_n(\zeta) \to X(\zeta) as n \to \infty,    for all \zeta \in S

That is, for every \zeta in the sample space S, the sequence of numbers
{X_n(\zeta)} converges to X(\zeta) as n \to \infty.



Almost-sure convergence: {X_n(\zeta)} converges almost surely to X(\zeta) if

    P[\zeta : X_n(\zeta) \to X(\zeta) as n \to \infty] = 1

X_n(\zeta) converges to X(\zeta) as n \to \infty for all \zeta in S, except possibly
on a set of zero probability. The strong LLN is an example of almost-sure
convergence.

Mean-square convergence: {X_n(\zeta)} converges in the mean-square sense
to X(\zeta) if

    E[(X_n(\zeta) - X(\zeta))^2] \to 0 as n \to \infty

Here the convergence is of the deterministic sequence of numbers
E[(X_n - X)^2].

Cauchy criterion: {X_n(\zeta)} converges in the mean-square sense if and
only if

    E[(X_n(\zeta) - X_m(\zeta))^2] \to 0 as n \to \infty and m \to \infty

Convergence in probability: {X_n(\zeta)} converges in probability to X(\zeta)
if, for any \epsilon > 0,

    P[|X_n(\zeta) - X(\zeta)| > \epsilon] \to 0 as n \to \infty


For each \zeta \in S, the sequence X_n(\zeta) is not required to stay within \epsilon
of X(\zeta) as n \to \infty, but only to be within \epsilon with high probability.
The WLLN is an example of convergence in probability.

Convergence in distribution: {X_n} with cdfs {F_n(x)} converges in
distribution to X with cdf F(x) if

    F_n(x) \to F(x) as n \to \infty

for all x at which F(x) is continuous. The CLT is an example of
convergence in distribution.
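A small simulation sketch of the first two modes (the construction X_n = X + V_n with V_n ~ N(0, 1/n) is our own illustration): both the mean-square error and the probability of an \epsilon-deviation shrink as n grows.

    import numpy as np

    rng = np.random.default_rng(5)
    trials, eps = 200_000, 0.1

    X = rng.normal(0, 1, trials)  # the limit RV

    for n in (10, 100, 1_000, 10_000):
        Xn = X + rng.normal(0, 1 / np.sqrt(n), trials)  # X_n = X + V_n, V_n ~ N(0, 1/n)
        mse = np.mean((Xn - X) ** 2)           # -> 0: mean-square convergence
        p_dev = np.mean(np.abs(Xn - X) > eps)  # -> 0: convergence in probability
        print(n, mse, p_dev)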
Relationship among different convergences:

    Sure convergence ==> Almost-sure convergence ==> Convergence in probability ==> Convergence in distribution

    Mean-square convergence ==> Convergence in probability

MS convergence does not imply a.s. convergence and vice versa.

