
4. Markov Chains (9/23/12, cf. Ross)

1. Introduction
2. Chapman-Kolmogorov Equations
3. Types of States
4. Limiting Probabilities
5. Gambler’s Ruin
6. First Passage Times
7. Branching Processes
8. Time-Reversibility
1
4. Markov Chains

4.1. Introduction

Definition: A stochastic process (SP) {X(t) : t ∈ T }


is a collection of RV’s. Each X(t) is a RV; t is usually
regarded as “time.”

Example: X(t) = the number of customers in line at


the post office at time t.

Example: X(t) = the price of IBM stock at time t.


2
4. Markov Chains

T is the index set of the process. If T is countable,


then {X(t) : t ∈ T } is a discrete-time SP. If T is some
continuum, then {X(t) : t ∈ T } is a continuous-time
SP.

Example: {Xn : n = 0, 1, 2, . . .} (index set of non-


negative integers)

Example: {X(t) : t ≥ 0} (index set is ℝ+)

3
4. Markov Chains

The state space of the SP is the set of all possible


values that the RV’s X(t) can take.

Example: If Xn = j, then the process is in state j at


time n.

Any realization of {X(t)} is a sample path.

4
4. Markov Chains

Definition: A Markov chain (MC) is a SP such that


whenever the process is in state i, there is a fixed
transition probability Pij that its next state will be j.

Denote the “current” state (at time n) by Xn = i.

Let the event A = {X0 = i0, X1 = i1, . . . Xn−1 = in−1}


be the previous history of the MC (before time n).

5
4. Markov Chains

{Xn} has the Markov property if it forgets about its


past, i.e.,

Pr(Xn+1 = j|A ∩ Xn = i) = Pr(Xn+1 = j|Xn = i).

{Xn} is time homogeneous if

Pr(Xn+1 = j|Xn = i) = Pr(X1 = j|X0 = i) = Pij ,

i.e., if the transition probabilities are independent of


n.
6
4. Markov Chains

Recap: A Markov chain is a SP such that

Pr(Xn+1 = j|A ∩ Xn = i) = Pij ,

i.e., the next state depends only on the current state


(and is indep of the time).

7
4. Markov Chains

Since Pij is a probability, 0 ≤ Pij ≤ 1 for all i, j.

Since the process has to go from i to some state, we must
have Σ_{j=0}^∞ Pij = 1 for all i. Note that it may be
possible to go from i to i (i.e., “stay” at i).

Definition: The one-step transition matrix is

        [ P00  P01  P02  ··· ]
    P = [ P10  P11  P12  ··· ]
        [  ..   ..   ..      ]

8
4. Markov Chains

Example: A frog lives in a pond with three lily pads


(1,2,3). He sits on one of the pads and periodically
rolls a die. If he rolls a 1, he jumps to the lower
numbered of the two unoccupied pads. Otherwise,
he jumps to the higher numbered pad. Let X0 be the
initial pad and let Xn be his location just after the nth
jump. This is a MC since his position only depends
on the current position, and the Pij ’s are independent
of n.
 
        [  0   1/6  5/6 ]
    P = [ 1/6   0   5/6 ].   ♦
        [ 1/6  5/6   0  ]
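
Below is a minimal Python sketch (not part of the original slides) that builds this transition matrix and simulates one sample path of the frog's chain; the helper name simulate_mc is just illustrative.

    import numpy as np

    # One-step transition matrix for the frog example (states 0, 1, 2 <-> pads 1, 2, 3)
    P = np.array([[0,   1/6, 5/6],
                  [1/6, 0,   5/6],
                  [1/6, 5/6, 0  ]])

    def simulate_mc(P, x0, n_steps, rng=np.random.default_rng(0)):
        """Simulate a sample path X0, X1, ..., Xn of a MC with transition matrix P."""
        path = [x0]
        for _ in range(n_steps):
            path.append(rng.choice(len(P), p=P[path[-1]]))
        return path

    print(P.sum(axis=1))          # each row sums to 1
    print(simulate_mc(P, 0, 10))  # one realization (sample path) starting on pad 1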
9
4. Markov Chains

Example: Let Xi denote the weather (rain or sun) on


day i. We’ll think of Xi−1 as yesterday, Xi as today,
and Xi+1 as tomorrow. Suppose that

Pr(Xi+1 = R | Xi−1 = R, Xi = R) = 0.7
Pr(Xi+1 = R | Xi−1 = S, Xi = R) = 0.5
Pr(Xi+1 = R | Xi−1 = R, Xi = S) = 0.4
Pr(Xi+1 = R | Xi−1 = S, Xi = S) = 0.2

10
4. Markov Chains

X0, X1, . . . isn’t quite a MC, since the probability that


it’ll rain tomorrow depends on Xi and Xi−1.

We’ll transform the process into a MC by defining the


following states in terms of today and yesterday.

0: Xi−1 = R, Xi = R
1: Xi−1 = S, Xi = R
2: Xi−1 = R, Xi = S
3: Xi−1 = S, Xi = S

11
4. Markov Chains

Thus, we have, e.g.,

Pr(Xi+1 = R | Xi−1 = R, Xi = R) = P00 = 0.7
Pr(Xi+1 = S | Xi−1 = R, Xi = R) = P02 = 0.3

Using similar reasoning, we get

        [ 0.7   0   0.3   0  ]
    P = [ 0.5   0   0.5   0  ].
        [  0   0.4   0   0.6 ]
        [  0   0.2   0   0.8 ]

12
4. Markov Chains

Example: A MC whose state space is given by the


integers is called a random walk if Pi,i+1 = p and
Pi,i−1 = 1 − p.
        [ ···  1−p   0    p    0    0   ··· ]
    P = [ ···   0   1−p   0    p    0   ··· ].
        [ ···   0    0   1−p   0    p   ··· ]
        [ ···   0    0    0   1−p   0   ··· ]

13
4. Markov Chains

Example (Gambler’s Ruin): Every time a gambler


plays a game, he wins $1 w.p. p, and he loses $1
w.p. 1 − p. He stops playing as soon as his fortune is
either $0 or $N. The gambler’s fortune is a MC with
the following Pij ’s:

Pi,i+1 = p, i = 1, 2, . . . , N − 1
Pi,i−1 = 1 − p, i = 1, 2, . . . , N − 1
P0,0 = PN,N = 1

0 and N are absorbing states — once the process


enters one of these states, it can’t leave. ♦
14
4. Markov Chains

Example (Ehrenfest Model): A random walk on a fi-


nite set of states with “reflecting” boundaries. Set of
states is {0, 1, 2, . . . , a}.

          (a − i)/a   if j = i + 1
    Pij =    i/a      if j = i − 1
              0       otherwise

Idea: Suppose A has i marbles, B has a − i. Select a


marble at random, and put it in the other container.
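
Here is a small Python sketch (not from the slides) that builds the Ehrenfest transition matrix on the states 0, 1, . . . , a described above; the function name ehrenfest_matrix is illustrative.

    import numpy as np

    def ehrenfest_matrix(a):
        """Ehrenfest urn chain on states 0, 1, ..., a, where state i is the
        number of marbles currently in container A."""
        P = np.zeros((a + 1, a + 1))
        for i in range(a + 1):
            if i < a:
                P[i, i + 1] = (a - i) / a   # chosen marble was in B, moves to A
            if i > 0:
                P[i, i - 1] = i / a         # chosen marble was in A, moves to B
        return P

    print(ehrenfest_matrix(3))   # rows sum to 1; the boundaries 0 and a "reflect"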

15
4. Markov Chains

4.2 Chapman-Kolmogorov Equations

Definition: The n-step transition probability that a


process currently in state i will be in state j after n
additional transitions is
    Pij^(n) ≡ Pr(Xn = j|X0 = i),   n, i, j ≥ 0.

Note that Pij^(1) = Pij , and Pij^(0) = 1 if i = j, and 0 otherwise.

16
4. Markov Chains

Theorem (C-K Equations):



    Pij^(n+m) = Σ_{k=0}^∞ Pik^(n) Pkj^(m).

Think of going from i to j in n + m steps with an


intermediate stop in state k after n steps; then sum
over all possible k values.

17
4. Markov Chains

Proof: By definition,
    Pij^(n+m)
    = Pr(Xn+m = j|X0 = i)
    = Σ_{k=0}^∞ Pr(Xn+m = j ∩ Xn = k|X0 = i)   (total prob)
    = Σ_{k=0}^∞ Pr(Xn+m = j|X0 = i ∩ Xn = k) Pr(Xn = k|X0 = i)
      (since Pr(A ∩ C|B) = Pr(A|B ∩ C)Pr(C|B))
    = Σ_{k=0}^∞ Pr(Xn+m = j|Xn = k) Pr(Xn = k|X0 = i)
(Markov property). ♦
18
4. Markov Chains

Definition: The n-step transition matrix is


 
            [ P00^(n)  P01^(n)  P02^(n)  ··· ]
    P(n)  = [ P10^(n)  P11^(n)  P12^(n)  ··· ]
            [   ..       ..       ..         ]

The C-K equations imply P(n+m) = P(n)P(m).

In particular, P(2) = P(1)P(1) = PP = P^2.

By induction, P(n) = P^n.


19
4. Markov Chains

Example: Let Xi = 0 if it rains on day i; otherwise,


Xi = 1. Suppose P00 = 0.7 and P10 = 0.4. Then
 
    P = [ 0.7  0.3 ]
        [ 0.4  0.6 ].

Suppose it rains on Monday. Then the prob that it
rains on Friday is P00^(4). Note that

    P(4) = P^4 = [ 0.7  0.3 ]^4  =  [ 0.5749  0.4251 ]
                 [ 0.4  0.6 ]       [ 0.5668  0.4332 ],

so that P00^(4) = 0.5749. ♦
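
A quick numerical check of this computation (a sketch, not part of the slides): numpy's matrix_power gives P(n) = P^n directly.

    import numpy as np

    P = np.array([[0.7, 0.3],
                  [0.4, 0.6]])

    P4 = np.linalg.matrix_power(P, 4)   # 4-step transition matrix P(4) = P^4
    print(P4)         # [[0.5749 0.4251], [0.5668 0.4332]]
    print(P4[0, 0])   # P00^(4) = 0.5749 = Pr(rain Friday | rain Monday)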
20
4. Markov Chains

Unconditional Probabilities

Suppose we know the “initial” probabilities,

αi ≡ Pr(X0 = i), i = 0, 1, . . . .

(Note that Σ_i αi = 1.) Then by total probability,

    Pr(Xn = j) = Σ_{i=0}^∞ Pr(Xn = j ∩ X0 = i)
               = Σ_{i=0}^∞ Pr(Xn = j|X0 = i) Pr(X0 = i)
               = Σ_{i=0}^∞ Pij^(n) αi.
21
4. Markov Chains

Example: In the above example, suppose α0 = 0.4


and α1 = 0.6. Find the prob that it will not rain on
the 4th day after we start keeping records (assuming
nothing about the first day).

    Pr(X4 = 1) = Σ_{i=0}^∞ Pi1^(4) αi
               = P01^(4) α0 + P11^(4) α1
               = (0.4251)(0.4) + (0.4332)(0.6)
               = 0.4300.  ♦
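
The same calculation in numpy (again a sketch, reusing the P from the previous example): the row vector α times P^n is the unconditional distribution of Xn.

    import numpy as np

    P = np.array([[0.7, 0.3],
                  [0.4, 0.6]])
    alpha = np.array([0.4, 0.6])   # initial distribution (Pr(X0 = 0), Pr(X0 = 1))

    dist_X4 = alpha @ np.linalg.matrix_power(P, 4)   # distribution of X4
    print(dist_X4[1])              # Pr(X4 = 1) ≈ 0.4300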

22
4. Markov Chains

4.3 Types of States

Definition: If Pij^(n) > 0 for some n ≥ 0, state j is
accessible from i.

Notation: i → j.

Definition: If i → j and j → i, then i and j communi-


cate.

Notation: i ↔ j.
23
4. Markov Chains

Theorem: Communication is an equivalence relation:


(i) i ↔ i for all i (reflexive).
(ii) i ↔ j implies j ↔ i (symmetric).
(iii) i ↔ j and j ↔ k imply i ↔ k (transitive).

Proof: (i) and (ii) are trivial, so we’ll only do (iii). To


do so, suppose i ↔ j and j ↔ k. Then there are n, m
such that Pij^(n) > 0 and Pjk^(m) > 0. So by C-K,

    Pik^(n+m) = Σ_{r=0}^∞ Pir^(n) Prk^(m) ≥ Pij^(n) Pjk^(m) > 0.
Thus, i → k. Similarly, k → i. ♦
24
4. Markov Chains

Definition: An equivalence class consists of all states


that communicate with each other.

Remark: Easy to see that two equiv classes are dis-


joint.

Example: The following P has equiv classes {0, 1} and


{2, 3}.
 

        [ 1/2  1/2   0    0  ]
    P = [ 1/2  1/2   0    0  ].
        [  0    0   3/4  1/4 ]
        [  0    0   1/4  3/4 ]
25
4. Markov Chains

Example: P again has equiv classes {0, 1} and {2, 3}


— note that 1 isn’t accessible from 2.
 

        [ 1/2  1/2   0    0  ]
    P = [ 1/2  1/4  1/4   0  ].
        [  0    0   3/4  1/4 ]
        [  0    0   1/4  3/4 ]

Definition: A MC is irreducible if there is only one


equiv class (i.e., if all states communicate).

Example: The previous two examples are not irre-


ducible. ♦
26
4. Markov Chains

Example: The following P is irreducible since all states


communicate (“loop” technique: 0 → 1 → 0).
 
    P = [ 1/2  1/2 ].   ♦
        [ 1/4  3/4 ]

Example: P is irreducible since 0 → 2 → 1 → 0.

        [ 1/4   0   3/4 ]
    P = [  1    0    0  ].   ♦
        [  0   1/2  1/2 ]

27
4. Markov Chains

Definition: The probability that the MC eventually


returns to state i is

fi ≡ Pr(Xn = i for some n ≥ 1|X0 = i).

Example: The following MC has equiv classes {0, 1},


{2}, and {3}, the latter of which is absorbing.
 

        [ 1/2  1/2   0    0  ]
    P = [ 1/2  1/2   0    0  ].
        [ 1/4  1/4  1/4  1/4 ]
        [  0    0    0    1  ]

We have f0 = f1 = 1, f2 = 1/4, f3 = 1. ♦
28
4. Markov Chains

Remark: The fi’s are usually hard to compute.

Definition: If fi = 1, state i is recurrent. If fi < 1,


state i is transient.

Theorem: Suppose X0 = i. Let N denote the number


of times that the MC is in state i (before leaving i
forever). Note that N ≥ 1 since X0 = i. Then i is
recurrent iff E[N ] = ∞ (and i is transient iff E[N ] <
∞).
29
4. Markov Chains

Proof: If i is recurrent, it’s easy to see that the MC


returns to i an infinite number of times; so E[N ] = ∞.
Otherwise, suppose i is transient. Then

    Pr(N = 1) = 1 − fi              (never returns)
    Pr(N = 2) = fi(1 − fi)          (returns exactly once)
      ..
    Pr(N = k) = fi^(k−1)(1 − fi)    (returns k − 1 times)

So N ∼ Geom(1 − fi). Finally, since fi < 1, we have
E[N] = 1/(1 − fi) < ∞.   ♦
30
4. Markov Chains

Theorem: i is recurrent iff Σ_{n=1}^∞ Pii^(n) = ∞. (So i is
transient iff Σ_{n=1}^∞ Pii^(n) < ∞.)

Proof: Define the indicator

    An ≡ 1 if Xn = i, and 0 if Xn ≠ i.

Note that N ≡ Σ_{n=1}^∞ An is the number of returns to i.

Then by the trick that allows us to treat the ex-


pected value of an indicator function as a probability,
we have. . .
31
4. Markov Chains

    Σ_{n=1}^∞ Pii^(n) = Σ_{n=1}^∞ Pr(Xn = i|X0 = i)
                      = Σ_{n=1}^∞ E[An|X0 = i]   (trick)
                      = E[ Σ_{n=1}^∞ An | X0 = i ]
                      = E[N|X0 = i]   (N = number of returns)
                      = ∞
⇔ i is recur (by previous theorem). ♦

32
4. Markov Chains

Corollary 1: If i is recur and i ↔ j, then j is recur.

Proof: See Ross. ♦

Corollary 2: In a MC with a finite number of states,


not all of the states can be transient.

Proof: Suppose not, i.e., suppose all states are transient. Then
each state is visited only a finite number of times, so after some
finite time the MC would have no state left to go to. This
is a contradiction. ♦
33
4. Markov Chains

Corollary 3: If one state in an equiv class is transient,


then all states are trans.

Proof: Suppose not, i.e., suppose there’s a recur


state. Since all states in the equiv class communi-
cate, Corollary 1 implies all states are recur. This is
a contradiction. ♦

Corollary 4: All states in a finite irreducible MC are


recurrent.
34
4. Markov Chains

Proof: Suppose not, i.e., suppose there’s a trans


state. Then Corollary 3 implies all states are trans.
But this contradicts Corollary 2. ♦

Definition: By Corollary 1, all states in an equiv class


are recur if one state in that class is recur. Such a
class is a recurrent equiv class.

By Corollary 3, all states in an equiv class are trans


if one state in that class is trans. Such a class is a
transient equiv class.
35
4. Markov Chains

Example: Consider the prob transition matrix


 
    P = [ 1/2  1/2 ].
        [ 1/4  3/4 ]

Clearly, all states communicate. So this is a finite,


irreducible MC. So Corollary 4 implies all states are
recurrent. ♦

36
4. Markov Chains

Example: Consider
 

        [ 1/4   0    0   3/4 ]
    P = [  1    0    0    0  ].
        [  0    1    0    0  ]
        [  0    0    1    0  ]
Loop: 0 → 3 → 2 → 1 → 0. Thus, all states commu-
nicate; so they’re all recurrent. ♦

37
4. Markov Chains

Example: Consider
 

        [ 1/4   0   3/4   0    0  ]
        [  0   1/2   0   1/2   0  ]
    P = [ 1/2   0   1/2   0    0  ].
        [  0   1/2   0   1/2   0  ]
        [ 1/5  1/5   0    0   3/5 ]

The equiv classes are {0, 2} (recur), {1, 3} (recur),


and {4} (trans). ♦

38
4. Markov Chains

Example: Random Walk: A drunk walks on the inte-


gers 0, ±1, ±2, . . . with transition probabilities

Pi,i+1 = p
Pi,i−1 = q = 1 − p

(i.e., he steps to the right w.p. p and to the left w.p.


1 − p).

39
4. Markov Chains

The prob transition matrix is


        [ ···   q    0    p    0    0   ··· ]
    P = [ ···   0    q    0    p    0   ··· ].
        [ ···   0    0    q    0    p   ··· ]
        [ ···   0    0    0    q    0   ··· ]

Are the states recurrent or transient?

Clearly, all states communicate. So Corollary 1 implies
that if one of the states is recur, then they all are.
Otherwise, all states will be transient.
40
4. Markov Chains

Consider a typical state 0. If 0 is recurrent [transient],


then all states will be recurrent [transient]. We’ll find
out which is the case by calculating Σ_{n=1}^∞ P00^(n).

Suppose the drunk starts at 0. Since it’s impossible


for him to return to 0 in an odd number of steps, we
see that P00^(2n+1) = 0 for all n ≥ 0.

41
4. Markov Chains

So the only chance he has of returning to 0 is if he’s


taken an even number of steps, say 2n. Of these
steps, n must be taken to the left, and n to the right.
So, thinking binomial, we have
 
(2n) 2n n n (2n)! n n
P00 =  p q = p q , n ≥ 1.
n n!n!

Aside: For large n, Stirling’s approximation says that


√ n+ 1
n! ≈ 2π n 2 e−n.

42
4. Markov Chains

After the smoke clears,

    P00^(2n) ≈ [4p(1 − p)]^n / √(πn),

so that

    Σ_{n=1}^∞ P00^(n) = Σ_{n=1}^∞ P00^(2n)
                      = Σ_{n=1}^∞ [4p(1 − p)]^n / √(πn)
                      = ∞ if p = 1/2, and < ∞ if p ≠ 1/2.

So the MC is recur if p = 1/2 and trans otherwise.
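
A small numerical illustration of this dichotomy (not from the slides): partial sums of the approximating terms [4p(1 − p)]^n/√(πn) grow without bound when p = 1/2 but level off when p ≠ 1/2.

    import numpy as np

    def partial_sum(p, n_terms):
        """Partial sum of [4p(1-p)]^n / sqrt(pi*n) for n = 1, ..., n_terms."""
        n = np.arange(1, n_terms + 1)
        return np.sum((4 * p * (1 - p)) ** n / np.sqrt(np.pi * n))

    for n_terms in (10**2, 10**4, 10**6):
        print(n_terms, partial_sum(0.5, n_terms), partial_sum(0.6, n_terms))
    # the p = 0.5 column keeps growing (≈ 2*sqrt(n/pi)); the p = 0.6 column converges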



43
4. Markov Chains

Definition: If p = 1/2, the random walk is symmetric.

Remark: A 2-dimensional r.w. with probability 1/4 of


going each way yields a recurrent MC.

A 3-dimensional r.w. with probability 1/6 of going


each way (N, S, E, W, up, down) yields a transient
MC.

44
4. Markov Chains

4.4 Limiting Probabilities

Example: Note that the following matrices appear to


be converging. . . .
   
    P    = [ 0.7  0.3 ],   P(2) = [ 0.61  0.39 ],
           [ 0.4  0.6 ]           [ 0.52  0.48 ]

    P(4) = [ 0.575  0.425 ],   P(8) = [ 0.572  0.428 ], ...
           [ 0.567  0.433 ]           [ 0.570  0.430 ]

45
4. Markov Chains

Definition: Suppose that Pii^(n) = 0 whenever n is not
divisible by d, and suppose that d is the largest integer
with this property. Then state i has period d. Think
of d as the greatest common divisor of all n values for
which Pii^(n) > 0.

Example: All states have period 3.


 
        [ 0  1  0 ]
    P = [ 0  0  1 ].   ♦
        [ 1  0  0 ]

46
4. Markov Chains

Definition: A state with period 1 is aperiodic.

Example:
 

        [  0    1    0    0  ]
    P = [  1    0    0    0  ].
        [ 1/4  1/4  1/4  1/4 ]
        [  0    0   1/2  1/2 ]
Here, states 0 and 1 have period 2, while states 2 and
3 are aperiodic. ♦

47
4. Markov Chains

Definition: Suppose state i is recurrent and X0 = i.


If the expected time until the process returns to i is
finite, then i is positive recurrent.

Remark: It turns out that. . .


(1) In a finite MC, all recur states are positive recur.
(2) In an ∞-state MC, there may be some recur states
that are not positive recur. Such states are null recur.

Definition: Pos recur, aperiodic states are ergodic.


48
4. Markov Chains

Theorem: For an irreducible, ergodic MC,


(1) πj ≡ lim_{n→∞} Pij^(n) exists and is independent of i.
(The πj ’s are called limiting probabilities.)
(2) The πj ’s are the unique, nonnegative solution of

    πj = Σ_{i=0}^∞ πi Pij ,   j ≥ 0,
    1  = Σ_{j=0}^∞ πj .


In vector notation, this can be written as π = πP.

Heuristic “proof”: see Ross. ♦


49
4. Markov Chains

Remarks: (1) πj is also the long-run proportion of


time that the MC will be in state j. The πj ’s are often
called stationary probs — since if Pr(X0 = j) = πj ,
then Pr(Xn = j) = πj for all n.

(2) In the irred, pos recur, periodic case, πj can only


be interpreted as the long-run proportion of time in j.

(3) Let mjj ≡ expected number of transitions needed


to go from j to j. Since, on average, the MC spends
1 time unit in state j for every mjj time units, we have
mjj = 1/πj .
50
4. Markov Chains

Example: Find the limiting probabilities of


 
        [ 0.5  0.4  0.1 ]
    P = [ 0.3  0.4  0.3 ].
        [ 0.2  0.3  0.5 ]

Solve πj = Σ_{i=0}^∞ πi Pij (π = πP), i.e.,

    π0 = π0 P00 + π1 P10 + π2 P20 = 0.5π0 + 0.3π1 + 0.2π2,
    π1 = π0 P01 + π1 P11 + π2 P21 = 0.4π0 + 0.4π1 + 0.3π2,
    π2 = π0 P02 + π1 P12 + π2 P22 = 0.1π0 + 0.3π1 + 0.5π2,

and π0 + π1 + π2 = 1. Get π = {21/62, 23/62, 18/62}. ♦
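
The same answer can be obtained numerically (a sketch, not from the slides): replace one of the redundant balance equations with the normalization Σ πj = 1 and solve the resulting linear system.

    import numpy as np

    P = np.array([[0.5, 0.4, 0.1],
                  [0.3, 0.4, 0.3],
                  [0.2, 0.3, 0.5]])

    n = P.shape[0]
    # pi = pi P  <=>  (P^T - I) pi^T = 0; drop one balance equation, add sum(pi) = 1.
    A = np.vstack([(P.T - np.eye(n))[:-1], np.ones(n)])
    b = np.zeros(n); b[-1] = 1.0
    pi = np.linalg.solve(A, b)
    print(pi)   # ≈ [21/62, 23/62, 18/62] ≈ [0.3387, 0.3710, 0.2903]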
51
4. Markov Chains

Definition: A transition matrix P is doubly stochastic


if each column (and row) sums to 1.

Theorem: If, in addition to the conditions of the pre-


vious theorem, P is a doubly stochastic n × n matrix,
then πj = 1/n for all j.

Proof: Just plug in πj = 1/n for all j into π = πP


to verify that it works. Since this solution must be
unique, we’re done. ♦
52
4. Markov Chains

Example: Find the limiting probabilities of


 
        [ 0.5  0.4  0.1 ]
    P = [ 0.3  0.3  0.4 ].
        [ 0.2  0.3  0.5 ]

This is a doubly stochastic matrix, so we immediately


have that π0 = π1 = π2 = 1/3. ♦

53
4. Markov Chains

4.5 Gambler’s Ruin Problem

Each time a gambler plays, he wins $1 w.p. p and loses


$1 w.p. 1 − p = q. Each play is independent. Suppose
he starts with $i. Find the probability that his fortune
will hit $N (i.e., he breaks the bank) before it hits $0
(i.e., he is ruined).

54
4. Markov Chains

Let Xn denote his fortune at time n. Clearly, {Xn} is


a MC.

Note Pi,i+1 = p and Pi,i−1 = q for i = 1, 2, . . . N − 1.

Further, P00 = 1 = PNN .

We have 3 equiv classes: {0} (recur), {1, 2, . . . , N − 1}


(trans), and {N } (recur).

55
4. Markov Chains

By a standard one-step conditioning argument,

Pi ≡ Pr(Eventually hit $N |X0 = i)


= Pr(Event. hit N |X1 = i + 1 and X0 = i)
× Pr(X1 = i + 1|X0 = i)
+ Pr(Event. hit N |X1 = i − 1 and X0 = i)
× Pr(X1 = i − 1|X0 = i)
= Pr(Event. hit N |X1 = i + 1)p
+ Pr(Event. hit N |X1 = i − 1)q
= pPi+1 + qPi−1, i = 1, 2, . . . , N − 1.

56
4. Markov Chains

Since p + q = 1, we have

pPi + qPi = pPi+1 + qPi−1

iff

p(Pi+1 − Pi) = q(Pi − Pi−1)

iff
    Pi+1 − Pi = (q/p)(Pi − Pi−1),   i = 1, 2, . . . , N − 1.

57
4. Markov Chains

Since P0 = 0, we have
    P2 − P1 = (q/p) P1
    P3 − P2 = (q/p)(P2 − P1) = (q/p)^2 P1
      ..
    Pi − Pi−1 = (q/p)(Pi−1 − Pi−2) = (q/p)^(i−1) P1.

Summing up the LHS terms and the RHS terms,

    Σ_{j=2}^i (Pj − Pj−1) = Pi − P1 = Σ_{j=1}^{i−1} (q/p)^j P1.

58
4. Markov Chains

This implies that



    Pi = P1 Σ_{j=0}^{i−1} (q/p)^j
       = [1 − (q/p)^i] / [1 − (q/p)] · P1   if q ≠ p (p ≠ 1/2)
       = i P1                               if q = p (p = 1/2).

In particular, note that

    1 = PN = [1 − (q/p)^N] / [1 − (q/p)] · P1   if p ≠ 1/2
           = N P1                               if p = 1/2.

59
4. Markov Chains

Thus,

    P1 = [1 − (q/p)] / [1 − (q/p)^N]   if p ≠ 1/2
       = 1/N                           if p = 1/2,

so that

    Pi = [1 − (q/p)^i] / [1 − (q/p)^N]   if p ≠ 1/2
       = i/N                             if p = 1/2.   ♦

By the way, as N → ∞,

    Pi → 1 − (q/p)^i   if p > 1/2
       → 0             if p ≤ 1/2.   ♦


60
4. Markov Chains

Example: A guy can somehow win any blackjack hand


w.p. 0.6. If he wins, his fortune increases by $100;
a loss costs him $100. Suppose he starts out with
$500, and that he’ll quit playing as soon as his fortune
hits $0 or $1500. What’s the probability that he’ll
eventually hit $1500? In units of $100, i = 5 and N = 15, so

    P5 = [1 − (0.4/0.6)^5] / [1 − (0.4/0.6)^15] = 0.870. ♦
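
A tiny Python sketch of the hitting probability Pi (not part of the slides; the name win_prob is illustrative), used here to check the 0.870 figure.

    def win_prob(i, N, p):
        """Probability a gambler starting with i units hits N before 0,
        winning each independent play w.p. p."""
        if p == 0.5:
            return i / N
        r = (1 - p) / p                  # r = q/p
        return (1 - r**i) / (1 - r**N)

    print(win_prob(5, 15, 0.6))   # ≈ 0.870 (blackjack example, in units of $100)
    print(win_prob(5, 15, 0.5))   # = 5/15 ≈ 0.333 for a fair game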

61
4. Markov Chains

4.6 First Passage Time from State 0 to State N

Pij^(n) ≡ Pr(Xn = j|X0 = i)

Definition: The probability that the first passage time
from i to j is n is

    fij^(n) ≡ Pr(Xn = j | X0 = i, Xk ≠ j, k = 0, 1, . . . , n − 1).

This is the probability that the MC goes from i to j


in exactly n steps (without passing thru j along the
way).
62
4. Markov Chains

Remarks:

(1) By definition, fij^(1) = Pij^(1) = Pij .

(2) fij^(n) = Pij^(n) − Σ_{k=1}^{n−1} fij^(k) Pjj^(n−k), where

    Pij^(n)   = prob. of going from i to j in n steps
    fij^(k)   = prob. of i to j for the first time in k steps
    Pjj^(n−k) = prob. of j to j in the remaining n − k steps

63
4. Markov Chains

Special Case: Start in state 0 and state N is an ab-


sorbing (“trapping”) state.
    f0N^(1) = P0N^(1) = P0N
    f0N^(2) = P0N^(2) − f0N^(1) PNN^(1) = P0N^(2) − P0N^(1)
    f0N^(3) = P0N^(3) − f0N^(1) − f0N^(2)
            = P0N^(3) − P0N^(1) − (P0N^(2) − P0N^(1)) = P0N^(3) − P0N^(2)
      ..
    f0N^(n) = P0N^(n) − P0N^(n−1)

The f0N^(n)’s can be calculated iteratively starting at f0N^(1).
64
4. Markov Chains

Define T ≡ first passage time from 0 to N


    E(T^k) = Σ_{n=1}^∞ n^k Pr(T = n) = Σ_{n=1}^∞ n^k f0N^(n)
           = Σ_{n=1}^∞ n^k (P0N^(n) − P0N^(n−1))

Usually use a computer to calculate this.

(WARNING! Don’t break this up into 2 separate ∞
summations!) Stop calculating when f0N^(n) ≈ 0.
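
A minimal numpy sketch of this recipe (not from the slides), assuming state N is absorbing: take successive powers of P, difference the (0, N) entries to get f0N^(n), and accumulate n^k f0N^(n) in a single summation until the terms are negligible.

    import numpy as np

    def first_passage_moment(P, N, k=1, tol=1e-12, max_steps=100_000):
        """E[T^k] for the first passage time T from state 0 to absorbing state N,
        using f0N^(n) = P0N^(n) - P0N^(n-1)."""
        Pn = np.eye(len(P))           # P^0
        prev = 0.0                    # P0N^(n-1)
        moment = 0.0
        for n in range(1, max_steps + 1):
            Pn = Pn @ P               # P^n
            f = Pn[0, N] - prev       # f0N^(n)
            moment += n**k * f        # one summation -- do not split it in two
            prev = Pn[0, N]
            if f < tol and prev > 1 - tol:   # essentially absorbed; terms ≈ 0
                break
        return moment

    # Toy example (assumed, not from the slides): state 2 is absorbing.
    P = np.array([[0.5, 0.3, 0.2],
                  [0.2, 0.5, 0.3],
                  [0.0, 0.0, 1.0]])
    print(first_passage_moment(P, 2, k=1))   # E[T] ≈ 4.21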

65
4. Markov Chains

2nd Special Case: 2 absorbing states N, N′

Same procedure as before, but divide each f0N^(n), f0N′^(n)
by the probs. of being trapped. So the probs. of first
passage to N, N′ in n steps are

    f0N^(n) / Σ_{k=1}^∞ f0N^(k)   and   f0N′^(n) / Σ_{k=1}^∞ f0N′^(k).

66
4. Markov Chains

4.7 Branching Processes ← Special class of MC’s

Suppose X0 is the number of individuals in a certain


population. Suppose the probability that any individ-
ual will have exactly j offspring during its lifetime is
Pj , j ≥ 0. (Assume that the number of offspring from
one individual is independent of the number from any
other individual.)

67
4. Markov Chains

X0 ≡ size of the 0th generation

X1 ≡ size of the 1st gener’n = # kids produced by


individuals from 0th gener’n.
..

Xn ≡ size of the nth gener’n = # kids produced by


indiv.’s from (n − 1)st gener’n.

Then {Xn : n ≥ 0} is a MC with the non-negative


integers as its state space. Pij ≡ P (Xn+1 = j|Xn = i).
68
4. Markov Chains

Remarks:
(1) 0 is recurrent since P00 = 1.
(2) If P0 > 0, then all other states are transient.
(Proof: If P0 > 0, then Pi0 = P0^i > 0. If i ≥ 1 were recurrent,
we’d eventually get absorbed at state 0 and never return to i.
Contradiction.)

These two remarks imply that the population either


dies out or its size → ∞.

69
4. Markov Chains

Denote µ ≡ Σ_{j=0}^∞ j Pj , the mean number of offspring
of a particular individual.

Denote σ² ≡ Σ_{j=0}^∞ (j − µ)² Pj , the variance.

Suppose X0 = 1. In order to calculate E[Xn] and
Var(Xn), note that

    Xn = Σ_{i=1}^{X_{n−1}} Zi ,
where Zi is the # of kids from indiv. i of gener’n
(n − 1).
70
4. Markov Chains

Since Xn−1 is indep of the Zi’s,


"Xn−1 #
X
E[Xn] = E Zi
i=1
= E[Xn−1]E[Zi]
= µE[Xn−1].
Since X0 = 1,
E[X1] = µ
E[X2] = µE[X1] = µ2
..

E[Xn] = µn.

71
4. Markov Chains

Similarly,
    Var(Xn) = σ² µ^(n−1) (µ^n − 1)/(µ − 1)   if µ ≠ 1
            = n σ²                           if µ = 1.
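
A short Monte Carlo sketch (not from the slides) that checks E[Xn] = µ^n by simulation, using the offspring distribution P0 = 1/4, P1 = 1/4, P2 = 1/2 from the example at the end of this section.

    import numpy as np

    rng = np.random.default_rng(0)
    offspring_probs = [0.25, 0.25, 0.5]    # P0, P1, P2 (example values)
    mu = sum(j * p for j, p in enumerate(offspring_probs))   # = 1.25

    def generation_sizes(n_gens):
        """One realization X0, X1, ..., Xn of the branching process, X0 = 1."""
        sizes = [1]
        for _ in range(n_gens):
            kids = rng.choice(len(offspring_probs), size=sizes[-1], p=offspring_probs)
            sizes.append(int(kids.sum()))
        return sizes

    n, reps = 8, 20_000
    sample_mean = np.mean([generation_sizes(n)[-1] for _ in range(reps)])
    print(sample_mean, mu**n)   # sample average of Xn should be close to mu^n ≈ 5.96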

72
4. Markov Chains

Denote π0 ≡ lim_{n→∞} Pr(Xn = 0|X0 = 1) = prob that


the population will eventually die out (given X0 = 1).

Fact: If µ < 1, then π0 = 1.


Proof:

    Pr(Xn ≥ 1) = Σ_{j=1}^∞ Pr(Xn = j)
               ≤ Σ_{j=1}^∞ j Pr(Xn = j)
               = E[Xn] = µ^n → 0 as n → ∞.

Fact: If µ = 1, then π0 = 1.
73
4. Markov Chains

What about the case when µ > 1?


Here, it turns out that π0 < 1, i.e., the prob. that the popula-
tion dies out is < 1.

    π0 = Pr(pop’n dies out)
       = Σ_{j=0}^∞ Pr(pop’n dies out | X1 = j) Pr(X1 = j)
       = Σ_{j=0}^∞ π0^j Pj ,

where the π0^j arises because the families started by the j
members of the first generation must all die out (indep’ly).

74
4. Markov Chains

Summary:

    π0 = Σ_{j=0}^∞ π0^j Pj    (∗)
For µ > 1, π0 is the smallest positive number satisfy-
ing (*).

75
4. Markov Chains

Example: Suppose P0 = 1/4, P1 = 1/4, P2 = 1/2.

    µ = Σ_{j=0}^∞ j Pj = 0 · 1/4 + 1 · 1/4 + 2 · 1/2 = 5/4 > 1

Furthermore, (*) implies

    π0 = π0^0 · 1/4 + π0^1 · 1/4 + π0^2 · 1/2 = 1/4 + π0/4 + π0²/2

    ⇔ 2π0² − 3π0 + 1 = 0.

Smallest positive sol’n is π0 = 1/2.
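
A quick numerical check of this example (a sketch, not part of the notes): iterating the map π ↦ Σ_j π^j Pj starting from 0 converges to the smallest positive fixed point, i.e., to π0.

    import numpy as np

    def extinction_prob(offspring_probs, n_iter=1000):
        """Smallest positive solution of pi = sum_j pi^j P_j, found by
        fixed-point iteration starting from 0."""
        P = np.asarray(offspring_probs, dtype=float)
        pi = 0.0
        for _ in range(n_iter):
            pi = sum(P[j] * pi**j for j in range(len(P)))
        return pi

    print(extinction_prob([0.25, 0.25, 0.5]))   # -> 0.5, as in the example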

76
