
Continuous Time Markov Chains

Stochastic Processes - Lecture Notes

Fatih Cavdur
to accompany
Introduction to Probability Models
by Sheldon M. Ross

Fall 2015

Outline

Introduction

Continuous-Time Markov Chains

Birth and Death Processes

The Transition Probability Function

Limiting Probabilities
Introduction
In this chapter we consider a class of probability models that has a
wide variety of applications in the real world. The members of this
class are the continuous-time analogs of the Markov chains of
Chapter 4 and as such are characterized by the Markovian property
that, given the present state, the future is independent of the past.
One example of a continuous-time Markov chain is the Poisson
process of Chapter 5.

Continuous-Time Markov Chains


Suppose that we have a continuous-time stochastic process
{X(t), t ≥ 0} taking on values in the set of non-negative integers.
We say that {X(t), t ≥ 0} is a continuous-time Markov chain
(CT-MC) if, for all s, t ≥ 0 and non-negative integers
i, j, x(u), 0 ≤ u < s,

P{X(t+s) = j | X(s) = i, X(u) = x(u), 0 ≤ u < s} = P{X(t+s) = j | X(s) = i}
Continuous-Time Markov Chains
In addition, if P{X(t+s) = j | X(s) = i} is independent of s, then
the CT-MC is said to have stationary or homogeneous transition
probabilities.
We assume that all MCs in this section have stationary transition
probabilities.

Continuous-Time Markov Chains


Another definition of a CT-MC is a stochastic process having the
properties that each time it enters state i,

- the amount of time it spends in that state before making a
  transition into a different state is exponentially distributed
  with mean 1/v_i, and
- when the process leaves state i, it next enters state j with
  some probability P_ij, where

P_ii = 0, ∀i
Σ_j P_ij = 1, ∀i
Continuous-Time Markov Chains
In other words, a CT-MC is a stochastic process that moves from
state to state in accordance with a (discrete-time) Markov chain,
but is such that the amount of time it spends in each state, before
proceeding to the next state, is exponentially distributed. In
addition, the amount of time the process spends in state i and the
next state it enters must be independent RVs.
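The two ingredients just described, an exponential holding time with rate v_i in each state and an embedded jump chain with probabilities P_ij, translate directly into a simulation recipe. The sketch below is illustrative and not part of the original notes; the function name simulate_ctmc and its interface are made up for this example.

```python
import random

def simulate_ctmc(v, P, start, t_end):
    """Simulate a continuous-time Markov chain up to time t_end.

    v[i] : rate of leaving state i (holding time in i ~ Exp(v[i]))
    P[i] : dict mapping next state j to jump probability P_ij (with P_ii = 0)
    Returns a list of (time, state) pairs recording the state after each jump.
    """
    t, state = 0.0, start
    path = [(t, state)]
    while True:
        if v[state] == 0:                    # absorbing state: no further transitions
            break
        t += random.expovariate(v[state])    # exponential holding time, mean 1/v_i
        if t > t_end:
            break
        # choose the next state according to the embedded (discrete-time) chain
        u, acc = random.random(), 0.0
        for j, p in P[state].items():
            acc += p
            if u < acc:
                break
        state = j
        path.append((t, state))
    return path
```

Each iteration draws one exponential holding time and one jump of the embedded chain, which is exactly the description above.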

Example
(A Shoeshine Shop) Consider a shoeshine establishment consisting
of two chairs—chair 1 and chair 2. A customer upon arrival goes
initially to chair 1 where his shoes are cleaned and polish is
applied. After this is done the customer moves on to chair 2 where
the polish is buffed. The service times at the two chairs are
assumed to be independent random variables that are exponentially
distributed with respective rates µ1 and µ2 , and the customers
arrive in accordance with a Poisson process with rate λ. We also
assume that a potential customer will enter the system only if both
chairs are empty.
Example
We can analyze this system as a CT-MC with a state space

State Description
0 system is empty
1 a customer is in chair 1
2 a customer is in chair 2

Example
We then have

v0 = λ, v1 = µ1 , v2 = µ2
and

P01 = P12 = P20 = 1


Birth and Death Processes
Consider a system whose state at any time is represented by the
number of people in the system at that time. Suppose that
whenever there are n people in the system
- new arrivals enter the system at an exponential rate λ_n, and
- people leave the system at an exponential rate µ_n.
Such a system is called a birth and death (arrival and departure)
process (BDP or ADP), and the parameters {λ_n}_{n=0}^∞ and {µ_n}_{n=1}^∞
are called birth (arrival) and death (departure) rates, respectively.

Birth and Death Processes


A BDP is thus a CT-MC with states {0, 1, . . .} for which
transitions from state n may go only to state n − 1 or state n + 1,
and

v_0 = λ_0;   v_i = λ_i + µ_i, i > 0

and

P_01 = 1;   P_{i,i+1} = λ_i / (λ_i + µ_i), i > 0;   P_{i,i−1} = µ_i / (λ_i + µ_i), i > 0
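To make the correspondence concrete, the following sketch (again illustrative, with a hypothetical helper bdp_parameters) builds the holding-time rates v_i and jump probabilities P_{i,i±1} of a BDP, truncated to finitely many states, in the form expected by the simulator sketched earlier.

```python
def bdp_parameters(birth, death, n_states):
    """Return (v, P) for a birth and death process truncated to states 0..n_states-1.

    birth(n) = lambda_n for n >= 0, death(n) = mu_n for n >= 1.
    """
    v, P = {}, {}
    for n in range(n_states):
        lam_n = birth(n) if n < n_states - 1 else 0.0   # no birth out of the top state
        mu_n = death(n) if n > 0 else 0.0
        v[n] = lam_n + mu_n
        P[n] = {}
        if v[n] > 0:
            if lam_n > 0:
                P[n][n + 1] = lam_n / v[n]               # P_{n,n+1} = lambda_n / (lambda_n + mu_n)
            if mu_n > 0:
                P[n][n - 1] = mu_n / v[n]                # P_{n,n-1} = mu_n / (lambda_n + mu_n)
    return v, P
```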
Example (Poisson Process)
Consider a BDP for which

µn = 0, ∀n ≥ 0
λn = λ, ∀n ≥ 0

This is a process in which no departures occur, and the time between
arrivals is exponentially distributed with mean 1/λ. Hence, this is
the Poisson process.

Birth and Death Process

A BDP for which µ_n = 0, ∀n, is said to be a pure birth process,
and a BDP for which λ_n = 0, ∀n, is said to be a pure death process.
Example
A model in which

µn = nµ, n≥1
λn = nλ + θ, n≥0

is called a linear growth process with immigration and is used to
study biological systems. Let X(t) be the population size at time t,
assume that X(0) = i, and let

M(t) = E[X(t)]

Example
Derive and solve a differential equation to determine M(t):

M(t+h) = E[X(t+h)] = E{E[X(t+h) | X(t)]}

We can write

X(t+h) = X(t) + 1,  w.p. [θ + X(t)λ]h + o(h)
         X(t) − 1,  w.p. X(t)µh + o(h)
         X(t),      w.p. 1 − [θ + X(t)λ + X(t)µ]h + o(h)
Example
We then have

E[X(t+h) | X(t)] = X(t) + [θ + X(t)λ − X(t)µ]h + o(h)
E{E[X(t+h) | X(t)]} = E{X(t) + [θ + X(t)λ − X(t)µ]h + o(h)}
M(t+h) = M(t) + (λ − µ)M(t)h + θh + o(h)
[M(t+h) − M(t)] / h = (λ − µ)M(t) + θ + o(h)/h
lim_{h→0} [M(t+h) − M(t)] / h = lim_{h→0} [(λ − µ)M(t) + θ + o(h)/h]
dM(t)/dt = (λ − µ)M(t) + θ

Example
If we define

h(t) = (λ − µ)M(t) + θ  ⇒  dh(t)/dt = (λ − µ) dM(t)/dt

We can hence write

h'(t) / (λ − µ) = h(t)  ⇒  h'(t) / h(t) = λ − µ
log[h(t)] = (λ − µ)t + c
h(t) = K e^{(λ−µ)t}
Example
In terms of M(t),

θ + (λ − µ)M(t) = K e^{(λ−µ)t}

To find K, we use that M(0) = i; for t = 0,

θ + (λ − µ)i = K  ⇒  M(t) = θ/(λ − µ) [e^{(λ−µ)t} − 1] + i e^{(λ−µ)t}

Note that we have assumed λ ≠ µ. If λ = µ,

dM(t)/dt = θ  ⇒  M(t) = θt + i
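As a quick numeric sanity check (not in the notes; the parameter values are arbitrary), one can Euler-integrate dM(t)/dt = (λ − µ)M(t) + θ and compare the result with the closed form just derived.

```python
import math

lam, mu, theta, i = 0.6, 0.4, 0.3, 5    # arbitrary rates with lam != mu
T, h = 2.0, 1e-4

# Euler integration of dM/dt = (lam - mu) * M + theta, M(0) = i
M = float(i)
for _ in range(int(round(T / h))):
    M += ((lam - mu) * M + theta) * h

# Closed form: M(t) = theta/(lam - mu) * (e^{(lam-mu)t} - 1) + i * e^{(lam-mu)t}
closed = theta / (lam - mu) * (math.exp((lam - mu) * T) - 1) + i * math.exp((lam - mu) * T)
print(M, closed)    # the two values should agree closely
```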

Example
Suppose that customers arrive at a single-server service station in
accordance with a Poisson process having rate λ, and the
successive service times are assumed to be independent exponential
RVs with mean 1/µ. This is known as the M/M/1 queuing system,
where the first and second M refer to the Poisson arrivals and the
exponential service times, both of which are Markovian, and the 1
refers to the number of servers.
Example
If we let X(t) be the number of customers in the system at time t,
then {X(t), t ≥ 0} is a BDP with

µ_n = µ, n ≥ 1
λ_n = λ, n ≥ 0

Example
For a multi-server exponential queuing system with s servers, if we
let X(t) be the number of customers in the system at time t, then
{X(t), t ≥ 0} is a BDP with

µ_n = nµ, 1 ≤ n ≤ s
µ_n = sµ, n > s

and

λ_n = λ, n ≥ 0
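In code, the M/M/1 and M/M/s models are simply particular choices of the birth and death rates; a small sketch (illustrative, reusing the hypothetical bdp_parameters helper from earlier) is:

```python
lam, mu, s = 1.0, 1.5, 3    # arbitrary arrival rate, service rate, number of servers

def arrivals(n):            # lambda_n = lam for all n >= 0, in both models
    return lam

def mm1_death(n):           # M/M/1: mu_n = mu for all n >= 1
    return mu

def mms_death(n):           # M/M/s: mu_n = n*mu for n <= s, s*mu for n > s
    return min(n, s) * mu

# e.g. parameters for an M/M/s chain truncated to 50 states
v, P = bdp_parameters(arrivals, mms_death, n_states=50)
```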
The Transition Probability Function
We let

P_ij(t) = P{X(t+s) = j | X(s) = i}

be the probability that a process presently in state i will be in state
j after time t. The quantities P_ij(t) are often called the transition
probabilities of the CT-MC.

The Transition Probability Function


Proposition
For a pure birth process with λ_i ≠ λ_j when i ≠ j, we have, for i < j,

P_ij(t) = Σ_{k=i}^{j} e^{−λ_k t} Π_{r=i, r≠k}^{j} λ_r / (λ_r − λ_k)
          − Σ_{k=i}^{j−1} e^{−λ_k t} Π_{r=i, r≠k}^{j−1} λ_r / (λ_r − λ_k)

and

P_ii(t) = e^{−λ_i t}
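The proposition translates directly into a short routine. The sketch below is illustrative (the function name pure_birth_pij is made up); it assumes the rates λ_i, ..., λ_j are pairwise distinct, as the proposition requires.

```python
import math

def pure_birth_pij(t, i, j, rates):
    """Transition probability P_ij(t) of a pure birth process with distinct rates.

    rates[k] = lambda_k; requires rates[i], ..., rates[j] pairwise distinct.
    """
    if j < i:
        return 0.0
    if j == i:
        return math.exp(-rates[i] * t)

    def partial_sum(upper):
        # sum_{k=i}^{upper} e^{-lambda_k t} * prod_{r=i..upper, r != k} lambda_r / (lambda_r - lambda_k)
        total = 0.0
        for k in range(i, upper + 1):
            prod = 1.0
            for r in range(i, upper + 1):
                if r != k:
                    prod *= rates[r] / (rates[r] - rates[k])
            total += math.exp(-rates[k] * t) * prod
        return total

    return partial_sum(j) - partial_sum(j - 1)
```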


Kolmogorov’s Backward Equations
Theorem: Kolmogorov’s Backward Equations
For all states i, j and times t ≥ 0,

P'_ij(t) = Σ_{k≠i} q_ik P_kj(t) − v_i P_ij(t)

where q_ik = v_i P_ik denotes the rate, when in state i, at which the process makes a transition into state k.

Example
The backward equations for the pure birth process are

P'_ij(t) = λ_i P_{i+1,j}(t) − λ_i P_ij(t)


Example: Backward Equations for the BDP

P'_0j(t) = λ_0 P_{0+1,j}(t) − λ_0 P_{0j}(t)

P'_ij(t) = (λ_i + µ_i) [ λ_i/(λ_i + µ_i) P_{i+1,j}(t) + µ_i/(λ_i + µ_i) P_{i−1,j}(t) ] − (λ_i + µ_i) P_ij(t)

or

P'_0j(t) = λ_0 [P_{1j}(t) − P_{0j}(t)]
P'_ij(t) = λ_i P_{i+1,j}(t) + µ_i P_{i−1,j}(t) − (λ_i + µ_i) P_ij(t)

Kolmogorov’s Forward Equations


Theorem: Kolmogorov’s Forward Equations
Under suitable regularity conditions,

P'_ij(t) = Σ_{k≠j} q_kj P_ik(t) − v_j P_ij(t)
Kolmogorov’s Forward Equations
For the pure birth process, we have

P'_ij(t) = λ_{j−1} P_{i,j−1}(t) − λ_j P_ij(t)

Noting that P_ij(t) = 0 whenever j < i, we can write

P'_ii(t) = −λ_i P_ii(t)

and

P'_ij(t) = λ_{j−1} P_{i,j−1}(t) − λ_j P_ij(t),  j ≥ i + 1
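As a numeric check (not part of the notes), one can Euler-integrate these forward equations for a pure birth process with distinct rates and compare with the closed form of the earlier proposition, e.g. via the pure_birth_pij sketch given there. The rates, horizon, and step size below are arbitrary.

```python
lam = [0.5 * (n + 1) for n in range(30)]   # distinct rates lambda_n = 0.5*(n+1)
i, T, h = 0, 1.0, 1e-4
N = len(lam)

# probs[j] approximates P_ij(t); start from P_ii(0) = 1 and truncate at N states
probs = [1.0 if j == i else 0.0 for j in range(N)]
for _ in range(int(round(T / h))):
    new = probs[:]
    new[i] += h * (-lam[i] * probs[i])                                # P'_ii = -lambda_i P_ii
    for j in range(i + 1, N):
        new[j] += h * (lam[j - 1] * probs[j - 1] - lam[j] * probs[j]) # forward equation for j > i
    probs = new

print(probs[3], pure_birth_pij(T, i, 3, lam))   # the two values should roughly agree
```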

Kolmogorov’s Forward Equations

For the BDP, we have

P'_i0(t) = Σ_{k≠0} q_k0 P_ik(t) − λ_0 P_i0(t)
         = µ_1 P_{i1}(t) − λ_0 P_i0(t)

P'_ij(t) = Σ_{k≠j} q_kj P_ik(t) − (λ_j + µ_j) P_ij(t)
         = λ_{j−1} P_{i,j−1}(t) + µ_{j+1} P_{i,j+1}(t) − (λ_j + µ_j) P_ij(t)


Limiting Probabilities
In analogy with a basic result in discrete-time Markov chains, the
probability that a continuous-time Markov chain will be in state j
at time t often converges to a limiting value that is independent of
the initial state. If we call this value Pj , then,

P_j ≡ lim_{t→∞} P_ij(t)

where we assume that the limit exists and is independent of the
initial state i.

Limiting Probabilities
To derive Pj using the forward equations, we can write
P'_ij(t) = Σ_{k≠j} q_kj P_ik(t) − v_j P_ij(t)

and

lim_{t→∞} P'_ij(t) = lim_{t→∞} [ Σ_{k≠j} q_kj P_ik(t) − v_j P_ij(t) ]
                   = Σ_{k≠j} q_kj P_k − v_j P_j
Limiting Probabilities
We note that, since P_ij(t) converges to the constant P_j, its derivative
P'_ij(t) converges to 0, so we can write

0 = Σ_{k≠j} q_kj P_k − v_j P_j  ⇒  v_j P_j = Σ_{k≠j} q_kj P_k

We can then use the following to find the limiting probabilities:

v_j P_j = Σ_{k≠j} q_kj P_k, ∀j
Σ_j P_j = 1
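For a chain with finitely many states, these balance and normalization equations form a linear system that can be solved directly. The sketch below is illustrative (the function name limiting_probabilities is made up); it assumes an irreducible chain and that q has zero diagonal.

```python
import numpy as np

def limiting_probabilities(v, q):
    """Solve v_j P_j = sum_{k != j} q_kj P_k together with sum_j P_j = 1.

    v : length-n array of rates v_j
    q : n x n array with q[k, j] = q_kj = v_k * P_kj (zero diagonal)
    Returns the vector (P_0, ..., P_{n-1}), assuming the chain is irreducible.
    """
    n = len(v)
    A = np.asarray(q, dtype=float).T - np.diag(v)   # row j: sum_k q_kj P_k - v_j P_j = 0
    A[-1, :] = 1.0                                   # replace one equation by sum_j P_j = 1
    b = np.zeros(n)
    b[-1] = 1.0
    return np.linalg.solve(A, b)
```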

Limiting Probabilities for the BDP


We can write,

State 0:   λ_0 P_0 = µ_1 P_1
State 1:   (λ_1 + µ_1) P_1 = µ_2 P_2 + λ_0 P_0
State 2:   (λ_2 + µ_2) P_2 = µ_3 P_3 + λ_1 P_1
...
State n:   (λ_n + µ_n) P_n = µ_{n+1} P_{n+1} + λ_{n−1} P_{n−1}
Limiting Probabilities for the BDP
By organizing, we obtain

λ_0 P_0 = µ_1 P_1
λ_1 P_1 = µ_2 P_2
λ_2 P_2 = µ_3 P_3
...
λ_n P_n = µ_{n+1} P_{n+1}

Limiting Probabilities for the BDP


Solving in terms of P_0,

P_1 = (λ_0 / µ_1) P_0
P_2 = (λ_1 λ_0) / (µ_2 µ_1) P_0
P_3 = (λ_2 λ_1 λ_0) / (µ_3 µ_2 µ_1) P_0
...
P_n = (λ_{n−1} ⋯ λ_1 λ_0) / (µ_n ⋯ µ_2 µ_1) P_0
Limiting Probabilities for the BDP
Using that Σ_{n=0}^∞ P_n = 1, we obtain

1 = P_0 + Σ_{n=1}^∞ (λ_{n−1} ⋯ λ_1 λ_0) / (µ_n ⋯ µ_2 µ_1) P_0
  ⇒  P_0 = 1 / [1 + Σ_{n=1}^∞ (λ_{n−1} ⋯ λ_1 λ_0) / (µ_n ⋯ µ_2 µ_1)]

and so

P_n = (λ_0 λ_1 ⋯ λ_{n−1}) / (µ_1 µ_2 ⋯ µ_n) · 1 / [1 + Σ_{m=1}^∞ (λ_{m−1} ⋯ λ_1 λ_0) / (µ_m ⋯ µ_2 µ_1)],  n ≥ 1

Limiting Probabilities for the BDP


The foregoing equations also show the necessary condition for
the limiting probabilities to exist:

Σ_{n=1}^∞ (λ_{n−1} ⋯ λ_1 λ_0) / (µ_n ⋯ µ_2 µ_1) < ∞

This condition can also be shown to be sufficient.
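As an example (not in the notes), for the M/M/1 queue the products reduce to powers of ρ = λ/µ, the condition becomes λ < µ, and the limiting distribution is geometric, P_n = (1 − ρ)ρ^n. The short sketch below checks the general formula against that closed form using a finite truncation of the sums; the rates are arbitrary.

```python
lam, mu = 1.0, 1.5           # arbitrary rates with lam < mu, so the limit exists
rho = lam / mu
N = 200                      # truncation level for the infinite sums

# General birth and death formula with lambda_n = lam, mu_n = mu:
# lambda_0...lambda_{n-1} / (mu_1...mu_n) = rho^n
weights = [rho ** n for n in range(N)]
P0 = 1.0 / sum(weights)
P = [P0 * w for w in weights]

# Closed form for M/M/1: P_n = (1 - rho) * rho^n
print(P[0], 1 - rho)
print(P[3], (1 - rho) * rho ** 3)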


Limiting Probabilities
Let us reconsider the shoeshine shop of Example 6.1, and
determine the proportion of time the process is in each of the
states 0, 1, 2. Because this is not a birth and death process (since
the process can go directly from state 2 to state 0), we start with
the balance equations for the limiting probabilities.

Limiting Probabilities for the Shoeshine Shop

We can write

State 0:   λ P_0 = µ_2 P_2
State 1:   µ_1 P_1 = λ P_0
State 2:   µ_2 P_2 = µ_1 P_1
Limiting Probabilities for the Shoeshine Shop
We then write

P_1 = (λ / µ_1) P_0,   P_2 = (λ / µ_2) P_0

and

Σ_{i=0}^{2} P_i = 1  ⇒  P_0 = µ_1 µ_2 / [µ_1 µ_2 + λ(µ_1 + µ_2)]

and so

P_1 = λ µ_2 / [µ_1 µ_2 + λ(µ_1 + µ_2)],   P_2 = λ µ_1 / [µ_1 µ_2 + λ(µ_1 + µ_2)]
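These expressions can be cross-checked numerically (an illustration, not part of the notes), for instance with the limiting_probabilities sketch from earlier; the rates below are the same arbitrary values used in the simulation sketch for this example.

```python
import numpy as np

lam, mu1, mu2 = 1.0, 2.0, 3.0
v = np.array([lam, mu1, mu2])
q = np.array([[0.0, lam, 0.0],   # q_0j = v_0 * P_0j (only transition 0 -> 1)
              [0.0, 0.0, mu1],   # q_1j (only transition 1 -> 2)
              [mu2, 0.0, 0.0]])  # q_2j (only transition 2 -> 0)
P = limiting_probabilities(v, q)

denom = mu1 * mu2 + lam * (mu1 + mu2)
print(P)                                                           # numerical solution
print([mu1 * mu2 / denom, lam * mu2 / denom, lam * mu1 / denom])   # closed form above
```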

Thanks! Questions?
