lec06
Fatih Cavdur
to accompany
Introduction to Probability Models
by Sheldon M. Ross
Fall 2015
Outline
Introduction
Limiting Probabilities
Introduction
In this chapter we consider a class of probability models that has a
wide variety of applications in the real world. The members of this
class are the continuous-time analogs of the Markov chains of
Chapter 4 and as such are characterized by the Markovian property
that, given the present state, the future is independent of the past.
One example of a continuous-time Markov chain is the Poisson
process of Chapter 5.
P{X(t+s) = j | X(s) = i, X(u) = x(u), 0 ≤ u < s} = P{X(t+s) = j | X(s) = i}
Continuous-Time Markov Chains
In addition, if P{X(t + s) = j | X(s) = i} is independent of s, then the
CT-MC is said to have stationary or homogeneous transition
probabilities.
We assume that all MCs in this section have stationary transition
probabilities.
Pii = 0, ∀i
Σj Pij = 1, ∀i
Continuous-Time Markov Chains
In other words, a CT-MC is a stochastic process that moves from
state to state in accordance with a (discrete-time) Markov chain,
but is such that the amount of time it spends in each state, before
proceeding to the next state, is exponentially distributed. In
addition, the amount of time the process spends in state i and the
next state it enters must be independent RVs.
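The definition above suggests a direct simulation recipe: hold in state i for an Exp(v_i) amount of time, then jump according to the embedded discrete-time chain. A minimal sketch in Python (my own illustration; the two-state chain, rates, and function names are hypothetical, not from the lecture):

```python
# A minimal simulation sketch (my own; chain, rates, and names are
# hypothetical): hold in state i for an Exp(v_i) time, then jump according
# to the embedded discrete-time chain P, independently of the holding time.
import random

def simulate_ctmc(v, P, start, t_end, rng):
    """Return the (jump time, state) sequence of a CT-MC up to t_end."""
    t, state = 0.0, start
    path = [(0.0, start)]
    while True:
        t += rng.expovariate(v[state])       # exponential holding time
        if t >= t_end:
            return path
        # next state drawn from the embedded (discrete-time) Markov chain
        state = rng.choices(range(len(P)), weights=P[state])[0]
        path.append((t, state))

rng = random.Random(42)
# two states that always alternate: P forces 0 -> 1 -> 0 -> ...
path = simulate_ctmc([1.0, 2.0], [[0, 1], [1, 0]], 0, 10.0, rng)
```

Note that the holding time and the next state are drawn independently, exactly as the definition requires.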
Example
(A Shoeshine Shop) Consider a shoeshine establishment consisting
of two chairs—chair 1 and chair 2. A customer upon arrival goes
initially to chair 1 where his shoes are cleaned and polish is
applied. After this is done the customer moves on to chair 2 where
the polish is buffed. The service times at the two chairs are
assumed to be independent random variables that are exponentially
distributed with respective rates µ1 and µ2 , and the customers
arrive in accordance with a Poisson process with rate λ. We also
assume that a potential customer will enter the system only if both
chairs are empty.
Example
We can analyze this system as a CT-MC with a state space
State Description
0 system is empty
1 a customer is in chair 1
2 a customer is in chair 2
Example
We then have
v0 = λ, v1 = µ1, v2 = µ2
and
P01 = P12 = P20 = 1
Birth and Death Processes
For a general birth and death process with birth rates {λn} and death
rates {µn}, we have
v0 = λ0; vi = λi + µi, i > 0
and
P01 = 1; Pi,i+1 = λi/(λi + µi), i > 0; Pi,i−1 = µi/(λi + µi), i > 0
Example (Poisson Process)
Consider a BDP for which
µn = 0, ∀n ≥ 0
λn = λ, ∀n ≥ 0
that is, a pure birth process with constant birth rate λ.
Example (Linear Growth Model with Immigration)
Consider a BDP with
µn = nµ, n ≥ 1
λn = nλ + θ, n ≥ 0
and let M(t) = E[X(t)] denote the expected population size at time t.
Example
Derive and solve a differential equation to determine M(t):
M(t + h) = E[X(t + h)] = E{E[X(t + h) | X(t)]}
We can write
X(t + h) = X(t) + 1, w.p. [θ + X(t)λ]h + o(h)
X(t + h) = X(t) − 1, w.p. X(t)µh + o(h)
X(t + h) = X(t), w.p. 1 − [θ + X(t)λ + X(t)µ]h + o(h)
Example
We then have
E[X(t + h) | X(t)] = X(t) + [θ + X(t)λ − X(t)µ]h + o(h)
so that, taking expectations and letting h → 0,
M(t + h) = M(t) + [(λ − µ)M(t) + θ]h + o(h) ⇒ dM(t)/dt = (λ − µ)M(t) + θ
Example
If we define
h(t) = (λ − µ)M(t) + θ ⇒ dh(t)/dt = (λ − µ) dM(t)/dt
We can hence write
h′(t)/(λ − µ) = h(t) ⇒ h′(t)/h(t) = λ − µ
log[h(t)] = (λ − µ)t + c
h(t) = Ke^{(λ−µ)t}
Example
In terms of M(t),
θ + (λ − µ)M(t) = Ke^{(λ−µ)t}
To find K, we use that M(0) = i; setting t = 0,
θ + (λ − µ)i = K ⇒ M(t) = [θ/(λ − µ)][e^{(λ−µ)t} − 1] + ie^{(λ−µ)t}
Note that we have assumed λ ≠ µ. If λ = µ,
dM(t)/dt = θ ⇒ M(t) = θt + i
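As a quick sanity check (mine, not in the lecture), the closed-form M(t) can be verified numerically against the differential equation dM/dt = (λ − µ)M + θ; the parameter values below are arbitrary illustrative choices:

```python
# Numerical sanity check (my own, not from the lecture): the closed form
# M(t) = [theta/(lam - mu)] (e^{(lam-mu)t} - 1) + i0 e^{(lam-mu)t}
# should satisfy M(0) = i0 and M'(t) = (lam - mu) M(t) + theta.
import math

lam, mu, theta, i0 = 2.0, 1.0, 0.5, 3.0    # assumed values, lam != mu

def M(t):
    """Closed-form mean from the slide, valid for lambda != mu."""
    g = math.exp((lam - mu) * t)
    return theta / (lam - mu) * (g - 1.0) + i0 * g

def ode_rhs(m):
    """Right-hand side of dM/dt = (lambda - mu) M + theta."""
    return (lam - mu) * m + theta

# compare a central finite difference of M against the ODE at a few times
h = 1e-6
residuals = [abs((M(t + h) - M(t - h)) / (2 * h) - ode_rhs(M(t)))
             for t in (0.5, 1.0, 2.0)]
```

The residuals are at the level of the finite-difference error, confirming the solution of the ODE.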
Example
Suppose that customers arrive at a single-server service station in
accordance with a Poisson process having rate λ, and the
successive service times are assumed to be independent exponential
RVs with mean 1/µ. This is known as the M/M/1 queuing system,
where the first and second M refer to the Poisson arrivals and the
exponential service times, both being Markovian, and the 1 refers to
the number of servers.
Example
If we let X (t) be the number of customers in the system at time t,
then {X(t), t ≥ 0} is a BDP with
µn = µ, n≥1
λn = λ, n≥0
Example
For a multi-server exponential queuing system with s servers, if we
let X(t) be the number of customers in the system at time t, then
{X(t), t ≥ 0} is a BDP with
µn = nµ, 1 ≤ n ≤ s
µn = sµ, n > s
and
λn = λ, n ≥ 0
The Transition Probability Function
For a pure birth process whose birth rates λ0, λ1, . . . are all
distinct, we have
Pij(t) = Σ_{k=i..j} [Π_{r=i..j, r≠k} λr/(λr − λk)] e^{−λk t}
       − Σ_{k=i..j−1} [Π_{r=i..j−1, r≠k} λr/(λr − λk)] e^{−λk t}, i < j
and
Pii(t) = e^{−λi t}
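One way to sanity-check the pure-birth formula (my own sketch, not part of the lecture) is the Yule process, λn = nλ, whose rates are distinct and for which P1j(t) = e^{−λt}(1 − e^{−λt})^{j−1} is known in closed form:

```python
# Numerical check of the pure-birth transition formula (my own sketch):
# for a Yule process, lambda_n = n * lam, all rates are distinct and
# P_{1j}(t) = e^{-lam t} (1 - e^{-lam t})^{j-1} is a known closed form.
import math

def pure_birth_pij(lams, i, j, t):
    """P_ij(t) for a pure birth process with distinct rates lams[n]."""
    if j == i:
        return math.exp(-lams[i] * t)
    def term(k, hi):
        prod = 1.0
        for r in range(i, hi + 1):
            if r != k:
                prod *= lams[r] / (lams[r] - lams[k])
        return prod * math.exp(-lams[k] * t)
    # first sum runs k = i..j, second k = i..j-1, as in the formula above
    return (sum(term(k, j) for k in range(i, j + 1))
            - sum(term(k, j - 1) for k in range(i, j)))

lam, t = 0.7, 1.3                          # arbitrary illustrative values
lams = [n * lam for n in range(10)]        # Yule rates; index 0 unused
x = math.exp(-lam * t)
p13 = pure_birth_pij(lams, 1, 3, t)        # should equal x * (1 - x)**2
```

For i = 1, j = 3 the two sums reduce to 3x − 3x² + x³ − (2x − x²) = x(1 − x)², which is exactly the Yule closed form.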
Example
The backward equations for the birth and death process are
P′0j(t) = λ0 P1j(t) − λ0 P0j(t)
P′ij(t) = (λi + µi)[λi/(λi + µi) Pi+1,j(t) + µi/(λi + µi) Pi−1,j(t)] − (λi + µi)Pij(t)
or
P′0j(t) = λ0 [P1j(t) − P0j(t)]
P′ij(t) = λi Pi+1,j(t) + µi Pi−1,j(t) − (λi + µi)Pij(t)
The forward equations are
P′i0(t) = Σ_{k≠0} qk0 Pik(t) − λ0 Pi0(t)
P′ij(t) = Σ_{k≠j} qkj Pik(t) − (λj + µj)Pij(t)
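Both systems of equations can be checked numerically (a sketch of mine, assuming NumPy and arbitrary rates): truncate a BDP to finitely many states, build its generator Q, form P(t) = e^{Qt} by a plain Taylor series, and compare a finite-difference derivative of P(t) with Q P(t) (backward) and P(t) Q (forward):

```python
# Self-contained numerical check (my own, not from the lecture): build the
# generator Q of a birth-death process truncated to states 0..N, compute
# P(t) = exp(Qt) by a Taylor series, and confirm the backward equations
# P'(t) = Q P(t) and the forward equations P'(t) = P(t) Q.
import numpy as np

N = 6
lam = [1.0] * N              # birth rates lambda_0..lambda_{N-1} (assumed)
mu = [0.0] + [2.0] * N       # death rates mu_0..mu_N, mu_0 = 0 (assumed)

Q = np.zeros((N + 1, N + 1))
for i in range(N + 1):
    if i < N:
        Q[i, i + 1] = lam[i]
    if i > 0:
        Q[i, i - 1] = mu[i]
    Q[i, i] = -Q[i].sum()    # rows of a generator sum to zero

def P(t, terms=60):
    """Taylor series for the matrix exponential exp(Qt)."""
    A = np.eye(N + 1)
    out = np.eye(N + 1)
    for k in range(1, terms):
        A = A @ Q * (t / k)
        out = out + A
    return out

t, h = 0.8, 1e-6
dP = (P(t + h) - P(t - h)) / (2 * h)   # numerical derivative of P(t)
```

The two checks agree here because Q commutes with e^{Qt}; in general the backward and forward equations are two different ways of writing the same derivative.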
Limiting Probabilities
To derive Pj using the forward equations, we can write
P′ij(t) = Σ_{k≠j} qkj Pik(t) − vj Pij(t)
and
lim_{t→∞} P′ij(t) = lim_{t→∞} [Σ_{k≠j} qkj Pik(t) − vj Pij(t)] = Σ_{k≠j} qkj Pk − vj Pj
Limiting Probabilities
We note that, since P′ij(t) converges to 0 as t → ∞, we can write
0 = Σ_{k≠j} qkj Pk − vj Pj ⇒ vj Pj = Σ_{k≠j} qkj Pk, ∀j
which, together with the normalization
Σj Pj = 1
determines the limiting probabilities.
State 0: λ0 P0 = µ1 P1
State 1: (λ1 + µ1)P1 = µ2 P2 + λ0 P0
State 2: (λ2 + µ2)P2 = µ3 P3 + λ1 P1
...
State n: (λn + µn)Pn = µn+1 Pn+1 + λn−1 Pn−1, n ≥ 1
Limiting Probabilities for the BDP
By rearranging, we obtain
λ0 P0 = µ1 P1
λ1 P1 = µ2 P2
λ2 P2 = µ3 P3
...
λn Pn = µn+1 Pn+1
and, solving in terms of P0,
P1 = (λ0/µ1) P0
P2 = (λ1 λ0)/(µ2 µ1) P0
P3 = (λ2 λ1 λ0)/(µ3 µ2 µ1) P0
...
Pn = (λn−1 · · · λ1 λ0)/(µn · · · µ2 µ1) P0
Limiting Probabilities for the BDP
Using that Σ_{n=0..∞} Pn = 1, we obtain
1 = P0 + Σ_{n=1..∞} (λn−1 · · · λ1 λ0)/(µn · · · µ2 µ1) P0
⇒ P0 = 1 / [1 + Σ_{n=1..∞} (λn−1 · · · λ1 λ0)/(µn · · · µ2 µ1)]
and so
Pn = (λ0 λ1 · · · λn−1)/(µ1 µ2 · · · µn) · 1 / [1 + Σ_{k=1..∞} (λ0 λ1 · · · λk−1)/(µ1 µ2 · · · µk)], n ≥ 1
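As an illustration (my own, not from the lecture), the product formula can be evaluated with a truncated sum; for the M/M/1 queue with λ < µ it reproduces the geometric limiting distribution Pn = (1 − ρ)ρ^n, ρ = λ/µ:

```python
# Evaluating the limiting-probability formula numerically (my own sketch):
# the infinite sum is truncated at a large n_max, so the result is an
# approximation that is essentially exact for a stable queue.
def bdp_limits(lam_fn, mu_fn, n_max):
    """Approximate P_0..P_{n_max} for a birth-death process by truncation."""
    weights = [1.0]                       # the n = 0 term
    for n in range(1, n_max + 1):
        # lambda_{n-1} ... lambda_0 / (mu_n ... mu_1), built recursively
        weights.append(weights[-1] * lam_fn(n - 1) / mu_fn(n))
    total = sum(weights)                  # approximates 1 + sum_{n>=1} ...
    return [w / total for w in weights]

# M/M/1: lambda_n = lam, mu_n = mu, stable since lam < mu
lam, mu = 1.0, 2.0                        # arbitrary illustrative rates
rho = lam / mu
P = bdp_limits(lambda n: lam, lambda n: mu, 200)
```

With ρ = 1/2 the truncation error is about 2^{−200}, so P[n] matches (1 − ρ)ρ^n to machine precision.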
Example
Returning to the shoeshine shop, the balance equations are
State 0: λP0 = µ2 P2
State 1: µ1 P1 = λP0
State 2: µ2 P2 = µ1 P1
Limiting Probabilities for the BDP
We then write
P1 = (λ/µ1) P0, P2 = (λ/µ2) P0
and
Σ_{i=0..2} Pi = 1 ⇒ P0 = µ1 µ2 / [µ1 µ2 + λ(µ1 + µ2)]
and so
P1 = λµ2 / [µ1 µ2 + λ(µ1 + µ2)], P2 = λµ1 / [µ1 µ2 + λ(µ1 + µ2)]
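These closed forms can be cross-checked (my sketch, assuming NumPy and arbitrary rate values) by solving the balance equations πQ = 0, Σπ = 1 directly for the three-state shoeshine chain:

```python
# Cross-check of the shoeshine limiting probabilities (my own sketch):
# solve pi Q = 0 with sum(pi) = 1 for the 3-state chain and compare with
# the closed forms above. Rates are arbitrary illustrative values.
import numpy as np

lam, m1, m2 = 1.0, 2.0, 3.0        # lambda, mu_1, mu_2 (assumed values)

# generator: 0 -> 1 at rate lam, 1 -> 2 at rate mu_1, 2 -> 0 at rate mu_2
Q = np.array([[-lam, lam, 0.0],
              [0.0, -m1, m1],
              [m2, 0.0, -m2]])

# stationary distribution: append the normalization row sum(pi) = 1
A = np.vstack([Q.T, np.ones(3)])
b = np.array([0.0, 0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

D = m1 * m2 + lam * (m1 + m2)      # common denominator of the closed forms
```

With λ = 1, µ1 = 2, µ2 = 3 this gives P0 = 6/11, P1 = 3/11, P2 = 2/11, matching the formulas above.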
Thanks! Questions?