
CHAPTER 7

Renewal Theory and Its Applications

7.1 Introduction
We have seen that a Poisson process is a counting process for which the times
between successive events are independent and identically distributed exponential
random variables. One possible generalization is to consider a counting process
for which the times between successive events are independent and identically
distributed with an arbitrary distribution. Such a counting process is called a
renewal process.
Let {N(t), t ≥ 0} be a counting process and let X_n denote the time between
the (n − 1)st and the nth event of this process, n ≥ 1.
Definition 7.1 If the sequence of nonnegative random variables {X_1, X_2, . . .} is
independent and identically distributed, then the counting process {N(t), t ≥ 0}
is said to be a renewal process.
Thus, a renewal process is a counting process such that the time until the first
event occurs has some distribution F, the time between the first and second event
has, independently of the time of the first event, the same distribution F, and
so on. When an event occurs, we say that a renewal has taken place.
For an example of a renewal process, suppose that we have an infinite supply of
lightbulbs whose lifetimes are independent and identically distributed. Suppose
also that we use a single lightbulb at a time, and when it fails we immediately
replace it with a new one. Under these conditions, {N(t), t ≥ 0} is a renewal
process when N(t) represents the number of lightbulbs that have failed by time t.
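To make the definition concrete, here is a minimal simulation sketch (ours, not the text's): it accumulates i.i.d. lifetimes until the running total S_n exceeds t and reports how many failures occurred by then. The function name count_renewals and the uniform lifetime distribution are illustrative choices only.

```python
import random

def count_renewals(t, draw_lifetime):
    """Return N(t): the number of bulbs that have failed by time t."""
    n, s = 0, 0.0
    while True:
        s += draw_lifetime()      # S_{n+1} = S_n + X_{n+1}
        if s > t:
            return n              # the (n+1)st failure happens after time t
        n += 1

rng = random.Random(1)
samples = [count_renewals(100.0, lambda: rng.uniform(0.5, 1.5)) for _ in range(10_000)]
print(sum(samples) / len(samples))   # mean lifetime is 1, so E[N(100)] should be near 100
```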


Figure 7.1 Renewal and interarrival times.

For a renewal process having interarrival times X_1, X_2, . . ., let

S_0 = 0,    S_n = Σ_{i=1}^{n} X_i,    n ≥ 1

That is, S1 = X1 is the time of the first renewal; S2 = X1 + X2 is the time until
the first renewal plus the time between the first and second renewal, that is, S2 is
the time of the second renewal. In general, Sn denotes the time of the nth renewal
(see Figure 7.1).
We shall let F denote the interarrival distribution and, to avoid trivialities, we
assume that F(0) = P{X_n = 0} < 1. Furthermore, we let

μ = E[X_n],    n ≥ 1

be the mean time between successive renewals. It follows from the nonnegativity
of Xn and the fact that Xn is not identically 0 that μ > 0.
The first question we shall attempt to answer is whether an infinite number of
renewals can occur in a finite amount of time. That is, can N(t) be infinite for
some (finite) value of t? To show that this cannot occur, we first note that, as Sn
is the time of the nth renewal, N(t) may be written as

N(t) = max{n : S_n ≤ t}    (7.1)

To understand why Equation (7.1) is valid, suppose, for instance, that S_4 ≤ t but
S5 > t. Hence, the fourth renewal had occurred by time t but the fifth renewal
occurred after time t; or in other words, N(t), the number of renewals that
occurred by time t, must equal 4. Now by the strong law of large numbers it
follows that, with probability 1,

S_n/n → μ    as n → ∞

But since μ > 0 this means that Sn must be going to infinity as n goes to infinity.
Thus, Sn can be less than or equal to t for at most a finite number of values of n,
and hence by Equation (7.1), N(t) must be finite.
However, though N(t) < ∞ for each t, it is true that, with probability 1,

N(∞) ≡ lim_{t→∞} N(t) = ∞

This follows since the only way in which N(∞), the total number of renewals
that occur, can be finite is for one of the interarrival times to be infinite.
Therefore,

P{N(∞) < ∞} = P{X_n = ∞ for some n}
            = P( ⋃_{n=1}^{∞} {X_n = ∞} )
            ≤ Σ_{n=1}^{∞} P{X_n = ∞}
            = 0

7.2 Distribution of N(t)


The distribution of N(t) can be obtained, at least in theory, by first noting the
important relationship that the number of renewals by time t is greater than or
equal to n if and only if the nth renewal occurs before or at time t. That is,

N(t) ≥ n ⇔ S_n ≤ t    (7.2)

From Equation (7.2) we obtain

P{N(t) = n} = P{N(t) ≥ n} − P{N(t) ≥ n + 1}
            = P{S_n ≤ t} − P{S_{n+1} ≤ t}    (7.3)

Now, since the random variables X_i, i ≥ 1, are independent and have a common
distribution F, it follows that S_n = Σ_{i=1}^{n} X_i is distributed as F_n, the n-fold
convolution of F with itself (Section 2.5). Therefore, from Equation (7.3) we
obtain

P{N(t) = n} = F_n(t) − F_{n+1}(t)

Example 7.1 Suppose that P{X_n = i} = p(1 − p)^{i−1}, i ≥ 1. That is, suppose that
the interarrival distribution is geometric. Now S1 = X1 may be interpreted as the
number of trials necessary to get a single success when each trial is independent
and has a probability p of being a success. Similarly, Sn may be interpreted as
the number of trials necessary to attain n successes, and hence has the negative
binomial distribution
P{S_n = k} = (k−1 choose n−1) p^n (1 − p)^{k−n},    k ≥ n
P{S_n = k} = 0,                                     k < n

Thus, from Equation (7.3) we have that

P{N(t) = n} = Σ_{k=n}^{[t]} (k−1 choose n−1) p^n (1 − p)^{k−n}
            − Σ_{k=n+1}^{[t]} (k−1 choose n) p^{n+1} (1 − p)^{k−n−1}

Equivalently, since an event independently occurs with probability p at each of
the times 1, 2, . . ., N(t) is a binomial random variable with parameters [t] and p:

P{N(t) = n} = ([t] choose n) p^n (1 − p)^{[t]−n}    ■
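As a quick numerical check (our addition, with illustrative names), the two expressions above can be compared directly: the difference of the negative binomial sums should agree with the binomial probability for every n ≥ 1.

```python
from math import comb, isclose

def pmf_via_sn(n, t, p):
    """P{N(t) = n} = P{S_n <= t} - P{S_(n+1) <= t}, with S_n negative binomial (n >= 1)."""
    first = sum(comb(k - 1, n - 1) * p**n * (1 - p)**(k - n) for k in range(n, t + 1))
    second = sum(comb(k - 1, n) * p**(n + 1) * (1 - p)**(k - n - 1) for k in range(n + 1, t + 1))
    return first - second

def pmf_via_binomial(n, t, p):
    """P{N(t) = n} = C([t], n) p^n (1 - p)^([t] - n)."""
    return comb(t, n) * p**n * (1 - p)**(t - n)

p, t = 0.3, 10
for n in range(1, t + 1):
    assert isclose(pmf_via_sn(n, t, p), pmf_via_binomial(n, t, p))
print("the two expressions agree for n = 1, ..., [t]")
```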

Another expression for P{N(t) = n} can be obtained by conditioning on S_n.
This yields

P{N(t) = n} = ∫_0^∞ P{N(t) = n | S_n = y} f_{S_n}(y) dy

Now, if the nth event occurred at time y > t, then there would have been fewer
than n events by time t. On the other hand, if it occurred at a time y ≤ t, then
there would be exactly n events by time t if the next interarrival exceeds t − y.
Consequently,

P{N(t) = n} = ∫_0^t P{X_{n+1} > t − y | S_n = y} f_{S_n}(y) dy
            = ∫_0^t F̄(t − y) f_{S_n}(y) dy

where F̄ = 1 − F.
Example 7.2 If F(x) = 1 − e^{−λx}, then S_n, being the sum of n independent
exponentials with rate λ, will have a gamma (n, λ) distribution. Consequently, the
preceding identity gives

P{N(t) = n} = ∫_0^t e^{−λ(t−y)} λe^{−λy} (λy)^{n−1}/(n − 1)! dy
            = [λ^n e^{−λt}/(n − 1)!] ∫_0^t y^{n−1} dy
            = e^{−λt} (λt)^n / n!    ■
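This conclusion, that N(t) has a Poisson distribution with mean λt, is easy to confirm by simulation. The sketch below (ours; the parameter values are arbitrary) draws exponential interarrival times and compares the empirical distribution of N(t) with e^{−λt}(λt)^n/n!.

```python
import math
import random

def simulate_N(t, lam, rng):
    """One draw of N(t) for a renewal process with exponential(lam) interarrivals."""
    n, s = 0, 0.0
    while True:
        s += rng.expovariate(lam)
        if s > t:
            return n
        n += 1

lam, t, runs = 2.0, 3.0, 200_000
rng = random.Random(0)
counts = {}
for _ in range(runs):
    n = simulate_N(t, lam, rng)
    counts[n] = counts.get(n, 0) + 1

for n in range(10):
    empirical = counts.get(n, 0) / runs
    poisson = math.exp(-lam * t) * (lam * t)**n / math.factorial(n)
    print(f"n={n}: simulated {empirical:.4f}, Poisson {poisson:.4f}")
```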

By using Equation (7.2) we can calculate m(t), the mean value of N(t), as

m(t) = E[N(t)]
     = Σ_{n=1}^{∞} P{N(t) ≥ n}
     = Σ_{n=1}^{∞} P{S_n ≤ t}
     = Σ_{n=1}^{∞} F_n(t)

where we have used the fact that if X is nonnegative and integer valued, then

E[X] = Σ_{k=1}^{∞} k P{X = k} = Σ_{k=1}^{∞} Σ_{n=1}^{k} P{X = k}
     = Σ_{n=1}^{∞} Σ_{k=n}^{∞} P{X = k} = Σ_{n=1}^{∞} P{X ≥ n}

The function m(t) is known as the mean-value or the renewal function.


It can be shown that the mean-value function m(t) uniquely determines the
renewal process. Specifically, there is a one-to-one correspondence between the
interarrival distributions F and the mean-value functions m(t).
Another interesting result that we state without proof is that

m(t) < ∞ for all t < ∞
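Because the terms F_n(t) decrease in n (and, for fixed t, decay very quickly), the series Σ F_n(t) can be summed numerically by truncating once the terms become negligible. Here is a sketch for exponential interarrivals, where F_n is a gamma(n, λ) distribution function so the answer should be λt, the Poisson case; the function name and the SciPy dependency are our choices.

```python
from scipy.stats import gamma   # assumes SciPy is available

def renewal_function(t, lam, tol=1e-12):
    """Approximate m(t) = sum_{n>=1} F_n(t) for exponential(lam) interarrivals."""
    total, n = 0.0, 1
    while True:
        term = gamma.cdf(t, a=n, scale=1.0 / lam)   # F_n(t) = P{S_n <= t}
        total += term
        if term < tol:   # F_n(t) decreases in n, so the remaining tail is negligible
            return total
        n += 1

print(renewal_function(5.0, lam=2.0))   # approximately 10.0 = lam * t
```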

Remarks
(i) Since m(t) uniquely determines the interarrival distribution, it follows that the Poisson
process is the only renewal process having a linear mean-value function.
(ii) Some readers might think that the finiteness of m(t) should follow directly from the
fact that, with probability 1, N(t) is finite. However, such reasoning is not valid;
consider the following: Let Y be a random variable having the following probability
distribution:

Y = 2^n with probability (1/2)^n,    n ≥ 1

Now,

P{Y < ∞} = Σ_{n=1}^{∞} P{Y = 2^n} = Σ_{n=1}^{∞} (1/2)^n = 1

But

E[Y] = Σ_{n=1}^{∞} 2^n P{Y = 2^n} = Σ_{n=1}^{∞} 2^n (1/2)^n = ∞

Hence, even when Y is finite, it can still be true that E[Y] = ∞.
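The two series in this remark are easy to evaluate numerically; the snippet below (our addition) simply computes partial sums.

```python
# P{Y = 2^n} = (1/2)^n for n = 1, 2, ...
probs = [0.5**n for n in range(1, 51)]
print(sum(probs))                                  # approaches 1: P{Y < infinity} = 1
print(sum(2**n * 0.5**n for n in range(1, 51)))    # each term equals 1, so E[Y] grows without bound
```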


An integral equation satisfied by the renewal function can be obtained by condi-
tioning on the time of the first renewal. Assuming that the interarrival distribution
F is continuous with density function f, this yields

m(t) = E[N(t)] = ∫_0^∞ E[N(t) | X_1 = x] f(x) dx    (7.4)

Now suppose that the first renewal occurs at a time x that is less than t. Then,
using the fact that a renewal process probabilistically starts over when a renewal
occurs, it follows that the number of renewals by time t would have the same dis-
tribution as 1 plus the number of renewals in the first t − x time units. Therefore,

E[N(t)|X1 = x] = 1 + E[N(t − x)] if x < t

Since, clearly

E[N(t)|X1 = x] = 0 when x > t

we obtain from Equation (7.4) that

m(t) = ∫_0^t [1 + m(t − x)] f(x) dx
     = F(t) + ∫_0^t m(t − x) f(x) dx    (7.5)

Equation (7.5) is called the renewal equation and can sometimes be solved to
obtain the renewal function.
Example 7.3 One instance in which the renewal equation can be solved is when
the interarrival distribution is uniform—say, uniform on (0, 1). We will now
present a solution in this case when t ≤ 1. For such values of t, the renewal
function becomes

m(t) = t + ∫_0^t m(t − x) dx
     = t + ∫_0^t m(y) dy    (by the substitution y = t − x)

Differentiating the preceding equation yields

m′(t) = 1 + m(t)

Letting h(t) = 1 + m(t), we obtain

h′(t) = h(t)

or

log h(t) = t + C

or

h(t) = Ke^t

or

m(t) = Ke^t − 1

Since m(0) = 0, we see that K = 1, and so we obtain

m(t) = e^t − 1,    0 ≤ t ≤ 1    ■
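This closed form can be checked by direct simulation of the uniform renewal process; the sketch below (ours, with illustrative names) estimates m(t) = E[N(t)] by averaging over many sample paths.

```python
import math
import random

def m_hat(t, runs, rng):
    """Monte Carlo estimate of m(t) = E[N(t)] for uniform(0, 1) interarrivals."""
    total = 0
    for _ in range(runs):
        s, n = rng.random(), 0       # s = S_1
        while s <= t:
            n += 1                   # a renewal occurred at or before t
            s += rng.random()
        total += n
    return total / runs

rng = random.Random(0)
for t in (0.25, 0.5, 1.0):
    print(t, m_hat(t, 100_000, rng), math.exp(t) - 1)   # estimate vs. e^t - 1
```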

7.3 Limit Theorems and Their Applications


We have shown previously that, with probability 1, N(t) goes to infinity as t goes
to infinity. However, it would be nice to know the rate at which N(t) goes to
infinity. That is, we would like to be able to say something about limt→∞ N(t)/t.
As a prelude to determining the rate at which N(t) grows, let us first consider
the random variable SN(t) . In words, just what does this random variable repre-
sent? Proceeding inductively suppose, for instance, that N(t) = 3. Then SN(t) = S3
represents the time of the third event. Since there are only three events that have
occurred by time t, S3 also represents the time of the last event prior to (or at)
time t. This is, in fact, what SN(t) represents—namely, the time of the last renewal
prior to or at time t. Similar reasoning leads to the conclusion that SN(t)+1 repre-
sents the time of the first renewal after time t (see Figure 7.2). We now are ready
to prove the following.

Figure 7.2

Proposition 7.1 With probability 1,

N(t)/t → 1/μ    as t → ∞

Proof. Since SN(t) is the time of the last renewal prior to or at time t, and SN(t)+1
is the time of the first renewal after time t, we have

S_{N(t)} ≤ t < S_{N(t)+1}

or

S_{N(t)}/N(t) ≤ t/N(t) < S_{N(t)+1}/N(t)    (7.6)
However, since S_{N(t)}/N(t) = Σ_{i=1}^{N(t)} X_i / N(t) is the average of N(t) independent
and identically distributed random variables, it follows by the strong law of large
numbers that S_{N(t)}/N(t) → μ as N(t) → ∞. But since N(t) → ∞ when t → ∞,
we obtain

S_{N(t)}/N(t) → μ    as t → ∞

Furthermore, writing

S_{N(t)+1}/N(t) = [S_{N(t)+1}/(N(t) + 1)] · [(N(t) + 1)/N(t)]

we have that S_{N(t)+1}/(N(t) + 1) → μ by the same reasoning as before and

(N(t) + 1)/N(t) → 1    as t → ∞

Hence,

S_{N(t)+1}/N(t) → μ    as t → ∞

The result now follows by Equation (7.6) since t/N(t) is between two random
variables, each of which converges to μ as t → ∞.    ■

Remarks
(i) The preceding proposition is true even when μ, the mean time between renewals,
is infinite. In this case, we interpret 1/μ to be 0.

(ii) The number 1/μ is called the rate of the renewal process.
(iii) Because the average time between renewals is μ, it is quite intuitive that the average
rate at which renewals occur is 1 per every μ time units.    ■

Example 7.4 Beverly has a radio that works on a single battery. As soon as the
battery in use fails, Beverly immediately replaces it with a new battery. If the
lifetime of a battery (in hours) is distributed uniformly over the interval (30, 60),
then at what rate does Beverly have to change batteries?
Solution: If we let N(t) denote the number of batteries that have failed by time
t, we have by Proposition 7.1 that the rate at which Beverly replaces batteries
is given by

lim_{t→∞} N(t)/t = 1/μ = 1/45

That is, in the long run, Beverly will have to replace one battery every
45 hours.    ■
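A short simulation (our addition) shows the convergence asserted by Proposition 7.1: as the horizon grows, N(t)/t settles near 1/45.

```python
import random

def n_by_t(t_end, rng):
    """N(t_end) for battery lifetimes uniform on (30, 60) hours."""
    t, n = 0.0, 0
    while True:
        t += rng.uniform(30, 60)
        if t > t_end:
            return n
        n += 1

rng = random.Random(0)
for t_end in (1_000.0, 100_000.0, 10_000_000.0):
    print(t_end, n_by_t(t_end, rng) / t_end)   # approaches 1/45 ≈ 0.02222
```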
Example 7.5 Suppose in Example 7.4 that Beverly does not keep any surplus
batteries on hand, and so each time a failure occurs she must go and buy a new
battery. If the amount of time it takes for her to get a new battery is uniformly dis-
tributed over (0, 1), then what is the average rate that Beverly changes batteries?
Solution: In this case the mean time between renewals is given by

μ = E[U_1] + E[U_2]

where U1 is uniform over (30, 60) and U2 is uniform over (0, 1). Hence,
μ = 45 + 1/2 = 91/2

and so in the long run, Beverly will be putting in a new battery at the rate of
2/91 per hour. That is, she will put in two new batteries every 91 hours.    ■
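The same simulation idea applies here; the only change (in this sketch of ours) is that each renewal cycle now consists of a battery lifetime plus a shopping trip.

```python
import random

rng = random.Random(0)
t_end, t, n = 1_000_000.0, 0.0, 0
while True:
    t += rng.uniform(30, 60) + rng.uniform(0, 1)   # lifetime plus procurement time
    if t > t_end:
        break
    n += 1
print(n / t_end)   # close to 2/91 ≈ 0.02198
```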

Example 7.6 Suppose that potential customers arrive at a single-server bank in
accordance with a Poisson process having rate λ. However, suppose that the
potential customer will enter the bank only if the server is free when he arrives.
That is, if there is already a customer in the bank, then our arriver, rather than
entering the bank, will go home. If we assume that the amount of time spent in
the bank by an entering customer is a random variable having distribution G,
then
(a) what is the rate at which customers enter the bank?
(b) what proportion of potential customers actually enter the bank?

Solution: In answering these questions, let us suppose that at time 0 a customer
has just entered the bank. (That is, we define the process to start when the first
customer enters the bank.) If we let μ_G denote the mean service time, then, by
the memoryless property of the Poisson process, it follows that the mean time
between entering customers is

μ = μ_G + 1/λ

Hence, the rate at which customers enter the bank will be given by

1/μ = λ/(1 + λμ_G)

On the other hand, since potential customers arrive at rate λ, it follows that the
proportion of them entering the bank will be given by

[λ/(1 + λμ_G)] / λ = 1/(1 + λμ_G)

In particular, if λ = 2 and μ_G = 2, then only one customer out of five will
actually enter the system.    ■
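The entering-customer rate can also be checked by simulation. In the sketch below (ours), the service distribution G is taken to be exponential with mean μ_G = 2 purely for illustration; by the renewal argument above, the limiting rate depends on G only through its mean.

```python
import random

rng = random.Random(0)
lam, mean_service, t_end = 2.0, 2.0, 500_000.0
t, busy_until = 0.0, 0.0
arrivals = entries = 0
while True:
    t += rng.expovariate(lam)        # next potential customer (Poisson arrivals)
    if t > t_end:
        break
    arrivals += 1
    if t >= busy_until:              # server free: this customer enters
        entries += 1
        busy_until = t + rng.expovariate(1.0 / mean_service)   # service time
print(entries / t_end)       # entering rate, near lam/(1 + lam*mu_G) = 2/5
print(entries / arrivals)    # proportion entering, near 1/(1 + lam*mu_G) = 1/5
```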
A somewhat unusual application of Proposition 7.1 is provided by our next
example.
Example 7.7 A sequence of independent trials, each of which results in outcome
number i with probability P_i, i = 1, . . . , n, Σ_{i=1}^{n} P_i = 1, is observed until the
same outcome occurs k times in a row; this outcome then is declared to be the
winner of the game. For instance, if k = 2 and the sequence of outcomes is
1, 2, 4, 3, 5, 2, 1, 3, 3, then we stop after nine trials and declare outcome number 3
the winner. What is the probability that i wins, i = 1, . . . , n, and what is the
expected number of trials?
Solution: We begin by computing the expected number of coin tosses, call it
E[T], until a run of k successive heads occurs when the tosses are independent
and each lands on heads with probability p. By conditioning on the time of the
first nonhead, we obtain


E[T] = Σ_{j=1}^{k} (1 − p) p^{j−1} (j + E[T]) + k p^k

Solving this for E[T] yields

E[T] = k + [(1 − p)/p^k] Σ_{j=1}^{k} j p^{j−1}

Upon simplifying, we obtain

E[T] = (1 + p + · · · + p^{k−1}) / p^k
     = (1 − p^k) / (p^k (1 − p))    (7.7)
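Equation (7.7) is easy to confirm by simulation; in this sketch (ours), trials_until_run counts tosses until k consecutive heads appear.

```python
import random

def trials_until_run(p, k, rng):
    """Number of tosses until k consecutive heads, where each toss is heads w.p. p."""
    run = count = 0
    while run < k:
        count += 1
        run = run + 1 if rng.random() < p else 0   # extend or reset the current run
    return count

p, k, runs = 0.5, 3, 200_000
rng = random.Random(0)
avg = sum(trials_until_run(p, k, rng) for _ in range(runs)) / runs
print(avg, (1 - p**k) / (p**k * (1 - p)))   # simulated mean vs. Equation (7.7), here 14
```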

Now, let us return to our example, and let us suppose that as soon as the
winner of a game has been determined we immediately begin playing another
game. For each i let us determine the rate at which outcome i wins. Now, every
time i wins, everything starts over again and thus wins by i constitute renewals.
Hence, from Proposition 7.1,

rate at which i wins = 1/E[N_i]

where Ni denotes the number of trials played between successive wins of out-
come i. Hence, from Equation (7.7) we see that

rate at which i wins = P_i^k (1 − P_i) / (1 − P_i^k)    (7.8)

Hence, the long-run proportion of games that are won by number i is given by
proportion of games i wins = (rate at which i wins) / Σ_{j=1}^{n} (rate at which j wins)
                           = [P_i^k (1 − P_i)/(1 − P_i^k)] / Σ_{j=1}^{n} [P_j^k (1 − P_j)/(1 − P_j^k)]

However, it follows from the strong law of large numbers that the long-run
proportion of games that i wins will, with probability 1, be equal to the prob-
ability that i wins any given game. Hence,

P{i wins} = [P_i^k (1 − P_i)/(1 − P_i^k)] / Σ_{j=1}^{n} [P_j^k (1 − P_j)/(1 − P_j^k)]

To compute the expected time of a game, we first note that

rate at which games end = Σ_{i=1}^{n} (rate at which i wins)
                        = Σ_{i=1}^{n} P_i^k (1 − P_i)/(1 − P_i^k)    (from Equation (7.8))

Now, as everything starts over when a game ends, it follows by Proposition 7.1
that the rate at which games end is equal to the reciprocal of the mean time of
a game. Hence,

E[time of a game] = 1 / (rate at which games end)
                  = 1 / Σ_{i=1}^{n} [P_i^k (1 − P_i)/(1 − P_i^k)]    ■
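The win probabilities and the mean game length can both be checked by playing the game many times; the sketch below (ours, with arbitrary parameters) compares the simulation with the formulas above.

```python
import random

def simulate_game(P, k, rng):
    """Play until some outcome occurs k times in a row; return (winner, number of trials)."""
    last, run, trials = None, 0, 0
    outcomes = list(range(len(P)))
    while True:
        trials += 1
        x = rng.choices(outcomes, weights=P)[0]
        run = run + 1 if x == last else 1
        last = x
        if run == k:
            return x, trials

P, k, runs = [0.5, 0.3, 0.2], 2, 100_000
rng = random.Random(0)
wins, total_trials = [0] * len(P), 0
for _ in range(runs):
    w, tr = simulate_game(P, k, rng)
    wins[w] += 1
    total_trials += tr

rates = [p**k * (1 - p) / (1 - p**k) for p in P]   # Equation (7.8)
for i in range(len(P)):
    print(i + 1, wins[i] / runs, rates[i] / sum(rates))   # simulated vs. P{i wins}
print(total_trials / runs, 1 / sum(rates))                # simulated vs. E[time of a game]
```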

Proposition 7.1 says that the average renewal rate up to time t will, with prob-
ability 1, converge to 1/μ as t → ∞. What about the expected average renewal
rate? Is it true that m(t)/t also converges to 1/μ? This result is known as the
elementary renewal theorem.

Theorem 7.1 Elementary Renewal Theorem

m(t)/t → 1/μ    as t → ∞

As before, 1/μ is interpreted as 0 when μ = ∞.


Remark At first glance it might seem that the elementary renewal theorem should
be a simple consequence of Proposition 7.1. That is, since the average renewal
rate will, with probability 1, converge to 1/μ, should this not imply that the
expected average renewal rate also converges to 1/μ? We must, however, be
careful; consider the next example.

Example 7.8 Let U be a random variable which is uniformly distributed on (0, 1);
and define the random variables Y_n, n ≥ 1, by

Y_n = 0, if U > 1/n
Y_n = n, if U ≤ 1/n

Now, since, with probability 1, U will be greater than 0, it follows that Yn will
equal 0 for all sufficiently large n. That is, Yn will equal 0 for all n large enough
so that 1/n < U. Hence, with probability 1,

Yn → 0 as n → ∞

However,

E[Y_n] = n P{U ≤ 1/n} = n (1/n) = 1
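The point of the example is easy to see numerically: the snippet below (ours) estimates E[Y_n] for several n, and each estimate stays near 1 even though any fixed realization of the sequence is eventually 0.

```python
import random

rng = random.Random(0)
runs = 1_000_000
for n in (10, 100, 10_000):
    # Y_n = n if U <= 1/n, else 0, so E[Y_n] = n * (1/n) = 1 for every n,
    # even though for any fixed draw U > 0 we have Y_n = 0 once n > 1/U.
    est = sum(n if rng.random() <= 1.0 / n else 0 for _ in range(runs)) / runs
    print(n, est)
```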
