Sheldon Ross-438-449
7.1 Introduction
We have seen that a Poisson process is a counting process for which the times
between successive events are independent and identically distributed exponential
random variables. One possible generalization is to consider a counting process
for which the times between successive events are independent and identically
distributed with an arbitrary distribution. Such a counting process is called a
renewal process.
Let {N(t), t ≥ 0} be a counting process and let Xn denote the time between
the (n − 1)st and the nth event of this process, n ≥ 1.
Definition 7.1 If the sequence of nonnegative random variables {X1, X2, . . .} is
independent and identically distributed, then the counting process {N(t), t ≥ 0}
is said to be a renewal process.
Thus, a renewal process is a counting process such that the time until the first
event occurs has some distribution F, the time between the first and second event
has, independently of the time of the first event, the same distribution F, and
so on. When an event occurs, we say that a renewal has taken place.
For an example of a renewal process, suppose that we have an infinite supply of
lightbulbs whose lifetimes are independent and identically distributed. Suppose
also that we use a single lightbulb at a time, and when it fails we immediately
replace it with a new one. Under these conditions, {N(t), t ≥ 0} is a renewal
process when N(t) represents the number of lightbulbs that have failed by time t.
For a renewal process with interarrival times X1, X2, . . . , define the renewal times

S0 = 0,    Sn = X1 + X2 + · · · + Xn,    n ≥ 1
That is, S1 = X1 is the time of the first renewal; S2 = X1 + X2 is the time until
the first renewal plus the time between the first and second renewal, that is, S2 is
the time of the second renewal. In general, Sn denotes the time of the nth renewal
(see Figure 7.1).
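The definitions above translate directly into a short simulation: draw iid interarrival times, accumulate the renewal times Sn, and count how many land at or before t. A minimal sketch in Python (the function names and the exponential lifetime example are our own choices, not the book's):

```python
import random

def renewal_count(t, sample_interarrival, rng):
    """Return N(t): the number of renewal times S_n that satisfy S_n <= t."""
    n, s = 0, 0.0
    while True:
        s += sample_interarrival(rng)   # s is now S_{n+1}
        if s > t:
            return n                    # N(t) = max{n : S_n <= t}
        n += 1

# Example: lightbulbs whose lifetimes are exponential with rate 2, watched up to t = 10.
rng = random.Random(42)
counts = [renewal_count(10.0, lambda r: r.expovariate(2.0), rng) for _ in range(2000)]
avg = sum(counts) / len(counts)   # in this special case E[N(10)] = 2 * 10 = 20
```

Only the iid sampler changes from one renewal process to another; the counting logic is the same for every interarrival distribution.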
We shall let F denote the interarrival distribution and, to avoid trivialities, we
assume that F(0) = P{Xn = 0} < 1. Furthermore, we let

μ = E[Xn],    n ≥ 1

be the mean time between successive renewals. It follows from the nonnegativity
of Xn and the fact that Xn is not identically 0 that μ > 0.
The first question we shall attempt to answer is whether an infinite number of
renewals can occur in a finite amount of time. That is, can N(t) be infinite for
some (finite) value of t? To show that this cannot occur, we first note that, as Sn
is the time of the nth renewal, N(t) may be written as

N(t) = max{n : Sn ≤ t}    (7.1)
To understand why Equation (7.1) is valid, suppose, for instance, that S4 ≤ t but
S5 > t. Hence, the fourth renewal had occurred by time t but the fifth renewal
occurred after time t; or in other words, N(t), the number of renewals that
occurred by time t, must equal 4. Now by the strong law of large numbers it
follows that, with probability 1,
Sn/n → μ    as n → ∞
But since μ > 0 this means that Sn must be going to infinity as n goes to infinity.
Thus, Sn can be less than or equal to t for at most a finite number of values of n,
and hence by Equation (7.1), N(t) must be finite.
However, though N(t) < ∞ for each t, it is true that, with probability 1,

N(∞) ≡ lim_{t→∞} N(t) = ∞

This follows since the only way in which N(∞), the total number of renewals
that occur, can be finite is for one of the interarrival times to be infinite, an
event of probability 0.

7.2 Distribution of N(t)

The distribution of N(t) can be obtained by noting the important relationship
that the number of renewals by time t is greater than or equal to n if and only
if the nth renewal occurs by time t. That is,

N(t) ≥ n ⇔ Sn ≤ t    (7.2)
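Relation (7.2) is a pathwise statement: on every realization, the events {N(t) ≥ n} and {Sn ≤ t} either both occur or both fail. The sketch below (with uniform(0, 1) interarrivals as an arbitrary choice) counts both events over simulated paths and finds the counts identical:

```python
import random

def check_equivalence(t, n, trials=5000, seed=1):
    """Count the events {N(t) >= n} and {S_n <= t} over simulated renewal paths."""
    rng = random.Random(seed)
    count_left = count_right = 0
    for _ in range(trials):
        s, draws, s_n = 0.0, 0, None
        while s <= t:                  # keep drawing while the last renewal is <= t
            s += rng.random()          # uniform(0, 1) interarrival time
            draws += 1
            if draws == n:
                s_n = s                # record S_n
        n_t = draws - 1                # the final draw overshot t, so N(t) = draws - 1
        count_left += (n_t >= n)
        count_right += (s_n is not None and s_n <= t)
    return count_left, count_right

left, right = check_equivalence(t=3.0, n=5)
```

The two counts agree exactly, not just approximately, because the indicators coincide on every path.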
Now, since the random variables Xi, i ≥ 1, are independent and have a com-
mon distribution F, it follows that Sn = X1 + · · · + Xn is distributed as Fn, the
n-fold convolution of F with itself (Section 2.5). Therefore, from Equation (7.2)
we obtain

P{N(t) = n} = P{N(t) ≥ n} − P{N(t) ≥ n + 1} = Fn(t) − Fn+1(t)    (7.3)
Example 7.1 Suppose that P{Xn = i} = p(1 − p)^{i−1}, i ≥ 1. That is, suppose that
the interarrival distribution is geometric. Now S1 = X1 may be interpreted as the
number of trials necessary to get a single success when each trial is independent
and has a probability p of being a success. Similarly, Sn may be interpreted as
the number of trials necessary to attain n successes, and hence has the negative
binomial distribution

P{Sn = k} = (k−1 choose n−1) p^n (1 − p)^{k−n},    k ≥ n

and P{Sn = k} = 0 for k < n.
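Example 7.1's negative binomial claim is easy to cross-check: simulate Sn as a sum of n iid geometric(p) variables and compare the empirical pmf with the closed form. The parameter values below are arbitrary:

```python
import math
import random

def neg_binomial_pmf(n, k, p):
    """P{S_n = k} from Example 7.1: the nth success occurs on trial k."""
    if k < n:
        return 0.0
    return math.comb(k - 1, n - 1) * p**n * (1 - p)**(k - n)

def geometric(rng, p):
    """Number of Bernoulli(p) trials up to and including the first success."""
    c = 1
    while rng.random() >= p:
        c += 1
    return c

rng = random.Random(7)
p, n, trials = 0.4, 3, 50000
samples = [sum(geometric(rng, p) for _ in range(n)) for _ in range(trials)]
est = samples.count(7) / trials      # empirical P{S_3 = 7}
exact = neg_binomial_pmf(3, 7, 0.4)  # comb(6, 2) * 0.4**3 * 0.6**4 = 0.124416
```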
Now, if the nth event occurred at time y > t, then there would have been fewer
than n events by time t. On the other hand, if it occurred at a time y ≤ t, then
there would be exactly n events by time t if the next interarrival exceeds t − y.
Consequently,

P{N(t) = n} = ∫_0^t P{Xn+1 > t − y | Sn = y} fSn(y) dy = ∫_0^t F̄(t − y) fSn(y) dy

where F̄ = 1 − F.
Example 7.2 If F(x) = 1 − e^{−λx} then Sn, being the sum of n independent expo-
nentials with rate λ, will have a gamma(n, λ) distribution. Consequently, the
preceding identity gives

P{N(t) = n} = ∫_0^t e^{−λ(t−y)} λe^{−λy} (λy)^{n−1}/(n − 1)! dy
            = (λ^n e^{−λt}/(n − 1)!) ∫_0^t y^{n−1} dy
            = e^{−λt} (λt)^n/n!
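Example 7.2 says that with exponential interarrivals N(t) is Poisson with mean λt, which a quick simulation confirms (λ, t, and the checked value n = 3 are arbitrary choices):

```python
import math
import random

def count_renewals_exponential(t, rate, rng):
    """N(t) when interarrival times are exponential with the given rate."""
    s, n = 0.0, 0
    while True:
        s += rng.expovariate(rate)
        if s > t:
            return n
        n += 1

rng = random.Random(3)
lam, t, trials = 1.5, 2.0, 40000
counts = [count_renewals_exponential(t, lam, rng) for _ in range(trials)]
est = counts.count(3) / trials                                  # empirical P{N(t) = 3}
exact = math.exp(-lam * t) * (lam * t)**3 / math.factorial(3)   # e^{-λt} (λt)^3 / 3!
```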
By using Equation (7.2) we can calculate m(t), the mean value of N(t), as

m(t) = E[N(t)] = ∑_{n=1}^∞ P{N(t) ≥ n} = ∑_{n=1}^∞ P{Sn ≤ t} = ∑_{n=1}^∞ Fn(t)

where we have used the fact that if X is nonnegative and integer valued, then

E[X] = ∑_{k=1}^∞ k P{X = k} = ∑_{k=1}^∞ ∑_{n=1}^k P{X = k}
     = ∑_{n=1}^∞ ∑_{k=n}^∞ P{X = k} = ∑_{n=1}^∞ P{X ≥ n}

The function m(t) is known as the mean-value or the renewal function.
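For exponential interarrivals each convolution Fn is an Erlang cdf with a closed form, so the sum ∑ Fn(t) can be evaluated directly; since the process is then Poisson, it should collapse to m(t) = λt. A numerical sketch (truncating the sum at 100 terms, far more than needed for the values used here):

```python
import math

def erlang_cdf(n, lam, t):
    """F_n(t) = P{S_n <= t} for exponential(λ) interarrivals,
    i.e. 1 - sum_{j=0}^{n-1} e^{-λt} (λt)^j / j!."""
    term, total = math.exp(-lam * t), 0.0
    for j in range(n):
        total += term
        term *= lam * t / (j + 1)   # next Poisson weight, computed iteratively
    return 1.0 - total

def renewal_function(lam, t, terms=100):
    """m(t) = sum_{n>=1} F_n(t), truncated after `terms` convolutions."""
    return sum(erlang_cdf(n, lam, t) for n in range(1, terms + 1))

m_est = renewal_function(2.0, 3.0)   # should agree with m(t) = λt = 6
```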
Remarks
(i) Since m(t) uniquely determines the interarrival distribution, it follows that the Poisson
process is the only renewal process having a linear mean-value function.
(ii) Some readers might think that the finiteness of m(t) should follow directly from the
fact that, with probability 1, N(t) is finite. However, such reasoning is not valid;
consider the following: Let Y be a random variable having the following probability
distribution:

Y = 2^n with probability (1/2)^n,    n ≥ 1

Now,

P{Y < ∞} = ∑_{n=1}^∞ P{Y = 2^n} = ∑_{n=1}^∞ (1/2)^n = 1
But

E[Y] = ∑_{n=1}^∞ 2^n P{Y = 2^n} = ∑_{n=1}^∞ 2^n (1/2)^n = ∑_{n=1}^∞ 1 = ∞
Now suppose that the first renewal occurs at a time x that is less than t. Then,
using the fact that a renewal process probabilistically starts over when a renewal
occurs, it follows that the number of renewals by time t would have the same
distribution as 1 plus the number of renewals in the first t − x time units. Therefore,

E[N(t) | X1 = x] = 1 + E[N(t − x)],    x < t    (7.4)

Since, clearly,

E[N(t) | X1 = x] = 0    when x > t

we obtain, upon conditioning on X1,

m(t) = E[N(t)] = ∫_0^∞ E[N(t) | X1 = x] dF(x) = F(t) + ∫_0^t m(t − x) dF(x)    (7.5)

Equation (7.5) is called the renewal equation and can sometimes be solved to
obtain the renewal function.
Example 7.3 One instance in which the renewal equation can be solved is when
the interarrival distribution is uniform—say, uniform on (0, 1). We will now
present a solution in this case when t ≤ 1. For such values of t, the renewal
function becomes

m(t) = t + ∫_0^t m(t − x) dx
     = t + ∫_0^t m(y) dy    by the substitution y = t − x

Differentiating the preceding equation yields

m′(t) = 1 + m(t)

Letting h(t) = 1 + m(t), this can be written as

h′(t) = h(t)

or

log h(t) = t + C

or

h(t) = Ke^t

or

m(t) = Ke^t − 1

Since m(0) = 0, we see that K = 1, and so

m(t) = e^t − 1,    0 ≤ t ≤ 1
Figure 7.2
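Example 7.3's answer can be sanity-checked by simulation: with uniform(0, 1) interarrivals, the sample average of N(t) should be close to e^t − 1 for t ≤ 1. A sketch with an arbitrary t:

```python
import math
import random

def count_renewals_uniform(t, rng):
    """N(t) for uniform(0, 1) interarrival times."""
    s, n = 0.0, 0
    while True:
        s += rng.random()
        if s > t:
            return n
        n += 1

rng = random.Random(11)
t, trials = 0.8, 100000
est = sum(count_renewals_uniform(t, rng) for _ in range(trials)) / trials
exact = math.exp(t) - 1   # m(t) = e^t - 1 on 0 <= t <= 1
```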
7.3 Limit Theorems and Their Applications

Proposition 7.1 With probability 1,

N(t)/t → 1/μ    as t → ∞
Proof. Since SN(t) is the time of the last renewal prior to or at time t, and SN(t)+1
is the time of the first renewal after time t, we have

SN(t) ≤ t < SN(t)+1

or

SN(t)/N(t) ≤ t/N(t) < SN(t)+1/N(t)    (7.6)
However, since SN(t)/N(t) = (∑_{i=1}^{N(t)} Xi)/N(t) is the average of N(t) independent
and identically distributed random variables, it follows by the strong law of large
numbers that SN(t)/N(t) → μ as N(t) → ∞. But since N(t) → ∞ when t → ∞,
we obtain

SN(t)/N(t) → μ    as t → ∞
Furthermore, writing

SN(t)+1/N(t) = [SN(t)+1/(N(t) + 1)] · [(N(t) + 1)/N(t)]

we have that SN(t)+1/(N(t) + 1) → μ by the same reasoning as before, and

(N(t) + 1)/N(t) → 1    as t → ∞

Hence,

SN(t)+1/N(t) → μ    as t → ∞
The result now follows by Equation (7.6) since t/N(t) is between two random
variables, each of which converges to μ as t → ∞.
Remarks
(i) The preceding proposition is true even when μ, the mean time between renewals,
is infinite. In this case, we interpret 1/μ to be 0.
(ii) The number 1/μ is called the rate of the renewal process.
(iii) Because the average time between renewals is μ, it is quite intuitive that the average
rate at which renewals occur is 1 per every μ time units.
Example 7.4 Beverly has a radio that works on a single battery. As soon as the
battery in use fails, Beverly immediately replaces it with a new battery. If the
lifetime of a battery (in hours) is distributed uniformly over the interval (30, 60),
then at what rate does Beverly have to change batteries?
Solution: If we let N(t) denote the number of batteries that have failed by time
t, we have by Proposition 7.1 that the rate at which Beverly replaces batteries
is given by
lim_{t→∞} N(t)/t = 1/μ = 1/45
That is, in the long run, Beverly will have to replace one battery every
45 hours.
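Example 7.4 can be illustrated in a few lines: run one long sample path of uniform(30, 60) lifetimes and compare the observed replacement rate with 1/45. The horizon length is an arbitrary choice:

```python
import random

rng = random.Random(5)
horizon = 1_000_000.0                    # hours of simulated operation
elapsed, replaced = 0.0, 0
while True:
    elapsed += rng.uniform(30.0, 60.0)   # one battery's lifetime in hours
    if elapsed > horizon:
        break
    replaced += 1
rate = replaced / horizon                # long-run replacement rate, about 1/45 per hour
```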
Example 7.5 Suppose in Example 7.4 that Beverly does not keep any surplus
batteries on hand, and so each time a failure occurs she must go and buy a new
battery. If the amount of time it takes for her to get a new battery is uniformly dis-
tributed over (0, 1), then what is the average rate that Beverly changes batteries?
Solution: In this case the mean time between renewals is given by

μ = E[U1] + E[U2]

where U1 is uniform over (30, 60) and U2 is uniform over (0, 1). Hence,

μ = 45 + 1/2 = 91/2

and so in the long run, Beverly will be putting in a new battery at the rate of
2/91. That is, she will put in two new batteries every 91 hours.
customer enters the bank.) If we let μG denote the mean service time, then, by
the memoryless property of the Poisson process, it follows that the mean time
between entering customers is

μ = μG + 1/λ

Hence, the rate at which customers enter the bank will be given by

1/μ = λ/(1 + λμG)

and the proportion of arriving customers that actually enter the bank equals

[λ/(1 + λμG)]/λ = 1/(1 + λμG)
E[T] = ∑_{j=1}^k (1 − p)p^{j−1}(j + E[T]) + kp^k

which yields

E[T] = k + ((1 − p)/p^k) ∑_{j=1}^k jp^{j−1}
so that

E[T] = (1 + p + · · · + p^{k−1})/p^k = (1 − p^k)/(p^k(1 − p))    (7.7)
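Equation (7.7) gives the expected number of trials until k consecutive successes, which a direct simulation can check (the values of p and k below are arbitrary):

```python
import random

def trials_until_run(p, k, rng):
    """Number of Bernoulli(p) trials until the first run of k consecutive successes."""
    run, n = 0, 0
    while run < k:
        n += 1
        run = run + 1 if rng.random() < p else 0   # extend or reset the current run
    return n

rng = random.Random(13)
p, k, reps = 0.5, 3, 50000
est = sum(trials_until_run(p, k, rng) for _ in range(reps)) / reps
exact = (1 - p**k) / (p**k * (1 - p))   # Equation (7.7); here (1 - 1/8)/(1/8 * 1/2) = 14
```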
Now, let us return to our example, and let us suppose that as soon as the
winner of a game has been determined we immediately begin playing another
game. For each i let us determine the rate at which outcome i wins. Now, every
time i wins, everything starts over again and thus wins by i constitute renewals.
Hence, from Proposition 7.1,

rate at which i wins = 1/E[Ni]
where Ni denotes the number of trials played between successive wins of outcome i.
Hence, from Equation (7.7) we see that

rate at which i wins = Pi^k(1 − Pi)/(1 − Pi^k)    (7.8)
Hence, the long-run proportion of games that are won by number i is given by

proportion of games i wins = (rate at which i wins) / (∑_{j=1}^n rate at which j wins)
However, it follows from the strong law of large numbers that the long-run
proportion of games that i wins will, with probability 1, be equal to the
probability that i wins any given game. Hence,

P{i wins} = [Pi^k(1 − Pi)/(1 − Pi^k)] / ∑_{j=1}^n [Pj^k(1 − Pj)/(1 − Pj^k)]
Now, as everything starts over when a game ends, it follows by Proposition 7.1
that the rate at which games end is equal to the reciprocal of the mean time of
a game. Hence,
E[time of a game] = 1/(rate at which games end)
                  = 1/∑_{i=1}^n [Pi^k(1 − Pi)/(1 − Pi^k)]
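The win-probability formula derived above (the rates from Equation (7.8), normalized) can be tested by playing many independent games. The outcome probabilities, k, and the tracked outcome index below are arbitrary choices:

```python
import random

def play_game(probs, k, rng):
    """Play rounds until some outcome occurs k times in a row; return that outcome."""
    last, run = None, 0
    while True:
        u, i, acc = rng.random(), 0, probs[0]
        while i < len(probs) - 1 and u > acc:   # sample an outcome index from probs
            i += 1
            acc += probs[i]
        run = run + 1 if i == last else 1
        last = i
        if run == k:
            return i

def win_probability(probs, i, k):
    """P{i wins} = rate_i / sum_j rate_j, with rate_j = P_j^k (1 - P_j)/(1 - P_j^k)."""
    rate = lambda q: q**k * (1 - q) / (1 - q**k)
    return rate(probs[i]) / sum(rate(q) for q in probs)

rng = random.Random(17)
probs, k, reps = [0.5, 0.3, 0.2], 2, 40000
est = sum(play_game(probs, k, rng) == 0 for _ in range(reps)) / reps
exact = win_probability(probs, 0, 2)   # for k = 2 the rates reduce to p^2/(1 + p)
```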
Proposition 7.1 says that the average renewal rate up to time t will, with prob-
ability 1, converge to 1/μ as t → ∞. What about the expected average renewal
rate? Is it true that m(t)/t also converges to 1/μ? The affirmative answer is
known as the elementary renewal theorem:

m(t)/t → 1/μ    as t → ∞
Now, since, with probability 1, U will be greater than 0, it follows that Yn will
equal 0 for all sufficiently large n. That is, Yn will equal 0 for all n large enough
so that 1/n < U. Hence, with probability 1,
Yn → 0 as n → ∞
However,

E[Yn] = nP{U ≤ 1/n} = n(1/n) = 1
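The point of this last example—almost sure convergence to 0 alongside a constant expectation—shows up clearly in simulation. Here Yn = n·1{U ≤ 1/n} with U uniform on (0, 1); the choice of n and the sample size are arbitrary:

```python
import random

rng = random.Random(19)
n, reps = 50, 100000
# Y_n = n if U <= 1/n, else 0, with a fresh U drawn each repetition
samples = [n if rng.random() <= 1 / n else 0 for _ in range(reps)]
mean_y = sum(samples) / reps         # E[Y_n] = n * P{U <= 1/n} = 1 for every n
prop_zero = samples.count(0) / reps  # P{Y_n = 0} = 1 - 1/n, close to 1 for large n
```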