IEOR E4404 Solution to Assignment 4 2015 Spring

1. Consider a life insurance company that works as follows. Customers arrive according to a Poisson
process with rate ρ; let the inter-arrival times be X_1, X_2, .... The ith individual stays in the system
for an amount of time T_i, which is uniformly distributed on the interval [0, β]. Thus, the nth individual
leaves the system at time X_1 + ... + X_n + T_n. The T_i's are assumed to be i.i.d., and the X_i's and
T_i's are assumed to be independent. We wish to estimate the number of people who have left the system in
the time interval [0, t]; call it M(t).
(a) Argue that the expected number of people who have left the system by time t is equal to
\[
\int_0^t \rho P(T \le t - s)\, ds,
\]
where T is uniformly distributed on the interval [0, β].

Solution By the thinning theorem, the instantaneous arrival rate at time s ≤ t of a customer who will NOT be
present in the system at time t is the arrival rate ρ times the probability that the customer will have left
the system by t, i.e. P(s + T ≤ t) = P(T ≤ t − s). We can therefore treat the arrivals of customers who will
have left the system by time t as an inhomogeneous Poisson process with rate ρP(T ≤ t − s), and the
number of people who have left the system by time t is then Poisson with mean
\[
\int_0^t \rho P(T \le t - s)\, ds.
\]
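As a quick check of this formula: when t ≤ β we have P(T ≤ t − s) = (t − s)/β for every s ∈ [0, t], so the mean number of departures specializes to
\[
\int_0^t \rho\,\frac{t - s}{\beta}\, ds = \frac{\rho t^2}{2\beta}.
\]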

(b) Suppose that t = β. What is the distribution of the number of customers who are still in the
system at time t, assuming that the system is empty at time zero?

Solution Similarly to part (a), the instantaneous arrival rate at time s ≤ t = β of a customer who
will still be in the system at time β is the arrival rate ρ times the probability that the customer will leave the
system after time β, i.e. P(s + T > β). Then by the thinning theorem, we can view the arrivals of customers
who will still be in the system at time β as an inhomogeneous Poisson process with rate ρP(T > β − s), so
the number of people who are still in the system at time β is Poisson distributed with mean
\[
\int_0^\beta \rho P(T > \beta - s)\, ds = \int_0^\beta \rho\,\frac{s}{\beta}\, ds = \frac{\rho\beta}{2}.
\]

(c) Suppose that the company has zero customers at time zero, and that at time β the company has 5
customers in its portfolio. Conditional on this information, explain how to simulate the distribution of
the remaining times of these customers in the system.

Solution Given N(β) = 5, we can simulate the arrival times of these 5 customers who are still in
the system at time β. By the theory of inhomogeneous Poisson processes, conditioning on N(β) = 5,
each arrival time has density
\[
f(t) = \frac{\lambda(t)}{\rho\beta/2}\,\mathbf{1}\{t \in [0,\beta]\}
     = \frac{\rho t/\beta}{\rho\beta/2}\,\mathbf{1}\{t \in [0,\beta]\}
     = \frac{2t}{\beta^{2}}\,\mathbf{1}\{t \in [0,\beta]\},
\]
where λ(t) = ρt/β is the rate of the thinned process from part (b).

Then we can use the A/R method with proposal density g(t) = (1/β) 1{t ∈ [0, β]}, for which f(t)/g(t) ≤ c = 2.
The steps to generate A_1, ..., A_5 are as follows:
Step 1: generate Y ~ Unif[0, β];
Step 2: independently generate U ~ Unif[0, 1];
Step 3: if U ≤ f(Y)/(c g(Y)) = Y/β, then accept Ã = Y; otherwise go back to Step 1.
We repeat these steps until we obtain 5 accepted values Ã_1, ..., Ã_5, then sort them in increasing order
and assign them as the corresponding arrival times, i.e.
\[
A_1 = \tilde A_{(1)}, \; \dots, \; A_5 = \tilde A_{(5)}.
\]

Next, for each of these customers we simulate the distribution of his/her remaining time in the system
after β. Suppose the ith customer (i ∈ {1, 2, 3, 4, 5}) arrives at time A_i = s ∈ [0, β]. Then, writing T_i
for his/her total time in the system, the remaining time's distribution is, for x ∈ [0, s],
\[
P(T_i - (\beta - s) > x \mid T_i > \beta - s)
= \frac{P(T_i > \beta - s + x)}{P(T_i > \beta - s)}
= \frac{s - x}{s}
= P(\mathrm{Unif}[0, s] > x).
\]
So the remaining time of the ith customer is distributed as Unif[0, A_i].
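As a concrete illustration, here is a minimal MATLAB sketch of this procedure; the value of beta is an assumption chosen purely for illustration.

beta = 2;                        % illustrative value of beta
A = zeros(5,1);
for i = 1:5
    while true                   % A/R with proposal Unif(0, beta) and c = 2
        Y = beta*rand; U = rand;
        if U <= Y/beta           % acceptance test: f(Y)/(c*g(Y)) = Y/beta
            A(i) = Y;
            break;
        end
    end
end
A = sort(A);                     % arrival times A_1 <= ... <= A_5
R = A .* rand(5,1);              % remaining times, R_i ~ Unif(0, A_i)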

2. (Two queues in series) The system consists of two servers with infinite waiting room. Each
customer must be first served by server 1, and upon completion of service at 1, goes over to server 2. An
arriving customer is served by server 1 immediately upon arrival if server 1 is free, otherwise she joins
the queue at server 1; similarly at server 2. Both servers serve customers in the order at which they
arrive, i.e., both are first-come-first-serve (FCFS).
Let the system be empty at time 0. Let A_n be the arrival time of the nth customer, S_n^{(1)} be his
service requirement at server 1, and S_n^{(2)} be his service requirement at server 2. Let W_n^{(1)} and
W_n^{(2)} be the waiting times in queue of the nth customer at servers 1 and 2 respectively.

(a) Show that
\[
W_{n+1}^{(1)} = \left[ W_n^{(1)} + S_n^{(1)} - (A_{n+1} - A_n) \right]^+
\]
for all n.

Solution If we only consider the behavior of the first server, it is a FIFO single-server queue, and we
have seen this result for the single-server queue in class. To verify it directly, we consider two cases:
(1) A_n + W_n^{(1)} + S_n^{(1)} < A_{n+1}.
In this case, the nth customer leaves server 1 before the (n + 1)th customer arrives, so the (n + 1)th customer
does not wait in the queue; thus W_{n+1}^{(1)} = 0 = [W_n^{(1)} + S_n^{(1)} − (A_{n+1} − A_n)]^+.
(2) A_n + W_n^{(1)} + S_n^{(1)} ≥ A_{n+1}.
In this case, the (n + 1)th customer arrives before the nth customer finishes his service at server 1,
so he/she has to wait until the nth customer finishes that service. Thus the waiting time of the (n + 1)th
customer is W_{n+1}^{(1)} = A_n + W_n^{(1)} + S_n^{(1)} − A_{n+1} = [W_n^{(1)} + S_n^{(1)} − (A_{n+1} − A_n)]^+.
In both cases the equality holds.

(b) Let D_n be the departure time of the nth customer from server 1. Show that
\[
D_n = A_n + W_n^{(1)} + S_n^{(1)},
\]
and
\[
W_{n+1}^{(2)} = \left[ W_n^{(2)} + S_n^{(2)} - (D_{n+1} - D_n) \right]^+
\]
for all n.

Solution The first equation is obviously true, since the first queue behaves just like a FIFO single-server
queue.
For the second equation, we observe that the departure time from server 1 is exactly the arrival time
at server 2. Then, following the same argument as in part (a), we get the desired equation
\[
W_{n+1}^{(2)} = \left[ W_n^{(2)} + S_n^{(2)} - (D_{n+1} - D_n) \right]^+.
\]

(c) Let N(t) be the number of customers that have arrived between time 0 and t, and let M(t) be the
number of customers in the system at time t. Show that
\[
M(t) = \sum_{n=1}^{N(t)} \mathbf{1}\left\{ A_n + W_n^{(1)} + S_n^{(1)} + W_n^{(2)} + S_n^{(2)} \ge t \right\}.
\]

Solution The number of customers in the system at time t is the total number of customers who
arrive by time t and depart after time t. The nth arrival time at server 2 is D_n, and the nth departure time
from server 2 is D_n + W_n^{(2)} + S_n^{(2)}. In part (b) we showed that D_n = A_n + W_n^{(1)} + S_n^{(1)};
substituting this for D_n gives the nth departure time from the entire system as
\[
A_n + W_n^{(1)} + S_n^{(1)} + W_n^{(2)} + S_n^{(2)}.
\]
Since the N(t)th customer is the last one to arrive in the system by time t, we have
\[
M(t) = \sum_{n=1}^{N(t)} \mathbf{1}\left\{ A_n + W_n^{(1)} + S_n^{(1)} + W_n^{(2)} + S_n^{(2)} \ge t \right\}
\]
as desired.

(d) Suppose now that customers arrive according to a non-homogeneous Poisson process with inten-
sity function λ(t) = 2 + cos (2πt), and their service requirements at server 1 are i.i.d. exponential r.v.
with mean 1/3 and those at server 2 are i.i.d. exponential r.v. with mean 1/4, independent from service
requirements at server 1. Estimate the probability that during the time [0, 10], the number of customers
in the system never exceeds 6, i.e., the probability that for all t ∈ [0, 10], M (t) ≤ 6.

Solution We estimate the probability by simulating the system a number of times and counting the
proportion of runs in which M(t) ≤ 6 for all t ∈ [0, 10].
Steps to simulate:
Step 1: Generate a non-homogeneous Poisson process with intensity λ(t) as the arrival process A_n,
and let N be the number of arrivals by time 10. (Various methods can be used here; I chose the
thinning (A/R) method.)
Step 2: Generate the service times S_n^{(1)} at server 1 as i.i.d. exponential random variables with mean 1/3.
Step 3: Generate the service times S_n^{(2)} at server 2 as i.i.d. exponential random variables with mean 1/4.
Step 4: Let W_1^{(1)} = 0, and calculate the waiting times of all the other customers at server 1 by the
formula in part (a): W_{n+1}^{(1)} = [W_n^{(1)} + S_n^{(1)} − (A_{n+1} − A_n)]^+.
Step 5: Calculate the departure times from server 1 by D_n = A_n + W_n^{(1)} + S_n^{(1)}.
Step 6: Let W_1^{(2)} = 0, then calculate the waiting times of all the other customers at server 2 by
W_{n+1}^{(2)} = [W_n^{(2)} + S_n^{(2)} − (D_{n+1} − D_n)]^+.
Step 7: The departure time from the system of the nth customer is D_n + W_n^{(2)} + S_n^{(2)}.

To check whether the number of customers in the system ever exceeds 6, we only need to check that number
when the system state changes, i.e. at an arrival or a departure. In fact we only need to check at arrival
times, since the number of customers in the system only increases at arrivals, so these are the only possible
moments for it to go above 6. So we only need to check, for every n = 1, ..., N,
\[
M_n = \sum_{i=1}^{n} \mathbf{1}\left\{ D_i + W_i^{(2)} + S_i^{(2)} \ge A_n \right\}.
\]
If M_n ≤ 6 for all n ∈ {1, ..., N}, then M(t) ≤ 6 for all t ∈ [0, 10].

MATLAB code:
Num = 10000; %sample size
Result = zeros(Num,1);

for i = 1:Num
    % Step 1: generate arrival times by thinning and find N
    t = 0; A = [];
    while(t < 10)
        U = rand; V = rand;
        t = t - log(U)/3;                       % candidate times: rate-3 Poisson process
        if(t < 10 && V <= (2 + cos(2*pi*t))/3)  % accept with probability lambda(t)/3
            A = [A; t];
        end
    end
    N = length(A);

    % Step 2: generate service times at server 1
    U1 = rand(N,1);
    S1 = -log(U1)/3;

    % Step 3: generate service times at server 2
    U2 = rand(N,1);
    S2 = -log(U2)/4;

    % Step 4: find waiting times at server 1
    W1 = zeros(N,1);
    for j = 2:N
        W1(j) = max(W1(j-1)+S1(j-1)-(A(j)-A(j-1)),0);
    end

    % Step 5: find departure times from server 1
    D1 = A + W1 + S1;

    % Step 6: find waiting times at server 2
    W2 = zeros(N,1);
    for j = 2:N
        W2(j) = max(W2(j-1)+S2(j-1)-(D1(j)-D1(j-1)),0);
    end

    % Step 7: find departure times from the system
    D2 = D1 + W2 + S2;

    % check the number in system at arrival times
    M = zeros(N,1);
    for j = 1:N
        M(j) = sum(D2(1:j) >= A(j));
    end

    if(sum(M > 6) == 0)
        Result(i) = 1;
    end
end

mean(Result)

ans =

0.6577

var(Result)

ans =

0.2252
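
Note that each entry of Result is a Bernoulli indicator, so var(Result) ≈ 0.6577 × (1 − 0.6577) ≈ 0.225 is just the sample variance of that indicator. The standard error of the estimate itself is therefore roughly sqrt(0.2252/10000) ≈ 0.0047, giving an approximate 95% confidence interval of 0.6577 ± 1.96 × 0.0047 ≈ 0.6577 ± 0.009.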

3. (Tandem queueing system with feedback) Consider again two servers in series, so that
customers have to be first served at server 1 and then server 2. Both servers have an infinite buffer, and
both servers serve customers according to the FCFS policy. Suppose now that for each customer that
finishes service at server 2, independently with probability p > 0, he/she is unhappy with the service,
and joins the end of the queue at server 1 (and leaves the system with probability q = 1 − p > 0). Note
that a customer can be served by servers 1 and 2 multiple times, until he/she is happy with the services,
at which time he/she leaves the system.
Suppose that customers arrive according to a Poisson process with rate λ, their service requirements
at servers 1 and 2 are independent exponential random variables with parameters µ1 and µ2 , respectively,
and independent from everything else.

(a) Let X1 (t) be the number of customers in queue and (possibly) in service at server 1, at time t,
and X2 (t) the number of customers in queue and (possibly) in service at server 2, at time t. Show that
the process (X1 (t), X2 (t))t≥0 is a continuous-time discrete-state Markov chain. Specify the holding time
distributions for different states, and the jump/transition probabilities.

Solution The process {(X1(t), X2(t))}_{t≥0} is a continuous-time, discrete-state Markov chain: it takes
values (i, j) with i, j non-negative integers, and because the inter-arrival times and service times are
exponential (hence memoryless), the future evolution depends only on the current state.
To give the holding time distributions for the different states and the jump/transition probabilities,
we determine the transition rates from state (X1, X2) = (i, j) to state (X1, X2) = (k, l). The transition
rates and probabilities are displayed in the table below.

Let R(i, j) = λ + µ1 1(i > 0) + µ2 1(j > 0) denote the total transition rate out of state (i, j); the holding
time in state (i, j) is exponential with rate R(i, j). The transitions are:

Transition type                         State transition       Rate                   Probability
An arrival to the system                (i, j) → (i+1, j)      λ                      λ / R(i, j)
Server 1 completion                     (i, j) → (i−1, j+1)    µ1 1(i > 0)            µ1 1(i > 0) / R(i, j)
Server 2 completion, rejoins queue 1    (i, j) → (i+1, j−1)    p µ2 1(j > 0)          p µ2 1(j > 0) / R(i, j)
Server 2 completion, leaves             (i, j) → (i, j−1)      (1−p) µ2 1(j > 0)      (1−p) µ2 1(j > 0) / R(i, j)

(b) Describe a procedure to simulate the Markov chain (X1 (t), X2 (t))t≥0 up to time T .

Solution Discrete-event simulation can be used to simulate the Markov chain. At each iteration,
the holding time is generated first; this gives the time of the next event to be simulated.
Then the transition probabilities are used to determine the next state. The simulation repeats until the
time exceeds T, as follows:
Step 1: Set n = 0, t^(0) = 0, X_1^(0) = X_1^initial and X_2^(0) = X_2^initial.
Step 2: Generate U ~ Unif(0, 1).
Step 3: Set t^(n+1) = t^(n) − log(U) / (λ + µ1 1(X_1^(n) > 0) + µ2 1(X_2^(n) > 0)).
Step 4: If t^(n+1) > T, stop; otherwise continue to the next step.
Step 5: Generate V ~ Unif(0, 1).
Step 6: Write R = λ + µ1 1(X_1^(n) > 0) + µ2 1(X_2^(n) > 0).
If V < λ/R, set X_1^(n+1) = X_1^(n) + 1 and X_2^(n+1) = X_2^(n) (an arrival);
else if V < (λ + µ1 1(X_1^(n) > 0))/R, set X_1^(n+1) = X_1^(n) − 1 and X_2^(n+1) = X_2^(n) + 1 (a server 1 completion);
else if V < (λ + µ1 1(X_1^(n) > 0) + p µ2 1(X_2^(n) > 0))/R, set X_1^(n+1) = X_1^(n) + 1 and X_2^(n+1) = X_2^(n) − 1 (a server 2 completion, the customer rejoins queue 1);
else set X_1^(n+1) = X_1^(n) and X_2^(n+1) = X_2^(n) − 1 (a server 2 completion, the customer leaves).
Step 7: Set n = n + 1 and go to Step 2.
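
A minimal MATLAB sketch of this procedure is given below; the parameter values (lambda, mu1, mu2, p, T) and the empty initial state are illustrative assumptions only.

lambda = 1; mu1 = 2; mu2 = 3; p = 0.3; T = 100;   % illustrative values
X1 = 0; X2 = 0; t = 0;
while true
    R = lambda + mu1*(X1 > 0) + mu2*(X2 > 0);     % total transition rate in the current state
    t = t - log(rand)/R;                          % holding time ~ Exp(R)
    if t > T, break; end
    V = rand;
    if V < lambda/R                               % an arrival
        X1 = X1 + 1;
    elseif V < (lambda + mu1*(X1 > 0))/R          % server 1 completion
        X1 = X1 - 1; X2 = X2 + 1;
    elseif V < (lambda + mu1*(X1 > 0) + p*mu2*(X2 > 0))/R
        X1 = X1 + 1; X2 = X2 - 1;                 % server 2 completion, rejoins queue 1
    else
        X2 = X2 - 1;                              % server 2 completion, leaves the system
    end
end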

(c) Now suppose that instead of exponential service times, the service time distributions at both servers
are given by P (V1 > t) = P (V2 > t) = (1 + t)−3 , where V1 and V2 represent generic service times at
servers 1 and 2, respectively. Describe a procedure to simulate the system using the discrete-event sim-
ulation approach.

Solution Because the service times are no longer exponentially distributed, we cannot use the
calculation above; instead we simulate the system directly with the discrete-event approach, keeping a
list of pending events.
Step 1: Set n = 0, t^(0) = 0, X_1^(0) = X_1^initial and X_2^(0) = X_2^initial.
Step 2: Generate I^(0) ~ exp(λ), and V_1^(0), V_2^(0) independently from the service time distribution.
(Note that we can use the inverse transform method: V = U^{−1/3} − 1 with U ~ Unif(0, 1), since
P(V > t) = (1 + t)^{−3}.)
Step 3: Create the matrix
\[
\mathrm{EventList} =
\begin{pmatrix}
t^{(0)} + I^{(0)} & 0 \\
t^{(0)} + V_1^{(0)}\,\mathbf{1}\{X_1^{(0)} > 0\} + \infty\cdot\mathbf{1}\{X_1^{(0)} = 0\} & 1 \\
t^{(0)} + V_2^{(0)}\,\mathbf{1}\{X_2^{(0)} > 0\} + \infty\cdot\mathbf{1}\{X_2^{(0)} = 0\} & 2
\end{pmatrix}
\]
and sort it with respect to its first column in increasing order. (The second column records the event type:
0 = arrival, 1 = server 1 completion, 2 = server 2 completion.)


Step 4: Set t^(n+1) = EventList(1, 1).
Step 5: If t^(n+1) > T, stop; otherwise continue to the next step.
Step 6:
If EventList(1, 2) = 0 (an arrival), set X_1^(n+1) = X_1^(n) + 1 and X_2^(n+1) = X_2^(n), and set

    EventList(1, 1) = t^(n+1) + I^(n+1),

where I^(n+1) ~ exp(λ). If X_1^(n+1) = 1 (server 1 was idle), set

    EventList(find(EventList(:, 2) == 1), 1) = t^(n+1) + V_1^(n+1),

where V_1^(n+1) is drawn from the service distribution.
If EventList(1, 2) = 1 (a server 1 completion), set X_1^(n+1) = X_1^(n) − 1 and X_2^(n+1) = X_2^(n) + 1, and set

    EventList(1, 1) = t^(n+1) + V_1^(n+1) 1{X_1^(n+1) > 0} + ∞·1{X_1^(n+1) = 0};

if X_2^(n+1) = 1, set

    EventList(find(EventList(:, 2) == 2), 1) = t^(n+1) + V_2^(n+1),

where V_1^(n+1) and V_2^(n+1) are independent draws from the service distribution.
If EventList(1, 2) = 2 (a server 2 completion), set X_2^(n+1) = X_2^(n) − 1 and

    EventList(1, 1) = t^(n+1) + V_2^(n+1) 1{X_2^(n+1) > 0} + ∞·1{X_2^(n+1) = 0}.

Then generate U ~ Unif(0, 1); if U > p, set X_1^(n+1) = X_1^(n) (the customer leaves); otherwise set
X_1^(n+1) = X_1^(n) + 1 (the customer rejoins queue 1) and, if X_1^(n+1) = 1, set

    EventList(find(EventList(:, 2) == 1), 1) = t^(n+1) + V_1^(n+1),

where V_1^(n+1) and V_2^(n+1) are independent draws from the service distribution.
Step 7: Sort the matrix EventList with respect to its first column in increasing order, set n = n + 1, and go
to Step 4.
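
For concreteness, the following is a rough MATLAB sketch of this event-list procedure. The parameter values and the empty initial state are illustrative assumptions, and servdraw is a helper introduced here for the inverse-transform service-time draws.

lambda = 1; p = 0.3; T = 100;               % illustrative values
X1 = 0; X2 = 0; t = 0;
servdraw = @() rand^(-1/3) - 1;             % inverse transform: P(V > t) = (1+t)^(-3)
% EventList rows: [event time, event type]; 0 = arrival, 1 = server 1, 2 = server 2
EventList = [t - log(rand)/lambda, 0; Inf, 1; Inf, 2];
while true
    EventList = sortrows(EventList, 1);
    t = EventList(1,1);
    if t > T, break; end
    switch EventList(1,2)
        case 0                              % arrival
            X1 = X1 + 1;
            EventList(1,1) = t - log(rand)/lambda;
            if X1 == 1                      % server 1 was idle: start service
                EventList(EventList(:,2)==1, 1) = t + servdraw();
            end
        case 1                              % server 1 completion
            X1 = X1 - 1; X2 = X2 + 1;
            if X1 > 0
                EventList(1,1) = t + servdraw();
            else
                EventList(1,1) = Inf;
            end
            if X2 == 1                      % server 2 was idle: start service
                EventList(EventList(:,2)==2, 1) = t + servdraw();
            end
        case 2                              % server 2 completion
            X2 = X2 - 1;
            if X2 > 0
                EventList(1,1) = t + servdraw();
            else
                EventList(1,1) = Inf;
            end
            if rand <= p                    % unhappy: rejoins queue at server 1
                X1 = X1 + 1;
                if X1 == 1
                    EventList(EventList(:,2)==1, 1) = t + servdraw();
                end
            end
    end
end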

4. Suppose that P(X > 0) = 1, and consider the problem of estimating α = E[X] via simulation.
For simplicity, assume that Var(X) = 2. Use the CLT to find the smallest value n such that if
\[
\hat\alpha_n = \frac{1}{n}\sum_{j=1}^n X_j,
\]

where the Xj ’s are i.i.d. replications of X, we have that α̂n is within 5% of α with roughly 95% confidence.

Solution By the CLT we know that
\[
\frac{\hat\alpha_n - \alpha}{\sigma/\sqrt{n}} \Longrightarrow N(0, 1),
\]
where σ = √2 (since Var(X) = 2). We want to find the smallest sample size n such that
\[
P\big(|\hat\alpha_n - \alpha| > 0.05\,\hat\alpha_n\big) < \delta = 0.05.
\]

For each n, we can calculate the corresponding δ_n to be
\[
\delta_n = P\big(|\hat\alpha_n - \alpha| > 0.05\,\hat\alpha_n\big)
= P\left(\frac{|\hat\alpha_n - \alpha|}{\sigma/\sqrt{n}} > 0.05\,\frac{\hat\alpha_n}{\sigma/\sqrt{n}}\right)
\approx P\left(|N(0, 1)| > 0.05\,\frac{\hat\alpha_n}{\sigma/\sqrt{n}}\right)
= 2\left(1 - \Phi\left(0.05\,\frac{\hat\alpha_n}{\sigma/\sqrt{n}}\right)\right),
\]
where Φ(·) is the cdf of the standard normal distribution. So we keep increasing n until we find the first n
whose corresponding δ_n < δ = 0.05.

Or we can turn the problem into finding the smallest n such that
\[
0.05\,\frac{\hat\alpha_n}{\sigma/\sqrt{n}} \ge z_{\delta/2} = z_{0.025} = 1.96
\quad\Longrightarrow\quad
n \ge \frac{2 \times 1.96^2}{(0.05\,\hat\alpha_n)^2}.
\]

Remark We need to update α̂n every time we increase n and use it accordingly to calculate δn .
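
For concreteness, a minimal MATLAB sketch of this sequential rule is given below. The generator drawX is a hypothetical stand-in (an exponential with mean sqrt(2), so that X > 0 and Var(X) = 2), since the problem does not specify how X is simulated.

sigma = sqrt(2);                          % sigma^2 = Var(X) = 2
drawX = @() -sqrt(2)*log(rand);           % hypothetical generator for X
n = 30;                                   % small pilot sample
X = -sqrt(2)*log(rand(n,1));
while 0.05*mean(X)*sqrt(n)/sigma < 1.96   % stop once 0.05*alpha_hat/(sigma/sqrt(n)) >= 1.96
    n = n + 1;
    X(n,1) = drawX();                     % add one replication; alpha_hat updates via mean(X)
end
n                                         % smallest n meeting the criterion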

5. Suppose that you want to estimate α = E[X]. Suppose that X has density function f(·) and
that X is obtained by using the acceptance/rejection method with some proposal density g(·). Suppose
that f(x)/g(x) ≤ 2 for all x. Let's say that you have a budget equal to n total random proposals.

(a) Let N(n) be the actual number of i.i.d. replications generated given a budget
of n total proposals. Express N(n) as the sum of i.i.d. random variables of the form Z_j = 1(W_j < U_j),
where W_j and U_j are independent. You must provide an explicit representation of W_j in terms of f(·)
and g(·) and explain how to simulate W_j.

Solution In this acceptance/rejection method, we know that the upper bound is c = 2. So each time we
generate X̃ from the proposal distribution g(·), and then generate a random number V ~ Unif(0, 1): if
V ≤ f(X̃)/(2g(X̃)), then we accept X = X̃; otherwise we reject and redo the process. Thus we can express
N(n) as follows:
\[
N(n) = \sum_{j=1}^n Z_j = \sum_{j=1}^n \mathbf{1}(W_j < U_j),
\]
where
\[
U_j \sim \mathrm{Unif}(-1, 0), \qquad W_j = -\frac{f(\tilde X_j)}{2 g(\tilde X_j)},
\]
and the X̃_j, 1 ≤ j ≤ n, are i.i.d. random numbers we simulate from the proposal density g(·). To simulate
W_j, draw X̃_j from g(·) and evaluate −f(X̃_j)/(2g(X̃_j)); note that W_j ∈ [−1, 0], W_j is independent of
U_j, and {W_j < U_j} is exactly the event that the jth proposal is accepted.

(b) Define
\[
\hat\alpha_n = \frac{1}{N(n)}\sum_{j=1}^{N(n)} X_j.
\]
Express
\[
\hat\alpha_n = \frac{\sum_{j=1}^n V_j}{\sum_{j=1}^n Z_j}
\]
for explicit random variables {(V_j, Z_j) : 1 ≤ j ≤ n}. Explain how to simulate V_j and Z_j. Are V_j and Z_j
correlated?

Solution We can express α̂_n this way by letting
\[
Z_j = \mathbf{1}(W_j < U_j), \qquad V_j = \tilde X_j Z_j = \tilde X_j\,\mathbf{1}(W_j < U_j),
\]
where W_j, U_j and the X̃_j, 1 ≤ j ≤ n, are the same as in part (a). Clearly V_j and Z_j are correlated.
To simulate V_j and Z_j for each j = 1, ..., n:
Step 1: Generate a random number X̃_j from the proposal density g(x) and, independently, a random
number U_j from the Unif(−1, 0) distribution;
Step 2: Calculate W_j = −f(X̃_j)/(2g(X̃_j));
Step 3: If W_j < U_j, set V_j = X̃_j and Z_j = 1; otherwise set V_j = 0 and Z_j = 0.
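
To make this concrete, here is a small MATLAB sketch of these steps. The densities are illustrative assumptions, not given in the problem: f(x) = 2x on [0, 1] and proposal g = Unif(0, 1), so that f/g ≤ 2 and α = E[X] = 2/3.

n = 1e5;                          % total proposal budget
Xt = rand(n,1);                   % proposals Xtilde_j from g = Unif(0,1)
U = -rand(n,1);                   % U_j ~ Unif(-1,0)
W = -(2*Xt)/2;                    % W_j = -f(Xtilde_j)/(2 g(Xtilde_j)); here f(x) = 2x, g(x) = 1
Z = double(W < U);                % Z_j = 1 if the jth proposal is accepted
V = Xt .* Z;                      % V_j = Xtilde_j * Z_j
alpha_hat = sum(V)/sum(Z)         % ratio estimator of alpha; should be close to 2/3 here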

(c) Provide a detailed explanation of how to obtain a 95% approximate confidence interval for α based
on the simulated values of Vj and Zj obtained in part (b). You can assume that n has been given.

Solution Note that
\[
E[V_j] = E\big[\tilde X_j\,\mathbf{1}(W_j < U_j)\big] = \int x\,\frac{f(x)}{2g(x)}\,g(x)\,dx = \frac{\alpha}{2},
\qquad
E[Z_j] = \frac{1}{2},
\]
so α = E[X] = E[V]/E[Z] = h(E[V], E[Z]) with h(x, y) = x/y (we write h rather than f to avoid confusion
with the density). We therefore approximate α by α̂_n = h(α̂_n(V), α̂_n(Z)), where
\[
\hat\alpha_n(V) = \frac{1}{n}\sum_{j=1}^n V_j, \qquad \hat\alpha_n(Z) = \frac{1}{n}\sum_{j=1}^n Z_j.
\]
Let
\[
A = \frac{\partial h}{\partial x}\big(\hat\alpha_n(V), \hat\alpha_n(Z)\big) = \frac{1}{\hat\alpha_n(Z)},
\qquad
B = \frac{\partial h}{\partial y}\big(\hat\alpha_n(V), \hat\alpha_n(Z)\big) = -\frac{\hat\alpha_n(V)}{\hat\alpha_n(Z)^2}.
\]
Then, by the delta method and the CLT,
\[
\alpha - \hat\alpha_n = h(E[V], E[Z]) - h(\hat\alpha_n(V), \hat\alpha_n(Z))
\approx A\big(E[V] - \hat\alpha_n(V)\big) + B\big(E[Z] - \hat\alpha_n(Z)\big)
\Rightarrow \frac{\hat\sigma_n(A, B)}{\sqrt{n}}\,N(0, 1),
\]
where
\[
\hat\sigma_n^2(A, B) = A^2\,\hat\sigma_V^2(n) + B^2\,\hat\sigma_Z^2(n) + 2AB\,\hat\sigma_{VZ}(n),
\]
\[
\hat\sigma_V^2(n) = \frac{1}{n-1}\sum_{j=1}^n \big(V_j - \hat\alpha_n(V)\big)^2,
\qquad
\hat\sigma_Z^2(n) = \frac{1}{n-1}\sum_{j=1}^n \big(Z_j - \hat\alpha_n(Z)\big)^2,
\]
\[
\hat\sigma_{VZ}(n) = \frac{1}{n-1}\sum_{j=1}^n \big(V_j - \hat\alpha_n(V)\big)\big(Z_j - \hat\alpha_n(Z)\big).
\]

So we want to find the error ε such that
\[
P(|\alpha - \hat\alpha_n| \le \epsilon) \ge 1 - \delta = 0.95,
\]
which is equivalent to
\[
P\left(|N(0, 1)| \le \frac{\epsilon\sqrt{n}}{\hat\sigma_n(A, B)}\right) \ge 0.95.
\]
Thus we have
\[
\frac{\epsilon\sqrt{n}}{\hat\sigma_n(A, B)} = z_{\delta/2} = z_{0.025} = 1.96
\quad\Longrightarrow\quad
\epsilon = 1.96\,\frac{\hat\sigma_n(A, B)}{\sqrt{n}}.
\]
Finally we obtain a 95% approximate confidence interval for α, which is
\[
\left[\hat\alpha_n - 1.96\,\frac{\hat\sigma_n(A, B)}{\sqrt{n}},\ \hat\alpha_n + 1.96\,\frac{\hat\sigma_n(A, B)}{\sqrt{n}}\right].
\]
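
Continuing the sketch from part (b) (with the same illustrative V, Z and n), the interval could be computed in MATLAB roughly as follows:

aV = mean(V); aZ = mean(Z);
A = 1/aZ;  B = -aV/aZ^2;                      % partial derivatives of h(x,y) = x/y
C = cov(V, Z);                                % 2x2 sample covariance matrix of (V_j, Z_j)
sigma2 = A^2*C(1,1) + B^2*C(2,2) + 2*A*B*C(1,2);
halfwidth = 1.96*sqrt(sigma2/n);
CI = [aV/aZ - halfwidth, aV/aZ + halfwidth]   % approximate 95% CI for alpha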
