Computer Simulation
Computer Simulation
PUBLISHED TITLES
Yahya E. Osais
CRC Press
Taylor & Francis Group
6000 Broken Sound Parkway NW, Suite 300
Boca Raton, FL 33487-2742
© 2018 by Taylor & Francis Group, LLC
CRC Press is an imprint of Taylor & Francis Group, an Informa business
This book contains information obtained from authentic and highly regarded sources. Reasonable
efforts have been made to publish reliable data and information, but the author and publisher cannot
assume responsibility for the validity of all materials or the consequences of their use. The authors and
publishers have attempted to trace the copyright holders of all material reproduced in this publication
and apologize to copyright holders if permission to publish in this form has not been obtained. If any
copyright material has not been acknowledged please write and let us know so we may rectify in any
future reprint.
Except as permitted under U.S. Copyright Law, no part of this book may be reprinted, reproduced,
transmitted, or utilized in any form by any electronic, mechanical, or other means, now known or
hereafter invented, including photocopying, microfilming, and recording, or in any information
storage or retrieval system, without written permission from the publishers.
For permission to photocopy or use material electronically from this work, please access
www.copyright.com (http://www.copyright.com/) or contact the Copyright Clearance Center, Inc.
(CCC), 222 Rosewood Drive, Danvers, MA 01923, 978-750-8400. CCC is a not-for-profit organization
that provides licenses and registration for a variety of users. For organizations that have been granted
a photocopy license by the CCC, a separate system of payment has been arranged.
Trademark Notice: Product or corporate names may be trademarks or registered trademarks, and
are used only for identification and explanation without intent to infringe.
Visit the Taylor & Francis Web site at
http://www.taylorandfrancis.com
and the CRC Press Web site at
http://www.crcpress.com
To my wife, Asmahan,
and my daughters, Renad, Retal, and Remas.
Contents
List of Programs xv
Preface xxxi
Symbols xxxvii
Chapter 1 Introduction 3
1.1 THE PILLARS OF SCIENCE AND ENGINEERING 3
1.2 STUDYING THE QUEUEING PHENOMENON 4
1.3 WHAT IS SIMULATION? 5
1.4 LIFECYCLE OF A SIMULATION STUDY 6
1.5 ADVANTAGES AND LIMITATIONS OF SIMULATION 9
1.6 OVERVIEW OF THE BOOK 10
1.7 SUMMARY 11
ix
x Contents
2.2.2 Attributes 15
2.2.3 State Variables 16
2.2.4 Events 17
2.2.5 Activities 17
2.3 THE SINGLE-SERVER QUEUEING SYSTEM 18
2.4 STATE DIAGRAMS 22
2.5 ACTUAL TIME VERSUS SIMULATED TIME 23
2.6 SUMMARY 24
2.7 EXERCISES 24
Bibliography 271
Index 273
List of Programs
xv
xvi Contents
12.2 Computing unreliability for the graph in Figure 12.2 using crude
Monte Carlo simulation. 213
12.3 Computing unreliability for the graph in Figure 12.2 using strat-
ified sampling. 214
12.4 Computing unreliability for the graph in Figure 12.2 using an-
tithetic sampling. 215
12.5 Computing unreliability for the graph in Figure 12.2 using dag-
ger sampling. The number of samples is significantly less. 216
12.6 Python implementation of the event graph in Figure 12.4 220
12.7 Python implementation of the event graph of the simple stop-
and-wait ARQ protocol in Figure 12.8. 228
A.1.1 Starting a new Python interactive session. 235
A.1.2 Running a Python program from the command line. 236
A.1.3 A Python source file. It can also be referred to as a Python script. 236
A.2.1 Input and output functions. 237
A.3.1 Binary operations on integer numbers. 238
A.3.2 Handling unsigned binary numbers. 239
A.4.1 Lists and some of their operations. 239
A.5.1 Transposing a matrix using the zip function. Matrix is first un-
packed using the start (*) operator. 240
A.6.1 Importing the random module and calling some of the functions
inside it. 241
A.7.1 Implementing the event list using the queue module. 242
A.7.2 Implementing the event list using the hqueue module. 243
A.7.3 Implementing the event list by sorting a list. 243
A.8.1 The name of the function can be stored in a list and then used
to call the function. 244
A.8.2 The name of the function can be passed as an argument to
another function. 244
A.9.1 A tuple can be used as a record that represents an item in the
event list. 245
A.10.1 Code for generating Figure 4.12(b). 245
A.10.2 Code for generating Figure 10.6(a). 247
A.10.3 Code for generating Figure 10.6(b). 248
B.1 Event. 251
B.2 Simulation Entity. 252
B.3 Event list and scheduler. 252
B.4 Example 1. 254
xviii Contents
2.1 A mental image of the system and its behavior must be devel-
oped before a conceptual model can be constructed. 14
2.2 Different mental images can be developed for the same system.
They include different levels of details. Complexity increases as
you add more details. 14
2.3 A continuous state variable takes values from a continuous set
(e.g., [0, 5] in (a)). A discrete state variable, on the other hand,
takes values from a discrete set (e.g., {0, 1, 2, 3, 4, 5} in (b)). 16
2.4 Events are used to move dynamic entities through a system.
A packet is moved from a source to a destination through two
routers using eight events. 17
2.5 An activity is delimited by two events and lasts for a random
duration of time. 18
2.6 A queueing phenomenon emerges whenever there is a shared
resource and multiple users. 19
2.7 Conceptual model of the queueing situation in Figure 2.6. 20
2.8 A sample path of the state variable Q which represents the
number of persons in the single-server queueing system. Note
the difference in the time between every two consecutive arrival
events. 20
2.9 Four activities occur inside the single-server queueing system:
(a) Generation, (b) Waiting, (c) Service, and (d) Delay. The
length of each activity is a random variable of time. 21
xix
xx LIST OF FIGURES
2.10 A simple electrical circuit and its state diagram. Only the switch
and lamp are modeled. Events are generated by the switch to
change the state of the lamp. 22
2.11 State diagrams of the state variables associated with the queue
and server in the single-server queueing system in Figure 2.7. A
portion of the state space of the system is shown in (c). 23
4.1 Sample space for the random experiment of throwing two dice.
The outcome of the experiment is a random variable X ∈
{2, 3, ..., 12}. 40
4.2 The PMF of a discrete random variable representing the out-
come of the random experiment of throwing two dice. 41
4.3 The cumulative distribution function of a discrete random vari-
able representing the outcome of the random experiment of
throwing two dice. 42
4.4 Probability density function of a continuous random variable. 43
4.5 Elements of a histogram. Bins can be of different widths. Length
of a bar could represent frequency or relative frequency. 45
4.6 Histogram for an exponential data set. This figure is generated
using Listing 4.1. 46
4.7 The situation of observing four successes in a sequence of seven
Bernoulli trials can be modeled as a binomial random variable. 48
4.8 The PMF of the Poisson random variable for λ = 10. Notice
that P (x) approaches zero as x increases. 49
4.9 Probability distribution functions for the uniform random vari-
able where a = 3 and b = 10. 50
4.10 Probability distribution functions of the exponential random
variable where µ = 1.5. 53
LIST OF FIGURES xxi
7.8 Event graph for the single-server queueing system with reneging. 116
7.9 Event graph for the single-server queueing system with balking. 116
7.10 A template for synthesizing simulation programs from event
graphs. 121
7.11 Two parallel single-server queueing systems with one shared
traffic source. 122
7.12 A simple network setup where a user communicates with a server
in a data center over a communication channel created inside a
network. Propagation delay (Pd ) and rate (R) are two important
characteristics of a channel. 122
7.1 Event table for the event graph in Figure 7.4. 118
12.1 Sample space of the system in Figure 12.2 along with the status
of the network for each possible system state. 211
12.2 Restructuring the sample space of the system in Figure 12.2
along with the probability of each stratum. The first row indi-
cates the number of UP links. 212
12.3 State variables of the event graph in Figure 12.4. 219
xxvii
Foreword
xxix
Preface
xxxi
xxxii Preface
To the Reader
While writing this book, I had assumed that nothing is obvious. Hence, all
the necessary details that you may need are included in the book. However,
you can always skip ahead and return to what you skip if something is not
clear. Also, note that throughout this book, “he” is used to to refer to both
genders. I find the use of “he or she” disruptive and awkward. Finally, the
source code is deliberately inefficient and serves only as an illustration of the
mathematical calculation. Use it at your own risk.
Website
The author maintains a website for the book. The address is http:
//faculty.kfupm.edu.sa/coe/yosais/simbook. Presentations, pro-
grams, and other materials can be downloaded from this website. A code
repository is also available on Github at https://github.com/yosais/
Computer-Simulation-Book.
Acknowledgments
I would like to thank all the graduate students who took the course with
me while developing the material of the book between 2012 and 2017. Their
understanding and enthusiasm were very helpful.
I would also like to thank King Fahd University of Petroleum and Minerals
(KFUPM) for financially supporting the writing of this book through project
number BW151001.
Last but not least, I would like to thank my wife for her understanding
and extra patience.
Yahya Osais
Dhahran, Saudi Arabia
2017
About the Author
xxxiii
Abbreviations
RV Random Variable
CDF Cumulative Distribution Function
iCDF Inverse CDF
PDF Probability Distribution Function
PMF Probability Mass Function
BD Birth-Death
LFSR Linear Feedback Shift Registers
RNG Random Number Generator
RVG Random Variate Generator
REG Random Event Generator
IID Independent and Identically Distributed
xxxv
Symbols
Variable that are used only in specific chapters are explained directly at their
occurrence and are not mentioned here.
µ Population Mean
σ2 Population Variance
σ Population Standard Deviation
x̄ Sample Mean
s2 Sample Variance
s Sample Standard Deviation
xxxvii
I
The Fundamentals
CHAPTER 1
Introduction
3
4 Computer Simulation: A Foundational Approach Using Python
E
O C
Figure 1.1
The three pillars of science and engineering: Observation (O), Experimenta-
tion (E), and Computation (C). By analogy, the table needs the three legs to
stay up.
Figure 1.2
A queue at a checkout counter in a supermarket. A phenomenon arising when-
ever there is a shared resource (i.e., the cashier) and multiple users (i.e., the
shoppers).
Mathematical Theoretical
Model Data
Simulation
Simulation
(Synthetic)
Model
Data
Measurements
Physical Model
(Actual Data)
Figure 1.3
Types of models and the data generated from them.
1 2 3
Formal
Problem System Model Description
6 5 4
Performance Statistical Computer
Summary Analysis Program
Figure 1.4
Phases of a simulation study.
the elements of a simulation study. More about this will be said in the next
section.
The model is a conceptual representation of the system. It represents a
modeler’s understanding of the system and how it works. A computer is used
to execute the model. Therefore, the model must first be translated into a
computer program using a programming language like Python.2 The execution
of the computer program results in the raw data.
The raw data is also referred to as simulation data. It is synthetic because it
is not actual data. Actual data is collected from a physical model of the system.
There is another type of data called theoretical data which is generated from
a mathematical model of the system. Figure 1.3 shows these types of models
and the types of data generated from them.
Table 1.1
Description of the phases of a simulation study of the system in Figure 1.2.
1. There is no need to build the physical system under study and then
observe it. Thus, knowledge about the behavior of the system can be
acquired with a minimum cost.
2. Critical scenarios can be investigated through simulation with less cost
and no risk.
1.7 SUMMARY
Simulation is a tool that can be used for performing scientific studies. It may
not be the first choice. But, it is definitely the last resort if a physical or
mathematical model of the system under study cannot be constructed. The
main challenge in simulation is developing a sound model of the system and
translating this model to an efficient computer program. In this book, you will
learn the skills that will help you to overcome this challenge.
CHAPTER 2
Building Conceptual
Models
13
14 Computer Simulation: A Foundational Approach Using Python
Figure 2.1
A mental image of the system and its behavior must be developed before a
conceptual model can be constructed.
Mental
Level n
Image n
Mental
Level n-1
Image n-1
Complexity Behavioral
Details
Mental
Level 1
Image 1
Figure 2.2
Different mental images can be developed for the same system. They include
different levels of details. Complexity increases as you add more details.
become more complex. Eventually, the mental image cannot fit in the head of
the modeler. So, he needs other tools to manage them.
The type of systems we deal with in this book is called Discrete-Event
Systems (DESs). A DES is made up of elements referred to as entities. For
example, in the supermarket example in Chapter 1 (see Figure 1.2), the en-
tities are the customers, cashier, and queue. Entities participate in activities
which are initiated and terminated by the occurrence of events. Events are
a fundamental concept in simulation. They occur at discrete points of time.
The occurrence of an event may cause other events to occur. Entities change
their state upon occurrence of events. For example, the cashier becomes busy
when a customer starts the checkout process. More will be said about these
concepts in the next section. For now, you just need to become aware of them.
What is in a Name?
The meaning of the name “discrete-event simulation” is not clear for many
people. The word “event” indicates that the simulation is advanced by the
occurrence of events. This is why the name “event-driven simulation” is
also used. The word “discrete” means that events occur at discrete points
of time. So, when an event occurs, the simulation time is advanced to
the time at which the event occurs. Hence, although time is a continuous
quantity in reality, simulation time is discrete.
Building Conceptual Models 15
1. Entity,
2. Attribute,
3. State Variable,
4. Event, and
5. Activity.
Next, each one of these elements is discussed in detail. They will be used in
the next section to build a conceptual model for the single-server queueing
system.
2.2.1 Entities
An entity represents a physical (or logical) object in your system that must
be explicitly captured in the model in order to be able to describe the overall
operation of the system. For example, in Figure 2.6, in order to describe the
depicted situation, an entity whose name is coffee machine must be explicitly
defined. The time the coffee machine takes to dispense coffee contributes to
the overall delay experienced by people. The coffee machine is a static entity
because it does not move in the system and its purpose is to provide service
only for other entities.
A person is another type of entity that must be defined in the model.
A person is a dynamic entity because it moves through the system. A person
enters the system, waits for its turn, and finally leaves the system after getting
his coffee.
A static entity maintains a state that can change during the lifetime of
the system. On the other hand, dynamic entities do not maintain any state.
A dynamic entity typically has attributes which are used for storing data.
2.2.2 Attributes
An entity is characterized using attributes, which are local variables defined
inside the entity. For example, a person can have an attribute for storing the
time of his arrival into the system (i.e., arrival time). Another attribute can be
defined to store the time at which the person leaves the system (i.e., departure
time). In this way, the time a person spends in the system is the difference
between the values stored in these two attributes.
16 Computer Simulation: A Foundational Approach Using Python
State
1 2 3 4 5 6 7 8 9 10 Time
(a)
State
1 2 3 4 5 6 7 8 9 10 Time
(b)
Figure 2.3
A continuous state variable takes values from a continuous set (e.g., [0, 5] in
(a)). A discrete state variable, on the other hand, takes values from a discrete
set (e.g., {0, 1, 2, 3, 4, 5} in (b)).
Departure
Router 1 Router 2
Arrival
Arrival Departure
Departure Arrival
Figure 2.4
Events are used to move dynamic entities through a system. A packet is moved
from a source to a destination through two routers using eight events.
car and causes it to turn itself on. Because of this startup event, the state of
the car has changed from OFF to ON. Hence, an event triggers a change in
one or more state variables in a conceptual model.
2.2.4 Events
An event represents the occurrence of something interesting inside the system.
It is a stimulus that causes the system to change its state. For instance, in
the supermarket example, the arrival of a new customer represents an event
which will cause the state variable representing the number of people waiting
in line to increase by one. The departure of a customer will cause the cashier
to become free.
Events can also be used to delimit activities and move active entities in
the system. For example, in Figure 2.4, a packet is moved from a source to
a destination using eight events. The first event generates the packet. After
that, a sequence of departure and arrival events moves the packets through the
different static entities along the path between the source and destination. For
instance, for router 1, its arrival event indicates that the packet has arrived
at the router and it is ready for processing. After some delay, the same packet
leaves the router as a result of a departure event.
2.2.5 Activities
An activity is an action which is performed by the system for a finite (but
random) duration of time. As shown in Figure 2.5, an activity is delimited
by two distinct events. The initiating event starts the activity. The end of
the activity is scheduled at the time of occurrence of the terminating event.
The difference in time between the two events represents the duration of the
activity.
In the supermarket example, an important activity is the time a customer
spends at the checkout counter. The duration of this activity depends on how
many items the customer has. Durations of activities are modeled as random
variables. Random variables are covered in Chapter 4.
18 Computer Simulation: A Foundational Approach Using Python
Initiating Terminating
Event Event
Activity Name
Time
ti tt
Δt = tt - ti
Duration
Figure 2.5
An activity is delimited by two events and lasts for a random duration of time.
Every day at 8 am, after check-in, each person goes directly to the kitchen to
get his coffee. There is only one coffee machine in the kitchen. As a result,
if someone already is using the machine, others have to wait. People use the
machine in the order in which they arrive. On average, a person waits for a
non-zero amount of time before he can use the machine. This amount of time
is referred to as the delay.
Table 2.1 shows the details of the conceptual model which results from the
above mental image. The same information is presented pictorially in Figure
2.7. There are three entities. The queue and server are static entities. A person
is a dynamic entity since he can move through the system. Three events can
be defined: (1) arrival of a person into the system, (2) start of service for a
person, and (3) departure of a person from the system. Remember that these
events are used to move the person entity through the system.
Two state variables need to be defined to keep track of the number of
persons in the queue and the state of the server. Everytime a person arrives
into the system, the state variable of the server, S, is checked. If its value is
Free, it means the server can serve the arriving person. On the other hand,
if the value is Busy, the arriving person has to join the queue and wait for his
Building Conceptual Models 19
Figure 2.6
A queueing phenomenon emerges whenever there is a shared resource and
multiple users.
Table 2.1
Details of the conceptual model of the queueing situation in Figure 2.6.
Element Details
Entity Queue, Server, Person
State Variables Q = Number of Persons in Queue
Q ∈ {0, 1, 2, ...}
S = Status of Server
S ∈ {Free, Busy}
Events Arrival, Start_Service,
End_Service (or Departure)
Activities Generation, Waiting,
Service, Delay
turn. Whenever the server becomes free, it will check the queue state variable,
Q. If its value is greater than zero, the server will pick the next person from
the queue and serve it. On the other hand, if the value of the queue state
variable is zero, the server becomes idle because the queue is empty.
State variables are functions of time. Their evolution over time is referred
to as a sample path. It can also be called a realization or trajectory of the state
variable. Figure 2.8 shows one possible sample path of the state variable Q.
Sample paths of DESs have a special shape which can be represented by a
piecewise constant function. This function can also be referred to as a step
function. In this kind of function, each piece represents a constant value that
extends over a interval of time. The function changes its value when an event
occurs. The time intervals are not uniform. They can be of different lengths.
These observations are illustrated in Figure 2.8.
Four possible activities take place inside the single-server queueing system.
They are shown in Figure 2.9. In the first activity, arrivals are generated into
the system. This activity is bounded between two consecutive arrival events.
20 Computer Simulation: A Foundational Approach Using Python
Environment
System
Waiting Service
Arrival Start_Service End_Service
(Departure)
Q S
Figure 2.7
Conceptual model of the queueing situation in Figure 2.6.
2 4 8 10.5 14 18 20 t
e1 e2 e3 e4 e5 e6 e7
A A A D D A
1 2 3 1 2 4
IAT1
IAT2
IAT3
IAT4
Figure 2.8
A sample path of the state variable Q which represents the number of persons
in the single-server queueing system. Note the difference in the time between
every two consecutive arrival events.
The time between two such arrivals is random and it is referred to as the
Inter-Arrival Time (IAT). This information is also shown in Figure 2.9.
The next activity involves waiting. This activity is initiated when an ar-
riving person finds the server busy (i.e., S = Busy). The waiting activity is
terminatd when the server becomes free. Everytime the server becomes free,
a Start_Service event is generated to indicate the start of service for the
Building Conceptual Models 21
Generation
Waiting
Inter-Arrival Time Time Waiting Time Time
(a) (b)
Service Delay
(c) (d)
Figure 2.9
Four activities occur inside the single-server queueing system: (a) Generation,
(b) Waiting, (c) Service, and (d) Delay. The length of each activity is a random
variable of time.
next person in the queue. The difference between the time of the arrival of a
person and his start of service is referred to as the Waiting Time (WT).
The third activity is about the time spent at the server. It is referred to as
the Service Time (ST). This activity is initiated by a Start_Service event
and terminated by an End_Service (or Departure) event, provided the
two events are for the same person.
The length of the last activity is the total time a person spends in the
system. It includes the waiting time and service time. This quantity represents
the delay through the system or how long the system takes to respond to
(i.e., fully serve) an arrival. This is the reason this quantity is also called the
Response Time (RT). The events that start and terminate this activity are
the Arrival and Departure events, respectively.
Lamp Switch_Closed
OFF ON
Switch
Battery Switch_Open
(a) (b)
Figure 2.10
A simple electrical circuit and its state diagram. Only the switch and lamp
are modeled. Events are generated by the switch to change the state of the
lamp.
Start_Service
Start_Service End_Service
Start_Service Start_Service (Departure)
(0, ‘F’)
End_Service
Arrival
Arrival Arrival
(0, ‘B’) (1, ‘B’) (2, ‘B’)
Arrival
End_Service
Start_Service
End_Service
(2, ‘F’)
(c) System
Figure 2.11
State diagrams of the state variables associated with the queue and server in
the single-server queueing system in Figure 2.7. A portion of the state space
of the system is shown in (c).
server event occurs. The server, on the other hand, becomes busy whenever a
service starts. Then, at the end of the service, it becomes free again.
Figure 2.11(c) shows a portion of the state space of the system. The state
diagram of the system combines the two state diagrams in Figures 2.11(a)
and 2.11(b). In this new state diagram, each state is composed of two state
variables (i.e., a state vector). Further, the state of the system is driven by
three events: Arrival, Start_Service, and End_Service.
example, the execution time of your program will be large if you have many
state variables, events, and activities in your model.
On the other hand, the simulated time is the time inside your conceptual
model. It is not the time of the computer program that executes your model.
Another name for this kind of time is simulation time. Simulation time does
not pass at the same speed as actual time. That is, one second of simulated
time is not necessarily equal to one second of actual time. In fact, they will
be equal only in real-time simulation.
Because of this distinction between runtime and simulation time, you may
simulate a phenomenon that lasts for a few years of actual time in one hour
of simulation time. Similarly, you may simulate a phenomenon that lasts for
a few seconds of simulated time in one hour of actual time.
2.6 SUMMARY
Five essential concepts used in building conceptual models have been covered
in this chapter. Also, the famous single-server queueing system and its concep-
tual model have been introduced. The relationship between events and state
variables has been shown using state diagrams. Finally, the difference between
actual time and simulated time has been discussed. Clearly, this has been an
important chapter, providing you with essential terms and tools in simulation.
2.7 EXERCISES
2.1 Consider a vending machine that accepts one, two, five, and ten dollar
bills only. When a user inserts the money into a slot and pushes a button,
the machine dispenses one bottle of water, which costs one dollar. The
vending machine computes the change and releases it through another
slot. If the vending machine is empty, a red light will go on.
- Check balance,
- Withdraw cash,
- Deposit cash, and
Building Conceptual Models 25
- Pay bills.
Simulating Probabilities
Ω = {Head, Tail}.
As another example, the sample space for the random experiment of throwing
a die, which is shown in Figure 3.2, is as follows:
Ω = {1, 2, 3, 4, 5, 6}.
27
28 Computer Simulation: A Foundational Approach Using Python
{Head, Tail}
Tossing a Coin
Figure 3.1
A random experiment of tossing a coin. There are two possible outcomes.
{1, 2, 3, 4, 5, 6}
Throwing a Die
Figure 3.2
A random experiment of throwing a die. There are six possible outcomes.
An event occurs whenever any of its outcomes are observed. For example,
consider event E1 above. This event occurs whenever we observe the out-
come 1. Similarly, the event E3 occurs if the outcome 5 or 6 is observed. Of
course, both outcomes cannot be observed at the same time since the random
experiment of throwing a die has only one outcome.
Length of [j, k]
P ([j, k]) =
Length of [a, b]
(3.2)
|k − j|
= .
|b − a|
1 From now on, we are going to use the words “outcome” and “event” interchangeably.
30 Computer Simulation: A Foundational Approach Using Python
P (Ei ) = RF (Ei )
No. of times Ei occurs (3.3)
= .
No. of times the random experiment is performed
As an example, consider again the random experiment of throwing a die.
We want to approximate the probability of an outcome by means of a computer
program. First, we need to learn how we can simulate this random experiment.
Listing 3.1 shows how this random experiment can be simulated in Python.
On line 1, the function randint is imported from the library random. That
means the imported function becomes part of your program. The random
experiment is represented by the function call randint(1, 6) on lines 3-5. Each
one of these function calls return a random integer between 1 and 6, inclusive.
The result of each function call is shown as a comment on the same line.
Now, after we know how to simulate the random experiment of throwing
a die, we need to write a complete simulation program that contains the
necessary code for checking for the occurrence of the event of interest and
maintaining a counter of the number of times the event is observed. Also, the
Simulating Probabilities 31
Listing 3.1
Simulating the experiment of throwing a die. The output is shown as a com-
ment on each line.
1 from random import randint
2
Listing 3.2
Approximating the probability of an outcome in the experiment of throwing
a die.
1 from random import randint
2
6 for i in range(n):
7 outcome = randint(1, 6)
8 if(outcome == 3): # Check for event of interest
9 ne += 1 # ne = ne + 1
10
program should compute the probability of the event using Eqn. (3.3). This
program is shown as Listing 3.2.
The program has four parts. In the first part (line 1), the function randint
is included into the program. This function is used to simulate the random
experiment of throwing a die. Next, in the second part (lines 3-4), two variables
are defined. The first one is a parameter whose value is selected by you before
you run the program. It represents the number of times the random experiment
is performed. The second variable is an event counter used while the program is
running to keep track of the number of times the event of interest is observed.
The third part of the program contains the simulation loop (line 6). Inside this
loop, the random experiment is performed and its outcome is recorded (line
7). Then, a condition is used to check if the generated outcome is equal to
the event of interest (line 8). If it is indeed equal to the event of interest, the
event counter is incremented by one. The experiment is repeated n number of
times. Finally, in the last part of the program, the probability of the event of
interest is computed as a relative frequency (line 11). The function round is
used to round the probability to four digits after the decimal point.
The function round is not explicitly included into the program. This is
because it is a built-in function. Python has a group of functions referred to
as the built-in functions which are always available. Some of these functions
are min, max, and len. You are encouraged to check the Python documentation
for more information.2
1 2 3 4 5 6 7 8 9 10
Iteration (n)
Cumulative Sum: 1 2 3 4 4 4 4 4 5 5
Figure 3.3
Three different samples for the random experiment of tossing a coin 10 times.
The running mean is computed for the third sample using cumulative sums.
The last value at position 10 of the list of running means is equal to the sample
mean. The sample mean is the probability of seeing a head.
Figure 3.4
Running mean for the random experiment of tossing a coin. The mean even-
tually converges to the true value as more samples are generated.
Note that as the number of iteration increases, the sample mean converges
to a specific value, which is the true mean. Before this convergence happens,
the sample mean will fluctuate. A large number of samples will be needed
before the mean stabilizes and hits the true mean. The true mean is referred
to also as the population mean. The the difference between the population
and sample means is explained in the above side note. The use of the sample
mean as a probability is supported by the law of large numbers stated in the
next side note.
Simulating Probabilities 35
Listing 3.3
Simulation program for studying the running mean of the random experiment
of tossing a coin. This program is also used to generate Figure 3.4.
1 ### Part 1: Performing the simulation experiment
2 from random import choice
3 from statistics import mean
4
5 n = 1000
6 observed = []
7
8 for i in range(n):
9 outcome = choice([’Head’, ’Tail’])
10 if outcome == ’Head’:
11 observed.append(1)
12 else:
13 observed.append(0)
14
20 cum_observed = cumsum(observed)
21
22 moving_avg = []
36 Computer Simulation: A Foundational Approach Using Python
23 for i in range(len(cum_observed)):
24 moving_avg.append( cum_observed[i] / (i+1) )
25
33 xlabel(’Iterations’, size=20)
34 ylabel(’Probability’, size=20)
35
36 plot(x, moving_avg)
37 plot(x, p, linewidth=2, color=’black’)
38
39 show()
3.5 SUMMARY
In this chapter, you have learned how to build simulation models for simple
random experiments. You have also learned how to write a complete simulation
program that includes your simulation model in the Python programming
language. Further, you have been exposed to the essence of simulation, which
is estimation. As has been shown in Figure 3.4, the length of a simulation run
(i.e., the value of n) plays a significant role in the accuracy of the estimator.
3.6 EXERCISES
3.1 Consider the random experiment of throwing two dice. What is the
probability of the event that three spots or less are observed? Show how
you can compute the probability of this event both mathematically and
programmatically.
3.2 Write a Python program to simulate the random experiment of tossing
a fair coin five times. Your goal is to estimate the probability of seeing
four heads in five tosses.
Simulating Probabilities 37
Simulating Random
Variables and Stochastic
Processes
39
40 Computer Simulation: A Foundational Approach Using Python
Sample Space
{1,3}
{1,2}
{1,1} {3,1}
{2,1}
{2,2}
Event: {X = 4}
2 3 4 10 11 12 X
Figure 4.1
Sample space for the random experiment of throwing two dice. The outcome
of the experiment is a random variable X ∈ {2, 3, ..., 12}.
pX (x)
6
36
5
36
4
36
3
36
2
36
1
36
2 3 4 5 6 7 8 9 10 11 12 X
Figure 4.2
The PMF of a discrete random variable representing the outcome of the ran-
dom experiment of throwing two dice.
variable, the PMF associates a probability with it. The PMF is denoted by
pX (x) and it is defined as follows:
Hence, because of the above two properties, the PMF is a probability function.
Figure 4.2 shows the PMF of a random variable representing the outcome of
the random experiment of throwing two dice. The length of each bar repre-
sents a probability. There are gaps between the bars since they correspond to
discrete values.
Basically, the CDF gives the probability that the value of the random variable
X is less than or equal to x. Thus, it is a monotonically non-decreasing function
of X. That is, as the value of X increases, FX (x) increases or stays the same.
Figure 4.3 shows the CDF of the random variable representing the experiment
42 Computer Simulation: A Foundational Approach Using Python
FX (x)
1
35
36
33
36
10
36
6
36
3
36 P (X = 3)
1
P (X 3)
36 P (X = 2)
1 2 3 4 5 6 7 8 9 10 11 12 X
Figure 4.3
The cumulative distribution function of a discrete random variable represent-
ing the outcome of the random experiment of throwing two dice.
P [X ≤ 5] = FX (5)
X
= pX (i)
i≤5
fX (x)
Rb
P [a X b] = fX (x)dx
2.5 a
1.5
a b X
Figure 4.4
Probability density function of a continuous random variable.
The following are the relationships between the CDF and its PDF:
d
fX (x) = FX (x) (4.8)
dx
Z
+∞
4.1.4 Histograms
A histogram is a graph that shows the distribution of data in a data set. By
distribution, we mean the frequency (or relative frequency) of each possible
value in the data set. A histogram can be used to approximate the PDF of a
continuous random variable. It can also be used to construct the PMF of a
discrete random variable.
The range of values in a data set represents an interval. This interval can
be divided into subintervals. In a histogram, each subinterval is represented
by a bin on the x-axis. On each bin, a bar is drawn. The length of the bar
is relative to the number of samples (i.e., data values) in the corresponding
bin. The area of the bar is thus the product of its length and the width of
the bin. This quantity is equal to the probability that a sample falls in the
subinterval represented by the bin. Figure 4.5 illustrates the common elements
of a histogram.
Listing 4.1
Python program for generating the histogram from an exponential data set
(see Figure 4.6).
1 from random import expovariate
2 from matplotlib.pyplot import hist, xlabel, ylabel, title,
show, savefig
3
Simulating Random Variables and Stochastic Processes 45
fX(x)
0.5
X
bin 2 Boundary between bins 2 and 3
Figure 4.5
Elements of a histogram. Bins can be of different widths. Length of a bar
could represent frequency or relative frequency.
15
16 xlabel(’$X$’, size=18)
17 ylabel(’$f_X(x)$’, size=18)
18 title(’Histogram of exponential data: $\mu$ = 1.5’, size=15)
19
1.2
0.8
fl
>;:
'->-., 0.6
0.4
0.2
4
X
Figure 4.6
Histogram for an exponential data set. This figure is generated using Listing
4.1.
21 #show()
22 savefig(’hist_expov.pdf’, format=’pdf’, bbox_inches=’tight’)
Indicator Functions
An indicator function is denoted by the symbol 1 with a subscript E
describing the event of interest. If the event is observed, the function returns
1; otherwise, it returns 0.
1, if E occurs,
1E =
0, otherwise.
pX (x) = p · 1{x=1} .
µ=p (4.11)
4.2.2 Binomial
The binomial random variable is an extension of the Bernoulli random vari-
able, where the number of trials n is another parameter of the new random
experiment. Basically, the Bernoulli experiment (or trial) is repeated n times.
Then, the number of successes X in n trials is given by the following PMF:
n x
pX (x) = p (1 − p)n−x , (4.13)
x
48 Computer Simulation: A Foundational Approach Using Python
✖ p✖
(1-p) p✖ (1-p)
✖ p
✖ (1-p)
✖ p 4 3
= p (1-p)
F S S F S F S
1 2 3 4 5 6 7
2✖ 2✖ 2✖ 2✖ 2 ✖ 2✖ 2 = 27 = 128
Figure 4.7
The situation of observing four successes in a sequence of seven Bernoulli trials
can be modeled as a binomial random variable.
where p is the probability of success in a single trial. The following are the
mean and variance, respectively:
µ = np (4.14)
4.2.3 Geometric
The random experiment of repeating a Bernoulli trial until the first success
is observed is modeled by a geometric random variable. This random variable
can also be defined as the number of failures until the first success occurs. The
PMF for a geometric random variable is the following:
0.14
0.12
= 10
0.1
0.08
P(x)
P(x)
0.06
0.04
0.02
0
0 10 20 30 40 50 60 70 80 90 100
X
Figure 4.8
The PMF of the Poisson random variable for λ = 10. Notice that P (x) ap-
proaches zero as x increases.
1−p
σ2 = . (4.18)
p2
4.2.4 Poisson
A Poisson random variable X is a discrete random variable which has the
following probability mass function.
λx · e−λ
P (X = x) = , (4.19)
x!
where P (X = x) is the probability of x events occurring in an interval of
preset length, λ is the expected number of events (i.e., mean) occurring in the
same interval, x ∈ {0, 1, 2, ...}, and e is a constant equal to 2.72. Figure 4.8
shows the PMF of the Poisson random variable for λ = 10.
The Poisson random variable can be used to model the number of frames4
that arrive at the input of a communication system. The length of the ob-
servation interval must be specified when giving λ (e.g., five frames per 10
milliseconds which is equal to 0.5 frame per one millisecond). The next side
note elaborates more.
50 Computer Simulation: A Foundational Approach Using Python
1. 0 0 .16
0.14
0 .8
0.12
0.6 0.10
H'
::;: ~'< 0 .08
~ 0.4 "->
0 .06
0 .04
0 .2
0 .02
0 .00 14 0 .000 10 12 14
4 6 8 10 12
X X
Figure 4.9
Probability distribution functions for the uniform random variable where a =
3 and b = 10.
4.2.5 Uniform
A uniform random variable X is a continuous random variable that has the
following cumulative distribution function.
x−a
F (x) = , (4.20)
b−a
where x ∈ [a, b]. The probability density function is
1
b−a , for x ∈ [a, b],
fX (x) = (4.21)
0, otherwise.
4 Do you know that packets cannot travel through the wire? Actually, frames are the data
units that travel through wires and they carry packets. This is why we use frames as our
data unit.
Simulating Random Variables and Stochastic Processes 51
Figures 4.9(a) and 4.9(b) shows the CDF and PDF of uniform random variable
with a = 3 and b = 10. Listing 4.2 is the program used to generate these two
figures.
Listing 4.2
Python program for plotting the CDF and PDF of a uniform random variable
(see Figures 4.9(a) and 4.9(b)).
1 from numpy import *
2 from matplotlib.pyplot import *
3
4 # Parameters
5 a = 3
6 b = 10
7
18 for x in X:
19 Y.append(pdf(x))
20
21 matplotlib.rc(’xtick’, labelsize=18)
22 matplotlib.rc(’ytick’, labelsize=18)
23 plot(X, Y, Linewidth=2, color=’black’)
24 xlabel(’$X$’, size=22)
25 ylabel(’$f_X(x)$’, size=22)
26 #show()
52 Computer Simulation: A Foundational Approach Using Python
44 for x in X:
45 Y.append(cdf(x))
46
47 matplotlib.rc(’xtick’, labelsize=18)
48 matplotlib.rc(’ytick’, labelsize=18)
49 plot(X, Y, Linewidth=2, color=’black’)
50 xlabel(’$X$’, size=22)
51 ylabel(’$F_X(x)$’, size=22)
52 #show()
53 savefig(’uniform_cdf.pdf’, format=’pdf’, bbox_inches=’tight’
)
The mean and variance of the uniform random variable are the following,
respectively:
1
µ = (a + b) (4.22)
2
Simulating Random Variables and Stochastic Processes 53
1.4
1.2
0.2
O . OO);---e;---~4--~------;o8;---~10 0 · 0 o~--~""""--:-4------:c------;;,----~1.o
X X
Figure 4.10
Probability distribution functions of the exponential random variable where
µ = 1.5.
1
σ2 =(b − a)2 . (4.23)
12
This random variable is typically used to model equally likely events, such
as the random selection of one item from a list of candidate items. The events
can be modeled as equally likely because the PDF is constant for all the
possible values of the random variable. This is why it is called a uniform
random variable.
4.2.6 Exponential
An exponential random variable X is a continuous random variable which has
the following cumulative distribution function.
Figures 4.10(a) and 4.10(b) show the shapes of these two functions. Notice
the initial value of the PDF. It is greater than one. This is normal since the
PDF is not a probability function.
The exponential random variable can be used to model the time between
the occurrences of two consecutive events. For example, it is used to model
the time between two consecutive arrivals or departures in the single-server
queueing system. The next side note explains the relationship between the
Poisson and exponential random variables.
54 Computer Simulation: A Foundational Approach Using Python
4.2.7 Erlang
The Erlang random variable is continuous. It can be expressed as a sum of
exponential random variables. This property will be used in Section 10.4 to
generate samples from the Erlang distribution. The Erlang random variable
has two parameters:
1. Scale or rate (θ), and
2. Shape (k).
k is an integer and it represents the number of independent exponential ran-
dom variables that are summed up to form the Erlang random variable. Hence,
the Erlang distribution with k equal to 1 simplifies to the exponential distri-
bution.
The following are the probability density and cumulative distribution func-
tions of the Erlang random variable X:
xk−1 θk e−θx
f (x) = , x≥0 (4.26)
(k − 1)!
k−1
X (θx)j
F (x) = 1 − e−θx , x ≥ 0. (4.27)
j=0
j!
4.2.8 Normal
A normal (or Gaussian) random variable is a continuous random variable that
has the following probability density function.
1 (x−µ)2
f (x) = √ · e− 2σ2 (4.28)
σ 2π
where µ is the mean, σ is the standard deviation, and x ∈ (−∞, ∞). Figure
4.11 shows the shape of the PDF of the normal random variable. If µ = 0 and σ
= 1, the resulting PDF is referred to as the standard normal distribution and
the resulting random variable is called the standard normal random variable.
Simulating Random Variables and Stochastic Processes 55
0.040.---------~----,--~-------.
0.035
0.030
0.025
f:l
::;; 0 .020
'+-,
0.015
0.010
0 .005
Figure 4.11
The PDF of the normal random variable with µ = 30 and σ = 10.
4.2.9 Triangular
A triangular random variable has three parameters: a, b, and c. The last
parameter is referred to as the mode. At this point, the PDF has the highest
density. The following is the CDF:
0, if x ≤ a,
(x−a)2
(b−a)(c−a) , if a < x ≤ c,
FX (x) = (4.29)
1 − (b−x)2 , if c < x < b,
(b−a)(b−c)
1, if x ≥ b.
The PDF is defined as follows:
0, if x < a,
2(x−a)
if a ≤ x < c,
(b−a)(c−a) ,
fX (x) = 2
(4.30)
b−a , if x = c,
2(b−x)
if c < x ≤ b,
(b−a)(b−c) ,
0, if x > b.
56 Computer Simulation: A Foundational Approach Using Python
1 .0 0.25
0 .8 0.20
0 .6 0.15
H' H'
--;:;
r.. 0.4 ~ 0 .1 0
0.2 0.05
0 .00 0.000
4 6 8 10 6 8 10 12
X X
Figure 4.12
Probability distribution functions of the triangular random variable with a =
1, b = 10, and c = 7.
Figures 4.12(a) and 4.12(b) show the shapes of the CDF and PDF, respec-
tively. The expected value of a triangular random variable X is
a+b+c
µ= (4.31)
3
and the variance is
a2 + b2 + c2 − ab − ac − bc
σ2 = . (4.32)
18
Vertical (Ensemble)
Mean
Horizontal (Temporal)
f1(t,ѡ1) Mean
g1
ѡ1 t t
ѡ2
ѡ|Ω|
✲
f|Ω|(t,ѡ|Ω|) Horizontal (Temporal)
Sample Space g2 Mean
t t
ti tj
Time Functions
Xi( ti , · )
Sample Paths
Figure 4.13
A stochastic process maps each outcome in the sample space to a time func-
tion. Time functions are combined (convoluted) to produce two sample paths:
g1 and g2 . Two kinds of means can be defined for a stochastic process.
vertical mean is called the ensemble mean. It is calculated over all the possible
sample paths. The horizontal mean, however, is calculated using one sample
path. This is why it is referred to as a time average. Fortunately, as you will
learn in the next section, the horizontal mean can be used as an approximation
of the vertical mean.
The Bernoulli random process is illustrated in Figure 4.14. This process is
composed of two time functions, which are both constant (see Figure 4.14(c)).
Figure 4.14(c) shows the result of running the fundamental Bernoulli random
experiment in each trial (i.e., time slot). The function f (t) in Figure 4.14(d)
does not represent the real behavior of the Bernoulli random process. However,
it is used to construct this behavior in Figure 4.14(e). The Bernoulli random
process is a counting process. It counts the number of ones observed so far.
58 Computer Simulation: A Foundational Approach Using Python
8 8
‘H’ >
<1, if ! = ‘H 0 , >
<f1 (t) = 1, if ! = ‘H 0 ,
X= X(t) =
‘T’ >
: >
:
0, if ! = ‘T 0 . f2 (t) = 0, if ! = ‘T 0 .
Sample Space
f1(t)
1
N(t)
t
4 f2(t)
Sample 3
Path 0
2
t
1
1 0 1 1 0 1 0 f(t)
0 1 2 3 4 5 6 t 1 0 1 1 0 1 0
0
t
(e) (d)
Figure 4.14
The Bernoulli random process: (a) sample space, (b) random variable, (c) time
functions, (d) result of running the random experiment in each slot, and (e)
final sample path.
Terminal
State
Initial
State
Figure 4.15
A sample path through the state space of a dynamic system. Entry and exit
points are random. Data is generated along this sample path and a time
average is computed as an estimate of the performance metric of interest.
1 2 3
Figure 4.16
A sample path through the state space of the single-server queueing system.
The initial state does not have to be (0, ‘F’).
simulation runs are performed and their average is used instead. Hopefully,
each simulation run will exercise a different trajectory in the state space. The
next side note is very important in this regard.
Ergodic Systems
If a dynamic system is run for a long period of time, then each possi-
ble system state would be visited. Then, the mean over the state space
(i.e., ensemble mean ) can be approximated by the mean of a sample path
through the state space (i.e., temporal mean. ) Such dynamic systems are
referred to as ergodic systems wherein the temporal mean converges to the
ensemble mean.
60 Computer Simulation: A Foundational Approach Using Python
N(t)
A6
4
A3 A5
3
A2 D1 A4
2
A1 D2
1
1 2 3 4 5 6 7 8 9 t
Sample Path: N(1) = 0 N(2) = 1 N(3) = 2 N(4) = 3 N(5) = 2 N(6) = 1 N(7) = 2 N(8) = 3 N(9) = 4
Slot No.: 1 2 3 4 5 6 7 8 9
Figure 4.17
A sample path of a discrete-time Markov chain over nine time units. Events
occur at integer times only. N(1) is the number of entities in the system during
the first time slot.
N(t)
A4
3
A2 A3 D2
2
A1 D1
1
ST1
Figure 4.18
A sample path of a continuous-time Markov chain. Events occur at random
times. The time spent in a state has an exponential distribution.
62 Computer Simulation: A Foundational Approach Using Python
0.7
0.5 G B
0.3
0.5
Figure 4.19
A graphical representation of a two-state, discrete-time Markov chain.
Similarly, if the present state is Xn = B, then the next state Xn+1 has the
following PMF.
Since we know the PMF for the next state given any present state, we
can now simulate the DTMC. In fact, the task of simulating the DTMC boils
down to simulating the random variable Xn+1 as follows.
(
G, if u ∈ (0, 0.5)
If Xn = G, Xn+1 =
B, if u ∈ [0.5, 1.0).
Simulating Random Variables and Stochastic Processes 63
(
G, if u ∈ (0, 0.7)
If Xn = B, Xn+1 =
B, if u ∈ [0.7, 1.0).
where u is a uniform random number between 0 and 1.
Listing 4.3 shows a program that generates a possible trajectory of the
above DTMC given that the initial state is X0 = G. The output of the program
could be the following.
X0 = G, X1 = G, X2 = B, X3 = G, ....
Listing 4.3
Simulating a two-state discrete-time Markov chain given its probability tran-
sition matrix and an initial state.
1 from random import random
2
3 n = 10
4 S = []
5
8 for i in range(n):
9 u = random()
10 if S[i] == ’G’:
11 if u < 0.5:
12 S.append(’G’)
13 else:
14 S.append(’B’)
15 elif S[i] == ’B’:
16 if u < 0.7:
17 S.append(’G’)
18 else:
19 S.append(’B’)
20
21 print(’Sample Path: ’, S)
64 Computer Simulation: A Foundational Approach Using Python
N(t)
A5
5
A4
4
A3
3
A2
2
A1
1
Sample path of a Poisson process. Only arrival events occur inside a Poisson
process.
Listing 4.4
Simulating a Poisson process.
1 from random import expovariate
2
5,--------,--------,-------mm--.,----r--------n
N
2
Time
Figure 4.21
Sample path of a birth-death process.
Listing 4.5
Simulating a birth-death process and plotting its sample path (see Figure
4.21).
1 from random import expovariate
2 from matplotlib.pyplot import *
3
4 Avg_IAT = 2.0
5 Avg_ST = 1.0 # Avg service time
6 Sim_Time = 100 # Total simulation time
7 N = 0
8 clock = 0 # Simulation time
9 X = [] # Times of events
10 Y = [] # Values of N
11
25 Y.append(N)
26
4.6 SUMMARY
In this chapter, you have learned about several important random variables
and their probability distribution functions. You have also learned about
stochastic processes and their fundamental role in system modeling. In ad-
dition, you have learned new conventions when writing simulation programs
in Python. For example, you should be comfortable now with the following
programming concepts:
1. Simulation loop,
2. Keeping track of simulation time using the clock variable,
3. Advancing the simulation time using randomly generated numbers, and
4. Using lists for collecting simulated data.
You will need all these concepts and techniques in the next chapters.
4.7 EXERCISES
4.1 Write a Python program to plot the PDF and CDF of the Erlang random
variable.
4.2 A Bernoulli random process X(n) counts the number of successes at
the end of the nth time slot. Let the initial state be X(0) = 0. Write
a Python program which simulates this process over 15 time slots. Plot
one sample path.
4.3 A manufacturer distributes a coupon in every box he makes. The coupon
put in each box is chosen randomly from a set of N distinct coupons.
Your goal is to collect all the N distinct coupons. Write a Python pro-
gram to estimate the expected number of boxes that you must buy.
68 Computer Simulation: A Foundational Approach Using Python
Simulating the
Single-Server Queueing
System
“Learning by doing and computer simulation are all part of the same equa-
tion.”
−Nicholas Negroponte
69
70 Computer Simulation: A Foundational Approach Using Python
Sink
Source Buffer Server
Figure 5.1
Physical structure of the single-server queueing system.
into a buffer. The server fetches the packets from the buffer and then delivers
them to the sink after they are processed.
Packets are transferred to the server in the same order in which they enter
the buffer. This buffering mechanism is referred to as the First-In First-Out
(FIFO) mechanism. This observation is very helpful when collecting simulated
data. That is, since the order of packets is maintained by a FIFO policy, there
is no need to assign indexes (or identifiers) to packets. The first packet which
enters the system is going to be the first packet which leaves the system. The
same observation applies to all the subsequent packets.
Since the individual inter-arrival times and service times are unpredictable,
they are modeled as random variables. Thus, we need to specify the probability
distributions of these two random variables. The choice of a specific probability
distribution has to be supported by an evidence that it is appropriate. The
exponential probability distribution is a reasonable model of the inter-arrival
and service times.
Listing 5.1 shows a Python implementation of the simulation model of the
single-server queueing system. It is based on the C-language implementation
provided in [8]. In this simulation model, there are two fundamental events:
arrival and departure. The state variable N represents the number of packets
inside the system. It is the state of the random process we are going to observe.
Remember that this process is a BD process. The birth and death events are
the arrival and departure events of a packet, respectively.
The arrival process is a Poisson process with an average inter-arrival time
of 2.0 (line 4). The departure process is also a Poisson process with an average
service time of 1.0 (line 5). The system will be simulated for 100.0 time units
(line 6). The simulation clock is initialized to zero and it is used to keep track
of the simulation time (line 7).
For every event, a variable is needed to keep track of its time of occurrence.
For the arrival event, the variable Arr_Time is used. After an arrival occurs,
this variable is updated with the time of the next arrival. Similarly, the variable
Dep_Time keeps track of the time of next departure (i.e., service completion).
The variable clock represents the current simulation time. It acts like an
internal clock for the simulation model.
If the system becomes empty (i.e., N = 0) due to a departure event, the
variable Dep_Time is set to ∞ to ensure that the next event will be an arrival.
This is also done in the initialization phase to ensure that the first event will
Simulating the Single-Server Queueing System 71
Listing 5.1
Simulation program of the single-server queueing system.
1 from random import expovariate
2 from math import inf as Infinity
3
Service!
Time
ST3
ST2
ST1
D3
0 A1 D1 A2 A3 D2 A4 A5 D4 D5
Time
W1 W2 W4
W3 W5
(b) Arrival and departure events mapped onto the time line. For each
packet i, Wi represents the total time spent in the system.
Figure 5.2
Graphical representation of the relationship between random variables and
simulation events.
N(t)
Figure 5.3
A sample path of the random process N (t).
Arrival Departure
Process Process
A(t) D(t)
System
Process
N (t)
(a)
A(t) A(t1 ) = 3
t1 t
(b)
D(t)
D(t2 ) = 3
t2 t
(c)
N (t)
N (t2 ) = 0
t2 t
(d)
Figure 5.4
Random processes present in the single-server queueing system. Both the ar-
rival and departure processes are Poisson processes. (a) Places where the ran-
dom processes are defined. (b) Total number of arrivals which have occurred
up to time t1 is three. (c) The sample path of the departure process is a
shifted version of the sample path of the arrival process. (d) Sample path of
the queueing (birth-death) process which tracks the number of packets in the
system.
74 Computer Simulation: A Foundational Approach Using Python
Table 5.1
Manual simulation of the single-server queueing system using a simulation
table.
Manual Simulation
Table 5.1 shows how a manual simulation of the single-server queueing
system shown in Figure 5.1 can be performed. The simulation table has
eight columns, which are divided into two groups. The first three columns
represent the information needed before starting the simulation. The fourth
column is used to record the absolute arrival time which is clock + IAT. The
fifth column is the time at which the service of a packet starts. The service
of packet number i starts at the time of departure of packet number i − 1.
Of course, the service of the first packet starts immediately. The departure
time is recorded in the sixth column. The waiting time in the queue is the
difference between the service start time and the arrival time. It is captured in
the seventh column. The last column is used for recording the total time a
packet spends in the system (i.e., system response time).
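The same bookkeeping can be carried out programmatically. Below is a small sketch (not the book's code; the helper name simulation_table is hypothetical) that fills the table columns from given lists of inter-arrival times and service times.

def simulation_table(iat, st):
    """Compute arrival, service-start, departure, queue-wait, and
    system-time columns for a single-server FIFO queue."""
    rows = []
    clock = 0.0        # Running absolute arrival time
    prev_dep = 0.0     # Departure time of the previous packet
    for i, (a, s) in enumerate(zip(iat, st), start=1):
        clock += a                     # Absolute arrival time = clock + IAT
        start = max(clock, prev_dep)   # Service starts when the previous packet departs
        dep = start + s                # Departure time
        wait = start - clock           # Time spent waiting in the queue
        total = dep - clock            # Total time in the system (response time)
        rows.append((i, a, s, clock, start, dep, wait, total))
        prev_dep = dep
    return rows

# Example with three made-up packets
for row in simulation_table([2, 1, 3], [4, 2, 1]):
    print(row)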
Figure 5.5
A simulation experiment represents an execution of a simulation model with
a specific set of parameters (λ, µ, Num_Pkts), inputs (IAT, ST), and outputs
(Delay).
5.3.1 Throughput
Throughput measures how many packets the system can process in one time
unit. It is defined as the number of departures divided by the total simulation
time. Mathematically, it can be written as follows.

τ = D / T.    (5.1)

The unit of throughput is packets per time unit (pkt/time unit).
5.3.2 Utilization
Server utilization is the proportion of simulation time during which the server
is busy. It is the product of its throughput and the average service time per
customer:

U = τ · Ts    (5.2)

where Ts is the average service time per customer, defined as

Ts = B / D    (5.3)

where B is the total server busy time, which can be computed as

B = Σ_{i=1}^{D} T_i    (5.4)

where T_i is the service time of the i-th departing packet.
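As a small numerical illustration of Eqns. (5.1)-(5.4), the sketch below (not from the book; the counter values are made up) computes throughput and utilization from the quantities a simulation run would typically record.

# Hypothetical counters collected during a run
D = 61                              # Number of departures
T = 100.0                           # Total simulation time
service_times = [1.1, 0.4, 2.3]     # Per-packet service durations (truncated example)

B = sum(service_times)   # Total busy time, Eqn. (5.4)
tau = D / T              # Throughput, Eqn. (5.1)
Ts = B / D               # Average service time per customer, Eqn. (5.3)
U = tau * Ts             # Utilization, Eqn. (5.2)

print("Throughput  =", tau, "pkt/time unit")
print("Utilization =", round(U, 4))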
Listing 5.2
Estimating the average response time of the system.
1 from random import expovariate
2 from statistics import mean
3 from math import inf as Infinity
4
5 # Parameters
6 lamda = 1.3 # Arrival rate (Lambda)
7 mu = 2.0 # Departure rate (Mu)
8 Num_Pkts = 100000 # Number of Packets to be simulated
9 count = 0 # Count number of simulated packets
10 clock = 0
11 N = 0 # State Variable; number of packets in system
12
13 Arr_Time = expovariate(lamda)
14 Dep_Time = Infinity
15
16 # Output Variables
17 Arr_Time_Data = [] # Collect arrival times
18 Dep_Time_Data = [] # Collect departure times
19 Delay_Data = [] # Collect delays of individual packets
20
39 for i in range(Num_Pkts):
40 d = Dep_Time_Data[i] - Arr_Time_Data[i]
41 Delay_Data.append(d)
42
Little’s Law
L=λ·W
This law asserts that the time average number of packets in the system
is the product of the arrival rate and the response time. This law is due
to Little who proved it in 1961 [7]. Remember that λ is a parameter of
the arrival Poisson process. In simulation, it is the argument passed to the
function random.expovariate.
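As a quick numerical check (not taken from the book): with the parameters of Listing 5.2, λ = 1.3 and µ = 2.0, standard M/M/1 results give W = 1/(µ − λ) ≈ 1.43, so Little's law predicts L = λ · W ≈ 1.86 packets in the system on average. A correct simulation should reproduce both values approximately.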
Listing 5.2 shows the Python code necessary to perform the experiment
in Figure 5.5(b). In this program, the values of the input variables are gen-
erated whenever they are needed. On the other hand, for the output variable
Delay, a list is explicitly defined to hold its values. These values are the
results of subtracting the values in two intermediate output variables (i.e.,
Arr_Time_Data and Dep_Time_Data). At the end of the program, the
mean function in the statistics module is applied on the Delay output
variable to get the average of the individual packet delays.
5.3.4 E[N(t)]
The state variable N (t) represents the number of packets in the system at
time t. In the previous section, Little’s law is used to compute the average
number of customers in the system; i.e., E[N(t)]. This quantity can be directly
computed by using one sample path of N (t) as follows:
E[N(t)] = (1/T) · ∫_0^T N(t) dt,    (5.7)
where T is the total simulation time.
The integral in Eqn. (5.7) is the sum of the areas of the individual rect-
angles under the curve of N (t). For example, in Figure 5.6, there are eight
rectangles. The length of each rectangle is equal to the number of packets in
the system while the width is the time interval between the events causing
the change in N . Hence, the areas of the eight rectangles are 0, 2, 2, 3, 6, 1,
0, and 1. Since the total simulation time is 12, then the average number of
packets in the system can be computed as follows.
Figure 5.6
A sample path of the number of packets in the single-server queueing system.
There are eight rectangles under the curve of the sample path (the events occur
at times 0, 1, 3, 4, 5, 8, 9, 11, and 12).
E[N(t)] = (0 + 2 + 2 + 3 + 6 + 1 + 0 + 1) / 12 = 15 / 12 = 1.25.
Listing 5.3 shows how the above technique can be implemented in Python.
The new code is on lines 16, 17, 22-24, 31-33, and 41. A new variable is defined
on line 16 and it is used to record the time of occurrence of the last simulated
event. Inside the simulation loop, after updating the simulation clock, the
area of the current rectangle delimited by the current and previous events is
calculated on lines 23 and 32. After this operation, the value of the variable
Prev_Event_Time is changed to the current simulation time. At the end of
the simulation run, the total area under the curve is computed as shown on
line 41. The variable clock stores the total simulation time.
Listing 5.3
Estimating the average number of customers in the system (E[N(t)]).
1 from random import expovariate
2 from statistics import mean
3 from math import inf as Infinity
4
5 # Parameters
6 lamda = 1.3
7 mu = 2.0
8 Num_Pkts = 1000000
9 count = 0
10 clock = 0
11 N = 0
12
13 Arr_Time = expovariate(lamda)
14 Dep_Time = Infinity
15
40
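Because Listing 5.3 appears here only in part, the following minimal sketch (with illustrative variable names) shows the area-accumulation technique it is described as using: after each event, the rectangle between the previous and the current event time is added before the state variable changes.

from random import expovariate
from math import inf as Infinity

lamda, mu, Num_Pkts = 1.3, 2.0, 100000

clock = 0.0
N = 0
count = 0
Area = 0.0
Prev_Event_Time = 0.0

Arr_Time = expovariate(lamda)
Dep_Time = Infinity

while count < Num_Pkts:
    if Arr_Time < Dep_Time:                      # Arrival event
        clock = Arr_Time
        Area += N * (clock - Prev_Event_Time)    # Rectangle: height N, width Δt
        Prev_Event_Time = clock
        N += 1
        Arr_Time = clock + expovariate(lamda)
        if N == 1:
            Dep_Time = clock + expovariate(mu)
    else:                                        # Departure event
        clock = Dep_Time
        Area += N * (clock - Prev_Event_Time)
        Prev_Event_Time = clock
        N -= 1
        count += 1
        Dep_Time = clock + expovariate(mu) if N > 0 else Infinity

print("E[N(t)] ~", Area / clock)    # For these parameters, theory gives about 1.86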
5.3.5 P[N]
P[N = k] is the probability that there are exactly k packets in the system. In
order to estimate this probability, we sum up all time intervals during which
there are exactly k packets in the system. Then, the sum is divided by the
total simulation time. For instance, in Figure 5.6, the system contains one
packet only during the following intervals: [1, 3], [8, 9], and [11, 12]. Thus, the
probability that there is exactly one packet in the system can be estimated as
follows.
P[N = 1] = [(3 − 1) + (9 − 8) + (12 − 11)] / 12 = (2 + 1 + 1) / 12 = 0.33.
Listing 5.4 shows how P[N = k] can be estimated using simulation. The
new code is on lines 15, 17, 22-26, 33-37, 45-47, and 49-50. In this program,
a new data structure called a dictionary is used. In a dictionary, keys are used
for storing and fetching items. A dictionary is defined using two curly braces
as shown on line 17. This defines an empty dictionary. The dictionary is pop-
ulated on lines 24-25 and 35-36. Basically, if the key N is already used, the
value which corresponds to this key is updated. Otherwise, a new key is in-
serted into the dictionary and its value is initialized. The value of the key is
updated using the length of the current time interval on lines 23 and 34. As in
the previous example, the time of the current event is saved to be used in the
next iteration of the simulation loop. Also, the state variable N is updated
after computing the time interval and updating the dictionary.
In order to verify the simulation program, two checks are performed. First,
on lines 49-50, the sum of probabilities is checked to be equal to one. Second, on
lines 52-55, the mean is computed and compared against the theoretical value.
If the two checks pass, we gain confidence that the simulation program is correct.
Listing 5.4
Estimating the steady-state probability distribution (P[N = k]).
1 from random import expovariate
2 from statistics import mean
5 # Parameters
6 lamda = 1.3
7 mu = 2.0
8 Num_Pkts = 1000000
9 count = 0
10 clock = 0
11 N = 0
12
13 Arr_Time = expovariate(lamda)
14 Dep_Time = Infinity
15 Prev_Event_Time = 0.0
16
17 Data = {} # Dictionary
18
37 Prev_Event_Time = clock
38 N = N - 1.0
39 count = count + 1
40 if N > 0:
41 Dep_Time = clock + expovariate(mu)
42 else:
43 Dep_Time = Infinity
44
45 # Compute probabilities
46 for (key, value) in Data.items():
47 Data[key] = value / clock
48
52 # Check expectation
53 mean = 0.0
54 for (key, value) in Data.items():
55 mean = mean + key * value
56
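Since Listing 5.4 is also reproduced only in part, the dictionary bookkeeping can be illustrated in isolation. The sketch below (not the book's code) accumulates, for each observed value of N, the total time spent in that state; the hypothetical event list reproduces the sample path of Figure 5.6, so it recovers P[N = 1] ≈ 0.33.

# Hypothetical sample path: (event_time, value of N *before* the event)
events = [(1, 0), (3, 1), (4, 2), (5, 3), (8, 2), (9, 1), (11, 0), (12, 1)]

Data = {}                 # Maps k -> total time with exactly k packets in the system
Prev_Event_Time = 0.0
for clock, N in events:
    interval = clock - Prev_Event_Time
    if N in Data:
        Data[N] += interval
    else:
        Data[N] = interval
    Prev_Event_Time = clock

T = Prev_Event_Time       # Total simulated time (12 in this example)
for k in sorted(Data):
    print("P[N =", k, "] =", round(Data[k] / T, 2))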
Figure 5.7
Raw data generated when running a simulation program. (Each simulation run
r = 1, ..., R produces n observations d_{r,1}, ..., d_{r,n} of the output variable.
The sample of the performance measure from run r is the within-run average
D_r = (1/(n − k)) · Σ_{i=k+1}^{n} d_{r,i}, where k is the truncation point, and
the i-th ensemble average across runs is d̄_i = (Σ_{r=1}^{R} d_{r,i}) / R.)
Within a single simulation run, values of an output variable are not independent. Thus, the value
of a performance metric resulting from a single simulation run cannot be used
as an estimate.
For example, in order to compute an estimate of the response time (W ) of
the single-server queueing system, we need to define an output variable (say)
Z = [Wi ], where Z is a list of the times each simulated packet spends in the
system. The response time for each packet is computed as Wi = Tqi + Tsi ,
where Tqi is the time spent in the queue and Tsi is the time spent at the
server. Note that Wi depends on Wi−1 , Wi−2 , . . . , W1 . Hence, we can conclude that
Wi ’s are not independent. This serial dependence exists in both the transient
and steady phase. So, what should we do?
The simplest remedy to the above problem is to construct your sample set
by making multiple independent simulation runs. In this case, each simulation
run will generate one sample in your sample set. In this way, you will have
a sample set with IID samples and thus can apply the classical statistical
techniques. Listing 5.5 shows how you generate multiple independent samples
of the delay performance measure using the simulation model of the single-
server queueing system defined in the external library simLib.
The number of independent simulation runs to be performed is stored in
the variable Num_Repl. In each simulation run, n packets are simulated. The
former is necessary to ensure IID samples. Also, the latter is necessary to
ensure that the transient phase is eliminated.
Finally, to make sure that every simulation run is independent from all the
other simulation runs, you have to reseed the random number generator (see
line 17). That is, for every simulation run, the random number generator must be
given a fresh seed by calling the function seed(). This way, the sequence of generated
random numbers will be different in every run.
Listing 5.5
Performing multiple independent simulation runs of the simulation model of
the single-server queueing system.
1 # simLib is your simulation library, which you will reuse
2 # in your homework and projects.
3 # It is available in the github repository
4
9 lamda = 1.3
10 mu = 2
11 n = 100000 # Number of packets to be simulated
12
16 for i in range(Num_Repl):
17 seed() # Reseed RNG
18 d = mm1(lamda, mu, n)
19 Delay.append(d)
20
11.5
11
10.5
Wcum
10
9.5
Wcum Transient
9 Phase
Tcum
8.5
Transient Steady Phase
8 Phase
7.5
6
0 1 2 3 4 5 6 7 8 9 10
npktn 4
x 10
11
10.5
Wcum
10
9.5
9
Tcum
8.5
7.5
6.5
6
0 500 1000 1500 2000 2500 3000 3500 4000
npkt n
Figure 5.8
Cumulative average versus number of simulated packets. The theoretical value
is Wavg = 10. After the transient phase is over, the cumulative average starts
approaching the theoretical value.
At the beginning of the simulation run, the values of Wcum vary dramatically. They are significantly different from the theoretical
value (Wavg = 10) computed using standard queueing theory formulas. Wcum
can be computed using Eqn. (5.8), where n is the number of simulated packets.
Finally, in this specific simulation run, the transient phase extends from one
to approximately n = 40000 simulated packets. That means the first 40000
samples are dropped from the output variable. At the end of the simulation
run, the output variable will contain only 60000 samples.
Wcum = (Σ_{i=1}^{n} Wi) / n.    (5.8)
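A minimal sketch (not the book's code) of how the cumulative average in Eqn. (5.8) can be computed, and how the transient samples can be dropped once a truncation point k has been chosen:

from statistics import mean

def cumulative_average(W):
    """Return the sequence W_cum[n] = (W_1 + ... + W_n) / n."""
    out, running = [], 0.0
    for n, w in enumerate(W, start=1):
        running += w
        out.append(running / n)
    return out

# W would normally come from a simulation run; here a tiny made-up example.
W = [12.0, 7.5, 9.1, 10.4, 11.2, 9.8]
W_cum = cumulative_average(W)
k = 2                                  # Hypothetical truncation point (end of warm-up)
print("Steady-state estimate:", mean(W[k:]))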
In the transient phase, output variables fluctuate due to the effect of the
initial state of the simulation model. Thus, no simulation data should be
collected during this phase. Instead, the simulation program should be allowed
to run until it exits this phase. Interestingly, this phase is also referred to as
the warm-up phase. Figure 5.8(b) shows a detailed view of the transient phase.
Several techniques exist for estimating the length of the transient phase.
In this book, we are going to use a simple but effective technique based on
Welch’s method introduced in [11]. This technique uses the running average of
the output variable. Several realizations of the output variable are generated.
Then, they are combined into one sequence in which each entry represents
the average of the corresponding entries in the generated realizations. This
final sequence is then visually inspected to identify an appropriate truncation
point. Figure 5.9 shows the final sequence (Z) resulting from five realizations.
The entries before the truncation point will be discarded. This is because they
will introduce a bias in the point estimate of the performance measure.
The first two steps of Welch’s method are shown in Figure 5.10. The
following is a description of the full procedure.
1. For each output variable Y , run the simulation at least five times. Each
simulation run i generates a realization Y [i] of size m.
2. Calculate the mean across all the generated realizations, i.e.,
Z[k] = (1/R) · Σ_{i=1}^{R} Y[i][k],    (5.9)

where R is the number of simulation runs performed in step 1 and Y[i][k] denotes the k-th entry of realization Y[i].
3. Plot the sequence Z.
4. The warm-up period ends at a point k when the curve of Z becomes
flat. Choose this point as your truncation point.
Listing 5.6 gives a Python implementation of the above technique
for determining truncation points. It also gives the code used for generating
Figure 5.9. As you can tell from this figure, using the average of multiple
realizations is more effective than using a single realization of the output
variable to determine the length of the transient phase.
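Since Listing 5.6 is reproduced only in part below, the following minimal sketch illustrates the averaging step of Welch's method. The helper run_once is a stand-in for whatever produces one realization of the output variable (in Listing 5.6 this role is played by a function imported from simLib); here it simply returns a synthetic noisy curve with a warm-up.

from random import seed, random

def run_once(m):
    """Stand-in for one realization of the output variable:
    a curve with a warm-up transient plus noise."""
    return [2.0 * (0.999 ** i) + random() for i in range(m)]

R, m = 5, 10000
Y = []
for r in range(R):
    seed()                    # Fresh seed so the realizations are independent
    Y.append(run_once(m))

# Step 2 (Eqn. (5.9)): element-wise average of the R realizations
Z = [sum(Y[r][i] for r in range(R)) / R for i in range(m)]

# A truncation point would be chosen where the curve of Z flattens out.
print("Z[0] =", round(Z[0], 3), " Z[-1] =", round(Z[-1], 3))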
Figure 5.9
Z is the average of the five output sequences Y[0]-Y[4]. A truncation point
can visually be determined by using the curve of Z. In this example, a good
truncation point is n = 3000.
Figure 5.10
The first two steps in Welch’s method. In step 1, multiple realizations of
the output variable are generated. These realizations are combined into one
sequence in step 2.
Listing 5.6
Determining a good truncation point using the average of several realizations
of an output variable.
1 from simLib import out_var_cum_mm1
2 from random import seed
3 from matplotlib.pyplot import *
4 import numpy as np
5
6 lamda = 1.3
7 mu = 2
8
24 # Plot Y and Z
25 plot(Y[0], "k--", label="Y[0]")
26 plot(Y[1], "k--", label="Y[1]")
27 plot(Y[2], "k--", label="Y[2]")
28 plot(Y[3], "k--", label="Y[3]")
29 plot(Y[4], "k--", label="Y[4]")
30 plot(Z, "k", linewidth=2, label="Z")
31
32 xlabel("$n$", size=16)
33 ylabel("$W_{cum}$", size=16)
34 legend(loc='upper right', shadow=True)
35 show()
Figure 5.11
A two-server queueing system with a finite buffer of size three.
Table 5.2
IATs and STs for Exercise 5.1.
Pkt 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15
IAT 2 5 1 3 1 3 3 2 4 5 3 1 1 1 2
ST 12 10 16 9 10 13 17 10 8 12 6 5 4 3 3
5.6 SUMMARY
The selection of the next event by direct comparison of event occurrence times
becomes cumbersome as the number of servers increases. In Chapter 7, you will
learn about the event list, which is the preferred way for next event selection.
This data structure is natively supported by Python and leads to concise and
more manageable simulation programs. You will just need to learn how to
include it in your simulation program and use it.
5.7 EXERCISES
5.1 Consider the two-server queueing system shown in Figure 5.11. The two
servers are indexed from 1 to 2 and the system has a finite buffer of size
three. That is, at most three packets can be stored inside the system at
any instant of time. The Inter-Arrival Times (IATs) and Service Times
(STs) for 15 packets are given in Table 5.2. A packet goes to the server
with the lowest index. If both servers are occupied, the packet
waits in the queue. Perform a manual simulation of the system and then
answer the following questions:
5.2 Extend the simulation program in Listing 5.1 to simulate the system in Fig-
ure 5.11. Verify the simulation program by applying the workload (i.e.,
15 packets) in Exercise 5.1 and then comparing the results with those
obtained manually.
CHAPTER 6
Statistical Analysis of
Simulated Data
Figure 6.1
Population and samples for the simulation experiment of estimating the delay
through the single-server queueing system by simulating five packets. The
population is (0, ∞).
Random Samples
Each observation (or sample) of a performance metric is a random variable.
Hence, a random sample of size n consists of n random variables such
that the random variables are independent and have the same probability
distribution. For example, in Figure 6.1, each random sample contains five
observations. The first observation is different in both sample sets. This is
because the first observation is a random variable and we cannot predict
its value in each sample set.
Statistics, such as the sample mean and variance, are computed as func-
tions of the elements of a random sample. Statistics are functions of random
variables. The following are some of the most commonly used statistics.
1. Sample mean

X̄ = (1/n) · Σ_{i=1}^{n} X_i,

2. Sample variance

S² = (1/(n − 1)) · Σ_{i=1}^{n} (X_i − X̄)²,
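These two statistics map directly onto Python's statistics module, whose variance function uses the same (n − 1) denominator. A quick check using the sample set from the confidence-interval example later in this section:

from statistics import mean, variance

X = [3.33, 3.15, 2.91, 3.05, 2.75]
print("Sample mean     :", mean(X))        # X_bar
print("Sample variance :", variance(X))    # S^2, with the n - 1 denominator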
Table 6.1
Notation for the sample and population statistics.
Figure 6.2
Probability distribution of the sample mean is normal. (Multiple simulation
runs 1 through n produce a set of independent samples S1, ..., Sn, from which
a set of sample means is computed.)
Figure 6.3
Frequency distribution of the average delay D through the single-server queue-
ing system with λ = 1 and µ = 1.25. The population mean is 4.
Figure 6.3 shows the frequency distribution of the average delay (D) for the
single-server queueing system. For this specific example, the population mean
is 4. The population mean is equivalent to the theoretical mean which can be
calculated using queueing theory. The standard deviation of this probability
distribution is the standard error.
Now, since we know the probability distribution for D, we can study how
far the sample mean might be from the population mean. According to the
empirical rule, approximately 68% of the samples fall within one standard de-
viation of the population mean. In addition, approximately 95% of the samples
fall within two standard deviations of the population mean and approximately
99% fall within three standard deviations. Figure 6.4 illustrates the empirical
rule. In the next section, we are going to use the fact that 95% of the samples
lie within two standard deviations (i.e., t = 1.96) of the mean to establish a
95% confidence interval.
Figure 6.4
The empirical rule for the distribution of samples around the population mean.
95% of the area under the curve of the normal distribution lies within two
standard deviations (equal to 1.96) of the mean.
1. Confidence level (1 − α)
Consider the following samples for estimating the average delay. Calculate
the 80%, 90%, 95%, 98%, and 99% confidence intervals.
{3.33, 3.15, 2.91, 3.05, 2.75}
Solution
1. Calculate the sample mean and sample standard deviation.
x̄ = 3.038
s = 0.222
2. Find the value of t for each confidence level (using the t-distribution with
n − 1 = 4 degrees of freedom).
CL      t
0.80    1.533
0.90    2.132
0.95    2.776
0.98    3.747
0.99    4.604
Notice that as the confidence level increases, the value of t also increases.
3. Use Eqn. (6.1) to get the confidence intervals.
CL t Confidence Interval
0.80 1.533 (2.886, 3.190)
0.90 2.132 (2.826, 3.250)
0.95 2.776 (2.762, 3.314)
0.98 3.747 (2.666, 3.410)
0.99 4.604 (2.580, 3.495)
Notice that as the confidence level increases, the confidence interval gets
wider.
Listing 6.1
Calculating the confidence interval using Python.
1 import statistics as stat
2 import math
3
7 mean = stat.mean(sample_set)
8 std_dev = stat.stdev(sample_set)
9
10 t = 2.776
11 ci1 = mean - t * (std_dev/math.sqrt(n))
12 ci2 = mean + t * (std_dev/math.sqrt(n))
13
16 # Output
17 # Confidence Interval: 2.8 3.2
Note that when the number of samples is large (i.e., n > 30), the t-
distribution approaches the normal distribution. As a result, the values of
t become fixed. This is clearly shown in the last row of the table given in
Appendix D.
6.3.1 Interpretations
The confidence interval is a random interval which may contain the population
mean. The following is the mathematical expression for the probability that
a confidence interval contains the population mean.
P[ x̄ − t × s/√n  <  µ  <  x̄ + t × s/√n ] = 1 − α,
[Figure 6.5 plots the seven sample means with their confidence intervals as
vertical bars (x-axis: Confidence Intervals, y-axis: Sample Means).]
Figure 6.5
Two of the calculated confidence intervals do not include the population mean.
The population mean is 15.
In other words, there is a (α × 100)% chance that the population mean lies outside the confidence
interval.
Another interpretation is the following. If a simulation is performed n times
with different seed values, then in ((1−α)×100)% of the cases, the population
mean lies within the confidence interval. In (α × 100)% of the cases, however,
the population mean lies outside the interval. Figure 6.5 shows an example in
which the confidence interval can miss the population mean. Listing 6.2 shows
how this figure is generated.
Listing 6.2
Plotting confidence intervals and population mean.
1 import numpy as np
2 import matplotlib.pyplot as plt
3
4 x = [1, 2, 3, 4, 5, 6, 7]
5 y = [17, 7, 14, 18, 12, 22, 13]
6
7 plt.figure()
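Listing 6.2 is shown only in part. One plausible way to finish the figure (an assumption, not the book's exact code) is to draw each interval as an error bar and the population mean as a horizontal line; with the hypothetical half-width used here, exactly two intervals miss the population mean of 15, as in Figure 6.5.

import matplotlib.pyplot as plt

x = [1, 2, 3, 4, 5, 6, 7]
y = [17, 7, 14, 18, 12, 22, 13]       # Centers of the seven confidence intervals
half_width = 4                        # Hypothetical half-width of each interval

plt.figure()
plt.errorbar(x, y, yerr=half_width, fmt="o", capsize=4)   # Vertical bars = intervals
plt.axhline(15, linestyle="--")       # Population mean (15 in Figure 6.5)
plt.xlabel("Confidence Intervals")
plt.ylabel("Sample Means")
plt.show()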
You have been asked to evaluate the performance of five machines. The
95% confidence interval for the average performance of one machine is
(10.3, 13.1). Evaluate the following statements:
1. You are 95% confident that the performance for all the five machines is
between 10.3 and 13.1.
2. 95% of all the samples generated as a result of running one machine will
give an average performance between 10.3 and 13.1.
3. There is a 95% chance that the true average is between 10.3 and 13.1.
Solution
1. This is not correct: a confidence interval is for the population parameter,
and in this case the mean, not for individuals.
2. This is not correct: each sample will give rise to a different confidence
interval and 95% of these intervals will contain the true mean (i.e., the
population mean).
3. This is not correct: µ is not random. The probability that it is between
10.3 and 13.1 is 0 or 1.
Solution
1. Construct the 95% confidence interval for the mean delay.
( 57 − 0.2352, 57 + 0.2352 )
2. The confidence interval does not support the claim of the sales person
because it does not contain the claimed population mean. Therefore, do
not buy.
2. If the confidence interval for θ is to the left of zero, then there is a strong
statistical evidence that θ1 − θ2 < 0. This means that the performance
measure value for design 1 is smaller than that for design 2. Hence,
design 1 is better.
3. If the confidence interval for θ is to the right of zero, then there is a strong
statistical evidence that θ1 − θ2 > 0. This means that the performance
measure value for design 2 is smaller than that for design 1. Hence,
design 2 is better.
The table below shows five samples of the response time for two designs. In
each simulation run, the same set of random numbers is used in simulating
the two designs. The last column gives the difference in the response time
of the two designs.
6.5 SUMMARY
Statistical analysis of the data resulting from running a simulation model is a
very important step in a simulation study. This step will enable you to gain
insights about the performance of the system you study. As a rule of thumb,
for each performance measure you want to compute, you should report its
mean and confidence interval. The confidence interval gives a range of values
which may include the population mean. It enables you to assess how far your
estimate (i.e., the sample mean) is from the true value (i.e., the population
mean) of the performance measure.
6.6 EXERCISES
6.1 Simulate the single-server queueing system using 100 packets. Use λ = 1
and µ = 1.25. Construct a 95% confidence interval for the average delay
using a sample set of size 10. Answer the following questions:
CHAPTER 7
Event Graphs
Event graphs are a formal modeling tool which can be used for building
discrete-event simulation models. They were introduced in [6] as a graphical
representation of the relationships between events in a system. Event graphs
can be used to model almost any discrete-event system. This chapter is an
introduction to event graphs. It is also going to show you how you can syn-
thesize simulation programs from simulation models constructed using event
graphs.
Figure 7.1
Types of edges in event graphs. (Each edge connects a source event A to a
target event B and may be labeled with a condition, a delay t, or both.)
7.2 EXAMPLES
In this section, several examples will be given to illustrate the modeling power
of event graphs. These examples can be used as a basis for modeling more
complex systems.
Figure 7.2
State diagram (a) and event graph (b) for the Poisson arrival process. (In the
event graph, Start performs {A = 0} and schedules the first Arr event; Arr
performs {A = A + 1} and reschedules itself after ta time units.)
follows. First, when the Start event occurs at time t = 0, it sets the value
of the state variable A to zero and schedules the first Arrival event to occur
immediately at time t = 0. Then, whenever an arrival event occurs, the state
variable A is incremented by one and the next arrival event is scheduled to
occur after ta time units.
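The behavior just described translates almost directly into code. Below is a minimal sketch (illustrative names, not the book's implementation) of the Start and Arrival event handlers for the Poisson arrival process, driven by a sorted event list.

from random import expovariate
from bisect import insort_right
from itertools import count

ta = 1.0             # Mean inter-arrival time
evList = []          # Event list, kept sorted by event time
tie_break = count()  # Ensures tuples never compare the handler objects
A = 0                # State variable: number of arrivals so far

def schedule(time, handler):
    insort_right(evList, (time, next(tie_break), handler))

def Start(clock):
    global A
    A = 0                          # {A = 0}
    schedule(clock, Arrival)       # Schedule the first arrival immediately

def Arrival(clock):
    global A
    A += 1                                              # {A = A + 1}
    schedule(clock + expovariate(1.0 / ta), Arrival)    # Next arrival after about ta time units

schedule(0.0, Start)
while evList:
    clock, _, handler = evList.pop(0)
    if clock > 10.0:               # Stop after 10 simulated time units
        break
    handler(clock)
print("Arrivals in 10 time units:", A)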
Figure 7.3
Event graph for the single-server queueing system. (Start schedules Arr; Arr
reschedules itself after ta and schedules Beg when the server is free (S == 0);
Beg schedules Dep after ts; Dep schedules Beg when the queue is not empty
(Q > 0).)
Figure 7.4
Reduced event graph for the single-server queueing system. (Start performs
{N = 0}; Arr performs {N = N + 1}, reschedules itself after ta, and schedules
Dep after ts when N == 1; Dep performs {N = N − 1} and reschedules itself
after ts when N > 0.)
addition, the next arrival event is scheduled to occur after ta time units. This
is very important to keep the simulation running.
Next, whenever the service of a packet begins, the Beg event schedules a
Dep event to occur after ts time units. It also changes the state of the server
to busy and decrements the size of the queue by one. Similarly, when the Dep
event occurs, the state of the server is changed back to free and a new Beg
event is scheduled if the queue is not empty.
The complexity of event graphs is measured by the number of vertices and
edges present in the graph. Fortunately, an event graph can be reduced to a
smaller graph, which is equivalent as far as the behavior of the system under
study is concerned. However, it may be necessary to eliminate (and/or intro-
duce new) state variables, attributes, and conditions in order to accommodate
the new changes. In this specific example, the reduced event graph contains
only one state variable, N , which represents the total number of packets inside
the system.
Figure 7.4 shows a reduced event graph for the single-server queueing sys-
tem. With this new event graph, the number of vertices and edges is reduced
from four to three and five to four, respectively. Of course, for larger systems,
the reduction will be significant.
Figure 7.5
Event graph for the K-server queueing system. (Identical in structure to
Figure 7.3, except that the condition for beginning a service is S > 0.)
Figure 7.6
Event graph for the single-server queueing system with a limited queue ca-
pacity. (Compared with Figure 7.3, an Arr event also schedules a Loss event,
with action {L = L + 1, Q = Q − 1}, when Q > N.)
Figure 7.7
Event graph for the single-server queueing system with a server that fails.
(State variables: Q for the queue, S for the server, and F for failure. Start
initializes {F = 0, S = 1, Q = 0}; the Fail event sets {F = 1, S = 0, Q = Q + 1};
the Rep event sets {F = 0}; tf and tr are the failure and repair delays.)
the introduction of a new event type to capture this situation. This new event
is referred to as the Loss event.
The Loss event occurs whenever there is an Arrival event and the number of
packets in the buffer is N , which is the maximum queue size. When the arrival
event occurs, the state variable Q is incremented by one and the next events
are scheduled. If Q > N , a loss event is scheduled to occur immediately. When
the loss event occurs, the state variable L is incremented by one to indicate
a loss of a packet. On the other hand, the state variable Q is decremented by
one since it has been incremented by one when the arrival event occurred.
Figure 7.8
Event graph for the single-server queueing system with reneging. (A Renege
event is scheduled T time units after an arrival. Start sets {S = 0, Q = 0}, Arr
sets {Q = Q + 1}, Beg sets {S = 1, Q = Q − 1}, and Dep sets {S = 0}.)
Figure 7.9
Event graph for the single-server queueing system with balking. (An arrival
triggers a Balk event, with action {Q = Q − 1, B = B + 1}, when ub ≤ Pb;
service begins when ub > Pb and S == 0.)
balking means that the customer leaves the system upon arrival. A customer
may balk with probability Pb or enter the system with probability 1 − Pb .
After he enters the system, a customer is either scheduled for service or he
waits in the queue if the server is busy. Figure 7.9 shows the event graph for
the single-server queueing system with balking. The state variable B is used
to keep track of the customers who balk.
Table 7.1
Event table for the event graph in Figure 7.4.
events and starts the simulation. A block of code must exist in the simulation
program before the simulation loop to explicitly place the initial events pointed
to by the start event into the event list.
Listing 7.1
Python implementation of the event graph in Figure 7.4.
1 from random import *
2 from bisect import *
3
4 # Parameters
5 lamda = 0.5
6 mu = 0.7
7 n = 100 # Number of packets to be simulated
8
9 # Initialization
10 clock = 0.0 # Simulation clock
11 evList = [] # Event list
12 count = 0 # Count number of packets simulated so far
13
22 if count <= n:
23 ev = ( clock + expovariate(lamda) , Handle_Arr_Ev )
24 insert(ev)
25
50 # Simulation loop
51 while evList:
52 ev = evList.pop(0)
53 clock = ev[0]
54 ev[1](clock) # Handle event
Table 7.1 is the event table for the event graph in Figure 7.4. Listing 7.1
shows how the information in Table 7.1 is translated into Python code. As
you can tell, the code is very structured. Thus, this process can be automated
very easily. Next, this translation process is described.
In the first part of the program (lines 1-2), standard Python libraries
(random and bisect) are imported into the program. The first one contains
functions which can be used for random number generation. The second one,
however, contains functions for manipulating the event list. Then, parameters
of the simulation models are defined on lines 5-7. There are only three param-
eters: arrival rate (lamda), service rate (mu), and number of packets to be
simulated (n).
In the third part of the program (lines 10-13), the simulator is initialized.
First, the simulation clock is set to zero. This variable is used to keep track of
the simulation time. After that, an empty list is created to keep the simulation
events. In this list, events are kept in order using the predefined function
insort_right from the bisect library (see line 16). There is only one state variable,
N , in this simulation model. It is initialized on line 47. The variable count
defined on line 13 is used for counting the number of packets which have been
simulated. This is necessary to make sure that no more than n packets are
simulated. The value of this variable is incremented and checked inside the
event generator of the arrival event (see lines 20-24).
In the fourth part of the program (lines 15-16), a convenience function
is defined. This abstraction hides the unconventional name used for the pre-
defined function used for sorting events in the event list. This part is not
necessary. However, we believe it helps in enhancing the readability of the
code.
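For concreteness, the convenience function and the event tuples it sorts might look like the following sketch (consistent with the description above, but not a verbatim excerpt of the elided lines; handler names are shown as strings to keep the example self-contained).

from bisect import insort_right

evList = []

def insert(ev):
    """Keep the event list ordered by event time (ev is a (time, handler) tuple)."""
    insort_right(evList, ev)

# Example: schedule two events out of order and pop the earliest one first.
insert((2.7, "Handle_Dep_Ev"))
insert((1.4, "Handle_Arr_Ev"))
print(evList.pop(0))    # (1.4, 'Handle_Arr_Ev')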
Figure 7.10 shows a template which can be used as an aid when performing
a manual translation of an event graph into a Python simulation program. The
match between program in Listing 7.1 and the proposed template is almost
perfect. Each block in the template corresponds to a modeling concept. This
concept is expanded into code in the final simulation program.
7.4 SUMMARY
A visual simulation model of any discrete-event system can be constructed
using event graphs. Although an event graph gives a very high-level represen-
tation of the system, it still helps in capturing and understanding the complex
relationships between events inside the simulation model. In addition, event
graphs can be translated into Python code in a systematic way using the ab-
stractions discussed in this chapter. The resulting code is easy to understand
and maintain. Of course, as the size of the system grows, its event graphs
become very complicated.
7.5 EXERCISES
Figure 7.10
A template for synthesizing simulation programs from event graphs. (Its blocks,
including the simulation loop, correspond to the parts of the program described
above.)
7.1 Consider the system in Figure 7.11. There are two independent single-
server queueing systems. There is one traffic source which feeds the two
systems. Traffic is randomly split between the two systems. That is, an
arriving packet joins system i with probability λi/λ, where λ1 + λ2 = λ.
7.2 Consider the setup in Figure 7.12 where a user communicates with an
online service hosted in a data center. The channel connecting the user
to the service has two characteristics: (1) propagation delay (Pd ) and
(2) rate (R). The propagation delay is the time required for an electrical
signal to travel from the input of the channel to its output. Hence, if a
packet is injected into the channel, it will arrive at the other end after
Pd time units. The rate, however, is the speed of the channel. It gives
the number of bits which can be injected into the channel in one second.
Thus, R1 is the time required to inject (i.e., transmit) one bit. The user
communicates with the server using a simple protocol. Basically, the user
transmits a message. Then, it waits until it receives an acknowledgment
from the server that the message has been received successfully. If the
user does not receive an acknowledgment within a preset time period,
Figure 7.11
Two parallel single-server queueing systems with one shared traffic source. (A
source with total rate λ is split into two streams with rates λ₁ and λ₂ feeding
systems 1 and 2.)
Figure 7.12
A simple network setup where a user communicates with a server in a data
center over a communication channel created inside a network. Propagation
delay (Pd) and rate (R) are two important characteristics of a channel. (In
the figure, Pd = 10 nsec and R = 10 Mbps.)
a. Identify all the possible events which can occur in this system.
b. Draw an event graph which describes the operation of this system.
7.3 Consider the event graph for the single-server queuing system with
reneging (see Figure 7.8). Assume that the renege time of each packet
is different. In this case, the order of reneges is not necessarily the same
as the order of arrivals to the system. Modify the event graph in Figure
7.8 to reflect this new situation.
CHAPTER 8
Building Simulation
Programs
Figure 8.1
In time-driven simulation, simulated time evolves in increments of size ∆t.
The variable clock represents the current simulated time, which advances in
increments of size Delta_T. (The figure shows the time axis divided into slots
numbered 1 through 5.)
Listing 8.1 shows a time-driven simulation program for a discrete-
time model of the single-server queueing system. In this case, the arrival
and departure processes are Bernoulli random processes (see Figure 4.14).
In each time slot, an arrival and departure can occur with probabilities
Pa and Pd , respectively. The system will be simulated for a period of
Total_Number_Of_Slots slots. There is only one state variable which is
Q.
The simulation loop starts on line 15. In each iteration, the simulated time
is updated by Delta_T. Then, a random number is generated to check if an
arrival has occurred (see line 17). The auxiliary variable A is set to one if there
is an arrival. Similarly, on lines 21 and 22, a random number is generated and
the auxiliary variable D is set to one if there is a departure. Finally, the state
variable Q is updated at the end of the simulation loop.
Listing 8.1
A time-driven simulation program for the discrete-time single-server queueing
system.
1 from random import *
2 from statistics import *
3
7 clock = 0
8 Delta_T = 1
9
10 Total_Number_Of_Slots = 10000
11
14 # Simulation loop
15 for n in range(1, Total_Number_Of_Slots):
16 A = 0 # Auxiliary variable for indicating an arrival
17 D = 0 # Departure
18 clock = clock + Delta_T
19 if random() <= Pa:
20 A = 1
21 if random() <= Pd and Q > 0:
22 D = 1
23 # Update state variable
24 Q = Q + (A - D)
Figure 8.2
Arrival and departure processes and their random variables in continuous-
and discrete-time queues.
Figure 8.3
In event-driven simulation, simulated time evolves in steps of random sizes
(∆t₁ ≠ ∆t₂). (The figure marks arrival and departure events at irregular points
on the time line, e.g., at times 0, 5, 7, 15, 21, and 25.)
one with the earliest event time. Fortunately, in Python, this list is already
implemented for us. For more details, see section A.7 in Appendix A.
Figure 8.4
An event-driven simulation program has two independent components: simu-
lator and model. (The simulator applies events to the model; executing the
model produces new events that are returned to the simulator, and performance
estimates are produced as output.)
Figure 8.5
How a random number u is used to generate an event. (A seed initializes the
RNG, which produces u; the RVG maps u to a random variate v; the REG
turns v into an event.)
Executing the simulation model results in new events which are passed to
the simulator. After sorting them, the simulator applies them back to the sim-
ulation model. Whenever an event is applied, the current values of some state
variables are recorded in predefined output variables. These values (or sam-
ples) will eventually be used for computing statistics about the performance
of the system under study.
Figure 8.6 shows the general structure of any event-driven simulation pro-
gram. The two components mentioned above are explicitly identified. There
are mainly four steps. Step 1 and 2 are part of the simulator. In Step 1, the
program is initialized. Parameters are read from the user and variables are de-
clared. The event list is created and initial events are inserted into it. Finally,
the simulation clock is set to zero. After that, in Step 2, the simulation loop
is executed. In each iteration of this loop, the next event is fetched from the
event list. It is the event with the earliest event time. The clock is updated
and the event handler of the event is called.
In Step 3, the model is executed as a result of calling event handlers. Inside
Figure 8.6
A flowchart of the event-driven simulation program. (Simulator — Step 1,
initialization: simulation parameters, state variables, output variables, event
list, clock = 0; Step 2, simulation loop: ev = EventList.Next_Event(), clock =
ev.Time, ev.Event_Handler(clock). Model — Step 3, one event handler per event
type: update state and output variables, generate and schedule new events.
Step 4: output.)
each event handler, state variables are first updated. Then, they are sampled
and their current values are stored in the corresponding output variables.
Finally, new events are generated and they are passed to the simulator. Steps
2 and 3 are executed repeatedly until the termination condition of the simulation
loop becomes true; for instance, the simulation loop should be terminated when
the event list becomes empty. In the final step of the program (i.e., Step
4), statistical estimates of the performance measures are computed using the
values of state variables stored in the output variables.
Table 8.1
Mapping concepts to code in Listing 8.2.
Operations Lines
Initialization 6 - 23
REG for the arrival event 25 - 29
REG for the departure event 31 - 35
Event handler for the arrival event 37 - 47
Event handler for the departure event 49 - 58
Insert initial events into the event list 61 - 63
Simulation loop 80 - 83
Statistical summaries 96 - 104
Listing 8.2
An event-driven simulation program for the single-server queueing system.
1 from random import *
2 from queue import *
3 from statistics import *
4 from math import *
5
6 # Simulation parameters
7 lamda = 0.2
8 mu = 0.3
9 n = 10000 # Number of simulated packets
10
17 # State variables
18 Q = 0
19 S = False # Server is free
20
21 # Output variables
22 arrs = []
23 deps = []
24
25 # Event list
26 evList = None
27
91
92 def main():
93 global arrs, deps
94 m = 50 # Number of replications
95 Samples = []
96 for i in range(m):
97 d = []
98 seed() # Reseed RNG
99 sim()
100 d = list( map(lambda x,y: x-y, deps, arrs) )
101 Samples.append( mean(d) )
102
8.5 SUMMARY
There are two approaches to writing simulation programs: time-driven and
event-driven. The second approach is the most common one. A template for
discrete-event simulation programs was proposed in this chapter. In addi-
tion, several programming issues were mentioned and their solutions were
suggested.
8.6 EXERCISES
8.1 Write a time-driven simulation program for the single-server queueing
system where the arrival process is Poisson. Assume that in each time
slot, one departure will occur if the queue is not empty. Compute the
average delay through this system.
8.2 Consider the system configuration in Figure 8.7. Write a discrete-event
simulation program that simulates this system and computes the average
delay through it.
8.3 Consider the single-server queueing system with reneging (see Figure
7.8). After waiting for five minutes in the queue, a customer reneges.
Write a discrete-event simulation program to estimate the average time
between customers who renege.
Figure 8.7
Two single-server queueing systems in series with external arrivals. (System 1
receives external traffic u₁ and feeds system 2, which also receives external
traffic u₂.)
III
Problem-Solving
CHAPTER 9
The Monte Carlo Method
The Monte Carlo (MC) method was born during the second world war.
It was used in the simulation of atomic collisions which then resulted in the
first atomic bomb. Nowadays, the MC method is used in different fields such
as mathematics, physics, biology, and engineering. In its simplest form, a MC
method is an algorithm that uses random variates to compute its output. In this
chapter, we are going to explore through concrete applications the usefulness
of the MC method. In addition, several enhanced versions of the original MC
method are discussed.
x² + y² ≤ r²    (9.1)
Both x and y take values from the interval [−1, +1]. r has a fixed value of 1.
In Figure 9.1, there are two regions: Circle (C) and Square (S). S contains
C. From measure theory, the probability that a point (x, y) lies inside C is
Figure 9.1
Setup used for performing MC simulation to estimate π. (A circle of radius
r = 1 centered at the origin is inscribed in a square of side 2r whose corners
are at (±1, ±1); the random point (x, y) is drawn inside the square.)
given by:
P[(x, y) ∈ C] = (measure of C) / (measure of S)
             = (area of C) / (area of S)
             = πr² / (4r²)                       (9.2)
             = π / 4.
Hence, the following equation for π can be deduced.
π = 4 · P. (9.3)
Now, we have an expression for π. However, we still need to estimate the
value of P . Since P is the probability of an event, a binary (i.e., Bernoulli)
random variable should be used in the simulation. This variable is defined as
follows:

Z = 1 if (x, y) ∈ C, and Z = 0 otherwise.    (9.4)
The expected (i.e., average) value of Z represents the value of P . It is the
proportion of times the event of interest (i.e., {(x, y) ∈ C}) occurs in a long
series of trials. It can mathematically be expressed as follows:
E[Z] = 1 · P[{(x, y) ∈ C}] + 0 · P[{(x, y) ∉ C}]
     = P[{(x, y) ∈ C}]                             (9.5)
     = π / 4.
Listing 9.1
Python procedure for estimating π using MC simulation.
1 from random import *
2 from statistics import *
3
4 N = 100000
5
6 Z = []
7 for i in range(N):
8 x = uniform(-1, 1)
9 y = uniform(-1, 1)
10 if x**2 + y**2 <= 1:
11 Z.append(1)
12 else:
13 Z.append(0)
14
π = 4 · E[Z]. (9.6)
Figure 9.2
Setup used for performing MC simulation to estimate a one-dimensional inte-
gral. (The integrand f(x) over [a, b] is enclosed by a rectangle J of height c;
region I is the area under the curve.)
The probability that a randomly generated point falls inside region I can
be computed as follows:
P[(x, y) ∈ I] = (measure of region I) / (measure of region J)
             = (area of region I) / (area of region J)        (9.9)
             = A_I / A_J,

where the area of region J is equal to A_J = c · (b − a).
Hence, the integral can be estimated using the following estimator:

A_I = P · [(b − a) · c]    (9.10)
P = E[Z] ≈ (1/N) · Σ_{i=1}^{N} Z_i    (9.11)

where Z_i is a Bernoulli random variate that can be generated using the following equation:

Z = 1 if (x, y) ∈ I, and Z = 0 otherwise.    (9.12)
Listing 9.2 shows how a one-dimensional integral can be estimated using the
CMC method.
Listing 9.2
Python procedure for estimating a one-dimensional integral.
1 from random import *
2 from statistics import *
3
4 # Specify parameters
5 a = 1
6 b = 8
7 N = 100000
8
9 # Integrand
10 def f(x):
11 return x**2
12
13 # Find value of c
14 c = f(b)
15
16 # Area of rectangle
17 A_J = (b-a) * c
18
19 Z = [0]*N
20 for i in range(N):
21 x = uniform(a, b)
22 y = uniform(0, c)
23 if y <= f(x):
24 Z[i] = 1
25
Figure 9.3
The goal of the Buffon’s needle experiment is to compute the probability that
a needle of length l will intersect a horizontal line in a set of horizontal lines
separated by a distance equal to d.
Figure 9.4
Two random variables (a and φ) are used in the simulation. The needle will
intersect with the closest horizontal line if b ≥ a.
ables. These two random variables uniquely identify the location of the needle
on the floor. The two random variables are the following:
a: Distance from the midpoint of the needle to the closest horizontal line
(a ∈ [0, d/2])
θ: Angle the needle makes with the closest horizontal line (θ ∈ [0, π])
Figure 9.4 shows a portion of the floor with one needle and two horizontal
lines. It also shows how the two random variables defined above are used
to characterize the location of the needle. Clearly, the needle will intersect
a horizontal line if a ≤ b. Figure 9.5 is a reminder of how the value of b
can be computed by using basic trigonometry. The exact expression for the
probability is the following [5]:
P = 2l / (πd).    (9.13)
Listing 9.3
Python procedure for the Buffon’s needle experiment.
1 from random import *
2 from math import *
3 l = 1
4 d = 1
5 n = 1000000
6 count = 0
7 for i in range(n):
8 a = uniform(0, d/2)
Figure 9.5
According to trigonometry, the length of the line segment b is equal to the
value of the y-coordinate of the upper tip of the needle.
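Only the opening lines of Listing 9.3 appear above. The following self-contained sketch (not the book's code) completes the experiment under the same setup: draw a and θ, compute b = (l/2) · sin θ as in Figure 9.5, and count the trials in which a ≤ b.

from random import uniform
from math import sin, pi

l = 1          # Needle length
d = 1          # Distance between the horizontal lines
n = 1000000
count = 0

for _ in range(n):
    a = uniform(0, d / 2)          # Distance from the needle's midpoint to the closest line
    theta = uniform(0, pi)         # Angle with the closest horizontal line
    b = (l / 2) * sin(theta)       # Vertical reach of the needle (Figure 9.5)
    if a <= b:                     # The needle crosses the line
        count += 1

P_est = count / n
print("Estimated P =", P_est, " exact =", 2 * l / (pi * d))   # Eqn. (9.13)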
9.3.2 Reliability
Consider the block in Figure 9.6(a) where the input is connected to the output
if the switch is closed. The probability of this event (i.e., switch is closed)
corresponds to the portion of time the block is working. Let R be the reliability
of a block. Then, the reliability of the system (i.e., Relsys ) in Figure 9.6(b) is
R³. It is the product of the reliabilities of the three blocks in series. Next, we
develop a simulation model to computationally estimate this number. Since we
know the exact answer in advance, we can easily tell if the proposed Python
procedure is correct.
First, let us define the sample space of the problem. The state of the system
(denoted by si ) is a set of three random variables (denoted by b1 , b2 , and b3 ),
where each random variable corresponds to the state of an individual block in
Figure 9.6
Reliability is the probability that the input is connected to the output. (a)
The input is connected to the output if the switch is closed. (b) Reliability of
the overall system is a function of the reliabilities of the individual blocks. In
this case, Relsys = R³, where R is the reliability of a block.
Table 9.1 shows the individual points in the sample space, which is of size
2³. It also shows the probability of each possible system state.
Now, let us define a new random variable φ over the sample space of system
states. This random variable is defined as follows:
φ(si) = 1 if the input is connected to the output, and φ(si) = 0 otherwise.    (9.15)
Next, the system reliability can be calculated as follows:
Relsys = E[φ] = Σ_{i=1}^{2³} φ(si) · P[si].    (9.16)
The random variable φ will be one for s8 only. This event occurs with a
probability of p3 = 0.343. Hence, the reliability of the system is calculated as
follows:
Table 9.1
Sample space of the system in Figure 9.6(b) with probability of each sample
point.
Relsys = E[φ]
       = Σ_{i=1}^{7} φ(si) · P[si] + φ(s8) · P[s8]
       = Σ_{i=1}^{7} 0 · P[si] + 1 · P[s8]
       = 1 × 0.343
       = 0.343.
Listing 9.4 shows how this reliability can be estimated using the CMC
method. A realization of the system is generated on lines 15-17. Then, the
realization is checked if it represents a connected system, which is the event
of interest.
Listing 9.4
Estimating the reliability of the system in Figure 9.6(b).
1 from random import *
2
3 Num_Trials = 100000
4 count = 0
5 p = 0.3 #Probability a block is working
6
7 def Phi(X):
8 if sum(X) == 3:
9 return 1
10 else:
11 return 0
12
13 for i in range(Num_Trials):
14 X = []
15 for j in range(3):
16 if random() <= p: X.append(1)
17 else: X.append(0)
18 count = count + Phi(X)
19
Hence, the second term in Eqn. (9.17) evaluates to zero. However, since the
number of samples is finite, the samples of Y are going to reduce the variance
in the estimator of E[X]. The result is an estimator that is better than using
only CMC.
As an example, consider the following integral which is to be estimated
using control variates:

I = ∫_0^1 e^x dx.    (9.19)
Listing 9.5
Estimating an integral in Eqn. (9.19) using the method of control variates.
1 from random import *
2 from math import *
3 from statistics import *
4
5 n = 10000
6
7 Y_mean = 1/2
8
9 X = []
10 Y = []
11
12 for i in range(n):
13 u = random()
14 X.append( exp(u) )
15 Y.append(u)
16
17 X_bar = mean(X)
18 Y_bar = mean(Y)
19
24 for i in range(n):
25 A.append( (X[i] - X_bar) * (Y[i] - Y_bar) )
26 B.append( (Y[i] - Y_bar)**2 )
27
28 c = sum(A) / sum(B)
29
40 # Output
41 # I(CMC) = 1.7299 , Variance = 0.2445
42 # I(CV) = 1.7185 , Variance = 0.0039
E[f(x) | x ∈ S_i] = (1/N_i) · Σ_{j=1}^{N_i} f(x_{ij}),    (9.21)
Listing 9.6
Estimating the integral ∫_0^1 e^{−x} dx using the crude Monte Carlo and stratified
methods.
1 from random import *
2 from math import *
3 from statistics import *
4
5 n = 10000
6
7 X = []
8
9 for i in range(n):
10 u = random()
11 X.append( exp(-u) )
12
14
15 Y = []
16
17 K = 4 # Number of strata
18 N_i = int(n / K) # Number of samples from each stratum
19
20 for i in range(K):
21 for j in range(N_i):
22 a = i * 1/K
23 b = a + 1/K
24 u = uniform(a,b)
25 Y.append( exp(-u) )
26
29 # Output
30 # I(CMC) = 0.6323 , Variance = 0.0325
31 # I(Stratified) = 0.6309 , Variance = 0.0323
s* = (v + v′) / 2,    (9.24)

where v′ is the antithetic variate of v. Figure 9.7 illustrates how the antithetic
value is calculated for each point in the sample space of the random experiment
of throwing two dice. Surprisingly, this simple technique leads to a significant
reduction in the variance for the same number of samples.
Figure 9.7
Sample space of the random experiment of throwing two dice. (The PMF P[x]
rises from 1/36 at x = 2 to 6/36 at x = 7 and falls back to 1/36 at x = 12.)
For the random variate 4, its antithetic value is 10, and (4 + 10)/2 = 7 is
generated instead if antithetic sampling is used.
Listing 9.7
Estimating the mean of a uniform random variable using antithetic sampling.
1 from random import *
2 from statistics import *
3
4 n = 1000
5
16 for i in range(n):
17 v = uniform(a, b)
18 S_cmc.append( v )
19 v_ = a + b - v
20 S_ant.append( (v + v_) / 2 )
21
25 # Output
26 # Mean(S_cmc) = 25.6361 , Variance = 178.0452
27 # Mean(S_ant) = 25.0 , Variance = 0.0
Listing 9.8 shows how the value of the following integral can be estimated
using antithetic sampling:

∫_0^1 e^{x²} dx.    (9.25)
Although both the crude Monte Carlo and antithetic sampling methods
achieve a good accuracy, they significantly differ in the variance. Antithetic
sampling achieves a very low variance for the same number of samples.
Listing 9.8
Estimating the value of the integral in Eqn. (9.25) using CMC and antithetic
sampling. The reduction in variance is about 12%.
1 from random import *
2 from statistics import *
3 from math import *
4
5 n = 10000
6
7 S_cmc = []
8 S_ant = []
9
10 for i in range(n):
11 u = random()
12 u_ = 1 - u
13 S_cmc.append( exp(u**2) )
14 S_ant.append( ( exp(u**2) + exp(u_**2) ) / 2)
15
19 # Output
20 # Mean(S_cmc) = 1.4693 , Variance = 0.2296
21 # Mean(S_ant) = 1.4639 , Variance = 0.0287
Figure 9.8
With dagger sampling, three trials are performed using a single random num-
ber. Hence, three samples are generated. (In the figure, X = ‘H’ if u ≤ 0.3 and
X = ‘T’ if u > 0.3; the unit interval is partitioned at 0.3, 0.6, and 0.9, and the
single random number u = 0.4 yields X₁ = ‘T’, X₂ = ‘H’, and X₃ = ‘T’.)
Listing 9.9
Estimating the reliability of the system in Figure 9.6(b) using dagger sampling.
1 from random import *
2 from math import *
3
4 Num_Trials = 10000
5 count = 0
6 p = 0.3 # Probability a block is working
7
10 def Phi(X):
11 if sum(X) == 3:
12 return 1
13 else:
14 return 0
15
20 for i in range(Num_Trials):
21 s1 = [0] * 3
22 s2 = [0] * 3
23 s3 = [0] * 3
24 for j in range(3):
25 u = random()
26 if u <= p:
27 s1[j] = 1
28 elif p < u <= 2*p:
29 s2[j] = 1
30 elif 2*p < u <= 3*p:
31 s3[j] = 1
32
39 # Output
40 # Exact = 0.027
41 # Rel_sys = 0.028
Figure 9.9
Values of g(x) are very close to zero over region 1. Probability distribution of
x is very close to zero over region 1. Another probability distribution has to
be used in order to generate samples from region 1 where the values of the
function g(x) are more interesting. (Panel (a) shows g(x) together with the
original distribution p(x) over regions 1 and 2; panel (b) shows g(x) together
with the alternative distribution q(x).)
Computationally speaking, the new notation for the average of g(x) com-
puted using importance sampling is the following:
E[g(x)] ≈ (1/N) · Σ_{i=1}^{N} g(x_i) · w(x_i),  where x_i ∼ q.    (9.26)
Listing 9.10 shows how this sampling procedure is used in estimating the
expected value of a function of a random variable.
Listing 9.10
Estimating the average of a function using importance sampling.
1 from random import *
2 from math import exp, sqrt, pi
3 N = 100000
4 E_g = 0
5
6 def g(x):
7 return 8*x
8
9 for i in range(N):
10     y = normalvariate(0, 10)                # Sample from the proposal q(x) = N(0, 10)
11     p_y = 1.0 if 0 <= y <= 1 else 0.0       # Target density p(x): uniform over [0, 1]
12     w = p_y / (exp(-y**2/200) / sqrt(200*pi))  # Importance weight w(y) = p(y)/q(y)
13 E_g = E_g + g(y) * w
14
9.5 SUMMARY
Monte Carlo is a powerful tool for estimating probabilities and expected
values. The reader is reminded that the design of MC algorithms is not as
straightforward as one might think. This is especially true in applications con-
taining events with small probabilities (i.e., rare events).
9.6 EXERCISES
9.1 Using the CMC method, write a program for estimating the probability
P [X > 5], where X is a Poisson random variable with parameter λ = 2.
Compare the estimated probability with the exact value.
9.2 Consider the network in Figure 9.10 where the length of each edge is a
random variable normally distributed over [1, 5]. The random variables
are IID. Write a program for estimating the expected length of the
shortest path between nodes A and D.
9.3 Using the method of control variates, estimate the integral
I = ∫_0^2 e^{−x²} dx using an appropriate Y.
9.4 Using importance sampling, write a program for estimating the prob-
ability P [X ∈ [10, ∞)], where X has an exponential distribution with
parameter λ = 1.
Figure 9.10
Estimating the shortest path between nodes A and D (see Exercise 9.2). (The
network has four nodes A, B, C, and D connected by five edges with random
lengths X₁ through X₅.)
IV
Sources of Randomness
CHAPTER 10
Random Variate
Generation
(a) A continuous CDF where u = 0.5 corresponds to v = 8 (u = 1 − e^(−v)).
(b) Multiple random numbers (e.g., u = 0.5 and u = 0.6) are mapped onto the same random variate (v = 2) in a discrete CDF.
Figure 10.1
Generating random variates from cumulative distribution functions.
On the other hand, the inversion method works differently on discrete ran-
dom variables. Figure 10.1(b) shows the CDF of a discrete random variable. In
this case, the relationship is many-to-one. That is, multiple random numbers
can be mapped onto one random variate. This only applies to the iCDF of
discrete random variables.
x · b − x · a + a = y
y = x · (b − a) + a.
Replacing y with v and x with u gives the RVG for the uniform distribution:
v = u · (b − a) + a.
F(x) = 1 − e^(−µx).
3. Swap x and y.
x = 1 − e^(−µy).
4. Solve for y.
e^(−µy) = 1 − x.
y = −(1/µ) · ln(1 − x).
5. Replace y with v and x with u.
v = −(1/µ) · ln(1 − u).
Since 1 − u is itself a uniform random number on (0, 1), it can be replaced with u.
6. The following expression is used as the exponential RVG:
v = −(1/µ) · ln(u).
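A minimal sketch of the resulting exponential generator (the rate µ = 2.0 is chosen only for illustration):

from random import random
from math import log

mu = 2.0                       # rate parameter (illustrative)

def exp_variate(mu):
    u = random()
    return (-1 / mu) * log(1 - u)   # using 1 - u (also uniform) avoids log(0)

print([round(exp_variate(mu), 4) for _ in range(5)])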
Example 10.3
1. Start with the CDF of the random variable.
F(x) = x / (x + 1);   x > 0.
3. Swap x and y.
x = y / (y + 1).
4. Solve for y.
x · (y + 1) = y
x·y + x = y
x = y − x·y
x = y · (1 − x).
y = x / (1 − x).
5. Replace y with v and x with u to get the RVG:
v = u / (1 − u).
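A minimal sketch of the generator derived in Example 10.3:

from random import random

def variate():
    u = random()
    return u / (1 - u)        # RVG derived above; 1 - u is never zero

print([round(variate(), 3) for _ in range(5)])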
The CDF for the above PMF can be expressed as the following:
F(i) = P(X ≤ i) = Σ_{j=0}^{i} p_j.   (10.2)
Hence, the RVG for a discrete random variable can be described as follows:
v =  0        if 0 ≤ u < p₀,
     1        if p₀ ≤ u < p₀ + p₁ (= Σ_{j=0}^{1} p_j),
     2        if Σ_{j=0}^{1} p_j ≤ u < Σ_{j=0}^{2} p_j,
     ...
     (n − 1)  if Σ_{j=0}^{n−2} p_j ≤ u < Σ_{j=0}^{n−1} p_j.        (10.3)
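As a sketch of Eqn. (10.3), the following generator works for any finite PMF supplied as a list of probabilities; the PMF used here is hypothetical and is not the one plotted in Figure 10.2.

from random import random

p = [0.1, 0.2, 0.4, 0.2, 0.1]      # hypothetical PMF over {0, 1, 2, 3, 4}

def discrete_variate(p):
    u = random()
    cum = 0.0
    for v, pj in enumerate(p):
        cum += pj                  # running sum p0 + ... + pv
        if u < cum:
            return v
    return len(p) - 1              # guard against floating-point round-off

print(discrete_variate(p))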
(Bar plot of a PMF P(x) over the values x = 0, 1, 2, 3, 4.)
Figure 10.2
Generating random variates using the PMF of a discrete random variable.
Listing 10.1
Generating random variates using the information in Figure 10.2(a).
1 import random as rnd
5 u = rnd.random()
6
18 print (’u = ’ , u , ’ , v = ’ , v)
Listing 10.2
Generating Bernoulli random variates.
1 import random as rnd
2
5 u = rnd.random()
6
7 if 0 <= u <= p:
8 print( ’1’ ) # Success
9 else:
10 print( ’0’ ) # Failure
Listing 10.3
Generating binomial random variates.
1 import random as rnd
2
14 for i in range(n):
15 count = count + Bernoulli(p)
16
17 print( ’v = ’ , count )
Listing 10.4
Generating geometric random variates.
1 import random as rnd
2
7 def Bernoulli(p):
8 u = rnd.random()
9 if 0 <= u < p:
10 return 1
11 else:
12 return 0
13
14 while(Bernoulli(p) == 0):
15 count = count + 1
16
17 print( ’v = ’ , count )
(Plot of the PDF f(x) enclosed by the auxiliary PDF g(x), with the values f(x = 3) and g(x = 3) marked.)
Figure 10.3
Generating a random variate from the PDF f(x) using the auxiliary PDF g(x).
x = 3 is accepted if u ≤ f(3)/g(3).
In this method, the PDF of the random variable (i.e., f (x)) is used in-
stead of its CDF. In addition, another auxiliary PDF g(x) is used. Only one
assumption is made about g(x). That is, we know how to generate a random
variate using g(x). Thus, g(x) can be as simple as the uniform PDF which has
a rectangular shape.
Figure 10.3 shows how g(x) can be used to enclose f (x). In fact, any PDF
with a very complicated shape can always be enclosed within a uniform PDF.
In this way, two regions are created. The first one is enclosed by the curve
of f (x). The second one is above the curve of f (x) and below that of g(x).
Now, when a point (x, y) is randomly generated such that 1 ≤ x ≤ 5 and
0 ≤ y ≤ 5, we can visually tell if it lies below the curve of f (x) or above it. If
it lies below the curve, then the x-coordinate of the point is reported as the
random variate generated according to f (x), which is what we are after. But,
how do we generate the random point (x, y)?
The coordinates are generated as uniform random variates. x is uniformly
distributed between 1 and 5. Also, y is uniformly distributed between 0 and
5. x is accepted if y ≤ f (x), where y = u · g(x). It is this condition that will
make sure that the generated random variates will follow the distribution of
f (x).
(a) The functions f(x) and g(x) over the interval [0, 4]. (b) Histogram of the generated random variates.
Figure 10.4
Random variates generated using the rejection method.
We want to generate random variates such that they follow f (x). Figure
10.4(a) shows the shape of f (x). It also shows g(x) which encloses f (x). In
this example, since the maximum value of f (x) is 0.8, we choose g(x) = 1 for
all x ∈ [0, 4]. g(x) is always a constant and it can be greater than one.
Listing 10.5 shows a procedure for computing random variates from f (x).
The functions f (x) and g(x) are declared on lines four and eight, respectively.
Then, on line 12, the random variate generation process starts. First, x is
randomly assigned a value from its set of values. Then, a uniform number is
generated and then used in the comparison on line 15. Notice that u × g(x)
represents a value on the y-axis. If this value is less than or equal to the value
of f (x) at the same x, then the point (x, y) lies below the curve of f (x) and
x can be accepted as a random variate. Otherwise, the process is repeated. In
fact, f(x)/g(x) is a probability (i.e., 0 ≤ f(x)/g(x) ≤ 1).
In order to check the validity of the procedure in Listing 10.5, one million
random variates are generated and then a histogram is constructed. Figure
10.4(b) shows the histogram. Clearly, the distribution of the generated random
variates follows that of f (x).
Listing 10.5
Generating random variates based on the rejection method.
1 import random as rnd
2 import math as M
3
4 def f(x):
5     return 0.2 * M.exp(-(x - 0.2)**2.0) + \
6            0.8 * M.exp(-(x - 2.0)**2.0 / 0.2)
7
8 def g(x):
9 return 1 # Uniform PDF
10
11 Stop = False
12 while not Stop:
13 x = rnd.uniform(0, 4) # Generate x
14 u = rnd.random() # y = u * g(x)
15 if u <= f(x) / g(x): # y <= f(x)
16         print(x)
17 Stop = True
(The two halves of f(x): f₂(x) = (1/3)·x + 1 on [−3, 0] and f₁(x) = (−1/3)·x + 1 on [0, 3]. Each half-interval has length 3, the full interval has length 6, and 3/6 = 0.5.)
Figure 10.5
Triangular distribution. f (x) is composed of two functions each of which is
defined over half the interval over which f (x) exists.
f(x) = (−1/3) · x + 1   if 0 ≤ x ≤ 3,
       (1/3) · x + 1    if −3 ≤ x < 0,
       0                otherwise.           (10.5)
where
f₁(x) = (−1/3) · x + 1,
and
f₂(x) = (1/3) · x + 1.
2. Determine p1 and p2 . From Figure 10.5, the size of the interval over
which each function is defined is 3. The size of the interval over which
the original function f is defined is 6. Hence, each function is defined
over an interval that represents 50% of the domain of the original func-
tion.
p1 = p2 = 0.5.
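A minimal sketch of the composition method for this triangular shape follows. Each half is selected with probability 0.5 and then sampled by inverting its own normalized CDF; the closed-form inverse 3 − 3·sqrt(1 − u) is worked out here for the sketch and is not given in the text.

from random import random
from math import sqrt

def triangular_variate():
    # Step 1: pick a component with probability p1 = p2 = 0.5.
    u1 = random()
    # Step 2: sample |x| from the chosen half by inverting its normalized CDF.
    # For the right half, F(x) = (6x - x**2)/9 on [0, 3], so x = 3 - 3*sqrt(1 - u).
    u2 = random()
    magnitude = 3 - 3 * sqrt(1 - u2)
    return magnitude if u1 < 0.5 else -magnitude

print([round(triangular_variate(), 2) for _ in range(5)])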
Y = X1 + X2 + ... + XK . (10.6)
y = x1 + x2 + ... + xK .
Listing 10.6 shows how an Erlang variate can be generated using the con-
volution method. Remember that an Erlang random variable is a sum of k
independent exponential random variables. For more details, see Section 4.2.7.
Figures 10.6(a) and 10.6(b), respectively, show the PDF of the Erlang random
variable and the histogram constructed from the Erlang random variates generated
using the convolution method. Clearly, the two graphs match.
Listing 10.6
Generating an Erlang random variate using the convolution method.
1 from random import *
2 from math import *
3
4 k = 10
5 theta = 1.5
6
7 y = 0
8
9 for i in range(k):
10 u = random()
11 x = (-1 / theta) * log(1-u) # Exponential variate
12 y = y + x
13
14 print("Y", y)
(a) PDF. (b) Histogram.
Figure 10.6
The shape of the histogram constructed using the random variates generated
using the convolution method resembles that of the PDF of the Erlang random
variable.
and variance of 1 can also be generated using the convolution method. The
reader should be reminded of the following properties of the uniform random
variable defined on (0, 1):
1. From Eqn. (4.22) and for a = 0 and b = 1,
µ = 1/2,
Figure 10.7
Histogram constructed from standard normal variates generated using the
convolution method in Listing 10.7.
Listing 10.7
Generating a standard normal random variate using the convolution method.
1 from random import *
2
3 z = -6
4 for i in range(12):
5 u = random()
6 z = z + u
7
8 print("Z = ", z)
Inequality (10.9) states that five arrivals can be reported to have occurred if
we can sequentially generate five random numbers whose product is greater
than or equal to e^(−λ). As shown in Figure 10.8, the sixth arrival cannot be
considered to have arrived during the first time slot because the sum of the
six inter-arrival times would be greater than 1. This is the stopping condition
that should be used in the random variate generation scheme for the Poisson
distribution.
Listing 10.8 shows how a Poisson random variate can be generated in
Python. The left-hand side of inequality (10.9) is implemented by line number
12. The right-hand side, however, is the stopping condition of the while loop
(line number 10). So, as long as the product of the generated random numbers
does not exceed the threshold set on line number 7, the variable count is
incremented in every iteration of the while loop. Once the product of the
generated random numbers exceeds the threshold, the while loop is exited and
the content of the variable count is reported as the Poisson random variate.
² Use the following rules: ln(a · b) = ln(a) + ln(b) and e^(ln(x)) = x.
(Arrivals A₁ through A₆ with inter-arrival times T₁ through T₆ inside the interval [0, 1], i.e., time slot #1.)
Figure 10.8
Arrivals during a time slot can be modeled as a Poisson random variable.
Listing 10.8
Generating a Poisson random variate.
1 import random as rnd
2 import math
3
7 b = math.exp(-lmda)
8 u = rnd.random()
9
10 while u >= b:
11 count = count + 1
12 u = u * rnd.random()
13
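For convenience, here is a complete minimal version of this generator with the initialization written out; λ = 10 is chosen to match Figure 10.9.

import random as rnd
import math

lmda = 10                  # lambda, matching Figure 10.9
count = 0

b = math.exp(-lmda)        # threshold e^(-lambda)
u = rnd.random()

while u >= b:              # multiply uniforms until the product drops below e^(-lambda)
    count = count + 1
    u = u * rnd.random()

print("v =", count)        # the Poisson random variate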
Figure 10.9(a) shows the graph of the Poisson distribution using its PMF
with λ = 10. On the other hand, Figure 10.9(b) shows a histogram of one
million Poisson random variates generated using the program in Listing 10.8.
In this figure, the y-axis represents the number of variates in every bin of
the histogram. To obtain the probability of the Poisson variate corresponding
to every bin, the size of the bin is divided by the total number of generated
variates. But, clearly, the shape of the histogram resembles that of the PMF
of the Poisson random variable with λ = 10.
(a) PMF of the Poisson random variable with λ = 10. (b) Histogram of the generated variates v (bin counts N), also with λ = 10.
Figure 10.9
The shape of the histogram constructed using simulated Poisson random vari-
ates resembles that of the PMF of a Poisson random variable.
(Top row: histograms of the transformed variates z₁ and z₂. Bottom row: histograms of the uniform random numbers u₁ and u₂.)
Figure 10.10
Using the procedure in Listing 10.9, the uniform random numbers are trans-
formed into random standard normal variates.
the transformation. The first row, however, shows the resulting histograms of
z1 and z2 after applying the Box-Muller transformation. Clearly, the shape of
the new histograms follows that of the standard normal distribution.
In order to generate a random variate from a non-standard normal dis-
tribution with µ ≠ 0 and σ ≠ 1, the following equation can be used. In this
equation, z is a random variate from a standard normal distribution:
v = µ + σ × z.
Listing 10.9
Generating a random variate from a standard normal distribution.
1 from math import sqrt, log, sin, cos, pi
2 from random import random
3
4 def normal(u1,u2):
5 z1 = sqrt( -2 * log(u1) ) * cos ( 2 * pi * u2 )
9 u1 = random()
10 u2 = random()
11 z = normal(u1,u2)
12
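For reference, a complete minimal version of the Box–Muller transform together with the scaling v = µ + σz is sketched below; the values µ = 5 and σ = 2 are illustrative.

from math import sqrt, log, sin, cos, pi
from random import random

def box_muller():
    u1 = 1.0 - random()            # 1 - u is also uniform and avoids log(0)
    u2 = random()
    z1 = sqrt(-2 * log(u1)) * cos(2 * pi * u2)
    z2 = sqrt(-2 * log(u1)) * sin(2 * pi * u2)
    return z1, z2

mu, sigma = 5.0, 2.0               # illustrative non-standard parameters
z1, z2 = box_muller()
print(round(mu + sigma * z1, 4))   # v = mu + sigma * z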
10.6 SUMMARY
Techniques for generating random variates from many important continuous
and discrete probability distributions have been introduced and illustrated by
examples. The correctness of these techniques has been checked by comparing
the shapes of the histograms constructed from the generated random variates
with the shapes of the theoretical probability distribution functions.
10.7 EXERCISES
10.1 Consider the following triangular density function defined on [−1, 1]:
f(x) = 1 + x   if −1 ≤ x ≤ 0,
       1 − x   if 0 < x ≤ 1,
       0       otherwise.
a. Draw f (x).
b. Develop a Python program to generate samples from this distribution
using the inversion, rejection, composition, and convolution methods.
c. For each method, generate 10000 random variates and plot the his-
togram. Does the shape of the histogram match that of the given
PDF?
10.2 Write a Python program for generating random variates from the log-
normal probability distribution. Use the fact that the natural logarithm
of a log-normal random variable has a normal distribution.
CHAPTER 11
Random Number
Generation
(The PDF f(u) is constant at 1.0 over the interval [0, 1].)
Figure 11.1
Probability distribution of u.
1. Mean
E(u) = 1/2.   (11.2)
2. Variance
V(u) = 1/12.   (11.3)
3. Expectation of the autocorrelation (the product of two successive numbers)
E(u_i · u_{i+1}) = E(u_i) · E(u_{i+1}) = 1/4.
As will be seen later, for a large set of random numbers, the above three
statistics can be used as a quick (and first) test for uniform randomness (see
Listing 11.1). If the computed values match the above theoretical values, then
the generated random numbers could be uniformly distributed. Of course,
further testing needs to be done.
Listing 11.1
Testing whether a set of random numbers is uniformly distributed.
1 from random import *
2 from statistics import *
3
4 N = 10000
7 corr = 0
8 for i in range(N-1):
9 corr = corr + data[i]*data[i+1]
10 corr = corr / N
11
16 # Output
17 # Mean = 0.5
18 # Variance = 0.08
19 # Autocorrelation = 0.25
Table 11.1
Random variates are repeated after a cycle of size 3. This is due to the repe-
tition in the generated sequence of the random numbers.
r = 7 − ⌊7/5⌋ · 5
  = 7 − ⌊1.4⌋ · 5
  = 7 − 1 · 5
  = 7 − 5
  = 2.
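In Python, the same remainder can be obtained directly with the % operator or written out exactly as above:

print(7 % 5)               # 2
print(7 - (7 // 5) * 5)    # 2, the same computation written out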
i      b^i      b^i (mod 7)
0 1 1
1 3 3
2 9 2
3 27 6
4 81 4
5 243 5
For b = 2 and m = 13:

i      b^i      b^i (mod 13)
0      1        1
1      2        2
2      4        4
3      8        8
4      16       3
5      32       6
6      64       12
7      128      11
8      256      9
9      512      5
10     1024     10
11     2048     7
12     4096     1

For b = 2 and m = 7:

i      b^i      b^i (mod 7)
0      1        1
1      2        2
2      4        4
3      8        1
4      16       2
5      32       4
n 0 1 2 3 4 5 6 7 8
Xn 0 3 9 1 5 3 9 1 5
un 0.0 0.3 0.9 0.1 0.5 0.3 0.9 0.1 0.5
(Seed 1 produces Delay = 13; Seed N produces Delay = 11.)
Figure 11.2
Multiple seeds are used to make different simulation runs. Different paths in
the system state space are explored and the average is computed using the
values resulting from these different paths.
u0 is not considered as part of the sequence. Thus, the sequence will repeat
itself after four steps. That is, the same random number will re-appear after
four steps.
11.5.1 2k Modulus
In order to produce a long sequence of unique random numbers, the values of
the parameters can be set as follows.
X0 is an odd integer,
a = 8t ± 3, where t is a positive integer, and
m = 2^k, where k is equal to the word size of the computer (e.g., 64 bits).
For a, choose the value which is closest to 2^(b/2). If the above recipe is followed,
it is guaranteed that we will get a sequence of 2^(b−2) random numbers before
the sequence is repeated.
Consider a multiplicative congruential random number generator with the
following parameters: t = 1, b = 4, and X₀ = 1. Then a = 8t + 3 = 11 and m = 2⁴ = 16,
and the resulting sequence has a period of size four.
n 0 1 2 3 4 5 6 7 8
Xn 1 11 9 3 1 11 9 3 1
un 0.062 0.688 0.562 0.188 0.062 0.688 0.562 0.188 0.062
u0 is not considered as part of the sequence. Thus, the sequence will repeat
itself after four steps. That is, the same random number will re-appear after
four steps.
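A minimal sketch reproducing the table above (a = 11, m = 16, X₀ = 1):

a, m = 11, 16          # a = 8t + 3 with t = 1, m = 2**4
x = 1                  # odd seed X0

seq = []
for n in range(8):
    seq.append(x)
    x = (a * x) % m    # multiplicative congruential update

print(seq)                              # [1, 11, 9, 3, 1, 11, 9, 3]
print([round(v / m, 3) for v in seq])   # [0.062, 0.688, 0.562, 0.188, ...]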
n 0 1 2 3 4 5 6 7
Xn 3 2 6 4 5 1 3 2
un 0.43 0.29 0.86 0.57 0.71 0.14 0.43 0.29
Figure 11.3
A four-bit linear feedback shift register with characteristic polynomial c(x) = 1 + x^3 + x^4.
Table 11.2
Maximum-length sequence of random numbers generated by the LFSR in
Figure 11.3.
b3  b2  b1  b0    Number
0   0   0   1     1
1   0   0   1     9
1   1   0   1     13
1   1   1   1     15
1   1   1   0     14
0   1   1   1     7
1   0   1   0     10
0   1   0   1     5
1   0   1   1     11
1   1   0   0     12
0   1   1   0     6
0   0   1   1     3
1   0   0   0     8
0   1   0   0     4
0   0   1   0     2
0   0   0   1     1∗
(The figure walks through the bit operations num, num >> 1, num << 1 (aligned), their XOR, and the AND with the mask, yielding temp1.)
Figure 11.4
Computing the first intermediate binary number on line 13 in Listing 11.2.
Figure 11.5
An eight-bit linear feedback shift register with characteristic polynomial
c(x) = 1 + x^4 + x^5 + x^6 + x^8.
Listing 11.2
Generating the maximum-length random sequence from the four-bit LFSR
shown in Figure 11.3.
1 seed = 0b_0001
2 num = seed
3
4 # Define masks
5 mask1 = 0b_0001
6 mask2 = 0b_0110
7 mask3 = 0b_0001
8
9 # Counter
10 period = 0
11
12 while True:
13 print(num)
14
20 period += 1
21
22 if num == seed:
23 break
24
27 # Period: 2^4 - 1 = 15
28 # Numbers: 1 9 13 15 14 7 10 5 11 12 6 3 8 4 2
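The same maximum-length sequence can also be produced with a more compact Galois-style formulation of the LFSR. This is an alternative to Listing 11.2, not the book's code; the tap mask 0b1001 encodes c(x) = 1 + x^3 + x^4.

seed = 0b0001
state = seed
taps = 0b1001              # Galois tap mask for c(x) = 1 + x^3 + x^4

period = 0
while True:
    print(state)           # prints 1 9 13 15 14 7 10 5 11 12 6 3 8 4 2
    lsb = state & 1        # output bit b0
    state >>= 1            # shift right
    if lsb:
        state ^= taps      # feed the output bit back through the taps
    period += 1
    if state == seed:
        break

print("Period =", period)  # 15 = 2**4 - 1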
Listing 11.3
Generating the maximum-length random sequence from an eight-bit LFSR.
1 seed = 0b_00111000
2 num = seed
3
4 # Define masks
5 mask1 = 0b_00000010
6 mask2 = 0b_00000100
7 mask3 = 0b_00001000
8 mask4 = 0b_01110001
9 mask5 = 0b_00000001
10
11 period = 0
12
13 while True:
14     temp1 = ( (num >> 1) ^ (num << 1) ) & mask1
15     temp2 = ( (num >> 1) ^ (num << 2) ) & mask2
16     temp3 = ( (num >> 1) ^ (num << 3) ) & mask3
17 temp4 = ( num >> 1 ) & mask4
18 temp5 = (num & mask5) << 7
19 num = temp1 | temp2 | temp3 | temp4 | temp5
20
21 print(num)
22
23 period += 1
24
25 if num == seed:
26 break
27
Next, we compute the degree of freedom (df) for the χ² statistic. By definition,
the degree of freedom is always K − 1. After obtaining df, we need to find the
row in the chi-squared table corresponding to df = 9:

0.995  0.99  0.95  0.90  0.75  0.50  0.25  0.10  0.05  0.01  0.005
1.73   2.09  3.33  4.17  5.90  8.34  11.4  14.7  16.9  21.7  23.6

Assuming a significance level α = 0.05, the critical value from the above table
is χ²(9, 0.95) = 16.9. Now, since the obtained value (χ² = 7.4) is less than the
critical value, it is concluded that the random numbers in the given sequence
are uniformly distributed with a 95% level of confidence.
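A minimal sketch of this uniformity test (K = 10 equal-width bins, so df = 9 and the critical value 16.9 applies; the sample size N is illustrative):

from random import random

N = 1000                       # number of random numbers to test
K = 10                         # number of equal-width bins
E = N / K                      # expected count per bin under uniformity

data = [random() for _ in range(N)]

O = [0] * K
for u in data:
    O[min(int(u * K), K - 1)] += 1     # bin index, guarded for u == 1.0

chi2 = sum((o - E) ** 2 / E for o in O)
print("chi-squared statistic =", round(chi2, 2))
# Compare against the critical value 16.9 for df = 9 at alpha = 0.05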
Table 11.3
Types of five-digit numbers according to the poker rules.
Combination Type
AAAAA Five of a Kind
AAAAB Four of a Kind
AAABB Full House
AAABC Three of a Kind
AABBC Two Pairs
AABCD One Pair
ABCDE Five Different Digits
2. Choose the first five digits in every random number. You may need to
round the numbers.
Following the above procedure, we will end up with a sequence of five-digit
numbers. Now, we are ready to apply the poker test to the random sequence.
In this test, every random number is treated as a poker hand. Thus, each
random number can be classified using the same poker rules. Table 11.3 shows
the possible combinations of five-digit numbers that are considered in the
poker test. It also shows the type of each combination according to the game
of poker.
Consider a sequence of random numbers of size 100. The following table
gives the distribution of the random numbers in the seven possible categories
in the poker test.
Category                 Ei    Oi    (Oi − Ei)²    (Oi − Ei)²/Ei
Five Different Digits 30 35 25 0.83
One Pair 50 51 1 0.02
Two Pairs 10 9 1 0.1
Three of a Kind 7 3 16 2.29
Full House 1 0 1 1
Four of a Kind 1 1 0 0
Five of a Kind 1 1 0 0
χ² = 4.24
Figure 11.6
10⁴ triplets of successive random numbers generated using Listing 11.4. Planes
can be seen when the figure is viewed from the right angle.
Notice that the numbers in the second column Ei are based on empirical
observations. In fact, they represent percentages and thus can be applied to a
random sequence of any length.
The degree of freedom is six since there are seven categories. The critical value
of χ² for df = 6 at α = 0.05 is χ²(6, 0.95) = 12.6. Since the obtained value (χ² =
4.24) is less than the critical value, it is concluded that the random numbers
in the given sequence are independent with a 95% level of confidence.
Listing 11.4
Python program for generating a 3D scatter plot for the spectral test.
1 import math
2 import matplotlib.pyplot as plt          # needed for the figure below
3 from mpl_toolkits.mplot3d import Axes3D  # enables the '3d' projection
4
5 a = 65539
6 M = math.pow(2, 31)
7 seed = 123456
8
9 X = []
10 Y = []
11 Z = []
12
13 for i in range(10000):
14 num1 = math.fmod(a * seed, M)
15 num2 = math.fmod(a * num1, M)
16 num3 = math.fmod(a * num2, M)
17 seed = num2
18 X.append(num1)
19 Y.append(num2)
20 Z.append(num3)
21
22 fig = plt.figure()
23 ax = fig.add_subplot(111, projection=’3d’)
24 ax.scatter(X, Y, Z, c=’b’, marker=’o’)
25 # Remove axis ticks for readability
26 ax.set_xticks([])
27 ax.set_yticks([])
28 ax.set_zticks([])
29 ax.set_xlabel(’X’)
30 ax.set_ylabel(’Y’)
31 ax.set_zlabel(’Z’)
32 plt.show()
Figure 11.7
The lag plot for a sequence of sinusoidal values. An elliptical pattern can be
clearly identified.
Listing 11.5
Python procedure for generating a lag plot for a random sequence.
1 import random as rnd
2 import pandas
3 from pandas.plotting import lag_plot  # pandas.tools.plotting in older pandas versions
4 import matplotlib.pyplot as plt
5
6 s = pandas.Series( [rnd.random() for i in range(1000)] )  # sample size is illustrative
7
8 plt.figure()
9 lag_plot(s, marker=’o’, color=’grey’)
10 plt.xlabel(’Random Number - s[i]’)
Figure 11.8
The lag plot generated by the code in Listing 11.5. The sequence uniformly
fills the 2D space.
11.8 SUMMARY
This chapter has discussed several methods for the generation of pseudo-
random numbers. These pseudo-random numbers are used in the computation
of pseudo-random variates and pseudo-random processes. According to [3], an
RNG should not produce a zero or a one. In addition, the generated random
numbers should look random although they are generated using deterministic
procedures.
11.9 EXERCISES
11.1 Show that the multiplicative RNG does indeed pass both the chi-squared
and poker tests.
11.2 Consider the 16-bit LFSR with characteristic polynomial c(x) = 1 + x^4 +
x^13 + x^15 + x^16. Draw the structure of this LFSR and write a Python
program that implements it.
V
Case Studies
CHAPTER 12
Case Studies
“Discovery is seeing what everybody else has seen and thinking what nobody
else has thought.”
−Albert Szent-Györgyi
The main purpose of this chapter is to show the reader how the transition
from a system description to a simulation model is made. The first case study is
about estimating the reliability of a network using the Monte Carlo methods
and several variance-reduction techniques. The second case study is about
modeling a point-to-point wireless transmission system where packets may be
lost either due to a full queue or bad channel state. There is also an upper
limit on the number of transmission attempts before the packet is dropped.
Both packet delay and system throughput are analyzed. The final case study
is about modeling a simple error-control protocol and studying the impact of
error probability on system throughput.
Figure 12.1
A graph consisting of eight vertices and 11 edges.
Figure 12.2
Network fails if nodes v1 and v4 become disconnected. The event will occur if
any of the following groups of links fail: {(e1 , e2 ), (e1 , e3 ), (e2 , e4 ), (e3 , e4 )}.
U nRel = P [s1 ] + P [s2 ] + P [s3 ] + P [s4 ] + P [s5 ] + P [s6 ] + P [s9 ] + P [s11 ] + P [s13 ].
(12.1)
Table 12.1
Sample space of the system in Figure 12.2 along with the status of the network
for each possible system state.
where Φ(si ) evaluates to one if the given network realization (i.e., sample) si
represents a connected network. The alert reader should realize that Φ(si ) is
a Bernoulli random variable whose expectation is equal to the unreliability of
the network.
The crude Monte Carlo method suffers from a fundamental problem. Consider
the expression for the relative half-width of the confidence interval of u:

CI_hw = (t × s/√n) / E[Φ].   (12.3)

Clearly, this expression grows to infinity as u approaches zero for the same
number of samples n.
Listing 12.1
Computing unreliability for the graph in Figure 12.2 using the exact expression
in Eqn. (12.1).
1 q = 0.3
2 Unreliability = q**4 + 4 * (1-q) * q**3 \
3 + 4 * (1-q)**2 * q**2
4 print("Unreliability = ", round(Unreliability, 10)) # 0.2601
Table 12.2
Restructuring the sample space of the system in Figure 12.2 along with the
probability of each stratum. The first row indicates the number of UP links.
Number of UP links:
0            1            2            3            4
0000         0001         0011         0111         1111
             0010         0101         1110
             0100         1001         1101
             1000         1010         1011
                          1100
                          0110
P0 = 0.0625  P1 = 0.25    P2 = 0.375   P3 = 0.25    P4 = 0.0625
Listing 12.2
Computing unreliability for the graph in Figure 12.2 using crude Monte Carlo
simulation.
1 from random import *
2 from statistics import *
3
25 # Result
26 print("Unreliability = ", round(mean(rv), 4)) # 0.2593
27 print("Variance = ", round(variance(rv), 4)) # 0.1921
Listing 12.3
Computing unreliability for the graph in Figure 12.2 using stratified sampling.
1 from random import *
2 from math import *
3 from statistics import *
4
9 K = L # Number of strata
10 P = [0.0625, 0.25, 0.375, 0.25, 0.0625] # Pi for each stratum i
11
15 # Generate a sample
16 # n = Number of UP links
17 def samp(n):
18 if n == 0:
19 return [0, 0, 0, 0]
20 elif n == 4:
21 return [1, 1, 1, 1]
22 elif n == 1:
23 i = randint(0, 3)
24 s = [0] * L
25 s[i] = 1
26 return s
27 elif n == 2:
28 idx = sample([0, 1, 2, 3], 2) # Unique indexes
29 s = [0] * L
30 s[ idx[0] ] = 1
31 s[ idx[1] ] = 1
32 return s
33 elif n == 3:
34 idx = sample([0, 1, 2, 3], 3)
35 s = [0] * L
36 s[ idx[0] ] = 1
37 s[ idx[1] ] = 1
38 s[ idx[2] ] = 1
39 return s
40
49 rv = []
50 for i in range(K+1):
51 m = N_i[i]
52 for j in range(m):
53 s = samp(i)
54 rv.append( Phi(s) )
55
56 # Result
57 print("Unreliability = ", round(mean(rv), 4)) # 0.5636
58 print("Variance = ", round(variance(rv), 4)) # 0.246
Listing 12.4
Computing unreliability for the graph in Figure 12.2 using antithetic sampling.
1 from random import *
29 # Result
30 print("Unreliability = ", round(mean(rv), 4)) # 0.2597
31 print("Variance = ", round(variance(rv), 4)) # 0.0784
Listing 12.5
Computing unreliability for the graph in Figure 12.2 using dagger sampling.
The number of samples is significantly less.
1 from random import *
2 from statistics import *
3 from math import *
4
32 rv.append(Phi(s1))
33 rv.append(Phi(s2))
34 rv.append(Phi(s3))
35
36 # Result
37 print("Unreliability = ", round(mean(rv), 4)) # 0.2617
38 print("Variance = ", round(variance(rv), 4)) # 0.1932
Figure 12.3
A point-to-point wireless system. The transmitter has a buffer which can store
up to B packets. The probability that a transmission attempt is successful is
1 − Perr .
Table 12.3
State variables of the event graph in Figure 12.4.
mission attempts is still below the preset threshold (i.e., T < τ ). Otherwise,
a Drop event is scheduled. After the current packet is discarded, the next
packet in the queue is scheduled for transmission. Table 12.3 shows the state
variables used in the construction of the event graph for this system. The
event graph is shown in Figure 12.4. There are two system parameters which
are the maximum allowed number of transmission attempts for every packet
(τ ) and the packet error rate of the wireless channel (Perr ).
(Event vertices: Start, Arrival, Transmit, Receive, Timeout, Drop, and Loss. Edge conditions include Q > 0, C == 0, u ≤ Perr, T < τ, and T == τ; state changes include Q++, Q−−, C = 0, T = 0, D++, and L++; ta denotes the inter-arrival time.)
Figure 12.4
Event graph for the system in Figure 12.3.
Listing 12.6
Python implementation of the event graph in Figure 12.4
1 from random import *
2 from bisect import *
3 from statistics import *
4
5 # Simulation parameters
6 n = 1000 # Number of packets to be simulated
7 lamda = 0.7
8 P_err = 0.99
9 tau = 3
10 Tout = 1 # Length of timeout period
11 B = 10 # Size of transmitter buffer
12
13 # Initialization
14 clock = 0.0
15 evList = []
16 count = 0 # Used for counting simulated packets and as Pkt_ID
17 evID = 0 # Unique ID for each event
18 Timeout_Event = None # Reference to currently pending timeout event
19
35 # Output variables
36 Num_Received_Pkts = 0 # Pkts received successfully
37 Arr_Time = [0] * n
38 Rec_Time = [0] * n
39
40 # Event generators
41 def Gen_Arr_Evt(clock):
42 global count, n, lamda, evID
43 if count < n:
44         insert( (clock + expovariate(lamda), evID, count, Handle_Arr_Evt) )
45 count += 1
46 evID += 1
47
74 # Event handlers
75
78 Q += 1
79 Gen_Arr_Evt(clock + expovariate(lamda))
80 if C == 0:
81 Gen_Transmit_Evt(clock, Pkt_ID)
82 if Q > B:
83 Gen_Loss_Evt(clock, Pkt_ID)
84 # Output variable
85 Arr_Time[Pkt_ID] = clock
86
111
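Listing 12.6 keeps its future event list ordered by event time. A minimal sketch of that pattern using sorted insertion is shown below; the names are illustrative, and the tuple layout (time, event id, packet id, handler) mirrors the call in the listing. Including a unique event id as the second field avoids comparing handlers when two events share the same time.

from bisect import insort

ev_list = []                                          # future event list, kept sorted by time

def schedule(time, ev_id, pkt_id, handler):
    insort(ev_list, (time, ev_id, pkt_id, handler))   # sorted insertion by time

def demo_handler(clock, pkt_id):
    print("handling event for packet", pkt_id, "at time", clock)

schedule(2.5, 0, 1, demo_handler)
schedule(1.0, 1, 2, demo_handler)

while ev_list:
    clock, ev_id, pkt_id, handler = ev_list.pop(0)    # earliest event first
    handler(clock, pkt_id)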
Figure 12.5
Average packet delay increases as the quality of the wireless channel degrades.
Figure 12.6
Percentage of delivered packets drops as the quality of the wireless channel
degrades.
(Sender timeline: Frame 1, Frame 2, Frame 2 retransmitted after a timeout, Frame 3; the receiver returns an ACK for each delivered frame, and one copy of Frame 2 is lost.)
Figure 12.7
Behavior of the simple stop-and-wait ARQ protocol with two possibilities:
acknowledgment and frame loss.
(Event vertices: Start, Transmit, Receive, Timeout, and ACK, with delays t_rec and t_out and a channel-error condition based on PER.)
Figure 12.8
Event graph for the simple stop-and-wait ARQ protocol.
Listing 12.7
Python implementation of the event graph of the simple stop-and-wait ARQ
protocol in Figure 12.8.
1 import random as rnd
2 import queue
3 import statistics as stat
4
17
51 return ev
52
Figure 12.9
Throughput deteriorates as the packet error rate increases.
12.4 SUMMARY
Moving from a system description to a simulation program is not a trivial
task. A simulation model must be constructed using event graphs before any
code can be written. Then, the simulation model can be translated into code
using the concepts and conventions discussed in Chapter 7. The purpose of
this chapter was to reinforce this skill.
12.5 EXERCISES
12.1 Study the relationship between the average packet delay and transmis-
sion attempt threshold by extending the program in Listing 12.6.
12.2 Identify a redundant event in the event graph in Figure 12.8.
Overview of Python
“I was looking for a hobby. So, I decided to develop a new computer language.”
−Guido van Rossum
A.1 BASICS
Python is an interpreted language. When you type python at the command
prompt, the Python prompt (>>>) appears where you can start typing
Python statements. Listing A.1.1 shows how a new Python interactive ses-
sion can be started.
Listing A.1.1
Starting a new Python interactive session.
1 C:/> python
2 Python 3.6.2 .... more information will be shown
3 >>> 1 + 1
4 2
5 >>>
You can store your code in a file and call the Python interpreter on the file
from the command prompt. The command prompt of the operating system
will appear after the execution of the file finishes. Listing A.1.2 is an example
of running a Python file containing a program that adds two numbers.
Listing A.1.2
Running a Python program from the command line.
1 C:/> python my_prog.py
2 Enter the two numbers to add: 1 3
3 Result = 4
4 C:/>
Listing A.1.3
A Python source file. It can also be referred to as a Python script.
1 from random import choice
2
15 selection = []
16
17 for i in range(n):
18 r = choice(numbers)
19 selection.append(r)
20 numbers.remove(r)
21
Listing A.2.1
Input and output functions.
1 >>> m = input("Enter the mean service time: ")
2 Enter the mean service time: 5 # Enter number 5
3 >>> m
4 '5'
5 >>> print( "You entered: ", m )
6 You entered: 5
Listing A.3.1
Binary operations on integer numbers.
1 a = 10 # 0000 1010
2 b = 25 # 0001 1001
3
4 # AND
5 c = a & b
6 print(c) # 0000 1000 (8)
7
8 # OR
9 c = a | b
10 print(c) # 0001 1011 (27)
11
12 # XOR
13 c = a ^ b
14 print(c) # 0001 0011 (19)
15
16 # Ones Complement
17 # Numbers are in 2’s complement representation
18 c = ~a
19 print(c) # 1111 0101 (-11)
20
21 # Right Shift
22 c = a >> 2
23 print(c) # 0000 0010 (2)
24
25 # Left Shift
26 c = a << 2
27 print(c) # 0010 1000 (40)
Listing A.3.2
Handling unsigned binary numbers.
1 # Convert an integer to a binary string
2 b = bin(5)
3 print(b) # 0b101
4
A.4 LISTS
A list is a collection of zero or more elements. Elements of a list are
enclosed between two square brackets and they are separated by com-
mas. Elements of a list do not have to be of the same type. Listing
A.4.1 gives different examples of lists and some of the operations that
can be performed on them. More information on lists can be found at
http://docs.python.org/3/tutorial/datastructures.html.
Listing A.4.1
Lists and some of their operations.
1 >>> a = [] # The empty list
2 >>> a
3 []
4 >>> b = [1, 2.3, "Arrival", False] # Elements of different
types
5 >>> b[0] # Accessing the first element in
6 1 # the list
7 >>> b[2] # Accessing the third element
8 "Arrival"
9 >>> b[0:2] # Extract a part of the list
10 [1, 2.3] # Two is the size of the new list
11 >>> c = [0] * 6 # Creating and initializing
12 >>> c # a list of size 6
13 [0, 0, 0, 0, 0, 0]
14 >>> len(c) # Returns the size of a list
15 6
16 >>> "Arrival" in b # Check if an element is in the list
17 True
18 >>> d = b + b # Combining two lists into one list
19 >>> d
20 [1, 2.3, "Arrival", False, 1, 2.3, "Arrival", False]
Listing A.5.1
Transposing a matrix using the zip function. Matrix is first unpacked using
the star (*) operator.
1 matrix = [ [1, 2], [3, 4] ]
2 matrix_transposed = list(zip( *matrix ))
3 # *matrix => [1, 2] [3, 4]
4 print(matrix_transposed) # [(1, 3), (2, 4)]
Listing A.6.1
Importing the random module and calling some of the functions inside it.
1 >>> import random
2 >>> random.random() # Returns a floating-point number in
3 0.8545672259166788 # the range (0,1)
4
16
Listing A.7.1
Implementing the event list using the queue module.
1 import queue
2 from queue import Queue
3
5 Event_List = queue.PriorityQueue()
6
Listing A.7.2
Implementing the event list using the heapq module.
1 import heapq
2 from heapq import *
3
4 Event_List =[]
5
Listing A.7.3
Implementing the event list by sorting a list.
1 # The first field is always the time
2 e1 = (10, "Arrival")
3 e2 = (5, "Departure")
4 e3 = (2, "Fully_Charged")
5
6 Event_List = []
7
8 Event_List += [e1]
9 Event_List += [e2]
10 Event_List += [e3]
11 Event_List.sort()
12
13 print(Event_List)
Listing A.8.1
The name of the function can be stored in a list and then used to call the
function.
1 def add():
2 print ( "Add" )
3
4 def sub():
5 print ( "Sub" )
6
7 a = [add, sub]
8
9 for i in range(len(a)):
10     a[i]()  # Add two parentheses and include arguments,
11             # if any
Listing A.8.2
The name of the function can be passed as an argument to another function.
1 def doIt (func, x, y):
2 z = func (x, y)
3 return z
4
11 print ("Addition:")
15 print ("Subtraction:")
16 print ( doIt (sub, 2, 3) )
Listing A.9.1
A tuple can be used as a record that represents an item in the event list.
1 def Handle_Event_1():
2 print ( "Event_1" )
3
4 def Handle_Event_2():
5 print ( "Event_2" )
6
10 for ev in Event_List:
11 (time , event_handler) = ev
12 event_handler ( ) # Add two parentheses and include
13 # arguments, if any
A.10 PLOTTING
Listing A.10.1
Code for generating Figure 4.12(b).
1 from random import *
2 from math import *
20
21 a = 1
22 b = 10
23 c = 7
24
28 xlabel("X", fontsize=15)
29 ylabel("f(x)", fontsize=15)
30
33 for x in X:
34 Y.append( pdf(x, a, b, c) )
35
36 plot(X, Y, linewidth=2)
37
Listing A.10.2
Code for generating Figure 10.6(a).
1 from random import *
2 from math import *
3 from matplotlib.pyplot import *
4 from numpy import *
5
6 def pdf(x):
7 k = 10
8 theta = 1.0
9     return (x**(k-1) * theta**k * exp(-1 * theta * x)) / factorial(k-1)
10
16 xlabel("Y")
17 ylabel("P(y)")
18
24 plot(X, Y, linewidth=2)
25 savefig("erlang_plot_pdf.pdf", format="pdf", bbox_inches="tight")
26
Listing A.10.3
Code for generating Figure 10.6(b).
1 from random import *
2 from math import *
3 from matplotlib.pyplot import *
4 from statistics import *
5
6 def Erlang():
7 k = 10
8 theta = 1.0
9 y = 0
10 for i in range(k):
11 u = random()
12 x = (-1 / theta) * log(u) # Exponential variate
13 y = y + x
14
15 return y
16
17 N = 100000
18 v = []
19 for i in range(N):
20 v.append( Erlang() )
21
22 bins = 100
23
24 w = [1 / len(v)] * len(v)
25
28 xlabel("Y")
29 ylabel("P(y)")
30
An Object-Oriented
Simulation Framework
Listing B.1
Event.
1 class Event:
2 def __init__(self, _src, _target, _type, _time):
3 self.src = _src
4 self.target = _target
5 self.type = _type
6 self.time = _time
7
Listing B.2
Simulation Entity.
1 from event import Event
2
3 class SimEntity:
4
Listing B.3
Event list and scheduler.
1 # The variable self.count_events counts events generated. The number of events executed will also be equal to this
2 # number.
3 #
4 # You insert event based on its time. If there are two events occurring at the same time, the second sorting
5 # criterion is to use the id of the target. The third sorting criterion is to use the id of the source.
6
6
9 class Scheduler:
10
31 def head(self):
32 ev = self.evList.get()
33 self.time = ev[2].time
34 return ev[2]
35
36 def run(self):
37 count = 0
38 while( not self.empty() ):
39 ev = self.head()
40 self.time = ev.time
41 count += 1
42 ev.target.evHandler(ev)
43
44 def empty(self):
45 return self.evList.empty()
46
47 def reset(self):
48 self.evList = None
49 self.time = 0.0
50 self.count_events = 0
Listing B.4
Example 1.
1 from scheduler import Scheduler
2 from simEntity import SimEntity
3
5 class Node(SimEntity):
6 def __init__(self, _scheduler, _id):
7 super(Node, self).__init__(_scheduler, _id)
8 self.schedule(self, "Self_Message", self.scheduler.
time + 2.0)
9
14
15 scheduler = Scheduler(3)
16
17 Node(scheduler, 1)
18
19 scheduler.run()
Listing B.5
Example 2.
1 from scheduler import Scheduler
2 from simEntity import SimEntity
3
4 class Node(SimEntity):
5
18
19
20
21 scheduler = Scheduler(4)
22
23 n1 = Node(scheduler, 1)
24 n2 = Node(scheduler, 2)
25
26 n1.setNeighbor(n2)
27 n2.setNeighbor(n1)
28
29 scheduler.run()
Listing B.6
Example 3.
1 # Remove n1.setNeighbor(n2) && n2.setNeighbor(n1)
2 # Use a link to connect the two nodes
3 # A link has two ends: a & b
4
8 class Node(SimEntity):
9
21
22 class Link(SimEntity):
23
38
39
40 scheduler = Scheduler(6)
41
42 n1 = Node(scheduler, 1)
43 n2 = Node(scheduler, 2)
44 l = Link(scheduler, 3)
45
46 n1.setNeighbor(l)
47 n2.setNeighbor(l)
48 l.setNeighbors(n1, n2)
49
50 scheduler.run()
Listing B.7
Example 4.
1 from scheduler import Scheduler
2 from simEntity import SimEntity
3 from event import Event
4
5 class Node(SimEntity):
6 def __init__(self, _scheduler, _id):
7 super(Node, self).__init__(_scheduler, _id)
8 self.schedule(self, "Self_Message", self.scheduler.
time + 5.0)
9 self.schedule(self, "Self_Message", self.scheduler.
time + 3.0)
10 self.schedule(self, "Self_Message", self.scheduler.
time + 4.0)
11 self.schedule(self, "Self_Message", self.scheduler.
time + 1.0)
12 self.schedule(self, "Self_Message", self.scheduler.
time + 2.0)
13 ev = Event(self, self, "Self_Message", self.
scheduler.time + 1.0)
14 self.cancel(ev)
15
19
20
21 scheduler = Scheduler(5)
22
23 Node(scheduler, 1)
24
25 scheduler.run()
Listing B.8
M/M/1.
1 # IAT = Average Inter-Arrival Time
2 # ST = Average Service Time
3 # Size of packet is its service time (in time units, not bits)
4 # Station contains a queue (Q) and server (S)
5
11 class TrafficGen(SimEntity):
12
28
29 class Packet:
30 def __init__(self, _size):
31 self.size = _size
32 self.Arrival_Time = 0.0
33 self.Service_At = 0.0
34 self.Departure_Time = 0.0
35
40 def info(self):
41 print("Arrival_Time = %.2f, Service_At = %.2f,
Service_Time = %.2f, Departure_Time = %.2f" % (self.
Arrival_Time, self.Service_At, self.size, self.
Departure_Time))
42
43
44 class Server(SimEntity):
45 busy = False
46
49 self.station = _station
50
54 if isinstance(ev.type, Packet):
55 pkt = ev.type
56 self.busy = True
57 pkt.Service_At = self.scheduler.time
58 pkt.Departure_Time = self.scheduler.time + pkt.
size
59 #pkt.info()
60 Num_Pkts = Num_Pkts + 1
61 Total_Delay = Total_Delay + pkt.delay()
62 self.schedule(self, "End_of_Service", self.
scheduler.time + pkt.size)
63 elif ev.type == "End_of_Service":
64 self.busy = False
65 self.schedule(self.station, "Server_Available",
self.scheduler.time)
66 else:
67 print("Server is supposed to receive packets!")
68
69 def isBusy(self):
70 return self.busy
71
72
73 class Station(SimEntity):
74
95 Num_Pkts = 0.0
96 Total_Delay = 0.0
97
98 scheduler = Scheduler(100000)
99 station = Station(scheduler, 1)
100 src = TrafficGen(scheduler, station, 2, 3.33, 2.5)
101 scheduler.run()
102
Listing B.9
State.
1 class State:
2 def action(self):
3 pass
4
Listing B.10
State Machine.
1 # http://python-3-patterns-idioms-test.readthedocs.org/en/latest/StateMachine.html#the-table
2
4 class StateMachine:
5 def __init__(self, initialState):
6 self.currentState = initialState
7 self.currentState.action()
8
9 # Make transition
10 def applyEvent(self, event):
11 self.currentState = self.currentState.next(event)
12 self.currentState.action()
Listing B.11
Simple Protocol.
1 from state import State
2 from stateMachine import StateMachine
3 from event import Event
4
5
6 class Bad(State):
7 def __init__(self):
8 super(Bad, self).__init__()
9
10 def action(self):
11 print("Bad State")
12
19
20 class Good(State):
21 def __init__(self):
22 super(Good, self).__init__()
23
24 def action(self):
25 print("Good State")
26
33
34 class Protocol(StateMachine):
35 def __init__(self, _initialState):
36 super(Protocol, self).__init__(_initialState)
37
38
39 p = Protocol(Bad())
k−1   χ²_0.005   χ²_0.010   χ²_0.025   χ²_0.050   χ²_0.100   χ²_0.900   χ²_0.950   χ²_0.975   χ²_0.990   χ²_0.995
1 0.000 0.000 0.001 0.004 0.016 2.706 3.841 5.024 6.635 7.879
2 0.010 0.020 0.051 0.103 0.211 4.605 5.991 7.378 9.210 10.597
3 0.072 0.115 0.216 0.352 0.584 6.251 7.815 9.348 11.345 12.838
4 0.207 0.297 0.484 0.711 1.064 7.779 9.488 11.143 13.277 14.860
5 0.412 0.554 0.831 1.145 1.610 9.236 11.070 12.833 15.086 16.750
6 0.676 0.872 1.237 1.635 2.204 10.645 12.592 14.449 16.812 18.548
7 0.989 1.239 1.690 2.167 2.833 12.017 14.067 16.013 18.475 20.278
8 1.344 1.646 2.180 2.733 3.490 13.362 15.507 17.535 20.090 21.955
9 1.735 2.088 2.700 3.325 4.168 14.684 16.919 19.023 21.666 23.589
10 2.156 2.558 3.247 3.940 4.865 15.987 18.307 20.483 23.209 25.188
11 2.603 3.053 3.816 4.575 5.578 17.275 19.675 21.920 24.725 26.757
12 3.074 3.571 4.404 5.226 6.304 18.549 21.026 23.337 26.217 28.300
13 3.565 4.107 5.009 5.892 7.042 19.812 22.362 24.736 27.688 29.819
14 4.075 4.660 5.629 6.571 7.790 21.064 23.685 26.119 29.141 31.319
15 4.601 5.229 6.262 7.261 8.547 22.307 24.996 27.488 30.578 32.801
16 5.142 5.812 6.908 7.962 9.312 23.542 26.296 28.845 32.000 34.267
17 5.697 6.408 7.564 8.672 10.085 24.769 27.587 30.191 33.409 35.718
18 6.265 7.015 8.231 9.390 10.865 25.989 28.869 31.526 34.805 37.156
19 6.844 7.633 8.907 10.117 11.651 27.204 30.144 32.852 36.191 38.582
20 7.434 8.260 9.591 10.851 12.443 28.412 31.410 34.170 37.566 39.997
21 8.034 8.897 10.283 11.591 13.240 29.615 32.671 35.479 38.932 41.401
22 8.643 9.542 10.982 12.338 14.041 30.813 33.924 36.781 40.289 42.796
23 9.260 10.196 11.689 13.091 14.848 32.007 35.172 38.076 41.638 44.181
24 9.886 10.856 12.401 13.848 15.659 33.196 36.415 39.364 42.980 45.559
25 10.520 11.524 13.120 14.611 16.473 34.382 37.652 40.646 44.314 46.928
26 11.160 12.198 13.844 15.379 17.292 35.563 38.885 41.923 45.642 48.290
27 11.808 12.879 14.573 16.151 18.114 36.741 40.113 43.195 46.963 49.645
28 12.461 13.565 15.308 16.928 18.939 37.916 41.337 44.461 48.278 50.993
29 13.121 14.256 16.047 17.708 19.768 39.087 42.557 45.722 49.588 52.336
30 13.787 14.953 16.791 18.493 20.599 40.256 43.773 46.979 50.892 53.672
40 20.707 22.164 24.433 26.509 29.051 51.805 55.758 59.342 63.691 66.766
50 27.991 29.707 32.357 34.764 37.689 63.167 67.505 71.420 76.154 79.490
60 35.534 37.485 40.482 43.188 46.459 74.397 79.082 83.298 88.379 91.952
70 43.275 45.442 48.758 51.739 55.329 85.527 90.531 95.023 100.425 104.215
80 51.172 53.540 57.153 60.391 64.278 96.578 101.879 106.629 112.329 116.321
90 59.196 61.754 65.647 69.126 73.291 107.565 113.145 118.136 124.116 128.299
100 67.328 70.065 74.222 77.929 82.358 118.498 124.342 129.561 135.807 140.169
APPENDIX D
Bibliography
[9] Christof Paar and Jan Pelzl. Understanding Cryptography. Springer, New
York City, 2010.
[10] S.K. Park and K.W. Miller. Random Number Generators: Good Ones
are Hard to Find. Communications of the ACM, 31(10):1192–1201, 1988.
Index
Time, 23
actual time, 23
arrival, 5, 8, 15, 74, 75
departure, 5, 8, 15, 74, 75
increments, 124
inter-arrival time, 20
runtime, 23
service time, 21
simulated, 124, 126
simulated time, 24
slot, 59
step, 123, 126
waiting time, 21
Trajectory, 19