TEXTBOOKS in MATHEMATICS
Series Editor: Denny Gulick
INTRODUCTION TO
Probability with
Mathematica®
Second Edition
Kevin J. Hastings
Knox College
Galesburg, Illinois, U.S.A.
Chapman & Hall/CRC
Taylor & Francis Group
6000 Broken Sound Parkway NW, Suite 300
Boca Raton, FL 33487-2742
This book contains information obtained from authentic and highly regarded sources. Reasonable efforts
have been made to publish reliable data and information, but the author and publisher cannot assume
responsibility for the validity of all materials or the consequences of their use. The authors and publishers
have attempted to trace the copyright holders of all material reproduced in this publication and apologize to
copyright holders if permission to publish in this form has not been obtained. If any copyright material has
not been acknowledged please write and let us know so we may rectify in any future reprint.
Except as permitted under U.S. Copyright Law, no part of this book may be reprinted, reproduced, transmit-
ted, or utilized in any form by any electronic, mechanical, or other means, now known or hereafter invented,
including photocopying, microfilming, and recording, or in any information storage or retrieval system,
without written permission from the publishers.
For permission to photocopy or use material electronically from this work, please access www.copyright.
com (http://www.copyright.com/) or contact the Copyright Clearance Center, Inc. (CCC), 222 Rosewood
Drive, Danvers, MA 01923, 978-750-8400. CCC is a not-for-profit organization that provides licenses and
registration for a variety of users. For organizations that have been granted a photocopy license by the CCC,
a separate system of payment has been arranged.
Trademark Notice: Product or corporate names may be trademarks or registered trademarks, and are used
only for identification and explanation without intent to infringe.
Kevin J. Hastings
Note on the Electronic Version and the
KnoxProb7Utilities Package
The CD that accompanies this book contains a Mathematica notebook for each text
section. Copyright law forbids you from distributing these notebooks without the express
permission of the publisher, just as with the print version. The print text was made without
color, by publisher specification, but color effects are included in the electronic version.
Otherwise the electronic notebooks are identical to the print version. When a notebook is
first loaded you can select the Evaluate Initialization Cells command under the Evaluation
menu to reproduce the output. The look of the notebooks in the front end will depend on
what style sheet you are using; I have applied the Book/Textbook stylesheet to these files.
Also on the disk are two packages: the KnoxProb7`Utilities` package for users of
version 7 of Mathematica, and KnoxProb6`Utilities` for users of version 6. Move the
appropriate folder into the AddOns\ExtraPackages subdirectory of your main Mathematica
directory in order to allow the electronic book to run. The electronic files are written for
version 7, so the Needs commands in the notebooks should be modified to load Knox-
Prob6`Utilities` instead of KnoxProb7`Utilities` for Mathematica 6 users. The main differ-
ences between the two Utilities versions are: (1) the Histogram command that I had written
for the version 6 package is eliminated in the version 7 package because Wolfram Research
included its own suitable Histogram command in the kernel in version 7; and (2) the
ProbabilityHistogram command had to be rewritten for version 7 due to the elimination of
the BarCharts package that it had relied on. Both packages are updates of the
KnoxProb`Utilities` package that was available for the first edition of the book, adapting to
the major changes in the structure of Mathematica that happened as it went into version 6.0.
Among the most important changes for our purposes was that many statistical commands
that had previously been contained in external packages were moved into the kernel, so that
these packages no longer needed to be loaded. The electronic book has been subjected to a
lot of testing, and it should operate well.
Table of Contents
Chapter 1 – Discrete Probability
1.1 The Cast of Characters 1
1.2 Properties of Probability 10
1.3 Simulation 23
1.4 Random Sampling 33
Sampling in Sequence without Replacement 34
Sampling in a Batch without Replacement 38
Other Sampling Situations 42
1.5 Conditional Probability 48
Multiplication Rules 51
Bayes' Formula 56
1.6 Independence 62
Chapter 2 – Discrete Distributions
2.1 Discrete Random Variables, Distributions,
and Expectations 73
Mean, Variance, and Other Moments 79
Two Special Distributions 85
2.2 Bernoulli and Binomial Random Variables 94
Multinomial Distribution 102
2.3 Geometric and Negative Binomial Random Variables 107
2.4 Poisson Distribution 115
Poisson Processes 120
2.5 Joint, Marginal, and Conditional Distributions 124
2.6 More on Expectation 136
Covariance and Correlation 140
Conditional Expectation 148
Chapter 3 – Continuous Probability
3.1 From the Finite to the (Very) Infinite 159
3.2 Continuous Random Variables and Distributions 171
Joint, Marginal, and Conditional Distributions 178
3.3 Continuous Expectation 192
Chapter 4 – Continuous Distributions
4.1 The Normal Distribution 207
4.2 Bivariate Normal Distribution 225
Bivariate Normal Density 229
Marginal and Conditional Distributions 235
4.3 New Random Variables from Old 246
C.D.F. Technique and Simulation 247
Moment–Generating Functions 252
4.4 Order Statistics 259
4.5 Gamma Distributions 273
Main Properties 273
The Exponential Distribution 277
4.6 Chi–Square, Student’s t, and F–Distributions 281
Chi–Square Distribution 282
Student's t–Distribution 289
F–Distribution 293
4.7 Transformations of Normal Random Variables 300
Linear Transformations 301
Quadratic Forms in Normal Random Vectors 306
Chapter 5 – Asymptotic Theory
5.1 Strong and Weak Laws of Large Numbers 313
5.2 Central Limit Theorem 320
Chapter 6 – Stochastic Processes and Applications
6.1 Markov Chains 329
Short Run Distributions 331
Long Run Distributions 334
Absorbing Chains 338
6.2 Poisson Processes 344
6.3 Queues 352
Long Run Distribution of System Size 353
Waiting Time Distribution 358
Simulating Queues 359
6.4 Brownian Motion 365
Geometric Brownian Motion 373
6.5 Financial Mathematics 379
Optimal Portfolios 380
Option Pricing 385
Appendices
A. Introduction to Mathematica 395
B. Glossary of Mathematica Commands for Probability 423
C. Short Answers to Selected Exercises 431
References 445
Index 447
CHAPTER 1
DISCRETE PROBABILITY
1.1 The Cast of Characters
yourself in a kind of workshop, with Mathematica and your own insight and love of
experimentation as your most powerful tools. I caution you against depending too much on
the machine, however. Much of the time, traditional pencil, paper, and brainpower are what
you need most to learn. What the computer does, however, is open up new problems, or let
you get new insights into concepts. So we will be judiciously blending new technology with
traditionally successful pedagogy to enable you to learn effectively. Consistent themes will
be the idea of taking a sample randomly from a larger universe of objects in order to get
information about that universe, and the computer simulation of random phenomena to
observe patterns in many replications that speak to the properties of the phenomena.
The cast of characters in probability whose personalities we will explore in depth in
the coming chapters include the following six principal players: (1) Event; (2) Sample
Space; (3) Probability Measure; (4) Random Variable; (5) Distribution; and (6) Expectation.
The rest of this section will be a brief intuitive introduction to these six.
Most of the time we are interested in assessing likelihoods of events, that is, things
that we might observe as we watch a phenomenon happen. The event that a national chain's
bid to acquire a local grocery store is successful, the event that at least 10 patients arrive at
an emergency room between 6:00 and 7:00 AM, the event that a hand of five poker cards
forms a straight, the event that we hold the winning ticket in the lottery, and the event that
two particular candidates among several are selected for jury duty are just a few examples.
They share the characteristic that there is some uncertain experiment, sampling process, or
phenomenon whose result we do not know "now". But there is a theoretical "later" at which
we can observe the phenomenon taking place, and decide whether the event did or didn't
occur.
An outcome (sometimes called a simple event) is an event that cannot be broken
down into some combination of other events. A composite event is just an event that isn't
simple, that is, it has more than one outcome. For instance the event that we are dealt a poker
hand that makes up a straight is composite because it consists of many outcomes which
completely specify the hand, such as 2 through 6 of hearts, 10 through ace of clubs, etc.
This is where the sample space makes its entry: the sample space is the collection of
all possible outcomes of the experiment or phenomenon, that is all possible indecomposable
results that could happen. For the poker hand example, the sample space would consist of
all possible five-card hands. In specifying sample spaces, we must be clear about assump-
tions; for instance we might assume that the order in which the cards were dealt does not
matter, and we cannot receive the same card twice, so that the cards are dealt without
replacement.
Activity 1 How can you characterize the sample space for the grocery store example?
Make up your own random phenomenon, informally describe the sample space, and give
examples of events.
Activity 2 In the emergency room example above, is the event that was described an
outcome or a composite event? If that event is composite, write down an example of an
outcome relating to that phenomenon.
In random phenomena, we need a way of measuring how likely the events are to
occur. This may be done by some theoretical considerations, or by experience with past
experiments of the same kind. A probability measure assigns a likelihood, that is a number
between 0 and 1, to all events. Though we will have to be more subtle later, for now we can
intuitively consider the extremes of 0 and 1 as representing impossibility and complete
certainty, respectively.
Much of the time in probability problems, probability measures are constructed so
that the probability of an event is the "size" of the event as a proportion of the "size" of the
whole sample space. "Size" may mean different things depending on the context, but the two
most frequent usages are cardinality, that is number of elements of a set, and length (or area
in two dimensions). For example, if you roll a fair die once, since there are six possible faces
that could land on top, the sample space is {1, 2, 3, 4, 5, 6}. Since three of the faces are odd,
it seems reasonable that the probability that an odd number is rolled is 3/6. Size means
length in the following scenario. Suppose that a real number is to be randomly selected in
the interval [0, 4]. This interval is the sample space for the random phenomenon. Then the
probability that the random real number will be in the interval [1, 3] is 2/4 = 1/2, which is
the length of the interval [1, 3] divided by the length of the whole sample space interval.
The first two chapters will concentrate on discrete probability, in which counting the
number of elements in sets is very important. We will leave much of the detail for later
sections, except to remind you of the intuitively obvious multiplication principle for
counting. If outcomes of a random experiment consist of two stages, then the total number of
outcomes of the experiment is the number of ways the first stage can occur times the number
of ways that the second can occur. This generalizes to multiple stage experiments. For
instance, if a man has 3 suits, 6 shirts, and 4 ties, and he doesn't care about color coordina-
tion and the like, then he has 3 · 6 · 4 = 72 possible outfits.
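As a quick illustration (this check is not in the original text), Mathematica can enumerate the outfits directly; Tuples builds every way of choosing one item from each list, and its length agrees with the multiplication principle.

Length[Tuples[{Range[3], Range[6], Range[4]}]]   (* one entry per (suit, shirt, tie) choice: 3 · 6 · 4 = 72 *)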
It is possible to experimentally estimate a probability by repeating the experiment
over and over again. The probability of the event can be estimated by the number of times
that it occurs among the replications divided by the number of replications. Try this
exercise. In the KnoxProb7`Utilities` package that you should have installed is a Mathemat-
ica command called
DrawIntegerSample[a, b, n]
which outputs a sequence of n numbers drawn at random, without reusing numbers previ-
ously in the sequence, from the positive integers in the range {a, ..., b}. (You should be sure
that the package has been loaded into the ExtraPackages subdirectory within the Mathemat-
ica directory you are using on your computer.) First execute the command to load the
package, then try repeating the experiment of drawing samples of two numbers from
{1, ..., 5} a large number of times. I have shown one such sample below. Keep track of how
frequently the number 1 appears in your samples. Empirically, what is a good estimate of the
probability that 1 is in the random sample of size two? Talk with a group of your classmates,
and see if you can set up a good theoretical foundation that supports your conclusion. (Try to
specify the sample space of the experiment.) We will study this type of experiment more
carefully later in this chapter. Incidentally, recall that Mathematica uses braces { } to
delineate sequences as well as unordered sets. We hope that the context will make it clear
which is meant; here the integer random sample is assumed to be in sequence, so that, for
example, {1, 3} is different from {3, 1}.
Needs"KnoxProb7`Utilities`"
DrawIntegerSample1, 5, 2
1, 3
We are halfway through our introduction of the cast of characters. Often the outc-
omes of a random phenomenon are not just numbers. They may be sequences of numbers as
in our sampling experiment, or even non-numerical data such as the color of an M&M that
we randomly select from a bag. Yet we may be interested in encoding the simple events by
single numbers, or extracting some single numerical characteristic from more complete
information. For example, if we are dealt a five card hand in poker, an outcome is a full
specification of all five cards, whereas we could be interested only in the number of aces in
the hand, which is a numerical valued function of the outcome. Or, the sample space could
consist of subjects in a medical experiment, and we may be interested in the blood sugar
level of a randomly selected subject. Such a function for transforming outcomes to numerical
values is called a random variable, the fourth member of our cast of characters. Since the
random variable depends on the outcome, and we do not know in advance what will happen,
the value of the random variable is itself uncertain until after the experiment has been
observed.
Activity 3 Use DrawIntegerSample again as below to draw samples of size five from a
population of size 50. Suppose we are interested in the random variable which gives the
minimum among the members of the sample. What values of this random variable do
you observe in several replications?
The cell below contains a command I have called SimulateMinima for observing a
number m of values of the minimum random variable from Activity 3. Read the code
carefully to understand how it works. An integer sample is drawn from the set
{1,2,...,popsize}, and then its minimum is taken. The Table command assembles a list of m
of these minima. To get an idea of the pattern of those sample minima, we plot a graph
called a histogram which tells the proportion of times among the m replications of the
experiment that the sample minimum fell into each of a given number of categories. The
Histogram command is in the kernel in Mathematica 7. Its exact syntax is

Histogram[data, binspecifications, heightspecification]

where the latter two arguments are optional. (See the help documentation, the note in the
preface, and the appendix on Mathematica commands for version 6 of Mathematica and the
KnoxProb6`Utilities` package.) The binspecifications argument can be set to be a number
for a desired number of categories, or it can be set to a list {dw} for a desired category width dw.
The third argument can be set to "Count", "Probability", or "ProbabilityDensity", respec-
tively, to force the bar heights to be frequencies, relative frequencies, or relative frequencies
divided by category widths. One output cell is shown in Figure 1 using 100 replications, six
histogram rectangles, and using samples of size 5 from {1, ... , 50}. You should reexecute
several times to see whether your replications give similar histograms. Should they be
identical to this one? Why or why not? (If you want to suppress the list of minima and get
only the histogram, put a semicolon between the SimulateMinima and Histogram
commands.)
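The SimulateMinima cell itself is not reproduced in this copy of the text. The following is only a sketch of what such a command could look like, consistent with the description above; the argument names and their order are assumptions, and the Histogram call imitates the settings quoted for Figure 1.1.

SimulateMinima[m_, samplesize_, popsize_] :=
 Table[Min[DrawIntegerSample[1, popsize, samplesize]], {m}]   (* list of m sample minima *)

Histogram[SimulateMinima[100, 5, 50], 6]   (* 100 replications, samples of size 5 from {1, ..., 50}, six categories *)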
Figure 1.1 - Sample histogram of minimum of five values from {1, 2, ..., 50}
The histogram for the sample minimum that we just saw, together with your experi-
ments, hints at a very deep idea, which is the fifth member of our cast. A random variable
observed many times will give a list of values that follows a characteristic pattern, with some
random variation. The relative frequencies of occurrence (that is the number of occurrences
divided by the number of replications) of each possible value of the random variable, which
are the histogram rectangle heights, will stabilize around some theoretical probability. Much
as a probability measure can be put on a sample space of simple events, a probability
distribution can be put on the collection of possible values of a random variable. To cite a
very simple example, if 1/3 of the applicants for a job are men and 2/3 are women, if one
applicant is selected randomly, and if we define a random variable
X = 1 if the applicant is a man, and X = 0 if the applicant is a woman,
then the two possible values of X are 1 and 0, and the probability distribution for this X gives
probability 1/3 to value 1 and 2/3 to value 0.
The last member of our cast is expectation or expected value of a random variable.
Its purpose is to give a single number which is an average value that the random variable can
take on. But if these possible values of the random variable are not equally likely, it doesn't
make sense to compute the average as a simple arithmetical average. For example, suppose
the number of 911 emergency calls during an hour is a random variable X whose values and
probability distribution are as in the table below.
value of X      2      4      6      8
probability    1/10   2/10   3/10   4/10
What could we mean by the average number of calls per hour? The simple arithmetical
average of the possible values of X is (2+4+6+8)/4 = 5, but 6 and 8 are more likely than 2
and 4, so they deserve more influence. In fact, the logical thing to do is to define the
expected value of X, denoted by E[X], to be the weighted average of the possible values,
using the probabilities as weights:
E[X] = 2·(1/10) + 4·(2/10) + 6·(3/10) + 8·(4/10) = 60/10 = 6
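As a quick check (not part of the original text), the same weighted average can be computed in Mathematica as a dot product of the list of values with the list of probabilities.

values = {2, 4, 6, 8};
probs = {1/10, 2/10, 3/10, 4/10};
values.probs   (* the expected value, 6 *)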
We will be looking at all of our characters again in a more careful way later, but I
hope that the seeds of intuitive understanding have been planted. You should reread this
introductory section from time to time as you progress to see how the intuition fits with the
formality.
Exercises 1.1
1. Most state lotteries have a version of a game called Pick 3 in which you win an amount,
say $500, if you correctly choose three digits in order. Suppose that the price of a Pick 3
ticket is $1. There are two possible sample spaces that might be used to model the random
experiment of observing the outcome of the Pick 3 experiment, according to whether the
information of the winning number is recorded, or just the amount you won. Describe these
two sample spaces clearly, give examples of outcomes for both, and describe a reasonable
probability measure for both.
2. A grand jury is being selected, and there are two positions left to fill, with candidates A,
B, C, and D remaining who are eligible to fill them. The prosecutor decides to select a pair
of people by making slips of paper with each possible pair written on a slip, and then
drawing one slip randomly from a hat. List out all outcomes in the sample space of this
random phenomenon, describe a probability measure consistent with the stated assumptions,
and find the probability that A will serve on the grand jury.
3. In an effort to predict performance of entering college students in a beginning mathemat-
ics course on the basis of their mathematics ACT score, the following data were obtained.
The entries in the table give the numbers of students among a group of 179 who had the
indicated ACT score and the indicated grade.
                  Grade
             A    B    C    D    F
  ACT  23    2    5   16   31    3
       24    6    8   12   15    2
       25    7   10   15    6    1
       26    8   15   13    4    0
(a) A student is selected randomly from the whole group. What is the probability that this
student has a 25 ACT and received a B? What is the probability that this student received a
C? What is the probability that this student has an ACT score of 23?
(b) A student is selected randomly from the group of students who scored 24 on the ACT.
What is the probability that this student received an A?
(c) A student is selected randomly from the group of students who received B's. What is the
probability that the student scored 23 on the ACT? Is it the same as your answer to the last
question in part (a)?
4. Find the sample space for the following random experiment. Four people labeled A, B, C,
and D are to be separated randomly into a group of 2 people and two groups of one person
each. Assume that the order in which the groups are listed does not matter, only the contents
of the groups.
5. Consider the experiment of dealing two cards from a standard 52 card deck (as in black-
jack), one after another without replacing the first card. Describe the form of all outcomes,
and give two examples of outcomes. What is the probability of the event "King on 1st card?"
What is the probability of the event "Queen on 2nd card?"
6. In the Pick 3 lottery example of Exercise 1, consider the random variable X that gives the
net amount you win if you buy one ticket (subtracting the ticket price). What is the probabil-
ity distribution of X? What is the expected value of X?
[Figure for Exercise 7: a diagram mapping the outcomes a, b, c, d, e, f to the values 0 and 1 of the random variable X]
7. An abstract sample space has outcomes a, b, c, d, e, and f as shown on the left in the
figure. Each outcome has probability 1/6. A random variable X maps the sample space to the
set {0,1} as in the diagram. What is the probability distribution of X? What is the expected
value of X?
8. Two fair coins are flipped at random and in succession. Write the sample space explicitly,
and describe a reasonable probability measure on it. If X is the random variable that counts
the number of heads, find the probability distribution of X and compute the expected value
of X.
9. Using the data in Exercise 3, give an empirical estimate of the probability distribution of
the random variable X that gives the ACT score of a randomly selected student. What is the
expected value of X, and what is its real meaning in this context?
10. (Mathematica) Use the DrawIntegerSample command introduced in the section to build
a command that draws a desired number of samples, each of a desired size, from the set
{1, 2, ..., popsize}. For each sample, the command appends the maximum number in the
sample to a list. This list can be used to create a histogram of the distribution of maxima.
Repeat the experiment of taking 50 random samples of size 4 from {1, 2, ..., 7} several
times. Describe in words the most salient features of the histograms of the maximum. Use
one of your replications to give an empirical estimate of the probability distribution of the
maximum.
11. (Mathematica) The command SimulateMysteryX[m] illustrated below will simulate a
desired number m of values of a mystery random variable X. First load the KnoxProb7`Utili-
ties` package, then run the command several times using specific numerical values for m.
Try to estimate the probability distribution of X and its expected value.
SimulateMysteryX[10]
12. (Mathematica) Write a command in Mathematica which can take as input a list of pairs
{pi, xi} in which a random variable X takes on the value xi with probability pi and return the
expectation of X. Test your command on the random variable whose probability distribution
is in the table below.
value of X      0      1      2      3
probability    1/4    1/2    1/8    1/8
13. Two flights are selected randomly and in sequence from a list of departing flights
covering a single day at an airport. There were 100 flights in the list, among which 80
departed on time, and 20 did not. Write a description of the sample space of the experiment
of selecting the two flights, and use the multiplication principle to count how many elements
the sample space has. Now let X be the number of flights in the sample of two that were
late. Find the probability distribution of X .
1.2 Properties of Probability

[Figure 1.2 - Pie chart of the student score frequencies, with one wedge for each of the scores 1-5]
There are 5 possible outcomes, namely, the scores 1-5. We can think of the experi-
ment of randomly sampling a student score as spinning the arrow on the pie chart of Figure
2, which has been constructed so that the angles of the circular wedges are proportional to
the frequencies in the categories that were listed in the last paragraph.
This example illustrates several important features of the subject of probability.
First, there is an underlying physics that drives the motion and final resting position of the
arrow. But differences in the starting point, the angular momentum imparted to the arrow,
and even slight wind currents and unevenness of the surface of the spinner give rise to a non-
deterministic or random phenomenon. We cannot predict with certainty where the arrow
will land. It could be philosophically argued that if we knew all of these randomizing factors
perfectly, then the laws of physics would perfectly predict the landing point of the arrow, so
that the phenomenon is not really random. Perhaps this is true in most, if not all, phenomena
that we call random. Nevertheless, it is difficult to know these factors, and the ability to
measure likelihoods of events that we model as random is useful to have.
Second, this example shows the two main ways of arriving at an assignment of
probabilities. We might make simplifying assumptions which lead to a logical assignment of
probability. Or we might repeat the experiment many times and observe how frequently
events occur. We were fortunate enough here to have the background data from which our
spinner was constructed, which would indicate by simple counting that if the spinner was
truly spun "randomly" we should assign probabilities
P[1] = 74/1231,  P[2] = 293/1231,  P[3] = 480/1231,  P[4] = 254/1231,  P[5] = 130/1231     (1)
to the five outcomes. But step back for a moment and consider what we would have to do
with our spinner if we weren't blessed with the numbers. We would have no recourse but to
perform the experiment repeatedly. If, say, 1000 spins produced 70 1's, then we would
estimate P[1] empirically by 70/1000 = .07, and similarly for the other outcomes.
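A reader who wants to imitate that kind of repeated spinning without a physical spinner can do so in Mathematica. The following sketch is not from the text; it uses RandomChoice, available in version 6 and later, with the frequencies behind formula (1) as weights.

spins = RandomChoice[{74, 293, 480, 254, 130} -> {1, 2, 3, 4, 5}, 1000];
Count[spins, 1]/1000.   (* empirical estimate of P[1], typically near 74/1231, about .06 *)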
Third, we can attempt to generalize conclusions. This data set itself can be thought
of as a sort of repeated spinning experiment, in which each student represents a selection
from a larger universe of students who could take the test. The probabilities in formula (1),
which would be exact probabilities for the experiment of selecting one student from this
particular batch, become estimates of probabilities for a different experiment; that of
selecting 1231 students from the universe of all students, past and present, who could take
the test. Whether our particular data set is a "random" sample representing that universe
without bias is a very serious statistical question, best left for a statistics course with a good
unit on samples and surveys.
Using the probabilities in (1), how should probability be assigned to composite
events like "1 or 3?" The logical thing to do is to pool together the probabilities of all
outcomes that are contained in the composite event. Then we would have
P[1 or 3] = P[1] + P[3] = 74/1231 + 480/1231 = 554/1231     (2)
So the probability of an event is the total of all probabilities of the outcomes that make up
the event.
Two implications follow easily from the last observation: the empty event which
contains no outcomes has probability zero. Also, the event that the spinner lands somewhere
is certain, that is, it has probability 1 (100%). Another way to look at it is that the sample
space, denoted by Ω say, is the composite event consisting of all outcomes. In this case we
have:
P[Ω] = P[1 or 2 or 3 or 4 or 5]
     = P[1] + P[2] + ... + P[5]
     = 74/1231 + 293/1231 + ... + 130/1231     (3)
     = 1
In summary, so far we have learned that outcomes can be given probabilities between
0 and 1 using theoretical assumptions or empirical methods, and probabilities for composite
events are the totals of the outcome probabilities for outcomes in the event. Also P[∅] = 0
and P[Ω] = 1. Try the following activity, which suggests another property. You will learn
more about that property shortly.
Activity 1 For the spinner experiment we looked at P[1 or 3]. Consider now the two
events "at least 3" and "between 1 and 3 inclusive." What is the probability of the event
"at least 3" or "between 1 and 3 inclusive"? How does this probability relate to the
individual event probabilities? Can you deduce a general principle regarding the
probability of one event or another occurring?
Sometimes we are interested in the set theoretic complement of an event, i.e., the
event that the original event does not occur. Does the probability of the complementary
event relate to the probability of the event? For the spinner, look at the events "at least 3"
and "less than 3," which are complements. We have, using formula (1):
P[at least 3] = P[3 or 4 or 5] = 480/1231 + 254/1231 + 130/1231 = 864/1231
                                                                                (4)
P[less than 3] = P[1 or 2] = 74/1231 + 293/1231 = 367/1231
Notice that the two probabilities add to one, that is, for these two events E and E^c:

P[E] + P[E^c] = 1     (5)
This is not a coincidence. Together E and E^c contain all outcomes, and they have no
outcomes in common. The total of their probabilities must therefore equal 1, which is the
probability of the sample space.
Example 2. Let's look at another random phenomenon. Recall from Section 1.1 the
experiment of selecting a sequence of two numbers at random from the set {1, 2, 3, 4, 5}
with no repetition of numbers allowed. Such an ordered selection without replacement is
known as a permutation, in this case a permutation of 2 objects from 5. Mathematica can
display all possible permutations as follows. The command
KPermutations[list, k]
in the KnoxProb7`Utilities` package returns a list of all ordered sequences of length k from
the list given in its first argument, assuming that a list member, once selected, cannot be
selected again.
Needs"KnoxProb7`Utilities`"
KPermutations1, 2, 3, 4, 5, 2
1, 2, 2, 1, 1, 3, 3, 1, 1, 4, 4, 1,
1, 5, 5, 1, 2, 3, 3, 2, 2, 4, 4, 2, 2, 5,
5, 2, 3, 4, 4, 3, 3, 5, 5, 3, 4, 5, 5, 4
20
As the Length output reports, there are 20 such permutations. These 20 are the outcomes that
form the sample space of the experiment. The randomness of the drawing makes it quite
reasonable to take the theoretical approach to assigning probabilities. Each outcome should
have the same likelihood. Since the total sample space has probability 1, the common
likelihood of the simple events, say x, satisfies
x + x + ... + x (20 times) = 1,  so that  20 x = 1,  that is,  x = 1/20     (6)
In other words, our probability measure on the sample space is defined on outcomes as:
P[{1, 2}] = P[{2, 1}] = P[{1, 3}] = ... = P[{5, 4}] = 1/20     (7)
and for composite events, we can again total the probabilities of outcomes contained in them.
Activity 2 In the example of selecting two positive integers from the first five, find the
probability that 1 is in the sample. Explain why your result is intuitively reasonable.
For instance,
P[either 1 or 2 is in the sample]
   = P[{{1, 2}, {2, 1}, {1, 3}, {3, 1}, {1, 4}, {4, 1}, {1, 5}, {5, 1},
        {2, 3}, {3, 2}, {2, 4}, {4, 2}, {2, 5}, {5, 2}}]
   = 14/20
and
P[2 is not in sample]
   = P[{{1, 3}, {3, 1}, {1, 4}, {4, 1}, {1, 5}, {5, 1}, {3, 4}, {4, 3}, {3, 5},
        {5, 3}, {4, 5}, {5, 4}}]
   = 12/20
By counting outcomes you can check that P[2 is in sample] = 8/20, which is complementary
to P[2 is not in sample] = 12/20 from above. Also notice that
P[1 in sample] = P[2 in sample] = 8/20
however,
P[1 in sample or 2 in sample] = 14/20 ≠ 8/20 + 8/20
We see that the probability of the event "either 1 is in the sample or 2 is in the sample" is not
simply the sum of the individual probability that 1 is in the sample plus the probability that 2
is in it. The reason is that in adding 8/20 + 8/20 we let the outcomes {1, 2} and {2, 1}
contribute twice to the sum. This overstates the true probability, but we now realize that it is
easy to correct for the double count by subtracting 2/20, which is the amount of probability
contributed by the doubly counted outcomes. (See Theorem 2 below.)
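The double-count correction can also be verified by brute force; this check is not in the original text, and it simply counts the qualifying outcomes in the list produced earlier by KPermutations.

perms = KPermutations[{1, 2, 3, 4, 5}, 2];
Count[perms, s_ /; (MemberQ[s, 1] || MemberQ[s, 2])]/Length[perms]
(* 7/10, i.e. 14/20, which equals 8/20 + 8/20 - 2/20 *)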
Activity 3 Use Mathematica to display the ordered samples of size 3 without replace-
ment from {1, 2, 3, 4, 5, 6}. What is the probability that 1 is in the sample? What is the
probability that neither 5 nor 6 is in the sample?
Armed with the experience of our examples, let us now develop a mathematical
model for probability, events, and sample spaces that proceeds from a minimal set of axioms
to derive several of the most important basic properties of probability. A few other properties
are given in the exercises.
Definition 1. A sample space Ω of a random experiment is a set whose elements ω are
called outcomes.
Definition 2 below is provisional; we will have to be more careful about what an
event is when we come to continuous probability.
Definition 2. An event A is a subset of the sample space Ω; hence an event is a set of
outcomes. The collection of all events is denoted by 𝓕.
Suppose for example we consider another sampling experiment in which three letters
are to be selected at random, without replacement and without regard to order, from the set of
letters {a, b, c, d}. The Combinatorica` package in Mathematica (which is loaded automati-
cally when you load KnoxProb7`Utilities`) has the commands

KSubsets[list, k]     and     Subsets[list]

which, respectively, return all subsets of size k taken from the given list of objects, and all
subsets of the given universal set. We can use these to find the sample space Ω of the experi-
ment, and the collection of events 𝓕:
KSubsets[{a, b, c, d}, 3]
Subsets[%]
Notice that the empty set is the first subset of Ω that is displayed; next are the simple
events {{a, b, c}}, {{a, b, d}}, etc.; next are the two-element composite events; and so forth.
In all subsequent work we will continue a practice that we have done so far, which is
to blur the distinction between outcomes ω, which are members of the sample space, and
simple events {ω}, which are sets consisting of a single outcome. This will enable us to speak
of the "probability of an outcome" when in fact probability is a function on events, as seen in
the next definition.
Definition 3. A probability measure P is a function taking the family of events 𝓕 to the real
numbers such that
(i) P[Ω] = 1;
(ii) For all A ∈ 𝓕, P[A] ≥ 0;
(iii) If A1, A2, ... is a sequence of pairwise disjoint events, then
      P[A1 ∪ A2 ∪ ...] = Σi P[Ai].
Properties (i), (ii), and (iii) are called the axioms of probability. Our mathematical
model so far corresponds with our intuition: a random experiment has a set of possible
outcomes called the sample space, an event is a set of these outcomes, and probability
attaches a real number to each event. Axiom (i) requires the entire sample space to have
probability 1, axiom (ii) says that all probabilities are non-negative numbers, and axiom (iii)
says that when combining disjoint events, the probability of the union is the sum of the
individual event probabilities. For finite sample spaces this axiom permits us to just define
probabilities on outcomes, and then give each event a probability equal to the sum of its
(disjoint) outcome probabilities. We will encounter various ways of constructing probability
measures along the way, which we must verify to satisfy the axioms. Once having done so,
other properties of probability listed in the theorems below are ours for free. They follow
from the axioms alone, and not from any particular construction, which of course is the huge
advantage of basing a mathematical model on the smallest possible collection of axioms.
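For a concrete finite illustration (a sketch, not from the text), here is how outcome probabilities determine event probabilities for the fair die example of Section 1.1, where each of the six faces gets probability 1/6.

outcomeprob[w_] := 1/6;                         (* equally likely faces *)
eventprob[A_List] := Total[outcomeprob /@ A];   (* total the outcome probabilities *)
eventprob[{1, 3, 5}]                            (* P[odd number] = 1/2 *)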
Theorem 1 below was anticipated earlier. It is generally used in cases where the comple-
ment of an event of interest is easier to handle than the original event. If we have the
probability of one of the two, then we have the other by subtracting from 1.

Theorem 1. For any event A, P[A^c] = 1 − P[A].
The next result is based on the idea that the total probability of a union can be found
by adding the individual probabilities and correcting if necessary by subtracting the probabil-
ity in the double-counted intersection region.
Theorem 2. For any two events A and B, P[A ∪ B] = P[A] + P[B] − P[A ∩ B]. In
particular, if A and B are disjoint, then P[A ∪ B] = P[A] + P[B].
Figure 1.3 - The probability of a union is the total of the event probabilities minus the intersection probability
Proof. (See Figure 3.) Since A can be expressed as the disjoint union A = (A ∩ B) ∪ (A ∩ B^c),
and B can be expressed as the disjoint union B = (A ∩ B) ∪ (B ∩ A^c), we have, by
axiom (iii),

   P[A] + P[B] = P[A ∩ B] + P[A ∩ B^c] + P[A ∩ B] + P[B ∩ A^c]
               = P[A ∩ B^c] + P[B ∩ A^c] + 2 P[A ∩ B]                              (10)

Therefore, since A ∪ B is the disjoint union (A ∩ B^c) ∪ (A ∩ B) ∪ (B ∩ A^c), axiom (iii)
also gives

   P[A ∪ B] = P[A ∩ B^c] + P[A ∩ B] + P[B ∩ A^c]                                   (11)
Combining (10) and (11) proves the first assertion. The second assertion follows directly as
a specialization of axiom (iii) to two events. Alternatively, if A and B are disjoint, then
A ∩ B = ∅, and by Exercise 11, P[∅] = 0.
Activity 4 Try to devise a theorem analogous to Theorem 2 for a three-set union.
(Hint: if you subtract all paired intersection probabilities, are you doing too much?) See
Exercise 14 for a statement of the general result.
You may be curious that the axioms only require events to have non-negative
probability. It should also be clear that events must have probability less than or equal to 1,
because probability 1 represents certainty. In the exercises you are asked to give a rigorous
proof of this fact, stated below as Theorem 3.
Theorem 3. For any event A, P[A] ≤ 1.
Another intuitively obvious property is that if an event B has all of the outcomes in it
that another event A has, and perhaps more, then B should have at least as high a probability as A.
This result is given as Theorem 4, and the proof is Exercise 12.
Theorem 4. If A and B are events such that A ⊆ B, then P[A] ≤ P[B].
Theorems 3 and 4 help to reassure us that our theoretical model of probability
corresponds to our intuition, and they will be useful in certain proofs. But Theorems 1 and 2
on complements and unions are more useful in computations. We would like to finish this
section by adding a third very useful computational result, usually called the Law of Total
Probability. Basically it states that we can "divide and conquer," that is break up the
computation of a complicated probability P[A] into a sum of easier probabilities of the form
P[A ∩ B]. The imposition of condition B in addition to A presumably simplifies the computa-
tion by limiting or structuring the outcomes in the intersection. (See Figure 4.) All of our
computational results will be used regularly in the rest of the text, and the exercise set for
this section lets you begin to practice with them.
Theorem 5 (Law of Total Probability). If B1, B2, ..., Bn are pairwise disjoint events whose
union is Ω, then for any event A,

   P[A] = Σ_{i=1}^{n} P[A ∩ Bi]

[Figure 1.4 - An event A cut into pieces by the partition events B1, B2, ..., Bn]
Proof. We will use induction on the number of events Bi beginning with the base case n =
2. In the base case, it is clear that B2 = B1^c, and therefore A = (A ∩ B1) ∪ (A ∩ B1^c), and the two
sets in this union are disjoint. By Theorem 2,

   P[A] = P[A ∩ B1] + P[A ∩ B1^c] = P[A ∩ B1] + P[A ∩ B2]
For the inductive step, suppose that the result holds for partitions into n events, and let
B1, B2, ..., Bn+1 be a partition of Ω into n + 1 events. Define Ci = Bi for i = 1, ..., n − 1 and
Cn = Bn ∪ Bn+1. Then the Ci's are also a partition of Ω (see Activity 5 below). By the
inductive hypothesis,

   P[A] = Σ_{i=1}^{n} P[A ∩ Ci]
        = Σ_{i=1}^{n−1} P[A ∩ Bi] + P[A ∩ (Bn ∪ Bn+1)]
        = Σ_{i=1}^{n−1} P[A ∩ Bi] + P[(A ∩ Bn) ∪ (A ∩ Bn+1)]                      (13)
        = Σ_{i=1}^{n−1} P[A ∩ Bi] + P[A ∩ Bn] + P[A ∩ Bn+1]
        = Σ_{i=1}^{n+1} P[A ∩ Bi]
Activity 5 Why are the Ci 's in the proof of the Law of Total Probability a partition of
Ω? Why is the fourth line of (13) true?
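As a small numerical illustration (not in the original text), the Law of Total Probability can be checked for the fair die with the partition B1 = {1, 2}, B2 = {3, 4}, B3 = {5, 6} and the event A = "odd number rolled".

dieprob[A_List] := Length[A]/6;                  (* equally likely faces *)
oddevent = {1, 3, 5};
partition = {{1, 2}, {3, 4}, {5, 6}};
Total[dieprob[Intersection[oddevent, #]] & /@ partition] == dieprob[oddevent]   (* True *)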
Exercises 1.2
1. Consider the experiment of rolling one red die and two white dice randomly. Observe the
number of the face that lands up on the red die and the sum of the two face up numbers on
the white dice.
(a) Describe the sample space, giving the format of a typical outcome and two specific
examples of outcomes.
(b) Define a probability measure on this sample space. Argue that it satisfies the three
axioms of probability.
[Table for Exercise 2: columns labeled by the red die value 1-6, rows labeled by the white dice sum 2-12]
(b) Define a probability measure on this sample space. Argue that it satisfies the three
axioms of probability.
(c) What is the probability that the random guesser gets at least two questions right?
4. (Mathematica) Use Mathematica to display the sample space of all possible random
samples without order and without replacement of three names from the list of names {Al,
Bubba, Celine, Delbert, Erma}. How many events are in the collection of all events 𝓕?
Describe a probability measure for this experiment, and find the probability that either
Bubba or Erma is in the sample.
5. Explain why each of the following attempts P1 and P2 is not a good definition of a
probability measure on the space of outcomes Ω = {a, b, c, d, e, f, g}.
outcome    P1     P2
   a      .12    .16
   b      .05    .20
   c      .21    .20
   d      .01    .12
   e      .04    .08
   f      .28    .15
   g      .13    .12
6. (Mathematica) Use Mathematica to display the sample space of all possible random
samples in succession and without replacement of two numbers from the set {1, 2, ..., 10}.
Describe a probability measure, and find the probability that 10 is not in the sample.
7. A 1999 Internal Revenue Service study showed that 44.9 million U.S. taxpayers reported
earnings of less than $20,000, 20.4 million earned between $20,000 and $30,000, 16.2
million earned between $30,000 and $40,000, 41.9 million earned between $40,000 and
$100,000, and 10.5 million earned more than $100,000.
(a) Write the outcomes for the experiment of sampling one taxpayer at random, and define
a probability measure. In such a random sample of one taxpayer, what is the probability that
the reported earnings is at least $30,000?
(b) Write the outcomes for the experiment of sampling two taxpayers at random and with
replacement, that is, once a taxpayer has been selected that person goes back into the pool
and could be selected again. Define a probability measure. (Hint: for a succession of two
events which do not depend on one another, it makes sense to compute the probability that
they both occur as the product of their probabilities. You can convince yourself of this by
considering a very simple experiment like the flip of two coins.) Find the probability that at
least one of the two earned more than $100,000.
15. The general result for the probability of a union, which has Theorem 2 and Exercise 14
as special cases, is called the law of inclusion and exclusion. Let A1 , A2 , A3 , ..., An be a
collection of n events. To simplify notation, let Aij stand for a typical intersection of two of
them, Ai ∩ Aj; let Aijk stand for a three-fold intersection Ai ∩ Aj ∩ Ak; etc. The law states that
P[A1 ∪ A2 ∪ A3 ∪ ... ∪ An]
   = Σi P[Ai] − Σ_{i&lt;j} P[Aij] + Σ_{i&lt;j&lt;k} P[Aijk] − ... + (−1)^{n+1} P[A1 ∩ A2 ∩ A3 ∩ ... ∩ An]
In other words, one alternately adds and subtracts all k-fold intersection probabilities as k
goes from 1 to n. Use this result in the following classical problem. Four men leave their
hats at a hat check station, then on departing take a hat randomly. Find the probability that
at least one man takes his own hat.
16. Recall Exercise 5 of Section 1.1 (the blackjack deal). Use the Law of Total Probability,
Theorem 5, to find P[queen on 2nd] in a well-justified way.
17. There are two groups of experimental subjects, one with 2 men and 3 women and the
other with 3 men and 2 women. A person is selected at random from the first group and
placed into the second group. Then a person is randomly selected from the second group.
Use the Law of Total Probability to find the probability that the person selected from the
second group is a woman.
In this book we will include as appropriate problems taken from the list of sample
questions for the Society of Actuaries/Casualty Actuarial Society qualifying exam P
(Probability). These problems have been obtained with permission from the societies at the
website www.soa.org/files/pdf/P-09-05ques.pdf.
Sample Problems from Actuarial Exam P
18. The probability that a visit to a primary care physician's (PCP) office results in neither
lab work nor referral to a specialist is 35%. Of those coming to a PCP's office, 30% are
referred to specialists and 40% require lab work. Determine the probability that a visit to a
PCP's office results in both lab work and referral to a specialist.
19. An urn contains 10 balls: 4 red and 6 blue. A second urn contains 16 red balls and an
unknown number of blue balls. A single ball is drawn from each urn. The probability that
both balls are the same color is 0.44. Calculate the number of blue balls in the second urn.
20. An auto insurance company has 10,000 policyholders. Each policyholder is classified as:
(i) young or old; (ii) male or female; and (iii) married or single. Of these policyholders, 3000
are young, 4600 are male, and 7000 are married. The policyholders can also be classified as
1320 young males, 3010 married males, and 1400 young married persons. Finally, 600 of
the policyholders are young married males. How many of the company's policyholders are
young, female, and single?
21. An insurer offers a health plan to the employees of a large company. As part of this plan,
the individual employees may choose exactly two of the supplementary coverages A, B, and
C, or they may choose no supplementary coverage. The proportions of the company's
employees that choose coverages A, B, and C are 1/4, 1/3, and 5/12 respectively. Determine
the probability that a randomly chosen employee will choose no supplementary coverage.
1.3 Simulation
To simulate means to reproduce the results of a real phenomenon by artificial means.
We are interested here in phenomena that have a random element. What we really simulate
are the outcomes of a sample space and/or the states of a random variable in accordance with
the probability measure or distribution that we are assuming. Computers give a fast and
convenient way of carrying out these sorts of simulations.
There are several reasons to use simulation in problems. Sometimes we do not yet
have a good understanding of a phenomenon, and observing it repeatedly may give us
intuition about a question related to it. In such cases, closed form analytical results might be
derivable, and the simulation mostly serves the purpose of suggesting what to try to prove.
But many times the system under study is complex enough that analysis is either very
difficult or impossible. (Queueing problems, treated in Section 6.3, are good illustrations of
both situations.) Then simulation is the only means of getting approximate information, and
the tools of probability and statistics also allow us to measure our confidence in our conclu-
sions. Another reason to simulate is somewhat surprising. In some inherently non-probabilis-
tic problems we can use a simulated random target shooting approach to gain information
about physical measurements of size and shape. A simple example of such a problem
solving method (traditionally called Monte Carlo simulation in reference to the European
gaming center) is the example on area of a region in Exercise 8.
I have chosen to give the subject emphasis and early treatment in this book for
another reason. There is more at stake for you intellectually than gaining intuition about
systems by observing simulations. Building an algorithm to solve a problem by simulation
helps to teach you about the subtle concepts of the subject of probability, such as probability
measures and random variables. Thus, here and elsewhere in this book I will be asking you
to carefully study the details of programs I have written, and to write some simulation
programs in Mathematica yourself. You will invest some time and mental effort, but the
payoff will be great.
We have already experienced simulation in the form of selecting ordered samples
without replacement of two numbers from the set {1, 2, 3, 4, 5}. In that context we looked
at the empirical histogram of the minimum of the two numbers. We will treat simulations of
samples again in more detail in the next section. For the time being let us see how to use
Mathematica's tools for generating streams of random numbers to solve problems.
Most of the kind of simulations that we will do are adaptations of the simple process
of choosing a random real number in the interval [0, 1]. The word "random" here could be
replaced by the more descriptive word "uniform," because the assumption will be that
probability is distributed uniformly through [0, 1], with sets getting large probability only if
their size is large, not because they happen to be in some favored location in [0, 1]. More
precisely, we suppose that the sample space of the random experiment is Ω = [0, 1], the
collection of events consists of all subsets of [0, 1] for which it is possible to define their
length, and the probability measure P gives probability to a set equal to its length. Then for
instance:
P[[0, 1]] = 1,   P[[1/2, 3/4]] = 1/4,   P[[0, 1/4] ∪ [2/3, 1]] = 1/4 + 1/3 = 7/12,   P[{5/6}] = 0
UniformDistribution[{a,b}]
is the syntax for the uniform distribution on the interval [a,b], and

RandomReal[dist]     and     RandomReal[dist, n]

return, respectively, one or a list of n random numbers from the given distribution. Here are
some examples of their use.

RandomReal[UniformDistribution[{2, 5}]]
4.42891

RandomReal[UniformDistribution[{2, 5}]]
2.00398
Notice that new random numbers in [2, 5] are returned each time that RandomReal is called.
Incidentally, RandomReal may be used with an empty argument when the distribution being
sampled from is the uniform distribution on [0, 1], which is what we will usually need.
How do Mathematica and other programs obtain such random numbers? There are
several means that are possible, but the most common type of random number generator is
the linear congruential generator which we describe now. The ironic fact is that our
supposedly random stream of numbers is only simulated in a deterministic fashion by such a
generator. Beginning with a non-negative integer seed, each step first updates the seed by the
linear congruence

   new seed = (multiplier · seed + increment) mod modulus     (1)

where multiplier, increment, and modulus are positive integer constants. Then, using the
newly computed seed, return the number

   rand = seed / modulus     (2)
Subsequent random numbers are found in the same way, updating seed by formula (1) and
dividing by modulus. Notice that because the initial seed is non-negative and all of the
constants are positive, seed will stay non-negative, which makes rand ≥ 0 as well. And since
the mod operation returns a value less than modulus, seed will be less than modulus, hence
rand will be less than 1. So each pseudo-random number that we produce will lie in [0, 1).
Now the larger the value of modulus, the more possible values of seed there can be;
hence, the more real numbers that can be returned when formula (2) is applied. In fact, there
would be exactly modulus number of possible seed values. But modulus, multiplier, and
increment must be carefully chosen to take advantage of this, otherwise, the sequence of
pseudo-random numbers could cycle with a very small period. As a simple example, if the
constants were as follows:
new seed = (2 · seed + 2) mod 8
then possible seed values are 0, 1, 2, 3, 4, 5, 6, and 7, but it is easy to check that for these
seeds the new seeds are, respectively, 2, 4, 6, 0, 2, 4, 6, 0, so that after the first value of seed,
subsequent seeds can only take on one of four values. Even worse, should seed ever equal 6,
then new seeds will continue to be 6 forever, because (2 × 6 + 2) mod 8 is again 6.
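This degenerate behavior is easy to see directly in Mathematica; the short check below is not in the original text, and it simply iterates the toy recursion starting from the seed 1.

NestList[Mod[2 # + 2, 8] &, 1, 8]   (* {1, 4, 2, 6, 6, 6, 6, 6, 6}: the seed gets stuck at 6 *)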
Therefore, the choice of the constants is critical to making a stream of numbers that
has the appearance of randomness. There is a lot of number theory involved here which we
cannot hope to cover. The book by Rubinstein (see References) gives an introduction. It is
reported there that the values
multiplier = 2^7 + 1,  increment = 1,  modulus = 2^35
have been successful, but these are not the only choices. We can build our own function
similar to the RandomReal command in Mathematica to generate a list of uniform[0,1]
numbers using these values as follows.
MyRandomArray[initialseed_, n_] :=
 Module[{seed, rand, thearray, mult, inc, modul, i},
  mult = 2^7 + 1;
  inc = 1;
  modul = 2^35;
  thearray = {};
  seed = initialseed;
  Do[seed = Mod[mult*seed + inc, modul];
   rand = N[seed/modul];
   AppendTo[thearray, rand], {i, 1, n}];
  thearray]
The module begins with the list of local variables used in the subsequent lines, and sets the
values above for the multiplier, here called mult, the increment inc, and the modulus modul.
After initializing an empty list of random numbers and setting the initial value of seed
according to the input initialseed, the module enters a loop. In each of the n passes through
the loop, three things are done. The seed is updated by the linear congruence relation (1), the
new random number rand is computed by formula (2), and then rand is appended to the
array of random numbers. Here is a sample run. You should try running the command again
with the same initial seed to see that you get the same sequence of pseudo-random numbers,
then try a different seed.
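The sample run itself does not survive in this copy. A call of the following form, with an arbitrarily chosen seed, returns a list of five pseudo-random numbers; repeating it with the same seed returns exactly the same list, while a different seed gives a different list.

MyRandomArray[123456789, 5]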
Activity 2 Revise the random number generating command above using the following
constants, taken from a popular computer science text.
Mathematica gives you control over the seed used by its built-in generator through the command

SeedRandom[n]
which sets the initial seed value as the integer n. If you use an empty argument to SeedRan-
dom, it resets the seed using the computer's clock time. In the sequence of commands below,
the seed is set to 79, two random numbers are generated, then the seed is reset to 79, and the
same random number .624058 is returned as was returned the first time. We then reset the
seed "randomly" using the computer clock, and the first random number that is returned is
something other than .624058.
SeedRandom[79];
RandomReal[]
RandomReal[]
0.624058
0.782677

SeedRandom[79];
RandomReal[]
0.624058

SeedRandom[];
RandomReal[]
0.295358
Activity 3 Perform a similar test with SeedRandom using a seed of 591 and requesting
a RandomReal[UniformDistribution[{0,1}], 5]. Reset the seed by computer clock time
and call for another random list of five numbers to see that different random numbers are
returned.
Example 1 It is time to build a simulation model for an actual system. A random walk on
the line is a random process in which at each time instant 0, 1, 2, … an object occupies a
point with integer coordinates shown as circled numbers in Figure 5.
Figure 1.5 - Schematic diagram of a random walk
The position on the line cannot be foretold with certainty, but it satisfies the condition that if
the position is n at one time instant, then it will either be n + 1 with probability p or n − 1
with probability 1 − p at the next time instant, as indicated by the annotated arrows in the
figure. Variations are possible, including allowing the object to wait at its current position
with some probability, enforcing boundary behavior such as absorption or reflection to
confine the random walk to a bounded interval, and increasing the dimension of the set on
which the random walk moves to 2, 3, or higher. Random walks have been used as models
for diffusion of heat, the motion of economic quantities such as stock prices, the behavior of
populations, and the status of competitive games among many other applications, and their
fascinating properties have kept probabilists busy for many decades.
For our purposes suppose we are interested in estimating the probability that a
random walk which starts at a point n > 0 reaches a point M > n before it reaches 0. This is a
famous problem called the gambler's ruin problem, which really asks how likely it is that a
gambler's wealth that follows a random walk reaches a target level of M before bankruptcy.
We have an initial position n followed by a sequence of random positions X1 , X2 , X3 ,
… . The underlying randomness is such that at each time a simple random experiment
determines whether the move is to the right (probability p) or the left (probability 1 − p). So
to simulate the random walk we only need to be able to iteratively simulate the simple
experiment of adding 1 to the current position with probability p and subtracting 1 with
probability 1 - p. We will stop when position 0 or position M is reached, and make note of
which of the two was the final position. By running such a simulation many times and
observing the proportion of times that the final state was M, we can estimate the probability
that the final state is M.
To implement the plan in the preceding paragraph, we need a way of randomly
returning +1 with probability p and -1 with probability 1 - p. But since we know how to
simulate uniform real random numbers on [0,1], and the probability of an interval is equal to
its length, let's say that if a simulated uniform random number is less than or equal to p then
the result is +1, otherwise -1. Then we have the desired properties of motion, because
P[move right] = P[[0, p]] = length([0, p]) = p
P[move left] = P[(p, 1]] = length((p, 1]) = 1 - p
Here is a Mathematica function that does the job. A uniform random number is
selected and compared to p. If the comparison is true, then +1 is returned, else -1 is
returned. So that we can simulate random walks with different p values, we make p an
argument. One function call with p = .5 is displayed; you should try a few others.
StepSize[p_] := If[RandomReal[] <= p, 1, -1]
StepSize[.5]
1
To simulate the process once, we let the initial position n be given, as well as the
right step probability p and the boundary point M. We repeatedly move by the random
StepSize amount until the random walk reaches 0 or M. The function below returns the
complete list of states that the random walk occupied. Look carefully at the code to see how
it works. The variable statelist records the current list of positions occupied by the random
walk, the variable done becomes true when either the boundary 0 or M is reached, and
variables currstate and nextstate keep track of the current and next position of the random
walk. (The || syntax means logical "or.")
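The SimulateRandomWalk code cell is not reproduced in this excerpt. A sketch consistent with the description above (arguments p, initial state n, and boundary M, matching the call SimulateRandomWalk[0.5, 4, 8] below) might look like this; the author's actual code may differ in detail.
SimulateRandomWalk[p_, n_, M_] :=
  Module[{statelist = {n}, done = False, currstate = n, nextstate},
    While[!done,
      nextstate = currstate + StepSize[p];        (* take one random step *)
      AppendTo[statelist, nextstate];             (* record the new position *)
      currstate = nextstate;
      done = (currstate == 0 || currstate == M)   (* stop at either boundary *)
    ];
    statelist]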
Below is an example run of the command. We include a list plot of the sequence of
positions to help your understanding of the motion of the random walk. You should rerun
this command, changing parameters as desired to get more intuition about the behavior of
the random walk. A more systematic investigation is suggested in Exercise 6. We set a
random seed first for reproducibility, but if you are in the electronic version of the book and
want to see different paths of the random walk, just delete the SeedRandom command.
SeedRandom[464758];
thewalk = SimulateRandomWalk[0.5, 4, 8]
ListPlot[thewalk, PlotStyle -> {PointSize[0.015], Black}]
{4, 5, 4, 5, 6, 7, 6, 5, 4, 5, 6, 5, 6,
 5, 4, 5, 6, 5, 6, 5, 4, 3, 4, 3, 2, 1, 2, 3,
 4, 5, 6, 5, 4, 3, 4, 3, 4, 3, 2, 3, 2, 1, 0}
[ListPlot of the walk: position (0 to 7) versus step number]
Activity 4 Roughly how would you describe the dependence of the probability that the
random walk reaches M before 0 on the value of p? How does it depend on the value of
n? How could you use SimulateRandomWalk to check your intuition about these
questions?
Continuing the random walk example, to estimate the probability of hitting M before
0 we must repeat the simulation many times, with new streams of random numbers. We
merely take note of the last element of the simulated list of positions, which will be either 0
or M, count the number of times that it is M, and divide by the number of repetitions to
estimate the probability. Here is a command that does this. Any input values we like for p, n,
and M can be used. We also let the number of replications of the simulation be a fourth
argument.
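The command itself does not appear in this excerpt; the following is a sketch consistent with the description in the next paragraph (counter absatM, position list poslist, last position lastpos), not necessarily the author's exact code.
AbsorptionPctAtM[p_, n_, M_, numreps_] :=
  Module[{absatM = 0, poslist, lastpos},
    Do[
      poslist = SimulateRandomWalk[p, n, M];   (* one full path of the walk *)
      lastpos = Last[poslist];                 (* final state, either 0 or M *)
      If[lastpos == M, absatM++],              (* count absorptions at M *)
      {numreps}];
    N[absatM/numreps]]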
The variable absatM, initialized to zero, keeps track of how many times so far the walk has
been absorbed at M. The algorithm is simple: we repeat exactly numreps times the actions of
simulating the list of positions (called poslist) of the random walk, then pick out the last
element lastpos, and increment absatM if necessary. For p = .4, n = 3, and M = 5 here are the
results of two runs of 100 simulated random walks each.
SeedRandom[342197];
AbsorptionPctAtM[.4, 3, 5, 100]
AbsorptionPctAtM[.4, 3, 5, 100]
0.39
0.42
There is good consistency in the results of our runs. About 40% of the time, for these
parameters, the random walk reaches M = 5 before it hits 0. There will be a way of finding
the probability exactly by analytical methods later. (See Exercise 13.)
Exercises 1.3
1. For a uniform distribution on the interval [a, b] of real numbers, find the open interval probability P[(c, d)], the closed interval probability P[[c, d]], and the probability of a single point P[{d}], where a < c < d < b.
2. (Mathematica) Use the MyRandomArray command from the section with initial seeds of
2, 5, 11, and 25 to generate lists of 10 pseudo-random numbers. Look carefully at your lists.
(It may help to ListPlot them.) Do the sequences appear random? Where is the source of the
problem, and what might be done to correct it?
3. (Mathematica) Write a command to simulate an outcome from a finite sample space with
outcomes a, b, c, and d with probabilities p1 , p2 , p3 , p4 , respectively.
4. (Mathematica) Write a command to simulate a two-dimensional random walk starting at
(0, 0). Such a random process moves on the grid of points in the plane with integer coordi-
nates such that wherever it is now, at the next instant of time it moves to one of the immedi-
ately adjacent grid points: right with probability r, left with probability l, up with probability
u, and down with probability d, where r + l + u + d = 1.
5. Consider a linear congruential pseudo-random number generator whose sequence of seed
values satisfies the recursive relation
X_{n+1} = (a X_n + c) mod m
Assume that m = 2^p for some positive integer p. The generator is called full period if given any initial seed all of the values 0, ..., m - 1 will appear as seed values before repetition occurs. Show that if the generator is full period, then c must be an odd number.
6. (Mathematica) For the random walk with p = .3 and M = 6, use the AbsorptionPctAtM
command in the section to try to estimate the functional dependence of the probability of
absorption at M on the starting state n.
32 Chapter 1 Discrete Probability
7. (Mathematica) Use Mathematica to simulate ten samples of size 50 from the uniform
distribution on [0, 10]. Study the samples by plotting histograms with 5 rectangles of each of
the ten samples. How many observations do you expect in each category? Comment on the
consistency of the histograms from one sample to another.
8. (Mathematica) While simulation is most productively used with random systems such as
the random walk, it is interesting to note that certain numerical approximations of physical
quantities like area and volume can be done by Monte Carlo techniques. Develop a Mathe-
matica command to approximate the area between the function f(x) = e^(-x^2) and the x-axis on the interval [0,1], which is graphed below. Compare the answers you get from repeated
simulations to a numerical integration using Mathematica's NIntegrate command. (Hint:
The square [0,1]×[0,1] contains the desired area A, and the area of the square is 1. If you
figuratively shoot a random arrow at the square, it should fall into the shaded region a
proportion A of the time. Simulate a large number of such random arrows and count the
proportion that fall into A.)
[Graph of f(x) on [0, 1], with the area under the curve shaded]
9. Referring to Exercise 8, why would you not be able to use Monte Carlo simulation to find
the area bounded by the x and y-axes, the line x = 1, and the curve y = 1/x? Could you do it
for the curve y = 1/√x?
10. (Mathematica) A Galton-Watson board is a triangular grid of pegs such as the one
shown, encased in plastic or glass and stood on end. A large number of small marbles escape
from a hole in the top exactly in the middle of the board as shown, bounce from side to side
with equal probability as they fall down the board, and eventually land in the bins in the
bottom. Build a Galton-Watson simulator for the small five level board shown using
arbitrarily many marbles, run it several times, and comment on the shape that is typically
formed by the stacks of marbles that gather in the bottom bins.
11. (Mathematica) A dam on a 50-foot deep lake is operated to let its vents pass water
through at a constant rate so that 1 vertical inch of lake depth is vented per week. The water
begins at a level of 5 feet below the top of the dam. A rainy season begins in which each
week a random and uniformly distributed depth of water between 2 and 5 inches is added to
the lake. Simulate this process, and study how long it takes for the water to overflow the top
of the dam. Do the simulations match your intuition and expectations about the situation?
12. (Mathematica) A device uses two components in such a way that the device will no
longer function if either of the components wears out. Suppose that each component lifetime
is random and uniformly distributed on the interval of real numbers [10, 20]. Write a
command to simulate the lifetime of the device many times over and investigate the probabil-
ity law of the device lifetime empirically. About what is the average lifetime?
13. In the random walk problem it is possible to make progress toward an analytical solution to the computation of f_M(i) = P[walk reaches M before 0 beginning at state i] using recursion. First, solve the trivial case M = 2, that is, compute intuitively f_2(0), f_2(1), f_2(2). Then try to extend to the case M = 3 by expressing each f_3(i) in terms of f_3 values for states neighboring state i.
14. (Mathematica) Two small rental car establishments in a city are such that cars rented at
one location may be returned to the other. Suppose that as a simplified model we assume that
from one week to another, at each location, a random number of cars move from that
location to the other location. The two locations always keep at least one vehicle on hand, so
the number of vehicles moving out of each can be no greater than the initial inventory level
minus 1. Write a function in Mathematica that takes as arguments the initial car inventory
levels of each location and the number of time steps to simulate. This program should
simulate successive inventory levels of each location, and return the list of inventory levels
for location 1. Use it to examine both time trends and the distribution of inventory levels in
scenarios in which the initial inventory levels are equal and in which they are drastically
unequal. In the latter scenario, do the inventory levels seem to equalize, and if so how many
time steps does it take?
15. (Mathematica) Consider again the gambler's ruin problem with p = .4, n = 3, and M = 5.
Use simulation to estimate the probability distribution of the number of steps it takes the
random walk to reach one of the boundary states.
There are several main types of sampling situations, depending on whether the
objects being sampled are taken in sequence or just in a batch, and whether the objects are
replaced as sampling proceeds, and so become eligible to be sampled again, or the sampling
is done without replacement.
Needs"KnoxProb7`Utilities`";
KPermutationsa, b, c, d, e, 2
a, b, b, a, a, c, c, a, a, d, d, a,
a, e, e, a, b, c, c, b, b, d, d, b, b, e,
e, b, c, d, d, c, c, e, e, c, d, e, e, d
RandomKPermutation[list, k]
which randomly selects a permutation of k elements from the given list, as shown below.
RandomKPermutation[{a, b, c, d, e}, 3]
{e, b, c}
Do the following activity to begin the problem of finding the number of permutations
of k objects from n.
sampled item, which can be done in n ways, and then the second, which can be done in n - 1 ways. The result is therefore n(n - 1) by the multiplication principle. Here is the general
theorem, which can be proved similarly (see Exercise 4).
Theorem 1. The number of ways of selecting k objects in sequence and without replacement
from a population of n objects is
Pn,k = n(n - 1)(n - 2) ⋯ (n - k + 1) = n!/(n - k)!   (1)
where n! = n(n - 1)(n - 2) ⋯ 2·1.
Example 1 In the example above of selecting 2 objects randomly in sequence and without replacement from the set {a, b, c, d, e}, what is the probability that the sample will include neither d nor e?
The size of the sample space is n(Ω) = 5·4 = 20 by Theorem 1. To count the number of outcomes in the event that the sample includes neither d nor e, note that in this case the sample must be drawn from the subset {a, b, c}. There are 3·2 = 6 ways of drawing such a sample, again by Theorem 1. Therefore, the probability of this event is 6/20 = .3.
Example 2 A hand of blackjack is dealt in sequence and without replacement so that the
first card is face down and the second is face up. The hand is referred to as a blackjack hand
if it contains an ace and a face card, that is, a king, queen, or jack. Let us find the probability
of being dealt a blackjack.
The sample space is Ω = {(x, y) : x, y are cards and x ≠ y}. Since there are 52 cards in an ordinary deck, the size of the sample space is
n(Ω) = 52·51
by Theorem 1. There are two disjoint cases making up the event A = "hand is a blackjack", namely, the cards are dealt in the order ace, face card or they are in the order face card, ace. The number of outcomes in the event A is therefore the total
n(A) = 4·12 + 12·4
since there are 4 aces and 12 face cards in the deck. Hence the probability of a blackjack is
about .036, as below.
N[(4*12 + 12*4)/(52*51)]
0.0361991
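The SimulateRange command discussed next is not reproduced in this excerpt. A sketch consistent with the call SimulateRange[100, 1, 50, 10] below, assuming the arguments are the number of replications, the smallest and largest population integers, and the sample size, and that each sample is drawn in a batch without replacement, is:
SimulateRange[numreps_, low_, high_, k_] :=
  Table[
    With[{samp = RandomSample[Range[low, high], k]},  (* one batch sample of size k *)
      Max[samp] - Min[samp]],                          (* its range: max minus min *)
    {numreps}]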
Below is one run of the command for 100 replications of a sample of 10 integers from the
first 50. You should try sampling again a few times. You will see that the histogram is
rather consistent in its shape: it is asymmetrical with a longer left tail than right tail, it has a
tall peak in the mid 40's, which is probably about where the average range is located, and a
very high percentage of the time the range is at least 30.
SeedRandom[124];
ranges = SimulateRange[100, 1, 50, 10]
{45, 41, 44, 48, 48, 44, 46, 43, 48, 39, 46, 39, 43, 44, 30,
 36, 42, 45, 40, 41, 46, 38, 45, 43, 45, 43, 46, 45, 40, 45,
 47, 31, 43, 41, 48, 33, 45, 38, 32, 39, 38, 39, 46, 37,
 46, 39, 31, 39, 46, 47, 36, 41, 42, 45, 44, 31, 45, 45,
 36, 40, 47, 43, 41, 42, 41, 48, 47, 47, 41, 45, 42, 47,
 38, 45, 44, 38, 43, 38, 47, 48, 36, 33, 31, 37, 45, 41,
 39, 40, 39, 43, 35, 46, 33, 32, 31, 43, 46, 47, 42, 44}
[Histogram of the 100 simulated ranges, relative frequency scale 0 to 0.20, bins from about 30 to 48]
Example 4 In a random sample of given size, with what likelihood does each individual
appear in the sample? Recall we asked this question in Section 1.1 about a sample of two
numbers from the set {1, 2, 3, 4, 5}. If the universe being sampled from is U = {1, 2, 3, ..., n}, and a sample of k individuals is drawn in sequence and without replacement, then the sample space is
Ω = {(x1, x2, ..., xk) : xi ∈ U for all i, and xi ≠ xj for all i ≠ j}
We suppose that the randomness assumption means that all outcomes are equally likely and hence the probability of an event E is its cardinality n(E) divided by n(Ω).
Consider the event A that individual 1 is in the sample. That individual could have been sampled first, or second, or third, etc., so that A can be broken apart into k disjoint subsets
A = B1 ∪ B2 ∪ ⋯ ∪ Bk
where Bi = "1 occurs on ith draw." Therefore, by the third axiom of probability,
P[A] = ∑_{i=1}^{k} P[Bi],  where each  P[Bi] = (n - 1)(n - 2) ⋯ ((n - 1) - (k - 1) + 1) / n(Ω)
since once individual 1 is known to be in position i, the rest of the positions are filled by selecting a permutation of k - 1 individuals from the remaining n - 1. Therefore,
P[A] = k(n - 1)(n - 2) ⋯ (n - k + 1) / [n(n - 1)(n - 2) ⋯ (n - k + 1)] = k/n
The same reasoning holds for every other individual in the population. So we have shown
the rather intuitive result that if a sample in sequence and without replacement of k objects
from n is taken, every individual in the universe has an equal chance of k/n of appearing in
the sample.
Activity 2 Try to carry through similar reasoning to that of Example 4 if the sample is
drawn in sequence with replacement. Does the argument break down? If it does, try
another approach (such as complementation).
KSubsets[{a, b, c, d, e}, 3]
{{a, b, c}, {a, b, d}, {a, b, e}, {a, c, d}, {a, c, e},
 {a, d, e}, {b, c, d}, {b, c, e}, {b, d, e}, {c, d, e}}
Notice that there are 10 combinations of 3 elements chosen from 5. The same package has a
command
RandomKSubset[list, k]
which returns one such combination selected at random. Below are a couple of instances,
and as we would anticipate, we get a different randomly selected subset the second time than
the first time.
SeedRandom[67653];
RandomKSubset[{a, b, c, d, e}, 3]
RandomKSubset[{a, b, c, d, e}, 3]
{b, d, e}
{a, b, e}
If at first k objects are selected in a batch from n, then it is possible to think of subsequently
putting them into a sequence. By doing so we would create a unique permutation of k
objects from n. The number of possible permutations is known from Theorem 1. And it is
fairly easy to see that there are k! = k(k - 1)(k - 2) ⋯ 2·1 ways of sequencing the k selected objects, by the multiplication principle. Therefore, by the multiplication principle again,
# permutations = # combinations × # ways of sequencing the combination
Pn,k = Cn,k · k!
If we now bring in the formula for Pn,k from Theorem 1, we have the following result.
The number of combinations Cn,k is therefore Pn,k / k! = n!/(k!(n - k)!), which Mathematica computes with the built-in Binomial function. For example:
C6choose3 = Binomial[6, 3]
20
Activity 3 Check to see whether empirically the probability that a particular element of
the universe is sampled is dependent on the element. For a sample of k from n in a batch
without replacement, what does this probability seem to be? (See also Exercise 10.)
You may use the command below, which inputs the population size n (assuming the
population is coded as {1, 2, ..., n}), the sample size k, the population member m
between 1 and n to be checked, and the number of replications numreps. It outputs the
proportion of the replications for which m was in the sample.
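The command referred to above is not shown in this excerpt. A sketch that matches the description, under a hypothetical name MemberProportion, is:
MemberProportion[n_, k_, m_, numreps_] :=
  N[Count[
      Table[MemberQ[RandomSample[Range[n], k], m], {numreps}],  (* is m in this batch sample? *)
      True]/numreps]                                             (* proportion of replications containing m *)
A call like MemberProportion[20, 5, 7, 1000] estimates the probability that member 7 appears in a batch sample of 5 from a population of 20.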
This is because there are C1000,100 equally likely samples of size 100 in the sample space, there are C200,k ways of selecting k erroneous returns from the group of 200 of them, and C800,100-k ways of selecting the remaining correct returns from the 800 correct returns in the population.
We can calculate the probability of 40 or more erroneous returns by summing these
numbers from 40 to 100, or equivalently by complementation, summing from 0 to 39 and
subtracting from 1. We use the latter approach to reduce computation.
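The computation cell is not reproduced here; a reconstruction of the complementation approach just described, using the population of 200 erroneous and 800 correct returns and a sample of 100, is:
N[1 - Sum[Binomial[200, k] Binomial[800, 100 - k], {k, 0, 39}]/Binomial[1000, 100]]
  (* probability of 40 or more erroneous returns in the sample *)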
8.85353 × 10^-7
Since the probability is only on the order of 10^-6, the hypothesized 20% error rate in the population of tax returns is in very serious doubt.
Example 6 As indicated earlier, Theorem 2 has implications of importance for card players.
In poker, a hand of 2-pair means that there are two cards of one rank (such as 4's or 7's), two
cards of a different rank, and a single card that doesn't match either of the first two ranks. A
hand of 3-of-a-kind consists of three cards of one rank (such as 6's), and two unmatching
singletons (such as one king and one ace). Which hand is rarer, and hence more valuable?
For the experiment of drawing a batch of 5 cards without replacement from a
standard deck of 52 cards we want to compare the probabilities of the 2-pair event and the 3-
of-a-kind event. The sample space has C52,5 elements. For the 2-pair event, to determine a
unique hand we must first select in no particular order two ranks for the pairs. This can be
done in C13,2 ways, since there are 13 possible ranks 2, 3, ... , Ace. For each rank we must
select two of the suits (hearts, diamonds, clubs, or spades) from among the four possible
suits to determine the exact content of the pairs. Then we must select a final card to fill the
hand from among the 52 - 4 - 4 = 44 cards left in the deck which do not match our selected
ranks. Thus, the following is the number of 2-pair hands:
num2pairhands = Binomial[13, 2] Binomial[4, 2] Binomial[4, 2] 44
123552
For 3-of-a-kind, we must pick a rank from among the 13 for our 3-of-a-kind card,
then we select 3 from among the 4 suits for that rank, and then two other different ranks
from the 12 remaining ranks, and a suit among the four possible for each of those unmatched
cards. Therefore, this expression gives the number of 3-of-a-kind hands:
num3ofakindhands = Binomial[13, 1] Binomial[4, 3] Binomial[12, 2] 4*4
54912
Below are the two probabilities. Since the 2-pair hand is over twice as likely to be dealt as
the 3-of-a-kind hand, the 3-of-a-kind hand is more valuable.
{Ptwopair, P3ofkind} = {num2pairhands/Binomial[52, 5], num3ofakindhands/Binomial[52, 5]}
{198/4165, 88/4165}
possible such random samples. For example, if {1, 2, ..., n} is the population, the probability that the first sample object is a 3 is
(1 · n^(k-1)) / n^k = 1/n
since the restriction that 3 is in position 1 means that there is just one way of filling that position, and still there are n ways of filling each of the other k - 1 positions. There is
nothing special about object 3, nor is there anything special about position 1. Each popula-
tion member has a probability of 1/n of appearing in each position.
The special case n = 2 comes up frequently, and will in fact be the subject of a section in Chapter 2. If there are two possibilities, call them 0 and 1, for each of the k sample positions, then there are 2^k possible sequences of length k of these 0's and 1's. Try
this activity, which will be very useful later.
Activity 4 How many sequences of 0's and 1's are there such that there are exactly m 1's
in the sample, for m between 0 and k? First try writing down all sequences with k = 3 and counting how many of them have m 1's for m = 0, 1, 2, 3. Next move to k = 4, and
then try to generalize. (Hint: Think what must be done to uniquely determine a
sequence of k terms with m 1's in it.)
This suggests a way of counting the elements in Ω. First we must pick a set of 5 subjects
from 15 in a batch and without replacement to form the placebo group. Then from the
remaining 10 subjects we sample 5 more for the nicotine patch group, and finally we are
forced to use the remaining 5 subjects for the nicotine gum group. By the multiplication
principle there are
C15,5 · C10,5 · C5,5 = [15!/(5! 10!)] · [10!/(5! 5!)] · [5!/(5! 0!)] = 15!/(5! 5! 5!)
elements of Ω. (Notice that the rightmost factor in the product of binomial coefficients is just
equal to 1.) Mathematica tells us how many this is:
Binomial[15, 5] Binomial[10, 5]
756756
If a partition is selected at random, then each outcome receives probability 1/756,756 and
events have probability equal to their cardinality divided by 756,756. Now the number of
partitions in the set of all partitions for which the placebo group is fixed at {1, 2, 3, 4, 5} can
be found similarly: we must sample 5 from 10 for the patch group, and we are left with a
unique set of 5 for the gum group. Thus, the probability that the placebo group is {1, 2, 3, 4, 5} is
C10,5 / (C15,5 · C10,5) = 1 / C15,5
1/Binomial[15, 5]
1/3003
There is less than a one in 3000 chance that this placebo group assignment could have arisen
randomly, but we should not be too hasty. Some placebo group subset must be observed,
and we could say the same thing about the apparently random subset {3, 5, 8, 12, 13} as we did about {1, 2, 3, 4, 5}: it too had only about a one in 3000 chance of coming up. Should we then doubt any partition we see? The answer is not a mathematical one. Since {1, 2, 3, 4, 5} did come
up, we should look at the sampling process itself. For instance if slips of paper were put into
a hat in order and the hat wasn't sufficiently shaken, the low numbers could have been on top
and therefore the first group would tend to contain more low numbers. If numbers are
sampled using a reputable random number generator on a computer, one would tend to be
less suspicious.
For a population of size n and k groups of sizes n1 , n2 , ... , nk , you should check
that the reasoning of Example 7 extends easily to yield the number of partitions as
n! / (n1! n2! ⋯ nk!)
This expression is called the multinomial coefficient, and Mathematica can compute it using
the function:
Multinomial[x1, x2, ..., xk]
The computation below gives the same result as in Example 7.
Multinomial[5, 5, 5]
756756
In Exercise 18 you are asked to try to code the algorithm for a function RandPartition-
[poplist, numgroups, sizelist], which returns a random partition of the given population, with
the given number of groups, whose group sizes are as in the list sizelist. Again you will find
the reasoning of Example 7 to be helpful.
Exercises 1.4
1. A police lineup consists of five individuals who stand in a row in random order. Assume
that all lineup members have different heights.
(a) How many possible lineups are there?
(b) Find and justify carefully the probability that the tallest lineup member is in the middle
position.
(c) Find the probability that at least one lineup member is taller than the person in the first
position.
2. A bank machine PIN number consists of four digits. What is the probability that a
randomly chosen PIN will have at least one digit that is 6 or more? (Assume that digits can
be used more than once.)
3. A pushbutton combination lock entry to a vehicle consists of a five digit number, such as
94470 or 53782. To gain entry to the vehicle, one pushes a sequence of five buttons on the
door in which two digits are assigned to each button, specifically the first button is 0-1, the
second is 2-3, etc. How many distinct lock combinations are there? (Assume that digits can
be used more than once in a combination.)
4. Use mathematical induction to prove Theorem 1.
5. (Mathematica) (This is a famous problem called the birthday problem.) There are n
people at a party, and to liven up the evening they decide to find out if anyone there has the
same birthday as anyone else. Find the probability that this happens, assuming that there are
365 equally likely days of the year on which each person at the party could be born. Report
your answer in the form of a table of such probabilities for values of n from 15 to 35.
(a) If a particular carton has 3 bad packages, what is the probability that it will be sold as
generic?
(b) (Mathematica) Under the current inspection plan, how many bad packages would be
necessary in the carton in order that the probability of rejecting the carton should be at least
90%?
(c) (Mathematica) For a carton with 3 bad packages, what should the sample size be so
that the probability of rejecting the carton is at least 75%?
15. A fair die is rolled 10 times. Find the probability that there will be at least 2 6's among
the 10 rolls.
16. For a partition of n population subjects into k groups of sizes n1 , n2 , ... , nk , find and
carefully justify the probability that subject number 1 is assigned to the first group.
17. In how many distinguishable ways can the letters of the word Illinois be arranged?
18. (Mathematica) Write the Mathematica command RandPartition described at the end of
the section.
19. We looked at pseudo-random number generation in Section 1.3, in which a sequence of
numbers x1 , x2 , x3 , x4 , ... is generated which simulates the properties of a totally random
sequence. One test for randomness that is sensitive to upward and downward trends is to
consider each successive pair, (x1 , x2 ), (x2 , x3 ), (x3 , x4 ), ... and generate a sequence of
symbols U (for up) or D (for down) according to whether the second member of the pair is
larger or smaller than the first member of the pair. A sequence such as UUUUUUUUD-
DDDDDD with a long upward trend followed by a long downward trend would be cause for
suspicion of the number generation algorithm. This example sequence has one so-called run
of consecutive U's followed by one run of D's for a total of only 2 runs. We could diagnose
potential problems with our random number generator by saying that we will look at a
sequence such as this and reject the hypothesis of randomness if there are too few runs. For
sequences that are given to have 8 U symbols and 7 D symbols, if our random number
generator is working well, what is the probability that there will be 3 or fewer runs?
20. In large sampling problems, such as polling registered voters in the United States, it is
not only impractical to take an exhaustive census of all population members, it is even
impractical to itemize all of them in an ordered list {1, 2, ... , n} so as to take a simple
random sample from the list. In such cases a popular approach is to use stratified sampling,
in which population members are classified into several main strata (such as their state of
residence), which may not be equally sized, and then further into one or more substrata (such
as their county, township, district, etc.) that are nested inside the main strata. A stratified
sample is taken by sampling at random a few strata, then a few substrata within it, etc. until
the smallest strata level is reached at which point a random sample of individuals is taken
from that substratum.
Here suppose there are three main strata, each with three substrata. Each of the nine
substrata will also have the same number of individuals, namely 20. Two main strata will be
sampled at random in a batch without replacement, then in each of the sampled strata two
substrata will be sampled in a batch without replacement, and then four individuals from
each substratum will be sampled in a batch without replacement to form the stratified
sample. Find the probability that the first individual in the first substratum of the first stratum
will be in the stratified sample. Is the probability of being in the sample the same for all
individuals in this case?
The main idea is that if an event A is known to have happened, the sample space
reduces to those outcomes in A, and every other event B reduces to its part in A, that is, A ∩ B. This motivates the following definition.
Definition 1. If A is an event with P[A] > 0, and B is an event, then the conditional probability of B given A is
P[B | A] = P[A ∩ B] / P[A]   (1)
In the Peotone airport example, B is the event that the sampled individual opposes
the Peotone airport, A is the event that he or she is from the south suburbs, and A ∩ B is the event that the individual both opposes the proposal and is from the south suburbs. Thus, P[A ∩ B] = 49/563, P[A] = 66/563, and the conditional probability is
P[B | A] = (49/563) / (66/563) = 49/66
as if the group of survey respondents in the south suburbs were the only ones in a reduced
sample space for the experiment. Be aware though that there is nothing about the definition
expressed by formula (1) that requires outcomes to be equally likely.
Example 1 Among fifty gas stations in a city, there are twenty that raised their prices
yesterday. A small sample of ten of the fifty stations is taken in sequence and without
replacement. Find the conditional probability that there were at least three stations in the
sample that increased their price given: (a) the first two in the sample increased; and (b) at
least two in the sample increased. Do you expect the two answers to be the same?
The universe being sampled from is the entire collection of 50 gas stations, 20 of
which raised the price and 30 did not. For part (a), given that the first two stations increased
their price, the full sample space reduces to the samples of size 8 from the remaining gas
stations other than the two that were picked. In that sample space, the population being
sampled from includes 18 stations who raised the price and 30 who did not. The conditional
probability of the event that there are at least three stations in the whole sample that increased is therefore the probability in the reduced sample space that at least one more station
increased. By complementation, this is:
P[at least 3 increases | 1st 2 increased] = 1 - P[no increases in final 8 samples]
= 1 - (30·29·28·27·26·25·24·23) / (48·47·46·45·44·43·42·41)
N[1 - (30*29*28*27*26*25*24*23)/(48*47*46*45*44*43*42*41)]
0.984489
For part (b) we want to compute the probability of at least 3 increases in the sample given
that at least 2 stations in the sample increased. By the definition of conditional probability,
this is
P[at least 3 increases | at least 2 increases]
= P[at least 3 increases ∩ at least 2 increases] / P[at least 2 increases]
= P[at least 3 increases] / P[at least 2 increases]
= (1 - P[0, 1, or 2 increases]) / (1 - P[0 or 1 increases])
There are no increases in the sample of size 10 if and only if all stations in the sample
belonged to the set of 30 non-increasing stations. This occurs with probability
P[0 increases] = (30·29·28·27·26·25·24·23·22·21) / (50·49·48·47·46·45·44·43·42·41)
There is exactly 1 increase in each of 10 equally likely cases, one for each position i in the sample that the increasing station can occupy, with all other positions filled by stations that did not increase prices. Note that the station that increased can be selected in 20 different ways and the rest of the sample is a permutation of 9 stations selected from the 30 that didn't increase, hence this event has probability
P[1 increase] = 10 · (20 · 30·29·28·27·26·25·24·23·22) / (50·49·48·47·46·45·44·43·42·41)
We also need the probability that there are exactly 2 stations in the sample that increased prices. First, there are C10,2 = 45 pairs of positions in the ordered sample where the increasing stations could appear, then there are 20·19 pairs of increasing stations that could be in those positions, and lastly the rest of the sample is a permutation of 8 stations selected from the 30 that didn't increase. The probability of 2 increases is therefore
P[2 increases] = 45 · (20·19 · 30·29·28·27·26·25·24·23) / (50·49·48·47·46·45·44·43·42·41)
Using Mathematica, and defining a utility function Perm[n,k] to be the number of permutations of k objects taken from n, the desired conditional probability P[at least 3 increases | at
least 2 increases] comes out to be the following
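The computation cell itself is not shown in this excerpt; a reconstruction along the lines just described might be the following.
Perm[n_, k_] := n!/(n - k)!                (* number of permutations of k objects from n *)
p0 = Perm[30, 10]/Perm[50, 10];            (* P[0 increases] *)
p1 = 10*20*Perm[30, 9]/Perm[50, 10];       (* P[1 increase] *)
p2 = 45*20*19*Perm[30, 8]/Perm[50, 10];    (* P[2 increases] *)
N[(1 - p0 - p1 - p2)/(1 - p0 - p1)]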
0.888304
Comparing part (a) to part (b), the event that the first two are increasing stations is more
specific than the event that at least two in the sample are increasing, and it is more favorable
to the event of at least 3 increases. There is no reason to suspect that the conditional
probabilities would come out equal. In the latter case, there are still many more remaining
outcomes in the reduced sample space, which serves to lower the conditional probability.
Multiplication Rules
The definition of conditional probability can be rewritten as
P[A ∩ B] = P[A] · P[B | A]   (2)
which can be used to compute intersection probabilities. One thinks of a chain of events in
which A happens first; the chance that B also happens is the probability of A times the
probability of B given A. For instance, if we draw two cards in sequence and without
replacement from a standard deck, the probability that both the first and second are aces is (4/52)·(3/51), that is, the probability that the first is an ace times the conditional probability that the second is an ace given that the first is an ace. For three aces in a row, the probability would become (4/52)·(3/51)·(2/50), in which the third factor is the conditional probability that the third card is an ace given that both of the first two cards are aces.
The example of the three aces suggests that formula (2) can be generalized as in the
following theorem. You are asked to prove it in Exercise 7. Notice that from stage to stage
one must condition not only on the immediately preceding event but all past events.
Theorem 1. Let B1, B2, ..., Bn be events such that each of the following conditional probabilities is defined. Then
P[B1 ∩ B2 ∩ ⋯ ∩ Bn] = P[B1] · P[B2 | B1] · P[B3 | B1 ∩ B2] ⋯ P[Bn | B1 ∩ ⋯ ∩ Bn-1]   (3)
Example 2 Exercise 5 in Section 1.4 described the following famous problem. Suppose
you ask a room of n people for their birthdays, one by one. What is the probability that there is at least one matching birthday among them? By the multiplication rule, the probability that no two birthdays match is the product of the factors (365 - i)/365 for i = 0, 1, ..., n - 1, so the probability of at least one match is computed as follows.
f[n_] := 1 - Product[(365 - i)/365, {i, 0, n - 1}]
{N[f[20]], N[f[50]]}
{0.411438, 0.970374}
If you have not done the exercise, you might be surprised at how large these probabilities
are. Here is a plot that shows how the probability depends on the number of people n. The
halfway mark is reached at about 23 people. You should try finding the smallest n for which
there is a 90% probability of a match.
Figure 1.9 - The probability of matching birthdays as a function of n
Activity 2 Try this similar problem. Suppose the n people are each asked for the hour
and minute of the day when they were born. How many people would have to be in the
room to have at least a 50% chance of matching birth times?
The next proposition, though quite easy to prove, is very powerful. It goes by the
name of the Law of Total Probability. We saw an earlier version of it in Section 2 which
only dealt with intersection probabilities. The proof simply takes that result and adds on the
multiplication rule (2).
Theorem 2. If A1, A2, ..., An is a partition of the sample space, that is, the A's are pairwise disjoint and their union is Ω, then
P[B] = ∑_{i=1}^n P[B | Ai] P[Ai]   (4)
[Figure 1.10: a partition A1, A2, ..., An of Ω]
Proof. The events Ai ∩ B are pairwise disjoint, and their union is B (see Figure 10). Thus, by the Law of Total Probability for intersections and the multiplication rule (2),
P[B] = ∑_{i=1}^n P[Ai ∩ B] = ∑_{i=1}^n P[B | Ai] P[Ai]   (5)
(Note: It is easy to argue that the conclusion still holds when the A's are a partition of the
event B but not a partition of all of Ω. Try it.)
The Law of Total Probability is very useful in staged experiments where B is a
second stage event and it is difficult to directly find its probability; however, conditional
probabilities of B given first stage events are easier. Formula (4) lets us condition B on a
collection of possible first stage events Ai and then "un-condition" by multiplying by P[Ai ]
and adding over all of these possible first stage cases. The next example illustrates the idea.
Example 3 Each day a Xerox machine can be in one of four states of deterioration labeled
1, 2, 3, and 4 from the best condition to the worst. The conditional probabilities that the
machine will change from each possible condition on one day to each possible condition on
the next day are shown in the table below.
                 tomorrow
               1     2     3     4
  today   1   3/4   1/8   1/8    0
          2    0    3/4   1/8   1/8
          3    0     0    3/4   1/4
          4    0     0     0     1
For instance, P[machine in state 2 tomorrow | machine in state 1 today] = 1/8. Given that the
machine is in state 1 on Monday, let us find the probability distribution of its state on
Thursday.
A helpful device to clarify the use of the law of total probability is to display the
possible outcomes of a stage of a random phenomenon as branches on a tree. The next stage
emanates from the tips of the branches of the current stage. The branches can be annotated
with the conditional probabilities of making those transitions. For the Monday to Tuesday
transition, the leftmost level of the tree in Figure 11 illustrates the possibilities. The leftmost
1 indicates the known state on Monday. From state 1 we can go to state 1 on Tuesday with
probability 3/4, and the other two states each with probability 1/8. From each possible
machine state 1, 2, and 3 on Tuesday we can draw the possible states on Wednesday at the
next level. Then from Wednesday, the Thursday states can be drawn to complete the tree.
(We omit labeling the transition probabilities on the Wednesday to Thursday branches to
reduce clutter in the picture; they can be found from the table.)
Figure 1.11 - Monday to Thursday transition probabilities
To see how the law of total probability works in this example, consider first the
simpler problem of finding the probability that the state on Wednesday is 2. We condition
and uncondition on the Tuesday state, which can be 1, 2, or 3 (however, if it is 3 then
Wednesday's state cannot be 2).
P[Wed = 2] = P[Wed = 2 | Tues = 1] P[Tues = 1] + P[Wed = 2 | Tues = 2] P[Tues = 2]
           = (1/8)(3/4) + (3/4)(1/8) = 3/16
Notice that this is the sum of path probabilities in Figure 11 for paths ending in 2 on
Wednesday. A path probability is the product of the branch probabilities on that path.
Applying the same reasoning for Thursday,
ThursdayProb1 = (3/4)(3/4)(3/4)
ThursdayProb2 = (3/4)(3/4)(1/8) + (3/4)(1/8)(3/4) + (1/8)(3/4)(3/4)
27/64
27/128
Similarly you can check that the probability that the state on Thursday is 3 equals 63/256,
and the probability that it is 4 equals 31/256. Notice that these four numbers sum to 1 as
they ought to do.
Activity 3 For the previous example, finish the computation of the Wednesday probabili-
ties. Look closely at the sum of products in each case, and at the table of transition
probabilities given at the start of the problem. Try to discover a way of using the table
directly to efficiently calculate the Wednesday probabilities, and check to see if the idea
extends to the Thursday probabilities. For more information, see Section 6.1 on Markov
chains, of which the machine state process is an example.
Bayes' Formula
The law of total probability can also be used easily to derive an important formula
called Bayes' Formula. The setting is as above, in which a collection of events
A1 , A2 , ... , An partitions . Then P[B] can be expressed in terms of the conditional
probabilities P[B | Ai ], which we suppose are known. The question we want to consider now
is: is it possible to reverse these conditional probabilities to find P[Ai | B] for any i = 1, 2, ...,
n? The importance of being able to do so is illustrated by the next example. Roughly the
idea is that diagnostic tools such as lie detectors, medical procedures, etc. can be pre-tested
on subjects who are known to have a condition, so that the probability that they give a
correct diagnosis can be estimated. But we really want to use the diagnostic test later on
unknown individuals and conclude something about whether the individual has the condi-
tion given a positive test result. In the pre-testing phase the event being conditioned on is
that the individual has the condition, but in the usage phase we know whether or not the test
is positive, so the order of conditioning is being reversed.
All we need to derive Bayes' formula, which is (6) below, is the definition of the
conditional probability P[Ai | B] and the law of total probability:
P[Ai | B] = P[Ai ∩ B] / P[B] = P[B | Ai] P[Ai] / ∑_{j=1}^n P[B | Aj] P[Aj]   (6)
for people who do not have HIV. If a randomly selected individual has a positive screening
result, what is the probability that this person actually has HIV? If the screening could be
redesigned to improve performance, which would be the most important quantity to
improve: q or r?
We introduce the following notation, suggested by the problem statement:
p = P[randomly selected person has HIV]
q = P[test positive | person has HIV]
r = P[test positive | person does not have HIV]
By Bayes' formula,
P[person has HIV | test positive] = P[person has HIV and test positive] / P[test positive] = q p / (q p + r (1 - p))
(Make sure you understand the last formula.) We would like to see how best to increase this
probability: by improving q (making it closer to 1) or r (making it closer to 0). We should
also investigate whether the decision is different for high p vs. low p. Let us study some
graphs of the function above for values of p near 0 (due to the relative rarity of the disease),
q near 1, and r near 0.
f[p_, q_, r_] := p q / (p q + r (1 - p))
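The plotting cells are not reproduced here. Commands along the following lines would produce panels like those in Figure 1.12; the fixed values r = .01 in panel (a) and q = .95 in panel (b) are illustrative guesses, since only p = .001 is stated in the caption.
Plot[f[.001, q, .01], {q, .85, 1}, AxesLabel -> {"q", "f"}]   (* panel (a): f as a function of q *)
Plot[f[.001, .95, r], {r, .01, .2}, AxesLabel -> {"r", "f"}]  (* panel (b): f as a function of r *)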
Figure 1.12 - (a) Probability of HIV given positive test, as function of q; (b) Probability of HIV given positive test, as function of r (p = .001 in both cases)
The results are very striking for a small p value of .001, and would be more so for even
smaller p's: increasing q toward 1 produces only roughly linear increases in our objective f,
to a level even less than .1. In other words, even if the test was made extremely accurate on
persons with HIV, it is still unlikely that someone with a positive HIV screening actually is
infected. This may be due to the very small p in the numerator, which is overcome by the
term r(1 - p) in the denominator. By contrast, the rate of increase of f as r decreases toward
0 is much more rapid, at least once the screening reaches an error rate of less than .05.
When the probability of an incorrect positive test on a person without HIV is sufficiently
reduced, the probability that truly HIV is present when the screening test reads positive
dramatically increases toward 1. A 3D plot shows this effectively. (The code to produce it is
in the closed cell above Figure 13.) Change in f due to increasing q is almost imperceptible
in comparison to reducing r, at least when r is small enough.
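The closed cell mentioned above is not included in this excerpt; a sketch that produces a surface like Figure 1.13, with plot ranges read off the figure and therefore only assumed, is:
Plot3D[f[.001, q, r], {q, .9, 1}, {r, 0, .1},
  AxesLabel -> {"q", "r", "f"}, PlotRange -> {0, 1}]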
Figure 1.13 - Probability of HIV given positive test, as function of q and r (p = .001)
When p is not quite so small, as for instance in the case where the disease under
study is a cold or flu passing quickly through a small population instead of HIV, the
preference for decreasing r to increasing q is not quite as dramatic. Try repeating these
graphs for some other parameter values, such as p .2 to continue the investigation. (See
also Exercise 14.)
Activity 4 Does our function f shown in Figure 12(b) reach a finite limit as r → 0? If
so, what is it?
Exercises 1.5
1. In the example on the Peotone airport proposal at the start of the section, find
(a) P[in favor | Chicago]
(b) P[downstate | don't care]
(c) P[north suburbs or Chicago | in favor or don't care]
2. Under what condition on A ∩ B will P[A | B] = P[A]? Try to interpret the meaning of this
equation.
3. In the Peotone airport example, a sample of size 4 in sequence and without replacement is
taken from the group of 563 survey respondents. Find the conditional probability that all
four sample members oppose the proposal given that at least three of them are from the south
suburbs.
4. Find P[Ac ∩ Bc] if P[A] = .5, P[B] = .4, and P[A | B] = .6.
5. In a small group of voters participating in a panel discussion, four are Republicans and
three are Democrats. Three different people will be chosen in sequence to speak. Find the
probability that
(a) all are Republicans
(b) at least two are Republicans given that at least one is a Republican
6. In a family with 3 children, what is the probability of at least 2 boys given that at least 1
child is a boy?
7. Prove Theorem 1.
8. Show that if A is a fixed event of positive probability, then the function Q[B] = P[B | A]
taking events B into [0, 1] satisfies the three defining axioms of probability.
9. Blackjack is a card game in which each of several players is dealt one card face down
which only that player can look at, and one card face up which all of the players can see.
Players can draw as many cards as they wish in an effort to get the highest point total until
their total exceeds 21, at which point they lose automatically. (An ace may count 1 or 11
points at the discretion of the player, all face cards count 10 points, and other cards of ranks
2-10 have point values equal to their rank.)
Suppose that you are in a game of blackjack with one other person. Your opponent's
face up card is a 7, and you are holding an 8 and a 5. If you decide to take exactly one more
card and your opponent takes no cards, what is the probability that you will beat your
opponent's total?
[Figure for Exercise 10: integer grid from -3 to 3 in each coordinate]
10. A two-dimensional random walk moves on the integer grid shown in the figure. It
begins at (0,0) and on each move it has equal likelihood of going up, down, right, or left to
points adjacent to its current position. What is the probability that after three moves it is at
the point (0,1)?
11. (Mathematica) Write a conditional probability simulator for the following situation.
The sample space Ω has six outcomes a, b, c, d, e, and f, with probabilities 1/6, 1/12, 1/12, 1/6, 1/3, and 1/6, respectively. Define event A as {a, b, c} and event B as {b, c, d, e}. For a given number of trials n, your simulator should sample an outcome from Ω according to the
given probabilities, and it should return the proportion of times, among those for which the
outcome was in B, that the outcome was also in A. What should this proportion approach as
the number of trials becomes very large?
12. (Mathematica) Recall the random walk example from Section 1.3 in which we used the
command AbsorptionPctAtM to empirically estimate the probability that the random walk,
starting at state 3, would be absorbed at state 5 for a right step probability of .4. Let f(n) be
the probability of absorption at 5 given that the random walk starts at n, for n = 0, 1, 2, 3, 4,
5. Use the law of total probability to get a system of equations for f(n), and solve that system
in Mathematica. Check that the solution is consistent with the simulation results.
[Figure for Exercise 13: pegboard with four rows of positions, rows 1 and 3 numbered 1-5, rows 2 and 4 numbered 1-4]
13. A coin drops down a pegboard as shown in the figure, bouncing right or left with equal
probability. If it starts in postion 3 in row 1, find the probability that it lands in each of the
slots 1, 2, 3, and 4 in the fourth row.
14. (Mathematica) In the Bayes' Theorem investigation of HIV testing at the end of this
section, suppose that instead our goal is to minimize the probability that a person who tests
negative actually has the disease. Now what is the most important quantity to improve: q or
r?
15. I am responsible for administering our college mathematics placement exam for first-
year students. Suppose that students from some past years broke down into precalculus
grade and placement exam score categories as follows:
                        grade
                   A    B    C    D    F
            below 10   2    3   14    8    5
 placement  10 - 20    7   20   41    5    1
            above 20   9   15   18    4    2
Assuming that the next incoming class behaves similarly, use the table directly, then use
Bayes' formula to estimate the probability that someone who scores between 10 and 20 gets
at least a C. Do you get the same result using both methods?
16. A credit card company studies past information on its cardholders. It determines that an
index of riskiness on a scale from 1 to 5, based on factors such as number of charge
accounts, total amount owed, family income, and others, might help identify new card
applicants who will later default on their debts. They find that among their past defaulters,
the distribution of index values was: 1:2%, 2: 15%, 3: 22%, 4: 28%, 5: 33%. Among non-
defaulters the distribution of index values was: 1:28%, 2: 25%, 3: 18%, 4: 16%, 5: 13%. It
is known that about 5% of all card holders default. What is the probability that a new card
applicant with an index of 5 will default? Repeat the computation for the other index values
4 through 1.
Sample Problems from Actuarial Exam P
17. A public health researcher examines the medical records of a group of 937 men who
died in 1999 and discovers that 210 of the men died from causes related to heart disease.
Moreover, 312 of the 937 men had at least one parent who suffered from heart disease, and
of these 312 men, 102 died from causes related to heart disease. Determine the probability
that a man randomly selected from this group died of causes related to heart disease, given
that neither of his parents suffered from heart disease.
18. A doctor is studying the relationship between blood pressure and heartbeat abnormalities
in her patients. She tests a random sample of her patients and notes their blood pressures
(high, low, or normal) and their heartbeats (regular or irregular). She finds that: (i) 14% have
high blood pressure; (ii) 22% have low blood pressure; (iii) 15% have an irregular heartbeat;
(iv) of those with an irregular heartbeat, one-third have high blood pressure; (v) of those
with normal blood pressure, one-eighth have an irregular heartbeat. What portion of the
patients selected have a regular heartbeat and low blood pressure?
19. An insurance company issues life insurance policies in three separate categories:
standard, preferred, and ultra-preferred. Of the company's policyholders, 50% are standard,
40% are preferred, and 10% are ultra-preferred. Each standard policyholder has probability
.010 of dying in the next year, each preferred policyholder has probability .005 of dying in
the next year, and each ultra-preferred policyholder has probability .001 of dying in the next
year. A policyholder dies in the next year. What is the probability that the deceased policy-
holder was ultra-preferred?
20. The probability that a randomly chosen male has a circulation problem is .25. Males who
have a circulation problem are twice as likely to be smokers as those who do not have a
circulation problem. What is the conditional probability that a male has a circulation
problem, given that he is a smoker?
1.6 Independence
The approach to probability that we have been taking so far draws heavily on random
sampling and simulation. These concepts are also very helpful in understanding the subject
of this section: independence of events in random experiments. For example, what is the
crucial difference between sampling five integers in sequence from {1, 2, ..., 10} without
replacement as compared to sampling them with replacement? In the no replacement
scenario, once a number, say 4, is sampled, it cannot appear again, and other numbers are a
little more likely to appear later in the sequence because 4 is no longer eligible. When
sampling with replacement, 4 is just as likely to appear again later as it was the first time,
and other numbers also have the same likelihood of appearing, regardless of whether 4 was
sampled. In this case, the occurrence of an event like: "4 on 1st" has no effect on the
probability of an event like: "7 on 2nd." This is the nature of independence, and sampling
with replacement permits it to happen while sampling without replacement does not.
Another way to look at independence involves the computation of intersection
probabilities. Take the example above and use combinatorics to compute the probability that
the first two sampled numbers are 4 and 7. Since we still must account for the other three
members of the sample, the probability of this event, assuming no replacement, is
P[4 on 1st, 7 on 2nd] = (1·1·8·7·6) / (10·9·8·7·6) = 1/(10·9) = 1/90
The probability of the same event, assuming replacement, is
P[4 on 1st, 7 on 2nd] = (1·1·10·10·10) / (10·10·10·10·10) = (1/10)(1/10) = 1/100
which differs from the answer obtained under the assumption of no replacement. Moreover,
assuming again that the sample is drawn with replacement,
P[4 on 1st] = (1·10·10·10·10) / (10·10·10·10·10) = 1/10,   P[7 on 2nd] = (10·1·10·10·10) / (10·10·10·10·10) = 1/10
hence
P[4 on 1st, 7 on 2nd] = (1/10)(1/10) = P[4 on 1st] · P[7 on 2nd]
You should check that for sampling without replacement, the joint probability does not
factor into the product of individual event probabilities.
Factorization can occur when three or more events are intersected as well. For
instance when the sample is taken with replacement,
P[4 on 1st, 7 on 2nd, 4 on 3rd] = (1·1·1·10·10) / (10·10·10·10·10) = (1/10)(1/10)(1/10)
P[4 on 1st] · P[7 on 2nd] · P[4 on 3rd] = (1/10)(1/10)(1/10)
Therefore,
P[4 on 1st, 7 on 2nd, 4 on 3rd] = P[4 on 1st] · P[7 on 2nd] · P[4 on 3rd]
These three events are independent, and in fact it is easy to show that any subcollection of
two of the events at a time satisfies this factorization condition also.
Having laid the groundwork for the concept of independence in the realm of sam-
pling, let us go back to the general situation for our definition.
Definition 1. Events B1, B2, ..., Bn are called mutually independent if for any subcollection of them Bi1, Bi2, ..., Bik, 2 ≤ k ≤ n,
P[Bi1 ∩ Bi2 ∩ ... ∩ Bik] = P[Bi1] · P[Bi2] ⋯ P[Bik]   (1)
In particular for two independent events A and B, P[A ∩ B] = P[A] · P[B], which means that
P[B | A] = P[A ∩ B] / P[A] = P[A] P[B] / P[A] = P[B]   (2)
In words, if A and B are independent, then the probability of B does not depend on whether
A is known to have occurred.
Activity 1 For a roll of two fair dice in sequence, check that the events B1 = "1 on 1st"
and B2 = "6 on 2nd" are independent by writing out the sample space and finding
P[B1 ∩ B2], P[B1], and P[B2]. Compare P[B2 | B1] to P[B2].
Example 1 Four customers arrive in sequence to a small antique store. If they make their
decisions whether or not to buy something independently of one another, and if they have
individual likelihoods of 1/4, 1/3, 1/2, and 1/2 of buying something, what is the probability
that at least three of them will buy?
The problem solving strategy here is of the divide and conquer type; divide the large
problem into subproblems, and conquer the subproblems using independence. First, the
event that at least three customers buy can be expressed as the disjoint union of five
subevents, from which we can write:
P[at least 3 buy] = P[customers 1, 2, 3 buy and not 4] + P[1, 2, 4 buy and not 3]
    + P[1, 3, 4 buy and not 2] + P[2, 3, 4 buy and not 1] + P[1, 2, 3, 4 buy]
Each subevent is an intersection of independent events, so by formula (1) and the given
individual purchase probabilities,
P[at least 3 buy]
= (1/4)(1/3)(1/2)(1/2) + (1/4)(1/3)(1/2)(1/2) + (1/4)(1/2)(1/2)(2/3)
  + (1/3)(1/2)(1/2)(3/4) + (1/4)(1/3)(1/2)(1/2)
= 1/6
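As a quick check (not part of the text's solution), the same answer can be obtained by summing over all buying patterns in which at least three of the four customers buy:
probs = {1/4, 1/3, 1/2, 1/2};
Total[Table[
   Times @@ MapThread[If[MemberQ[s, #2], #1, 1 - #1] &, {probs, Range[4]}],
   {s, Select[Subsets[Range[4]], Length[#] >= 3 &]}]]
   (* returns 1/6 *)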
Example 2 One frequently sees the notion of independence come up in the study of two-
way tables called contingency tables, in which individuals in a population are classified
according to each of two characteristics. Such a table appeared in Section 1.5 in reference to
the Peotone airport proposal. The first characteristic was the region in which the individual
lived (4 possible regions), and the second characteristic was the individual's opinion about
the airport issue (3 possible opinions). If one person is sampled at random from the 563 in
this group, you can ask whether such events as "the sampled person is from the north
suburbs" and "the sampled person is opposed to the new airport" are independent. Using the
counts in the table, we can check that P[north and opposed] = .0604, P[north] = .2469, P[opposed] = .3250, and P[north]·P[opposed] = .0803. So the joint probability P[north and opposed] is not equal to the product of the individual probabilities P[north]·P[opposed].
Hence we conclude that these two events are not independent. There may be something
about being from the north suburbs which changes the likelihood of being opposed to the
airport. (You can conjecture that north suburban people might be more inclined to favor the
airport, since it is not in their own backyard.)
However, this is only a small sample from a much larger population of Illinois
residents. If you want to extend the conclusion about independence to the whole population
of Illinois, you have the problem that probabilities like .0604, .2469, etc. are only estimates
of the true probabilities of "north and opposed," "north," etc. for the experiment of sampling
an individual from the broader universe. Sampling variability alone could account for some
departures from equality in the defining condition for independence P[A ∩ B] = P[A]·P[B].
We pursue this idea further in the next example.
Example 3 Suppose that each individual in a random sample of size 80 from a population
can be classified according to two characteristics A and B. There are two possible values or
types for each characteristic. Assume that there are 30 people who are of type 1 for character-
istic A, 50 of type 2 for A, and 40 each of types 1 and 2 for characteristic B. Draw one
individual at random from the sample of 80. Find the unique frequencies x, y, w, and z in the
table below which make events of the form "individual is type i for A", i = 1,2, independent
of events of the form "individual is type j for B", j = 1,2.
                  char B
                  1      2     total
char A     1      x      y      30
           2      w      z      50
total            40     40      80
The interesting thing in this example is that the answer is unique; the sample of 80
can only come out in one way in order to make characteristic A events independent of
characteristic B events. Without the independence constraint, x could take many values, from
which y, w, and z are uniquely determined. If x = 2 for instance, y must be 28, w must be 38,
and z must be 12 in order to produce the marginal totals in the table. In general, letting x be
the free variable, the table entries must be:
                  char B
                  1         2        total
char A     1      x        30 − x     30
           2     40 − x    10 + x     50
total            40        40         80
right below; for example, the expected number of A1B1 individuals in the sample is 80˙(.3)
= 24.
universe:
                  char B
                  1       2     total
char A     1     60      60      120
           2     40      40       80
total           100     100      200

sample:
                  char B
                  1      2     total
char A     1     24     24      48
           2     16     16      32
total            40     40      80
The next Mathematica function simulates a sample of a desired size (such as 80), in
sequence and with replacement, from the population categorized as in the universe table,
tallies up the frequencies in each category for the sample, and presents the results in tabular
format. After initializing the counter variables to zero, it selects a random number between 0
and 1, and based on its value, increments one of the counters. The cutoffs used in the Which
function are chosen so that the category probabilities of .3, .3, .2, and .2 are modeled. The
marginal totals are then computed for the tabular display, and the output is done.
SimContingencyTable[sampsize_] :=
 Module[{A1, A2, B1, B2, A1B1, A1B2, A2B1, A2B2, nextsample, i, outtable},
  A1B1 = 0; A1B2 = 0; A2B1 = 0; A2B2 = 0;
  Do[nextsample = RandomReal[];
   Which[nextsample <= 0.3, A1B1 = A1B1 + 1,
    0.3 < nextsample <= 0.6, A1B2 = A1B2 + 1,
    0.6 < nextsample <= 0.8, A2B1 = A2B1 + 1,
    0.8 < nextsample <= 1, A2B2 = A2B2 + 1],
   {i, 1, sampsize}];
  A1 = A1B1 + A1B2; A2 = A2B1 + A2B2; B1 = A1B1 + A2B1; B2 = A1B2 + A2B2;
  outtable = {{"AB", 1, 2, "Total"}, {1, A1B1, A1B2, A1},
    {2, A2B1, A2B2, A2}, {"Total", B1, B2, sampsize}};
  TableForm[outtable]]
Here are three runs of the command. For replicability, we set the seed of the random
number generator.
SeedRandom[34567];
SimContingencyTable[80]
SimContingencyTable[80]
SimContingencyTable[80]
AB 1 2 Total
1 25 27 52
2 14 14 28
Total 39 41 80
AB 1 2 Total
1 21 30 51
2 14 15 29
Total 35 45 80
AB 1 2 Total
1 27 25 52
2 13 15 28
Total 40 40 80
The expected numbers under the independence assumption (24's in the first row and 16's in
the second) are not always very close to what is observed. Try running the commands again,
and you will regularly find discrepancies in the frequencies of at least 3 or 4 individuals. But
the more important issue is this. The characteristics are independent in the population, but
one must be rather lucky to have perfect independence in the sample. The statistical test for
independence of categories that we referred to above must acknowledge that contingency
tables from random samples have sampling variability, and should reject independence only
when the sample discrepancies from independence are "large enough."
Activity 2 What configuration of the universe in the above example would lead to
independence of characteristics A and B and totals of 100 in each of A1, A2, B1, and B2?
Revise the SimContingencyTable command accordingly, and do a few runs using sample
sizes of 100 to observe the degree of variability of the table entries.
Example 4 Electrical switches in a circuit act like gates which allow current to flow or not
according to whether they are closed or open, respectively. Consider a circuit as displayed
in Figure 14 with three switches arranged in parallel.
Figure 1.14 - Three switches arranged in parallel between points A and B
Current coming from point A may reach point B if and only if at least one of the switches is
closed. If the switches act independently of one another, and they have probabilities p1 , p2 ,
and p3 of being closed, what is the probability that current will flow?
This is an easy one. The event that current will flow is the complement of the event
that it will not flow. The latter happens if and only if all three switches are open. Switch i is open with probability 1 − pi. Thus, by independence,
P[current flows] = 1 − P[current doesn't] = 1 − (1 − p1)(1 − p2)(1 − p3)
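A direct Mathematica rendering of this formula takes one line; the function name below is a hypothetical one of our own choosing:
(* probability that at least one of three independent parallel switches is closed *)
FlowProbability[p1_, p2_, p3_] := 1 - (1 - p1) (1 - p2) (1 - p3)
FlowProbability[1/2, 1/2, 1/2]
For three fair (probability 1/2) switches, for instance, this gives 7/8.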
Example 5 Our last example of this section again deals with simulation and contingency
tables. A good pseudo-random number generator as described in Section 3 ought to have the
property that each pseudo-random number that is generated appears to be independent of its
predecessors (even though it is of course a deterministic function of its immediate predeces-
sor). Checking for validity of the generator therefore requires a check for independence of
each member from each other in a stream of numbers. There are several ways to do this, and
here is one that is sensitive to unwanted upward or downward trends in the stream of
numbers.
We can be alerted to a possible violation of independence if some property that
independence implies is contradicted by the data. For example, in a stream of 40 numbers
we can classify each number according to two characteristics: whether it is in the first group
of 20 or the second, and whether it is above some fixed cutoff or below it. If the numbers
are truly random and independent of one another, the group that a number is in should have
no effect on whether that number is above or below the cutoff. If the two-by-two contin-
gency table that we generate in this way shows serious departures from the tallies that we
would expect under independence, then the assumption of independence is in doubt.
Here is a stream of 40 random integers between 1 and 20, simulated using DrawIntegerSample.
Needs["KnoxProb7`Utilities`"];
SeedRandom[11753];
datalist = DrawIntegerSample[1, 20, 40, Replacement -> True]
Among the first 20 numbers, 12 are less than or equal to 10 and 8 are greater than 10.
Among the second group of 20 numbers, 10 are less than or equal to 10 and 10 are greater.
We therefore have the following observed contingency table.
                   size
                 ≤ 10    > 10    total
group      1      12       8       20
           2      10      10       20
total             22      18       40
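The tallies in this table can be pulled from datalist with the built-in Count and Take functions; a short sketch:
(* how many of the first 20 and of the last 20 numbers are at or below the cutoff 10 *)
{Count[Take[datalist, 20], x_ /; x <= 10],
 Count[Take[datalist, -20], x_ /; x <= 10]}
With the sample above this returns {12, 10}, matching the first column of the table.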
Activity 3 Repeat the drawing of a sample of 40 from 1, 2, ..., 20 a few more times.
Do you observe many tables that are far from expectations? Try doubling the sample
size to 80, and using four groups of 20 instead of two. What category counts are
expected? Do you see striking departures from them in your simulations?
Exercises 1.6
1. A student guesses randomly on ten multiple choice quiz questions with five alternative
answers per question. Assume that there is only one correct response to each question, and
the student chooses answers independently of other questions. What is the probability that
the student gets no more than one question right?
2. Consider the experiment of randomly sampling a single number from 1, 2, ..., 10. Is
the event that the number is odd independent of the event that it is greater than 4?
3. If A and B are independent events, prove that each of the following pairs of events is independent: (a) A and B^c; (b) A^c and B; (c) A^c and B^c.
4. If events A and B have positive probability and are disjoint, is it possible for them to be
independent?
5. Prove that if events A, B, C, and D are independent of one another, then so are the groups
of events (a) A, B, and C; (b) A and B.
6. Prove that both ∅ and Ω are independent of every other event A.
7. (Mathematica) Referring to Example 3, run the SimContingencyTable command a few
times each for sample sizes of 40, 80, and 120. Compare the variabilities for the three
sample sizes.
8. A sample space has six outcomes: a, b, c, d, e, and f. Define the event A as {a, b, c, d} and the event B as {c, d, e, f}. If each of a, b, e, and f has probability 1/8, find probabilities on c and d that make A and B independent events, or explain why it is impossible to do so.
Exercise 10
10. In the figure is a part of an electrical circuit as in Example 4, except that two parallel
groups of switches are connected in series. Current must pass through both groups of
switches to flow from A to B. If each switch in the first group has probability p of being
closed, each switch in the second group has probability q of being closed, and the switches
operate independently, find the probability that current will flow from A to B.
11. (Mathematica) A system has three independent components which work for a random, uniformly distributed amount of time in [10, 20] and then fail. Each component can take on
the workload of the other, so the system fails when the final component dies. Simulate 1000
such systems, produce a histogram of the system failure times, and estimate the probability
that the system fails by time 15. What is the exact value of that probability?
12. In Example 4 we were able to find the probability that current flows from A to B easily
by complementation. Compute the same probability again without complementation.
13. (Mathematica) In Example 5 we set up the integer sampling process so that replace-
ment occurs. If the pseudo-random number generator behaves properly we would not expect
any evidence against randomness. But the question arises: is this test sensitive enough to
detect real independence problems? Try drawing integer samples again, this time of 40
integers from 1, 2, ..., 80 without replacement. Using characteristics of group (1st 20, 2nd
20) and size (40 or below, more than 40), simulate some contingency tables to see whether
you spot any clear departures from independence. Would you expect any?
14. If events B1, B2, B3, and B4 are independent, show that
(a) P[B1 | B3 ∩ B4] = P[B1]
(b) P[B1 ∩ B2 | B3 ∩ B4] = P[B1 ∩ B2]
(c) P[B1 ∪ B2 | B3] = P[B1 ∪ B2]
15. A stock goes up or down from one day to another independently. Each day the probabil-
ity that it goes up is .4. Find the probability that in a week of five trading days the stock
goes up at least four times.
16. Two finite or countable families of events A = {A1, A2, ...} and B = {B1, B2, ...} are called independent if every event Ai ∈ A is independent of every event Bj ∈ B. Suppose that two dice are rolled one after the other. Let A consist of basic events of the form "1st die = n" for n = 1, 2, ..., 6, together with all unions of such basic events. Similarly let B be the events that pertain to the second die. Show that each basic event in A is independent of each basic event in B, and use this to show that the families A and B are independent.
Sample Problem from Actuarial Exam P
17. An actuary studying the insurance preferences of automobile owners makes the follow-
ing conclusions: (i) an automobile owner is twice as likely to purchase collision coverage as
disability coverage; (ii) the event that an automobile owner purchases collision coverage is
independent of the event that he or she purchases disability coverage; (iii) the probability
that an automobile owner purchases both collision and disability coverages is .15. What is
the probability that an automobile owner purchases neither collision nor disability coverage?
CHAPTER 2
DISCRETE DISTRIBUTIONS
So a random variable X maps outcomes to states, which are usually numerical valued. The p.m.f. of the random variable gives the likelihoods that X takes on each of its possible values. Notice that by Axiom 2 of probability, f(x) ≥ 0 for all states x ∈ E, and also by Axiom 1, Σ_{x∈E} f(x) = 1. These are the conditions for a function f to be a valid probability mass function.
Figure 2.1 - The outcomes a, b, c, d, e, f of Ω mapped by the random variable X to the states 0 and 1
Example 1 Exercise 7 in Section 1.1 illustrates the ideas well. We repeat the diagram of that exercise here for convenience as Figure 1. Assuming that all outcomes a, b, c, d, e, f in Ω are equally likely, and X maps outcomes to states 0, 1 as shown, then the probability mass function of X is
f(0) = P[X = 0] = P[{a, b, c, d}] = 4/6 = 2/3

f(1) = P[X = 1] = P[{e, f}] = 2/6 = 1/3
Example 2 Suppose that a list of grades for a class is in the table below, and the experiment
is to randomly draw a single student from the class.
grade A B C D F total
number of students 4 8 6 2 1 21
The sample space Ω is the set of all students, and the random variable X operates on Ω by returning the grade of the student selected. Then the state space of X is E = {A, B, C, D, F} and the probability mass function is
and the probability mass function is
4 8 6 2 1
f A ; f B ; f C ; f D ; f F (1)
21 21 21 21 21
The probability that the randomly selected student has earned at least a B, for instance, is

P[X = A or X = B] = P[X = A] + P[X = B] = f(A) + f(B) = 12/21
The last computation in Example 2 illustrates a general rule. If X has p.m.f. f, and S is a set of states, then by the third axiom of probability,

P[X ∈ S] = Σ_{x∈S} P[X = x] = Σ_{x∈S} f(x)   (2)
The p.m.f. therefore completely characterizes how probability distributes among the states,
so that the probability of any interesting event can be found. We often use the language that
f characterizes (or is) the probability distribution of X for this reason.
Activity 1 Encode the letter grades in Example 2 by the usual integers 4, 3, 2, 1, and 0 for A-F, respectively. Write a description of, and sketch a graph of, the function F(x) = P[X ≤ x] for x ∈ ℝ.
Example 3 If you did Exercise 3 of Section 1.3 you discovered how to simulate values of a
random variable with a known p.m.f. on a finite state space. Let us apply the ideas here to
write a command to repeatedly simulate from the grade distribution in Example 2.
SimulateGrades[numreps_] :=
 Module[{therand, i, gradelist, nextgrade},
  gradelist = {};
  Do[therand = RandomReal[];
   nextgrade = Which[therand <= 4/21, 4,
     4/21 < therand <= 12/21, 3,
     12/21 < therand <= 18/21, 2,
     18/21 < therand <= 20/21, 1,
     therand > 20/21, 0];
   AppendTo[gradelist, nextgrade], {i, 1, numreps}];
  gradelist]
Needs["KnoxProb7`Utilities`"];
SeedRandom[98921];
Histogram[SimulateGrades[100], 5, ChartStyle -> Gray]
Figure 2.2 - Simulated grade distribution
Since the probabilities of states 0-4 are .05, .09, .29, .38, and .19, this particular empirical
histogram fits the theoretical distribution fairly well. You might have noticed that although
we are interested in simulating the selection of a random student, what we actually did in the
command was to simulate a random variable Y taking [0, 1] to {4, 3, 2, 1, 0} as in Figure 3, with the same distribution as X. To produce the picture we have used Mathematica's Which function to define the step function, and a utility contained in the KnoxProb7`Utilities` package called PlotStepFunction, which plots a right continuous step function on an interval with the given domain, with
jumps at the given list of x values. It has a DotSize option as shown to control the size of the
dots, and also accepts the AxesOrigin and PlotRange options, as does the Plot command.
F[x_] := Which[x < 4/21, 4, 4/21 <= x < 12/21, 3,
   12/21 <= x < 18/21, 2, 18/21 <= x < 20/21, 1, x >= 20/21, 0];
PlotStepFunction[F[x], {x, 0, 1}, {4/21, 12/21, 18/21, 20/21},
 AxesOrigin -> {0, -0.1}, PlotRange -> {-0.1, 4.1},
 DotSize -> 0.02, AspectRatio -> .5]
Definition 3. A discrete random variable X with state space E is said to have cumulative distribution function (abbr. c.d.f.) F(x) if for all x ∈ ℝ,

F(x) = P[X ≤ x] = P[{ω : X(ω) ≤ x}]   (3)
The c.d.f. accumulates total probability lying to the left of and including point x. It also
characterizes the complete distribution of probability, because one can move back and forth
from it to the probability mass function. For instance, suppose that a random variable has
state space E = {.2, .6, 1.1, 2.4} with probability masses 1/8, 1/8, 1/2, and 1/4, respectively.
Then for the four states,
F(.2) = P[X ≤ .2] = f(.2) = 1/8

F(.6) = P[X ≤ .6] = f(.2) + f(.6) = 1/8 + 1/8 = 1/4

F(1.1) = P[X ≤ 1.1] = f(.2) + f(.6) + f(1.1) = 1/8 + 1/8 + 1/2 = 3/4

F(2.4) = P[X ≤ 2.4] = f(.2) + f(.6) + f(1.1) + f(2.4) = 1/8 + 1/8 + 1/2 + 1/4 = 1
F[x_] := Which[x < 0.2, 0, 0.2 <= x < 0.6, 1/8,
   0.6 <= x < 1.1, 1/4, 1.1 <= x < 2.4, 3/4, x >= 2.4, 1];
PlotStepFunction[F[x], {x, 0, 2.6}, {0.2, 0.6, 1.1, 2.4},
 DotSize -> 0.02, AxesOrigin -> {0, -0.01}, PlotRange -> {-0.01, 1.01}]
It is clear that if we know f we can then construct F, but notice also that the probability masses on the states are the amounts by which F jumps; for example f(.6) = 1/8 = 1/4 − 1/8 = F(.6) − F(.6⁻), where F(x⁻) denotes the left-hand limit of F at the point x. Hence, if we know F we can construct f. In general, the relationships between the p.m.f. f(x) and the c.d.f. F(x) of a discrete random variable are

F(x) = Σ_{t ≤ x} f(t) for all x ∈ ℝ;   f(x) = F(x) − F(x⁻) for all states x.   (5)
Figure 4 displays the c.d.f. F for this example. Notice that it begins at functional value 0,
jumps by amount f(x) at each state x, and reaches 1 at the largest state.
Activity 2 Consider a p.m.f. on a countably infinite state space E = {0, 1, 2, ...} of the form f(x) = 1/2^(x+1). Argue that f is a valid mass function and find an expression for its associated c.d.f. F. What is the limit of F(x) as x → ∞?
Example 4 The c.d.f. is sometimes a better device for getting a handle on the distribution
of a random variable than the p.m.f. For instance, remember that in Chapter 1 we simulated
random samples, for each of which we calculated a maximum or minimum sample value.
We then used a list of such extrema to give a histogram of their empirical distributions. Here
we will find the exact theoretical distribution of a maximum sample value.
Suppose that the random experiment is to draw five numbers randomly in sequence
and with replacement from {1, 2, ..., 20}. Let X be the maximum of the sample values, e.g., if ω = {5, 2, 10, 4, 15} then X(ω) = 15. Though it is not clear immediately what is the p.m.f. of X, we can compute the c.d.f. easily. In order for the largest observation X to be less than or equal to a number x, all five sample values (call them X1, X2, X3, X4, X5) must be less than or equal to x. Since sampling is done with replacement, we may assume that the five sample values are mutually independent; hence, for x ∈ {1, 2, ..., 20},
F(x) = P[X ≤ x] = P[X1 ≤ x, X2 ≤ x, X3 ≤ x, X4 ≤ x, X5 ≤ x]
     = P[X1 ≤ x]·P[X2 ≤ x]·P[X3 ≤ x]·P[X4 ≤ x]·P[X5 ≤ x] = (x/20)^5
The last line follows because for each sampled item, the chance that it is less than or equal to
x is the number of such numbers (x) divided by the total number of numbers (20) that could
have been sampled. The p.m.f. of X is therefore
f(x) = P[X = x] = P[X ≤ x] − P[X ≤ x − 1] = (x/20)^5 − ((x − 1)/20)^5   (6)
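As a quick sanity check on formula (6), the p.m.f. values should total 1 over the states 1 through 20; the sum telescopes, and Mathematica confirms it:
(* the terms (x/20)^5 - ((x-1)/20)^5 telescope, so the total is (20/20)^5 = 1 *)
Sum[(x/20)^5 - ((x - 1)/20)^5, {x, 1, 20}]
which returns 1.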
Activity 3 Use formula (6) above to make a list plot in Mathematica of the values of f
with the plot points joined. Superimpose it on a histogram of 500 simulated values of X
using the DrawIntegerSample command with option Replacement->True in the Knox-
Prob7`Utilities` package to observe the fit of the sample to the theoretical distribution.
E[X] = Σ_{x∈E} x·f(x)

where the sum is taken over all states. Similarly, the expected value of a function g of X is

E[g(X)] = Σ_{x∈E} g(x)·f(x)

In the case of the expectation of a function g(X) note that the values of the function g(x) are again weighted by the state probabilities f(x) and summed, so that E[g(X)] is a weighted average of the possible values g(x). Often E[X] is called the mean of the distribution of X (or just the mean of X), and is symbolized by μ.
Example 5 Suppose that on a particular day, a common stock price will change by one of the increments: −1/4, −1/8, 0, 1/8, 1/4, or 3/8, with probabilities .06, .12, .29, .25, .18, and .10, respectively. Then the expected price change is
μ = (-1/4) (.06) + (-1/8) (.12) + (0) (.29) + (1/8) (.25) + (1/4) (.18) + (3/8) (.10)

0.08375
Let us also find the expected absolute deviation of the price change from the mean. Symbolically, this is E[|X − μ|], and it is of importance because it is one way of measuring the spread of the probability distribution of X. It is calculated as the weighted average Σ_{x∈E} |x − μ|·f(x). In particular for the given numbers in this example,
Abs[-1/4 - μ] (.06) + Abs[-1/8 - μ] (.12) + Abs[0 - μ] (.29) +
 Abs[1/8 - μ] (.25) + Abs[1/4 - μ] (.18) + Abs[3/8 - μ] (.10)

0.138725
The so-called moments of X play a key role in describing its distribution. These are expectations of powers of X. We have already met the mean μ = E[X], which is the first moment about 0. In general, the rth moment about a point a is E[(X − a)^r]. Next to the mean, the most important moment of a distribution is the second moment about μ, i.e.,

σ² = Var[X] = E[(X − μ)²]   (9)

which is called the variance of the distribution of X (or just the variance of X). The variance gives a weighted average of squared distance between states and the average state μ; therefore, it is a commonly used measure of spread of the distribution. Its square root σ is called the standard deviation of the distribution of X.
σsquared = (-1/4 - μ)^2 (.06) + (-1/8 - μ)^2 (.12) + (0 - μ)^2 (.29) +
  (1/8 - μ)^2 (.25) + (1/4 - μ)^2 (.18) + (3/8 - μ)^2 (.10)
σ = Sqrt[σsquared]

0.0278297
0.166822
Besides the mean and the variance, the moment of most importance is the third moment about the mean E[(X − μ)³]. It is similar to the variance, but because of the cube it measures an average signed distance from the mean, in which states to the right of μ contribute positively, and states to the left contribute negatively. So for example, if there is a collection of states of non-negligible probability which are a good deal greater than μ, which are not offset by states on the left side of μ, then we would expect a positive value for the third moment. On the left side of Figure 5 is such a probability distribution. Its states are {1, 2, 10, 20}, with probabilities .25, .50, .125, .125. Because of the long right tail we say that this distribution is skewed to the right. To produce the plot we have used the command
ProbabilityHistogram[statelist, problist]
in KnoxProb7`Utilities`, which requires the list of states and the corresponding list of
probabilities for the states. It accepts the options of Graphics together with its own BarColor
option to control the look of the diagram.
μ = .25 (1) + .50 (2) + .125 (10) + .125 (20)
thirdmoment = .25 (1 - μ)^3 + .50 (2 - μ)^3 + .125 (10 - μ)^3 + .125 (20 - μ)^3

5.
408.
The third moment about the mean is positive, as expected. Do the next activity to convince yourself that when the distribution has a long left tail, the third moment about the mean will be negative, in which case we call the distribution skewed to the left.

Activity 4 Compute the mean and the third moment about the mean of the distribution shown on the right of Figure 5. (The states are {1, 2, 15, 20} and their probabilities are 1/8, 1/8, 1/2, 1/4.)
Below are some important results about expectation and variance. The first two parts
will require some attention to joint probability distributions of two random variables before
we can prove them. We postpone these until later in this chapter, but a preview is given in
Exercise 19. We will supply proofs of the other parts.
Theorem 1. Let X and Y be random variables (with finite means and variances wherever they are referred to). Then
(a) E[X + Y] = E[X] + E[Y]
(b) For any constants a and b, E[a X + b Y] = a E[X] + b E[Y]
(c) If a is a constant then E[a] = a
(d) Var(X) = E[X²] − μ², where μ = E[X]
(e) For any constants c and d, Var(c X + d) = c² Var(X)
Proof of (e). First, the mean of the random variable c X + d is c μ + d, by parts (b) and (c) of this theorem. Therefore,

Var(c X + d) = E[(c X + d − (c μ + d))²] = E[(c X − c μ)²] = c² E[(X − μ)²] = c² Var(X)
This theorem is a very important result which deserves some commentary. Properties
(a) and (b) of Theorem 1 make expectation pleasant to work with: it is a linear operator on
random variables, meaning that the expected value of a sum (or difference) of random
variables is the sum (or difference) of the expected values, and constant coefficients factor
out. The intuitive interpretation of part (c), as indicated in the proof, is that a random
variable that is constant must also have that constant as its average value. Part (d) is a
convenient computational formula for the variance that expresses it in terms of the second
moment about 0. For example, if the p.m.f. of a random variable X is f(1) = 1/8, f(2) = 1/3, f(3) = 1/4, f(4) = 7/24, then
μ = E[X] = 1·(1/8) + 2·(1/3) + 3·(1/4) + 4·(7/24) = 65/24

E[X²] = 1²·(1/8) + 2²·(1/3) + 3²·(1/4) + 4²·(7/24) = 201/24

Var(X) = 201/24 − (65/24)² = 599/576
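These fractions are quick to verify in Mathematica with exact arithmetic; here is a minimal sketch, using temporary variable names of our own:
states = {1, 2, 3, 4}; probs = {1/8, 1/3, 1/4, 7/24};
mu = states.probs                  (* the mean, 65/24 *)
secondmoment = (states^2).probs    (* E[X^2], 201/24 = 67/8 *)
secondmoment - mu^2                (* the variance, 599/576 *)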
The last computation is a bit easier than computing the variance directly from the definition Var(X) = E[(X − 65/24)²], though it is not a major issue in light of the availability of
technology. The formula in part (d) is actually more useful on some occasions when for
various reasons we know the variance and the mean and want to find the second moment.
Finally with regard to the theorem, part (e) shows that the variance is not linear as the mean
is: constant coefficients factor out as squares, which happens because variance is defined as
an expected square. Also, the variance measures spread rather than central tendency, so that
the addition of a constant like d in part (e) of the theorem does not affect the variance.
Example 6 A discrete probability distribution on the non-negative integers that we will study later is the Poisson distribution. It has a parameter called μ, and is referred to in Mathematica as PoissonDistribution[mu]. The probability mass function for this distribution is

f(k) = e^(−μ) μ^k / k!,   k = 0, 1, 2, …
We do not know the mean and variance of this distribution yet, but let us simulate large
random samples from the Poisson distribution and use the empirical distribution to estimate
them. Can we develop a conjecture about how the mean and variance relate to the parameter
Μ?
As we will be seeing later, the mean μ and variance σ² of a distribution can be estimated by the sample mean X̄ and sample variance S² calculated as below from a random sample X1, X2, …, Xn from that distribution.

X̄ = (1/n) Σ_{i=1}^n Xi ;   S² = (1/(n − 1)) Σ_{i=1}^n (Xi − X̄)²   (10)
These are sensible estimators of the distributional mean and variance, since X̄ is a simple arithmetical mean of the data values, and S² is a version of an average squared distance of the data values from the sample mean. It is a major result in probability theory that these sample estimators will converge to μ and σ² as the sample size goes to infinity.
To simulate random samples, note that there is a discrete version of the RandomReal
command called RandomInteger that applies to any named discrete distribution object in
Mathematica. Also, the commands Mean and Variance, when applied to a list, will return
the sample mean and variance, respectively. Here is one simulated random sample of size
100 from the Poisson(2) distribution, a histogram, and a report of the sample mean and
variance. Results will differ with the simulation, but it appears as if both the mean and
variance are close to the parameter μ = 2.
P2 = PoissonDistribution[2];
SeedRandom[65476];
datalist2 = RandomInteger[P2, 100]
Histogram[datalist2, 6, ChartStyle -> Gray]
{N[Mean[datalist2]], N[Variance[datalist2]]}
3, 3, 5, 2, 4, 0, 3, 0, 4, 2, 5, 2, 2, 3, 3, 0, 4, 0, 2, 3, 1,
3, 2, 2, 2, 0, 4, 1, 2, 4, 1, 4, 2, 3, 0, 1, 1, 0, 1, 3, 2,
3, 1, 0, 1, 1, 3, 1, 1, 3, 2, 1, 6, 0, 1, 1, 3, 2, 1, 5, 0,
1, 4, 2, 2, 2, 3, 1, 1, 4, 2, 2, 1, 6, 1, 4, 3, 1, 1, 1, 2,
2, 0, 4, 0, 1, 2, 0, 1, 3, 2, 2, 1, 0, 1, 3, 3, 2, 1, 3
To check whether this happens again for another value of μ, observe the simulation below for μ = 5. We suppress the actual sample values and the histogram this time.
P5 = PoissonDistribution[5];
SeedRandom[10237];
datalist5 = RandomInteger[P5, 100];
{N[Mean[datalist5]], N[Variance[datalist5]]}

{4.81, 5.18576}
In the electronic version of the text, you should delete the SeedRandom command and try
rerunning these simulations several times. Try also increasing the sample size. You should
find convincing empirical evidence that the Poisson distribution with parameter μ has mean equal to μ and also has variance equal to μ.
g[y_, n_, θ_] := (y/θ)^n - ((y - 1)/θ)^n
ExpectedHighTank[n_, θ_] := NSum[y g[y, n, θ], {y, 1, θ}]
Figure 6 shows the graph of the expected value in terms of θ for a sample size of n = 40.
ListPlot[Table[{θ, ExpectedHighTank[40, θ]}, {θ, 125, 130}],
 AxesLabel -> {"θ", "Expected High Tank"},
 PlotStyle -> {Black, PointSize[0.02]}]
Figure 2.6 - Expected value of highest tank number as a function of θ
The two values of θ of 127 and 128 have the closest expected high tank number to 125. By evaluating ExpectedHighTank at each of these choices, you will find that θ = 128 is the closest estimate.
Activity 5 Show that the expected value of the discrete uniform distribution on {1, 2, ..., n} is (n + 1)/2, and the variance is (n² − 1)/12.
Example 8 Suppose that each member of a finite population can be classified as either
having a characteristic or not. There are numerous circumstances of this kind: in a batch of
manufactured objects each of them is either good or defective, in a deck of cards each card is
either a face card or not, etc. The particular application that we will look at in this example
is the classical capture-recapture model in ecology, in which a group of animals is captured,
tagged, and then returned to mix with others of their species. Thus, each member in the
population is either tagged or not.
Suppose that there are N animals in total, M of which are tagged. A new random
sample of size n is taken in a batch without replacement. Let the random variable X be the
number of tagged animals in the new sample. Reasoning as in Section 1.4, the probability that there are x tagged animals in the sample, and hence n − x untagged animals, is

P[X = x] = C(M, x) C(N − M, n − x) / C(N, n)   (12)

for integers x such that 0 ≤ x ≤ M and 0 ≤ n − x ≤ N − M, where C(a, b) denotes the binomial coefficient.
The p.m.f. in formula (12) is called the hypergeometric p.m.f. with parameters N, M, and n.
The restrictions on the state x guarantee that we must sample a non-negative number of
tagged and untagged animals, and we cannot sample more than are available of each type.
For instance, if N = 20 animals, M = 10 of which were tagged, the probability of at
least 4 tagged animals in a sample of size 5 is
Binomial[10, 4] Binomial[10, 1]/Binomial[20, 5] +
 Binomial[10, 5] Binomial[10, 0]/Binomial[20, 5]

49/323
Let us try to compute the expected value of the hypergeometric distribution with
general parameters N, M, and n, and then apply the result to the animal tagging context.
First we need to check that the assumption that the sample was taken in a batch can be
changed to the assumption that the sample was taken in sequence, without changing the
distribution of the number of tagged animals. By basic combinatorics, if the sample was
taken in sequence,
P[X = x] = (number of sequences with exactly x tagged)/(number of sequences possible)
         = (orderings of a batch of n)·(batches with x tagged and n − x untagged)/(number of sequences possible)
         = n! C(M, x) C(N − M, n − x) / [N(N − 1)···(N − n + 1)]
         = C(M, x) C(N − M, n − x) / C(N, n)
Since the last quotient is the same as formula (12), we may dispense with the assumption of batch sampling. This allows us to write X, the number of tagged animals in the sample, as

X = X1 + X2 + ··· + Xn

where Xi is 1 if the ith animal sampled is tagged and 0 otherwise. Each Xi equals 1 with probability M/N, so by linearity of expectation,

E[X] = Σ_{i=1}^n [1·(M/N) + 0·(1 − M/N)] = n M/N   (13)
The mean number of tagged animals in our sample of 5 is therefore 5(10)/20 = 2.5. We will
have to wait to compute the variance of the hypergeometric distribution until we know more
about dependence between random variables.
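The value 2.5 can also be confirmed with Mathematica's built-in hypergeometric distribution object, whose arguments are the sample size, the number of tagged animals, and the population size; a one-line sketch:
Mean[HypergeometricDistribution[5, 10, 20]]   (* returns 5/2 *)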
Mathematica's kernel contains objects that represent the distributions that we will
meet in this chapter, in addition to which are ways of using these objects to find mass
function values and cumulative distribution function values, and to produce simulated
observations of random variables. The two named distributions that we have worked with in this subsection are written in Mathematica as DiscreteUniformDistribution[{min, max}] and HypergeometricDistribution[n, M, N], and we introduced the PoissonDistribution object in Example 6. When you want a value of
the p.m.f. of a named distribution (including its parameter arguments) at a state x, you issue
the command
PDF[distribution, x]
(The "D" in "PDF" stands for "density," a word that will be motivated in Chapter 3.) For
example, the earlier computation of the probability of at least 4 tagged animals could also
have been done as follows. To save writing we give a temporary name to the distribution,
and then we call for the sum of the p.m.f. values:
49/323
For cumulative probabilities the function
CDF[distribution, x]
gives the value of the c.d.f. F(x) = P[X ≤ x]. For example, we can get a list of c.d.f. values at the states of the uniform distribution on {1, 2, ..., 10} as follows:
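A sketch of such an input, assuming the built-in DiscreteUniformDistribution object and using N for the decimal conversion:
(* c.d.f. values F(k) = k/10 at the states k = 1, ..., 10 *)
N[Table[CDF[DiscreteUniformDistribution[{1, 10}], k], {k, 1, 10}]]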
0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.
Figure 2.7 - Histogram of the hypergeometric(5,10,20) distribution
RandomInteger[dist1, 10]

{2, 2, 2, 3, 2, 2, 4, 3, 3, 1}
We will see more distributions as we go through this chapter. You may also be
interested in looking at other utility functions in the kernel, such as Quantile, which is
essentially the inverse of the c.d.f., and Mean which returns the mean of a given distribution.
Exercises 2.1
1. Below are three possible probability mass functions for a random variable whose state space is {1, 2, 3, 4, 5}. Which, if any, is a valid p.m.f.? For any that are valid, give the associated c.d.f.

state     1     2     3     4     5
f1(x)    .01   .42   .23   .15   .10
f2(x)    .20   .30   .20   .15   .20
f3(x)    .16   .31   .18   .10   .25
2. Two fair dice are rolled in succession. Find the p.m.f. of (a) the sum of the two; (b) the
maximum of the two.
3. (Mathematica) Simulate 1000 rolls of 2 dice, produce a histogram of the sum of the two
up faces, and compare the empirical distribution to the theoretical distribution in part(a) of
Exercise 2. Do the same for the maximum of the two dice and compare to the theoretical
distribution in Exercise 2(b).
4. Devise an example of a random variable that has the following c.d.f.
F(x) = 0     if x < 0
       1/8   if 0 ≤ x < 1
       4/8   if 1 ≤ x < 2
       7/8   if 2 ≤ x < 3
       1     if x ≥ 3
5. Find the c.d.f., and then the p.m.f., of the random variable X that is the minimum of a
sample of five integers taken in sequence and with replacement from 1, 2, ..., 20.
6. Consider a probability distribution on the set of integers ℤ:

f(x) = (2/9)(1/3)^(|x| − 1)   if x = ±1, ±2, ±3, ...
f(x) = 1/3                    if x = 0

Verify that this is a valid p.m.f., and show that its c.d.f. F satisfies the conditions
9. Find the mean, variance, and third moment about the mean for the probability distribution f(1.5) = .3, f(2) = .2, f(2.5) = .2, f(3.5) = .3.
10. Compare the variances of the two distributions with probability mass functions f(0) = .25, f(1) = .5, f(2) = .25 and g(0) = .125, g(1) = .75, g(2) = .125. Draw pictures of the two p.m.f.'s that account for the differing variances.
11. Compare the third moments about the mean E[(X − μ)³] for the two distributions with probability histograms below.
g1 = ProbabilityHistogram[{0, 1, 2}, {0.25, 0.5, 0.25}, BarColor -> Gray];
g2 = ProbabilityHistogram[{0, 2, 4}, {0.5, 0.25, 0.25}, BarColor -> Gray];
GraphicsRow[{g1, g2}]
Exercise 11
12. For a random variable with state space {0, 1, 2, ...} and c.d.f. F, show that

E[X] = Σ_{n=0}^∞ (1 − F(n))

(Hint: Start with the right side and write 1 − F(n) as a sum.)
13. Derive a computational formula for E[(X − μ)³] in terms of moments about 0.
14. Construct an example of a random variable whose mean is μ = 1 and whose variance is σ² = 1.
15. Some authors object to presenting the formula E[g(X)] = Σ_x g(x)·f(x) as a definition of the expected value of a function of X. Rather, they prefer to say that Y = g(X) defines a new random variable Y, which has its own p.m.f. f_Y(y), and expectation E[Y] = Σ_y y·f_Y(y). Give an argument that the two versions of expectation are the same in the simplest case where g is a 1-1 function.
16. Find the mean and variance of the discrete uniform distribution on the set of states {−n, −n + 1, ..., 0, 1, ..., n − 1, n}.
17. If a discrete random variable X has mean 6.5 and second moment about 0 equal to 50,
find the mean and variance of Y = 2 X 6.
18. Two games are available. Game 1 returns $5 with probability 1/2, and -$1 with probabil-
ity 1/2. Game 2 returns $15 with probability 1/4, and -$2 with probability 3/4.
(a) Compute the mean and variance of return from the two games. Is one clearly better than
the other?
(b) Now suppose that the gambler values a monetary amount of x by the utility function U(x) = √(x + 2). Which of the two games has the higher expected utility?
19. Here is a sneak preview of the proof of linearity of expectation for two discrete random variables X and Y. First, X and Y have joint probability mass function f(x, y) if for all state pairs (x, y),

P[X = x, Y = y] = f(x, y)

The expectation of a function of X and Y is defined analogously to the one variable case:
The mean and variance of the Bernoulli distribution are easy to find.
μ = E[X] = 1·p + 0·(1 − p) = p   (2)

σ² = E[(X − p)²] = (1 − p)²·p + (0 − p)²·(1 − p) = p(1 − p)   (3)
Activity 1 Recompute the variance of the Bernoulli distribution using the computational formula for variance E[X²] − μ².
Example 1 In the game of roulette, the roulette wheel has 38 slots into which a ball can fall
when the wheel is spun. The slots are numbered 00, 0, 1, 2, ... , 36, and the numbers 0 and
00 are colored green while half of the rest are colored black and the other half red. A
gambler can bet $1 on a color such as red, and will win $1 if that color comes up and lose
the dollar that was bet otherwise. Consider a $1 bet on red. The gambler's winnings can then
be described as a random variable W = 2X − 1, where X is 1 if the ball lands in a red slot and X = 0 if not. Clearly X has the Bernoulli distribution. The probability of a success, that is red, is p = 18/38 if all slots are equally likely. By linearity of expectation and formula (2), the expected winnings are:

E[W] = E[2X − 1] = 2 E[X] − 1 = 2(18/38) − 1 = −2/38
Since the expected value is negative, this is not a good game for the gambler. (What do you
think are the expected winnings in 500 such bets?) The winnings are variable though,
specifically by the properties of variance discussed in Section 2.1 together with formula (3),
Var(W) = Var(2X − 1) = 4 Var(X) = 4 (18/38)(20/38) = 360/361
Since the variance and standard deviation are high relative to the mean, for the single $1 bet
there is a fair likelihood of coming out ahead (exactly 18/38 in fact). We will see later that
this is not the case for 500 bets.
Mathematica's syntax for the Bernoulli distribution is
BernoulliDistribution[p]
The functions PDF, CDF, and RandomInteger can be applied to it in the way described at
the end of the last section. Here for example is a short program to generate a desired number
of replications of the experiment of spinning the roulette wheel and observing whether the
color is red (R) or some other color (O). We just generate Bernoulli 0-1 observations with
success parameter p = 18/38, and encode 0's as O and 1's as R.
SimulateRoulette[numreps_] :=
 Table[If[RandomInteger[BernoulliDistribution[18/38]] == 1, "R", "O"], {numreps}]

SimulateRoulette[20]

{R, O, O, R, O, O, O, R, O, R, O, R, O, R, O, O, O, O, O, R}
The Bernoulli distribution is most useful as a building block for the more important
binomial distribution, which we now describe. Instead of a single performance of a dichoto-
mous success-failure experiment, consider n independent repetitions (usually called trials) of
the experiment. Let the random variable X be defined as the total number of successes in the
n trials. Then X has possible states 0, 1, 2, ..., n. To find the probability mass function of
X, consider a particular outcome for which the number of successes is exactly k. For
concreteness, suppose that there are n = 7 trials, and we are trying to find the probability that
the number of successes is k = 4. One such outcome is SSFFSFS, where the four successes
occur on trials 1, 2, 5, and 7. By the independence of the trials and the fact that success
occurs with probability p on any one trial, the probability of this particular outcome is
p·p·(1 − p)·(1 − p)·p·(1 − p)·p = p⁴ (1 − p)³
By commutativity of multiplication, it is easy to see that any such outcome with four successes and 3 failures has the same probability. In general, the probability of any particular n trial sequence with exactly k successes is p^k (1 − p)^(n−k). How many different n trial sequences are there that have exactly k successes? A sequence is determined uniquely by sampling k positions from among n in which the S symbols are located. By combinatorics, there are C(n, k) such sequences. Thus the overall probability of exactly k successes in n Bernoulli trials is the following, which defines the binomial probability mass function.
f(k) = P[X = k] = C(n, k) p^k (1 − p)^(n−k),   k = 0, 1, 2, ..., n   (4)
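Formula (4) agrees with Mathematica's built-in binomial p.m.f.; for instance, a quick numerical check at the n = 7, k = 4, p = 1/2 case discussed above:
(* both expressions give 35/128 *)
{Binomial[7, 4] (1/2)^4 (1 - 1/2)^3, PDF[BinomialDistribution[7, 1/2], 4]}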
The Bernoulli distribution is the special case of the binomial distribution in which there is only n = 1 trial. Notice in (4) that in this case the state space is {0, 1}, and the p.m.f. formula reduces to

f(k) = C(1, k) p^k (1 − p)^(1−k) = p^k (1 − p)^(1−k),   k = 0, 1   (5)
BinomialDistribution[n, p]
where the meanings of the arguments are the same as described above. We will show the use
of this distribution object in the example below.
Example 2 In the roulette problem, let us try to compute the probability that the gambler
who bets $1 on red 500 times comes out ahead. This event happens if and only if there are
strictly more wins than losses among the 500 bets. So we want to compute P[X ≥ 251], or, what is the same thing, 1 − P[X ≤ 250], where X is the number of reds that come up. Since X has the binomial distribution with n = 500 and p = 18/38, we can compute:
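(A sketch of this kind of computation, using the built-in BinomialDistribution object and the CDF function, is shown below.)
(* P[X >= 251] = 1 - P[X <= 250] for X ~ binomial(500, 18/38) *)
N[1 - CDF[BinomialDistribution[500, 18/38], 250]]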
0.110664
So our gambler is only around 11% likely to profit on his venture. Another way to have
made this computation is to add the terms of the binomial p.m.f. from 251 to 500:
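(A sketch of that summation, using the built-in PDF, follows.)
(* add the binomial(500, 18/38) p.m.f. values from 251 through 500 *)
N[Sum[PDF[BinomialDistribution[500, 18/38], k], {k, 251, 500}]]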
0.110664
(If you execute this command you will find that it takes a good deal longer than the one
above using the CDF, because Mathematica has some special algorithms to approximate
sums of special forms like the binomial CDF.)
To recap a bit, you can identify a binomial experiment from the following properties:
it consists of n independent trials, and on each trial either a success (probability p) or a
failure (probability q = 1 − p) can occur. The binomial p.m.f. in formula (4) applies to the
random variable that counts the total number of successes. Shifting the focus over to
sampling for a moment, the trials could be repeated samples of one object from a population,
replacing the sampled object as sampling proceeds, and observing whether the object has a
certain special characteristic (a success) or not (a failure). The success probability would be
the proportion of objects in the population that have the characteristic. One of the exercises
at the end of this section invites you to compare the probability that k sample members have
the characteristic under the model just described to the probability of the same event
assuming sampling in a batch without replacement.
Activity 3 Use Mathematica to compute the probability that the gambler who bets 500
times on red in roulette comes out at least $10 ahead at the end. If the gambler plays
1000 times, what is the probability that he will not show a positive profit?
In Figure 8(a) is a histogram of 500 simulated values from the binomial distribution
with n = 10 trials and p = 1/2. Figure 8(b) is a histogram of the probability mass function
itself, which shows a similar symmetry about state 5, similar spread, and similar shape.
Needs["KnoxProb7`Utilities`"]
datalist = RandomInteger[BinomialDistribution[10, 0.5], 500];
pmf[k_] := PDF[BinomialDistribution[10, 0.5], k];
states = {0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10};
probs = Table[pmf[k], {k, 0, 10}];
GraphicsRow[{Histogram[datalist, 11, "Probability", ChartStyle -> Gray],
  ProbabilityHistogram[states, probs, BarColor -> Gray]}]
Figure 2.8 - (a) Histogram of 500 simulated values from the binomial(10, 0.5) distribution; (b) histogram of the binomial(10, 0.5) probability mass function
The binomial mass function is always symmetric about its center when p = 1/2 (see Exercise 11). In the cases p > 1/2 and p < 1/2, respectively, the distribution is skewed left and right. (Convince yourself of these facts analytically by looking at the formula for the p.m.f., and intuitively by thinking about the binomial experiment itself.) The command below allows you to manipulate the value of p between a low value of p = .1 and a high value of p = .9 for the case n = 10 and see the effect on the histogram of the binomial mass function. Try to find the symmetric case. You should try adapting it to other values of n in the electronic version of the text.
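Here is a sketch of one way such a command could be built from the built-in Manipulate and the package's ProbabilityHistogram:
(* slide p between .1 and .9 and watch the shape of the b(10, p) mass function *)
Manipulate[
 ProbabilityHistogram[Range[0, 10],
  Table[PDF[BinomialDistribution[10, p], k], {k, 0, 10}], BarColor -> Gray],
 {p, 0.1, 0.9}]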
We now turn to the moments of the binomial distribution. Think about these
questions: If you flip 4 fair coins, how many heads would you expect to get? What if you
flipped 6 fair coins? Analyze how you are coming up with your guesses, that is, of what
simpler things is your final answer composed? What does this mean about the expected
value of the number of heads in an odd number of flips, say 7?
After the last paragraph you probably can guess the formula for the mean of the
binomial distribution. Compare your hypothesis with the following brute force calculations
of means in Mathematica, done by summing the probability of a state, times the state, over
all states.
case n = 20, p = .2
Sum[BinomialPMF[k, 20, .2] k, {k, 0, 20}]
4.

case n = 30, p = .8
Sum[BinomialPMF[k, 30, .8] k, {k, 0, 30}]
24.
The binomial mean is of course n p, which we prove in the next theorem. We also include the formula for the variance of the binomial distribution.

Theorem 1. If X ~ b(n, p), then (a) E[X] = n p; (b) Var(X) = n p (1 − p)
Proof. (a) We may factor n and p out of the sum that defines E[X], delete the k = 0 term (which equals 0), and cancel the common factors of k to get
E[X] = Σ_{k=0}^n k·C(n, k) p^k (1 − p)^(n−k) = n p Σ_{k=1}^n C(n − 1, k − 1) p^(k−1) (1 − p)^(n−k)
Then we can change variables in the sum to l k 1, noting that in the exponent of 1 p
we can rewrite n k as n 1 k 1, which reduces the sum to that of all terms of the
bn 1, p probability mass function.
E[X] = n p Σ_{l=0}^{n−1} C(n − 1, l) p^l (1 − p)^((n−1)−l) = n p · 1
Since this sum equals 1, the desired formula n p for the mean results.
(b) We use the indirect strategy of computing E[X(X − 1)] first, because this sum can be simplified very similarly to the sum above for E[X].
E[X(X − 1)] = Σ_{k=0}^n k(k − 1)·C(n, k) p^k (1 − p)^(n−k) = n(n − 1) p² Σ_{k=2}^n C(n − 2, k − 2) p^(k−2) (1 − p)^(n−k)
The sum on the right adds all terms of the b(n − 2, p) p.m.f., hence it equals 1. (Use the change of index l = k − 2 to see this.) Therefore, E[X(X − 1)] = n(n − 1) p², and we can write the left side as E[X²] − E[X]. Finally,

Var(X) = E[X²] − (E[X])² = E[X²] − E[X] + E[X] − (E[X])² = n(n − 1) p² + n p − (n p)² = n p (1 − p)
μ = N[1000 (.98)]
σ = N[Sqrt[1000 (.98) (.02)]]

980.
4.42719
which is:
0.890067
Activity 4 We will see later that a very large proportion of the probability weight for
any distribution falls within 3 standard deviations (which comes out to about 13 in
Example 3) of the mean of the distribution. So the delivery company can be fairly
certain that between 967 and 993 of the packages will be delivered on time overnight.
Use the CDF command to find the probability of this event.
Example 4 Many attempts have been made to estimate the number of planets in the
universe on which there is some form of life, based on some very rough estimates of
proportions of stars that have planetary systems, average numbers of planets in those
systems, the probability that a planet is inhabitable, etc. These estimates use random
variables that are assumed to be modeled by binomial distributions. At the time of this
writing, scientists are estimating that 2% of stars have planetary systems. Suppose that only
1% of those systems have a planet that is inhabitable (assume conservatively that there is at
most one such planet per system), and that only 1% of inhabitable planets have life. For a
galaxy of, say, 100 billion = 10^11 stars, let us find the expected number of stars with a planet
that bears life.
Consider each star as a trial, and define a success as a star with a system that includes
a planet that has life. Then there are n = 10^11 trials, and by the multiplication rule for
conditional probability, the success probability per star is
p = P[life] = P[has system and has inhabitable planet and life] = (.02)(.01)(.01) = 2 × 10^−6
Therefore, the expected value of the number of planets with life is n p = 2 × 10^5 = 200,000. Incidentally, the standard deviation of the number of planets with life would be
447.213
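That value is just the binomial standard deviation Sqrt[n p (1 − p)]; a one-line sketch that reproduces it:
N[Sqrt[10^11 (2*10^-6) (1 - 2*10^-6)]]   (* approximately 447.213 *)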
As mentioned in the last activity, most of the probability weight for any distribution falls
within 3 standard deviations of the mean. Since 3σ is around 1300, which is a great deal
less than the mean of 200,000, this means that it is very likely that the number of planets
with life is in the vicinity of 200,000 under the assumptions in this example.
Multinomial Distribution
A generalization of the binomial experiment consists of n independent trials, each of which
can result in one of k possible categories or values. Think of these values as k different types
of success. Category i has probability pi , and we must have that the sum of all pi for i = 1 to
k equals 1. This so-called multinomial experiment therefore involves k counting random
variables X1 , X2 , ... , Xk , where Xi is the number of trials among the n that resulted in value
i. Since all of the n trials must result in exactly one value, the total of all of these Xi 's must
come out to n. For example, consider the roll of a fair die n = 12 times in succession. On each roll, there are k = 6 possible faces that could come up. Random variables associated with this experiment are X1 = the number of 1's among the 12 rolls, X2 = the number of 2's among the 12 rolls, etc.
In addition to finding how these random variables behave individually, we might be
interested in probabilities of events involving intersections, such as the event that there are at
least 3 1's rolled and at most 2 6's.
Let us consider a somewhat smaller case with unequal outcome probabilities to see
the underlying idea. For a multinomial experiment with 10 trials and 4 possible categories on
each trial whose probabilities are 1/4, 1/2, 1/8, and 1/8, let us find an expression for the joint
probability mass function of the category frequencies X1 , X2 , X3 , and X4 defined by
f(x1, x2, x3, x4) = P[X1 = x1 ∩ X2 = x2 ∩ X3 = x3 ∩ X4 = x4]

(It is common practice to make such intersections more readable by replacing the symbol ∩ by a comma.) Reasoning as in the development of the binomial p.m.f., any sequence of 10
trials that contains x1 1's, x2 2's, x3 3's, and x4 4's has probability
(1/4)^(x1) (1/2)^(x2) (1/8)^(x3) (1/8)^(x4)
The number of such sequences is the number of ways of partitioning 10 trials into x1 positions for 1's, x2 positions for 2's, x3 positions for 3's, and x4 positions for 4's. We observed in Section 1.4 that such a partition can be done in 10!/(x1! x2! x3! x4!) ways. Hence,
P[X1 = x1, X2 = x2, X3 = x3, X4 = x4] = [10!/(x1! x2! x3! x4!)] (1/4)^(x1) (1/2)^(x2) (1/8)^(x3) (1/8)^(x4)
After the next example we will write the general formula for the joint probability mass
function of k multinomial random variables with success probabilities p1 , p2 , … , pk and n
trials.
Example 5 In a study of the on-time performance of a bus service at a particular stop, each
day the categories might be: early, on time, and late, say with probabilities .12, .56, and .32,
respectively. What is the probability that in an eight day period the bus is early once, on
time 4 times, and late 3 times? What is the probability that the bus is late at least 6 times
more than it is early?
Denoting the three daily categories as E, O, and L, two of the many configurations
that are consistent with the requirements are:
EOOOOLLL
OEOLOLLO
The probability of each such configuration is clearly (.12)¹(.56)⁴(.32)³, by the independence of the trials. There are 8!/(1! 4! 3!) such configurations. Therefore the overall probability of exactly 1 early arrival, 4 on time arrivals, and 3 late arrivals is

P[X1 = 1, X2 = 4, X3 = 3] = [8!/(1! 4! 3!)] (.12)¹ (.56)⁴ (.32)³
Remember that Mathematica can compute the multinomial coefficient as a function of the
category frequencies as Multinomial[x1, x2, ..., xk]. Hence, the desired probability is
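(The following is a sketch of that computation, using the built-in Multinomial function.)
(* P[1 early, 4 on time, 3 late] over the eight days *)
Multinomial[1, 4, 3] (.12)^1 (.56)^4 (.32)^3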
0.108278
For the second question, we must itemize disjoint cases that make up the event that
the bus is late at least 6 times more than it is early, and then add their probabilities. Since
there are only eight days in the study, the possible cases are {X1 = 0, X2 = 2, X3 = 6}, {X1 = 0, X2 = 1, X3 = 7}, {X1 = 1, X2 = 0, X3 = 7}, and {X1 = 0, X2 = 0, X3 = 8}. The total
probability of the desired event is then:
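(Adding the multinomial probabilities of these four disjoint cases gives the answer; a sketch:)
Multinomial[0, 2, 6] (.56)^2 (.32)^6 +
 Multinomial[0, 1, 7] (.56)^1 (.32)^7 +
 Multinomial[1, 0, 7] (.12)^1 (.32)^7 +
 Multinomial[0, 0, 8] (.32)^8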
0.0114074
In general, the multinomial distribution is the joint distribution of the numbers X1 ,
X2 , ..., Xk of items among n trials that belong to the categories 1, 2, ..., k, respectively. If the
category probabilities are p1, p2, ..., pk, where Σ_i pi = 1, then the joint probability mass function is

f(x1, x2, ..., xk) = P[X1 = x1, X2 = x2, ..., Xk = xk]
                   = [n!/(x1! x2! ··· xk!)] p1^(x1) p2^(x2) ··· pk^(xk),   (6)

where Σ_i xi = n and xi ≥ 0 for all i.
The individual distributions of random variables that have the joint multinomial
distribution are easy to find if we go back to first principles. Consider X1 for example. Each
of the n trials can either result in a category 1 value, or some other. Lump together all of the
other categories into one category thought of as a failure, and combine their probabilities
into one failure probability. Thus, X1 counts the number of successes in a binomial experi-
ment with success probability p1 . This reasoning shows that if X1 , ... , Xk have the multino-
mial distribution with n trials and category probabilities p1 , ..., pk , then each Xi has the
binomial distribution with parameters n and pi . In a roll of 12 fair dice in succession for
example, the number of 6's has the b(12, 1/6) distribution. The final activity of this section
asks you to generalize this reasoning.
Exercises 2.2
1. Devise two examples other than the ones given at the start of the section of random
phenomena that might be modeled using Bernoulli random variables
2. Explain how a random variable X with the b(n, p) distribution can be thought of as a sum
of Bernoulli(p) random variables.
3. Find the third moment about the mean for the Bernoulli distribution with parameter p.
4. The Bernoulli distribution can be generalized slightly by removing the restriction that the state space is {0, 1}. Consider Bernoulli distributions on state spaces {a, b}, where without loss of generality a < b. The associated probability mass function would be f(k) = p if k = b, and f(k) = q = 1 − p if k = a. Then there will be a unique Bernoulli distribution for each fixed mean, variance pair. In particular, given p, μ = E[X], and σ² = Var[X], find states a and b that produce the desired values of μ and σ².
5. (Mathematica) Referring to the roulette problem, Example 2, write a command that tracks
the gambler's net winnings among n spins betting $1 on red as n gets large. The command
should compute and list plot the net winnings per spin. Run the command several times for
large n. To what does the net winnings per spin seem to converge?
6. Suppose that the number of people among n who experience drowsiness when taking a
certain blood pressure medication has the binomial distribution with success probability
p .4. Find the probability that no more than 3 among 10 such recipients of the medication
become drowsy.
7. Find the second moment about 0 for the b(n, p) distribution.
8. (Mathematica) Suppose that in front of a hotel in a large city, successive 20 second
intervals constitute independent trials, in which either a single cab will arrive (with probabil-
ity .2), or no cabs will arrive (with probability .8). Find the probability that at least 30 cabs
arrive in an hour. What is the expected number of cabs to arrive in an hour?
9. (Mathematica) The 500 Standard & Poor's companies are often used to measure the
performance of the stock market. One rough index simply counts the numbers of
"advances," i.e., those companies whose stocks either stayed the same in value or rose. The
term "declines" is used for those companies whose stocks fell in value. If during a certain
bull market period 70% of the Standard & Poor's stocks are advances on average, what is the
probability that on a particular day there are at least 300 advances? What are the expected
value and variance of the number of advances?
10. For what value of the parameter p is the variance of the binomial distribution largest?
(Assume that n is fixed.)
11. Prove that when p = .5, the p.m.f. of the binomial distribution is symmetric about the mean n/2.
12. (Mathematica) Write a command in Mathematica to simulate 500 at bats of a baseball
hitter whose theoretical average is .300 (that is, his probability of a hit on a given at bat is
.3). Use the command to simulate several such 500 at bat seasons, making note of the
empirical proportion of hits. What is the expected value of the number of hits, and what is
the standard deviation of the batting average random variable (i.e., hits/at bats)?
13. Use linearity of expectation to devise an alternative method of proving the formula E[X] = np, where X ~ b(n, p). (Hint: See Exercise 2.)
19. A large pool of adults earning their first driver's license includes 50% low-risk drivers,
30% moderate-risk drivers, and 20% high-risk drivers. Because these drivers have no prior
driving record, an insurance company considers each driver to be randomly selected from the
pool. This month, the insurance company writes 4 new policies for adults earning their first
driver's license. What is the probability that these 4 will contain at least two more high-risk
drivers than low-risk drivers?
20. A study is being conducted in which the health of two independent groups of ten
policyholders is being monitored over a one-year period of time. Individual participants in
the study drop out before the end of the study with probability .2 (independently of the other
participants). What is the probability that at least 9 participants complete the study in one of
the two groups, but not in both groups?
It is fairly easy to derive the p.m.f.'s of these two distributions. Consider first the
geometric experiment which repeats dichotomous trials with success probability p until the
first success occurs. The event that the number of failures is exactly k is the event that there are k failures in a row, each of probability q = 1 − p, followed by a success, which has probability p. By the independence of the trials, the number X of failures prior to the first success has the following geometric p.m.f. with parameter p (abbr. geometric(p)):

f(k) = P[X = k] = (1 − p)^k p = q^k p,    k = 0, 1, 2, ...    (1)
You can think about the negative binomial experiment in a similar way, although
now the possible arrangements of successes and failures in the trials prior to the last one
must be considered. In order for there to be exactly k failures prior to the rth success, the
experiment must terminate at trial k r, and there must be exactly r 1 successes during the
first k + r 1 trials, and a success on trial k + r. The first r 1 successes may occur in any
positions among the first k + r 1 trials. Therefore, the probability of this part of the event
is given by a binomial probability with k + r 1 trials, r 1 successes and success probabil-
ity p. This binomial probability is multiplied by the probability p of success on trial k + r.
Therefore, the negative binomial p.m.f. with parameters r and p is
f(k) = P[X = k] = C(k + r − 1, r − 1) p^(r−1) q^k · p = C(k + r − 1, r − 1) p^r (1 − p)^k,    (2)
k = 0, 1, 2, ...

(here C(n, j) denotes the binomial coefficient "n choose j"),
where the random variable X counts the number of failures prior to the rth success.
The Mathematica distribution objects for these two distributions are
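namely the built-in constructors

GeometricDistribution[p]
NegativeBinomialDistribution[r, p]

with p the success probability and, for the negative binomial, r the required number of successes.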
For example, the shape of the geometric(.5) p.m.f. is shown in Figure 9(a). A simulation of
500 observed values of a geometric(.5) random variable yields the histogram of the empirical
distribution in Figure 9(b). Both graphs illustrate the exponential decrease of probability
mass suggested by formula (1) as the state grows.
Needs"KnoxProb7`Utilities`";
Figure 2.9 - (a) Histogram of geometric(.5) p.m.f.; (b) Histogram of 500 simulated observations from geometric(.5)
Activity 2 Show that the geometric p.m.f. in formula (1) defines a valid probability
mass function.
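Returning to the telemarketer of Example 1, who needs 20 sales and makes each sale with probability .2, the chance that at least 100 calls will be required is the chance of at least 80 unsuccessful calls. A sketch of the computation, with X counting the unsuccessful calls:

1 - CDF[NegativeBinomialDistribution[20, 0.2], 79]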
0.480021
Thus, our telemarketer can be about 48% sure that he will make at least 100 calls. Let us
look at a connected dot plot of the probability masses to get an idea of how variable the
number of unsuccessful calls X is. (We will talk about the variance of the distribution later.)
f[x_] := PDF[NegativeBinomialDistribution[20, 0.2], x];
pmf = Table[{x, f[x]}, {x, 20, 140}];
ListPlot[pmf, Joined -> True, PlotStyle -> Black]
Figure 2.10 - Connected list plot of the negative binomial(20, .2) p.m.f.
The distribution of X is nearly symmetric about a point that seems to be just less than
80, with a slight right skew. It appears that at least 2/3 of the probability weight is between
60 and 100, and there does not seem to be much weight at all to the right of 120. This tells
the telemarketer that on repeated sales campaigns of this kind, most of the time between 60
and 100 unsuccessful calls will be made, and very rarely will he have to make more than 120
unsuccessful calls.
Activity 3 Another kind of question that can be asked about the situation in Example 1
is: for what number c is it 95% likely that the number of unsuccessful calls X is less than
or equal to c? Use Mathematica to do trial and error evaluation of the c.d.f. in search of
this c.
Example 2 A snack machine dispenses bags of chips and candy. One of its ten dispenser
slots is jammed and non-functioning. Assume that successive machine customers make a
selection randomly. What is the probability that the first malfunction occurs strictly after the
20th customer? What is the probability that the third malfunction occurs on the 50th
customer's selection?
Since one among ten slots is defective, the number of normal transactions (failures)
prior to the first malfunction (success) has the geometric(.1) distribution. Consequently, the
probability that the first malfunction happens for customer number 21 or higher is the
probability that at least 20 normal transactions occur before the first malfunction, which is
the complement of the probability that 19 or fewer normal transactions occur. This is
computed as follows.
1 - CDF[GeometricDistribution[.1], 19]
0.121577
So it is highly probable, around 88%, that the first malfunction happens by the time the 20th
customer uses the machine.
Similarly the number of good transactions until the third malfunction has the
negative binomial distribution with parameters r = 3 and p = .1. For the second question we
want the probability that there are exactly 47 good transactions, which is:
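A one-line sketch of this computation:

PDF[NegativeBinomialDistribution[3, 0.1], 47]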
0.00831391
Finally, suppose that we no longer take the malfunction probability p = .1 as known, and that a particular stream of customers produced 4 malfunctions and 20 normal transactions. The likelihood of that stream, as a function of p, is p^4 (1 − p)^20, which we can define and tabulate for several candidate values of p:

prob[p_] := p^4 (1 - p)^20

p      prob[p]
0.1    0.0000121577
0.2    0.0000184467
0.3    6.46317×10^-6
0.4    9.35977×10^-7
0.5    5.96046×10^-8
0.6    1.42497×10^-9
0.7    8.37177×10^-12
0.8    4.29497×10^-15
0.9    6.561×10^-21
From the table, we see that the p value which makes the observed data likeliest is the second
one, p = .2; in other words, the data are most consistent with the assumption that exactly two
slots are jammed.
Activity 4 Redo the last part of Example 2, assuming that the stream of malfunctions
came in on customers 1, 3, 6, and 8. Find the value of p that makes this stream most
likely. Do it again, assuming that the malfunctions were on customers 2, 7, 10, and 17.
We close the section by deriving the first two moments of these distributions.
Theorem 1. (a) The mean and variance of the geometric(p) distribution are
μ = (1 − p)/p,    σ² = (1 − p)/p²    (3)

(b) The mean and variance of the negative binomial(r, p) distribution are

μ = r(1 − p)/p,    σ² = r(1 − p)/p²    (4)
Proof. (a) By the definition of expected value,

E[X] = Σ_{k=0}^∞ k (1 − p)^k p = p (1 − p) Σ_{k=0}^∞ k (1 − p)^(k−1)

The infinite series is of the form Σ_{k=0}^∞ k x^(k−1) for x = 1 − p. This series is the derivative of the series Σ_{k=0}^∞ x^k with respect to x, and the latter series has the closed form 1/(1 − x). Hence,

Σ_{k=0}^∞ k x^(k−1) = d/dx [1/(1 − x)] = 1/(1 − x)²

Evaluating at x = 1 − p,

μ = E[X] = p (1 − p) · 1/(1 − (1 − p))² = (1 − p)/p
(b) The number of failures X until the rth success can be thought of as the sum of the number
of failures X1 until the first success, plus the number of failures X2 between the first and
second successes, etc., out to Xr, the number of failures between successes r − 1 and r. Each
of the Xi's has the geometric(p) distribution; hence, by linearity of expectation, E[X] = r E[X1], which yields the result when combined with part (a).
The proof of the variance formula is best done using an important result that we have
not yet covered: when random variables are independent of one another the variance of their
sum is the sum of their variances. Then, reasoning as we did for the mean,
Var(X) = r Var(X1), which yields the formula for the variance of the negative binomial
distribution when combined with the formula in part (a) for the variance of the geometric
distribution. We will prove this result on the variance of a sum in Section 2.6, but Exercise
13 gives you a head start.
Exercises 2.3
1. For the electrical switch discussed at the start of the section, how small must the switch
failure probability be such that the probability that the switch will last at least 1000 trials
(including the final trial where the switch breaks) is at least 90%?
2. Derive the c.d.f. of the geometric(p) distribution.
3. If X has the geometric(p) distribution, find P[X > m + n | X > n].
4. (Mathematica) On a long roll of instant lottery tickets, on average one in 20 is a winner.
Find the probability that the third winning ticket occurs somewhere between the 50th and
70th tickets on the roll.
5. (Mathematica) Simulate several random samples of 500 observations from the negative
binomial distribution with parameters r = 20 and p = .2, produce associated histograms, and
comment on the shape of the histograms as compared to the graph in Figure 10.
6. A certain home run hitter in baseball averages a home run about every 15 at bats. Find
the probability that it will take him (a) at least 10 at bats to hit his first home run; (b) at least
25 at bats to hit his second home run; (c) at least 25 at bats to hit his second home run given
that it took exactly 10 at bats to hit his first. (d) Find also the expected value and standard
deviation of the number of at bats required to hit the second home run.
7. Derive the formula for the variance of the geometric(p) distribution. (Hint: it will be simpler to first find E[X(X − 1)].)
8. (Mathematica) A shoe store owner is trying to estimate the probability that customers
who enter his store will purchase at least one pair of shoes. He observes one day that the 3rd ,
6th , 10th , 12th , 17th , and 21st customers bought shoes. Assuming that the customers make
their decisions independently, what purchase probability p maximizes the likelihood of this
particular customer stream?
9. (Mathematica) Write a command that simulates a stream of customers as described in
Exercise 8. It should take the purchase probability p as one argument, and the number of
customers in total that should be simulated as another. It should return a list of coded
observations like {B,N,N,B,...} indicating "buy" and "no buy."
10. Show in the case r = 3 that the negative binomial p.m.f. in formula (2) defines a valid
probability mass function.
11. (Mathematica) A random walk to the right is a random experiment in which an object
moves from one time to the next on the set of integers. At each time, as shown in the figure,
either the object takes one step right to the next higher integer with probability p, or stays
where it is with probability 1 p. If as below the object starts at position 0, and p = .3, what
is the expected time required for it to reach position 5? What is the variance of that time?
What is the probability that it takes between 15 and 30 time units to reach position 5?
[Diagram for Exercise 11: states 0 through 5; from each state the object moves one step right with probability .3 or stays put with probability .7]
12. (Mathematica) Study the dependence of the negative binomial distribution with
parameter r = 3 on the success probability parameter p by comparing connected line graphs
of the p.m.f. for several values of p. Be sure to comment on the key features of the graph of
a p.m.f.: location, spread, and symmetry.
13. This exercise foreshadows the result to come later on the variance of a sum of indepen-
dent random variables which was needed in the proof of the theorem about the variance of
the negative binomial distribution. Suppose that two random variables X and Y are such that
joint probabilities involving both of them factor into the product of individual probabilities,
for example:
P[X = i, Y = j] = P[X = i] P[Y = j]
Let the expected value of any function gX , Y of the two random variables be defined by
the weighted average
E[g(X, Y)] = Σ_i Σ_j g(i, j) P[X = i, Y = j]
Write down and simplify an expression for Var(X + Y) = E[(X + Y − μx − μy)²] and show that it reduces to Var(X) + Var(Y).
14. (Mathematica) Consider a basketball team which is estimated to have a 60% chance of
winning any game, and which requires 42 wins to clinch a playoff spot. Find the expected
value and standard deviation of the number of games that are required to do this. Think
about the assumptions you are making when you do this problem, and comment on any
possible difficulties.
Sample Problems from Actuarial Exam P
15. In modeling the number of claims filed by an individual under an automobile policy
during a three-year period, an actuary makes the simplifying assumption that for all integers n ≥ 0, p_(n+1) = (1/5) p_n, where p_n represents the probability that the policyholder files n claims during the period. Under this assumption, what is the probability that a policyholder files
more than one claim during the period?
16. An insurance policy on an electrical device pays a benefit of $4000 if the device fails
during the first year. The amount of the benefit decreases by $1000 each successive year
until it reaches 0. If the device has not failed by the beginning of any given year, the
probability of failure during that year is .4. What is the expected benefit under this policy?
Consider an event that happens sporadically on a time axis, in such a way that the chance that it happens during
a short time interval is proportional to the length of the interval, and the events that it
happens in each of two disjoint time intervals are independent. Without loss of generality,
let the time axis be the bounded interval [0, 1], and also let μ be the proportionality constant, that is, P[event happens in [a, b]] = μ(b − a). Then, if the time interval [0, 1] is broken up into n disjoint, equally sized subintervals [0, 1/n], (1/n, 2/n], ..., ((n − 1)/n, 1],

p = P[event happens in interval ((j − 1)/n, j/n]] = μ/n
for all subintervals. The number of occurrences during the whole interval [0,1] is the sum of
the number of occurrences in the subintervals; hence it has the binomial distribution with
parameters n and p = μ/n. But what do the probability masses approach as n → ∞? We will proceed to argue that they approach the following Poisson(μ) p.m.f.:

f(x) = P[X = x] = e^(−μ) μ^x / x!,    x = 0, 1, 2, ...    (1)
We can write out the binomial probabilities as

P[exactly x successes] = C(n, x) p^x (1 − p)^(n−x)
                       = (n!/(x!(n − x)!)) (μ/n)^x (1 − μ/n)^(n−x)    (2)
                       = (μ^x / x!) · (1 − μ/n)^n · (n(n − 1)⋯(n − x + 1) / n^x) · (1 − μ/n)^(−x)
By a standard result from calculus, the second factor, (1 − μ/n)^n, approaches e^(−μ) as n → ∞. In the activity below you will analyze the other factors to finish the proof that the binomial probabilities approach the Poisson probabilities as n becomes large and the success probability p becomes correspondingly small.
Activity 1 Show that the third and fourth factors in the bottom expression on the right side of (2) approach 1 as n → ∞. You will have to deal with the case x = 0 separately. Conclude that the binomial probability masses b(x; n, μ/n) approach the Poisson(μ) masses in formula (1) as n approaches infinity.
PoissonDistribution[Μ]
which can be used in the usual way with the RandomInteger, PDF, and CDF functions. Let
us use it to see how close the binomial distribution is to the Poisson distribution for large n
and small p. We take n = 100, Μ = 4, and p = Μ/n = 4/100. You will be asked to extend the
investigation to other values of the parameters in the exercises. First is a table of the first few
values of the Poisson and binomial p.m.f.'s, followed by superimposed connected list plots.
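A sketch of one way to produce such a table (the plots can be made similarly with ListPlot and Joined -> True):

TableForm[
 Table[{x, PDF[PoissonDistribution[4], x] // N, PDF[BinomialDistribution[100, 4/100], x] // N}, {x, 0, 12}],
 TableHeadings -> {None, {"x", "Poisson", "binomial"}}]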
x Poisson binomial
0 0.0183156 0.0168703
1 0.0732626 0.070293
2 0.146525 0.144979
3 0.195367 0.197333
4 0.195367 0.199388
5 0.156293 0.159511
6 0.104196 0.105233
7 0.0595404 0.0588803
8 0.0297702 0.0285201
9 0.0132312 0.0121475
10 0.00529248 0.00460591
11 0.00192454 0.0015702
12 0.000641512 0.000485235
Figure 2.11 - List plots of b(100,.04) and Poisson(4) p.m.f.'s
The connected line graphs hardly differ at all and, in fact, the largest absolute difference
between probability masses is only around .004 which takes place at x = 4.
Activity 2 Show that the Poisson(Μ) probabilities in formula (1) do define a valid
probability mass function. Try to obtain a closed form for the c.d.f. of the distribution.
Can you do it? (In Exercise 2 you are asked to use Mathematica to form a table of the
c.d.f. for several values of Μ.)
Example 1 We will suppose for this example that the number of ant colonies in a small field
has the Poisson distribution with parameter Μ 16. This is reasonable if the field can be
broken up into a large number of equally sized pieces, each of which has the same small
probability of containing a colony, independent of the other pieces. Then the probability
that there are at least 16 colonies is
P[X ≥ 16] = 1 − P[X ≤ 15] = 1 − Σ_{k=0}^{15} e^(−16) 16^k / k!
Using Mathematica to evaluate this probability we compute
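A sketch of the evaluation by complementing the c.d.f.:

1 - CDF[PoissonDistribution[16], 15]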
0.533255
Next, suppose that we no longer know Μ, but an ecology expert tells us that fields
have 8 or fewer ant colonies about 90% of the time. What would be a good estimate of the
parameter μ? For this value of μ, P[X ≤ 8] = .90. We may write P[X ≤ 8] as a function of μ
as below, and then check several values of Μ in search of the one that makes this probability
closest to .90. I issued a preliminary Table command to narrow the search from Μ values
between 2 and 10 to values between 5 and 6. The Table command below shows that to the
nearest tenth, Μ = 5.4 is the parameter value we seek.
eightprob[mu_] := CDF[PoissonDistribution[mu], 8]
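The Table command referred to above would look something like this (a sketch):

Table[{mu, eightprob[mu]}, {mu, 5, 6, 0.1}]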
The parameter Μ of the Poisson distribution gives complete information about its first
two moments, as the next theorem shows.
Theorem 1. If X ~ Poisson(μ), then E[X] = μ and Var(X) = μ.
Proof. First, for the mean,

E[X] = Σ_{k=0}^∞ k e^(−μ) μ^k / k!
     = e^(−μ) μ Σ_{k=1}^∞ μ^(k−1) / (k − 1)!
     = e^(−μ) μ e^μ
     = μ

The first line is just the definition of expected value. In the second line we note that the k = 0 term is just equal to 0 and can be dropped, after which we cancel k with the k in k! in the bottom, and then remove the two common factors e^(−μ) and μ. Changing variables to, say, j = k − 1, we see that the series in line 2 is just the Taylor series expansion of e^μ.
To set up the computation of the variance, we first compute E[X X 1] similarly to
the above computation.
E[X(X − 1)] = Σ_{k=0}^∞ k(k − 1) e^(−μ) μ^k / k!
            = e^(−μ) μ² Σ_{k=2}^∞ μ^(k−2) / (k − 2)!
            = e^(−μ) μ² e^μ
            = μ²
Activity 3 By expanding the expression E[X X 1] in the proof of Theorem 1 and
using the computational formula for the variance, finish the proof.
Since Μ is both the mean and the variance of the Poisson(Μ) distribution, as Μ grows
we would expect to see the probability weight shifting to the right and spreading out. In
Figure 12 we see a connected list plot of the probability mass function for three Poisson
distributions with parameters 2 (leftmost), 5 (middle), and 8 (rightmost), and this depen-
dence on Μ is evident. We also see increasing symmetry as Μ increases.
Figure 2.12 - Poisson p.m.f.'s, parameters 2 (dark), 5 (gray), and 8 (dashed)
In the electronic notebook, you can run the input cell below to produce an animation
of the Poisson mass functions with parameters ranging from 2 through 8 to observe the shift
to the right.
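A sketch of one way to build such an animation (the original input cell may differ):

Animate[
 ListPlot[Table[{x, PDF[PoissonDistribution[mu], x]}, {x, 0, 20}],
  Joined -> True, PlotRange -> {0, 0.3}],
 {mu, 2, 8}]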
Poisson Processes
Probably the most important instance of the Poisson distribution is in the study of
what is called the Poisson process. A Poisson process records the cumulative number of
occurrences of some phenomenon as time increases. Therefore, such a process is not just a
single random variable, but a family of random variables (Xt ) indexed by a variable t usually
thought of as time. We interpret Xt as the number of occurrences of the phenomenon in
0, t. A family of random variables such as this is called generically a stochastic process.
We will cover stochastic processes in more detail, and also say more about the Poisson
process, in Chapter 6.
Figure 2.13 - Sample path of a Poisson process whose first three jumps occur at times T1 = 1.2, T2 = 2.0, and T3 = 3.4
Since occurrences happen singly and randomly, and Xt counts the total number of them, if for a fixed experimental outcome ω we plot Xt(ω) as a function of t we get a non-decreasing step function starting at 0, which jumps by exactly 1 at the random times T1(ω), T2(ω), T3(ω), ... of occurrence of the phenomenon. Such a function, called a sample path of the process, is shown in Figure 13, for the case where the first three jumps occur at times T1 = 1.2, T2 = 2.0, T3 = 3.4.
By our earlier discussion of the domain of application of the Poisson distribution, it
is reasonable to assume that as long as the probability of an occurrence in a short interval is
proportional to the length of the interval, and the numbers of occurrences on disjoint
intervals are independent, Xt should have a Poisson distribution.
What should be the parameter of this Poisson distribution? Let λ be the expected number of occurrences per unit time, i.e., λ = E[X1]. In a time interval of length t such as [0, t] we should expect λ·t occurrences. Thus, E[Xt] = λt, and since the expected value of a Poisson random variable is the same as its parameter, it makes sense that Xt ~ Poisson(λt). The constant λ is called the rate of the process.
All of this can be put on a rigorous footing, but we will not do that until later. After
we study a continuous distribution called the exponential distribution, we will be able to
define a Poisson process constructively, by supposing that the times between successive
occurrences Tn Tn1 are independent and have a common exponential distribution. For
now though, we will simply assume that certain situations generate a Poisson process, which
means a family Xt of random variables for which Xt has the Poisson(Λt) distribution and
whose sample paths are as in Figure 13.
Example 2 Suppose that cars travelling on an expressway arrive to a toll station according
to a Poisson process with rate Λ = 5 per minute. Thus, we are assuming that the numbers of
cars arriving in disjoint time intervals are independent, and the probability of an arrival in a
short time interval is proportional to the length of the interval. Let Xt be the total number of
cars in [0, t]. Find: (a) P[X2 > 8]; (b) P[X1 < 5 | X.5 > 2]; (c) assuming that λ is no longer known, find the largest possible arrival rate λ such that the probability of 8 or more arrivals in the first minute is no more than .1.
To answer question (a), we note that since λ = 5 and the time interval is 2 minutes, X2 ~ Poisson(10); hence, by complementation of the c.d.f., the probability is
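A sketch of the computation:

1 - CDF[PoissonDistribution[10], 8]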
0.66718
It is therefore about 67% likely that more than 8 cars will come in two minutes, which is
intuitively reasonable given the average arrival rate of 5 per minute.
For question (b), we have that X1 ~ Poisson(5) and X.5 ~ Poisson(2.5). In order for both X.5 > 2 and X1 < 5 to occur, either X.5 = 3 or X.5 = 4; in the first case at most one arrival may occur in [.5, 1], and in the second case none may. Thus,

P[X1 < 5 | X.5 > 2] = P[X1 < 5, X.5 > 2] / P[X.5 > 2]
                    = ( P[X.5 = 3, X1 − X.5 ≤ 1] + P[X.5 = 4, X1 − X.5 = 0] ) / P[X.5 > 2]

Now the random variable X1 − X.5 in the numerator of the last expression is the number of arrivals in the time interval [.5, 1], which is independent of X.5, the number of arrivals in [0, .5]. The intersection probabilities in the numerator therefore factor into the product of the individual event probabilities. But also, we may think of restarting the Poisson process at time .5, after which it still satisfies the Poisson process hypotheses, so that X1 − X.5 has the same distribution as X.5, which is Poisson(2.5). By these observations, we are left to compute

P[X1 < 5 | X.5 > 2] = ( P[X.5 = 3] P[X.5 ≤ 1] + P[X.5 = 4] P[X.5 = 0] ) / P[X.5 > 2]
f[x_] := PDF[PoissonDistribution[2.5], x]
F[x_] := CDF[PoissonDistribution[2.5], x]
(f[3] F[1] + f[4] f[0])/(1 - F[2])
0.158664
For part (c), we now know only that X1 ~ Poisson(λ·1). The probability of 8 or more arrivals is the same as 1 − P[X1 ≤ 7], and we would like this probability to be less than or equal to .1. We can define the target probability as a function of λ, and then compute it for several values of λ. A preliminary Table command narrows the search to values of λ in [4, 5]. The Table command below indicates that the desired rate λ is around 4.6.
prob[lambda_] := 1 - CDF[PoissonDistribution[lambda], 7];
Table[{lambda, N[prob[lambda]]}, {lambda, 4, 5, .1}]
Exercises 2.4
1. Assume that the number of starfish X in an ocean region has the Poisson(20) distribution. Find (a) P[X = 20]; (b) P[X > 20]; (c) P[μ − σ ≤ X ≤ μ + σ], where μ and σ are the mean and standard deviation of the distribution of X.
2. (Mathematica) Use Mathematica to form a table of cumulative probabilities
F(x) = P[X ≤ x] for each of the four Poisson distributions with parameters μ = 1, 2, 3, and 4. For all distributions, begin at x = 0 and end at x = 10.
3. (Mathematica) Suppose that the sample 3, 2, 4, 3, 4, 5, 6, 2 comes from a Poisson(Μ)
distribution. What value of Μ makes this sample most likely to have occurred? (Justify your
answer well.) Compare this value of Μ to the sample mean X .
4. Find the third moment about 0 for the Poisson(Μ) distribution.
5. If X ~ Poisson(μ), find an expression for the expectation E[X(X − 1)(X − 2)⋯(X − k)].
6. (Mathematica) Write a function of n, p, and k that finds the largest absolute difference
between the values of the b(n,p) probability mass function and the Poisson(np) mass
function, among states x = 0, 1, 2, ... , k. Use it to study the change in the maximum absolute
difference for fixed p = 1/10 as values of n increase from 20 to 100 in multiples of 10.
Similarly, fix n = 200 and study the maximum absolute difference as p takes on the values
1/2, 1/3, 1/4, 1/5, 1/6, 1/7, 1/8, 1/9, 1/10.
7. Suppose that the number of traffic tickets issued during a week in a particular region has
the Poisson distribution with mean 10. Find the probability that the number of tickets next
week will be at least 2 standard deviations below the average.
8. If the number of defects in a length of drywall has the Poisson distribution, and it is
estimated that the proportion with no defects is .3, estimate the proportion with either 1 or 2
defects.
9. If (Xt ) is a Poisson process with rate 2/min., find the probability that X3 exceeds its mean
by at least 2 standard deviations.
10. Suppose that customer arrivals to a small jewelry store during a particular period in the
day form a Poisson process with rate 12/hr. (a) Find the probability that there will be more
than 12 customers in a particular hour; (b) Given that there are more than 12 customers in
the first hour, find the probability that there will be more than 15 customers in that hour.
11. If outside line accesses from within a local phone network form a Poisson process with
rate 1/min., find the joint probability that the cumulative number of accesses is equal to 2 at
Consider the experiment of picking a subject randomly from this group of 171
subjects. Let X and Y, respectively, denote the oral and visual scores of the subject. Then,
for instance,
P[X = 0, Y = 0] = 4/171,    P[X = 0, Y = 1] = 6/171,    P[X = 0, Y = 2] = 10/171,
etc. The complete list of the probabilities
f(x, y) = P[X = x, Y = y]    (1)
over all possible values (x, y) is called the joint probability mass function of the random
variables X and Y. Joint mass functions for more than two discrete random variables are
defined similarly (see Exercise 14).
Activity 1. Referring to the memory experiment, what is P[X = 0]? P[X = 1]? P[X = 2]? P[X = 3]? P[X = 4]? What do these probabilities add to?
The activity above suggests another kind of distribution that is embedded in the joint
distribution described by f. Working now with Y using the column sums we find:
P[Y = 0] = 10/171;    P[Y = 1] = 24/171;    P[Y = 2] = 41/171;
P[Y = 3] = 60/171;    P[Y = 4] = 36/171
It is easy to check that these probabilities sum to 1; hence, the function q(y) = P[Y = y]
formed in this way is a valid probability mass function. Notice that to compute each of these
probabilities, we add up joint probabilities over all x values for the fixed y of interest, for
example,
P[Y = 0] = P[X = 0, Y = 0] + P[X = 1, Y = 0] + P[X = 2, Y = 0] + P[X = 3, Y = 0] + P[X = 4, Y = 0]
         = 4/171 + 2/171 + 1/171 + 2/171 + 1/171
         = 10/171
In general, the marginal probability mass function of Y is

q(y) = P[Y = y] = Σ_x P[X = x, Y = y] = Σ_x f(x, y)    (2)
where f is the joint p.m.f. of X and Y. Similarly, the marginal probability mass function of X
is obtained by adding the joint p.m.f. over all values of y:
p(x) = P[X = x] = Σ_y P[X = x, Y = y] = Σ_y f(x, y)    (3)
The idea of a conditional distribution of one discrete random variable given another
is analogous to the discrete conditional probability of one event given another. Using the
memory experiment again as our model, what is the conditional probability that a subject
gets 2 visual numbers correct given that the subject got 3 oral numbers correct? The
condition on oral numbers restricts the sample space to subjects in the line numbered 3 in the
table, which has 41 subjects. Among them, 9 got 2 visual numbers correct. Hence,
P[Y = 2 | X = 3] = 9/41
It should be easy to see from this example that for jointly distributed discrete random
variables, P[Y = y | X = x] is a well defined conditional probability of one event given
another. This leads us to the definition of the conditional probability mass function of Y
given X = x:
q(y | x) = P[Y = y | X = x] = P[X = x, Y = y] / P[X = x] = f(x, y) / p(x)    (4)
Reasoning as above, the full conditional p.m.f. of Y given X 3 in the memory experiment
is
P[Y = 0 | X = 3] = 2/41,    P[Y = 1 | X = 3] = 6/41,    P[Y = 2 | X = 3] = 9/41,
P[Y = 3 | X = 3] = 14/41,    P[Y = 4 | X = 3] = 10/41
Similarly, the conditional probability mass function of X given Y = y is
p(x | y) = P[X = x | Y = y] = P[X = x, Y = y] / P[Y = y] = f(x, y) / q(y)    (5)
Formulas (4) and (5) tie together the three kinds of distributions: joint, marginal, and
conditional. The conditional mass function is the quotient of the joint mass function and the
marginal mass function of the random variable being conditioned on.
The concept of independence also carries over readily to two or more discrete
random variables. In the case of two discrete random variables X and Y, we say that they are
independent of each other if and only if for all subsets A and B of their respective state
spaces,
P[X ∈ A, Y ∈ B] = P[X ∈ A] P[Y ∈ B]    (6)
(Exercise 15 gives the analogous definition of independence of more than two random
variables.) Alternatively, we could define independence by the condition
P[Y ∈ B | X ∈ A] = P[Y ∈ B]    (7)

for all such sets A and B, with the added proviso that P[X ∈ A] > 0. (How does (7) imply the
factorization criterion (6), and how is it implied by (6)?)
For example, from the memory experiment table, P[Y = 4 | X = 4] = 11/31, since
there are 31 subjects such that X = 4, and 11 of them remembered 4 numbers that were
visually presented. However, P[Y = 4] is only 36/171, so that the chance that 4 visual
numbers are remembered is increased by the occurrence of the event that 4 auditory numbers
were remembered. Even this one violation of condition (7) is enough to show that the
random variables X and Y are dependent (that is, not independent). But in general, do you
really have to check all possible subsets A of the X state space (2^5 = 32 of them here) with all
possible subsets B of the Y state space (again 32 of them) in order to verify independence?
Fortunately the answer is no, as the following important activity shows.
Activity 2 Show that X and Y are independent if and only if their joint p.m.f. f(x, y) factors into the product of the marginal p.m.f.'s p(x) q(y).
Thus, in our example, instead of checking 32 × 32 = 1024 possible combinations of sets A and B, we need only check the factorization of f for 5 × 5 = 25 possible combinations of states x
and y.
Independence for more than two random variables is explored in the exercises. The
basic idea is that several random variables are independent if and only if any subcollection of
them satisfies factorization of intersection probabilities as in formula (6). This also follows if
and only if the joint p.m.f. factors into the product of the marginals.
We will now look at a series of examples to elaborate on the ideas of joint, marginal,
and conditional distributions and independence.
Example 1 In a roll of two fair 12-sided dice, let X and Y be the observed faces. What is the joint distribution of X and Y? Find a general formula for P[X + Y = z], z = 2, 3, ..., 24.
We assume that X and Y are independent random variables, each with marginal p.m.f. f(x) = 1/12, x = 1, 2, ..., 12. By the result of Activity 2, the joint p.m.f. of these two random variables is the product of their marginals:

f(x, y) = p(x) q(y) = (1/12)(1/12) = 1/144,    x, y = 1, 2, ..., 12
Now let Z = X + Y be the total of the two observed faces. In order for Z to equal z, X must assume one of its possible values x, and Y must equal z − x. Then the distribution of Z is, by the law of total probability,

g(z) = P[Z = z] = Σ_x P[X = x, Y = z − x] = Σ_x 1/144,

where the sum runs over those x with 1 ≤ x ≤ 12 and 1 ≤ z − x ≤ 12. When z ≤ 13, the constraint x ≥ 1 is the binding lower bound and there are z − 1 admissible values of x, so that g(z) = (z − 1)/144; when z ≥ 14, the constraint x ≥ z − 12 binds instead and there are 25 − z admissible values, so that g(z) = (25 − z)/144.
You should convince yourself that this is an appropriate generalization of the distribution of
the sum of the faces of two ordinary 6-sided dice.
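A quick numerical check of this formula (a sketch, not part of the original example):

g[z_] := Sum[If[1 <= z - x <= 12, 1/144, 0], {x, 1, 12}];
Table[{z, g[z], If[z <= 13, (z - 1)/144, (25 - z)/144]}, {z, 2, 24}]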
Activity 3 In the previous example, find the conditional mass function of the sum Z
given that X x for each of the x values 2, 6, and 12.
Now the marginals may be found by using the definition in (2) of marginal p.m.f.'s
and summing out over the other random variable states. Try this in the activity that follows
this example. But we can also work by combinatorial reasoning. In order to have exactly i
liberals, we must select them from the subgroup of 45, and then select any other 8 i people
from the 55 non-liberals to fill out the rest of the committee. Therefore, the marginal p.m.f.
of the number of liberals on the committee is
p(i) = P[i liberals] = C(45, i) C(55, 8 − i) / C(100, 8)    (8)
so that the numbers of the three political types in the sample are not independent random
variables. Intuitively, knowing the number of liberals, for instance, changes the distribution
of the number of centrists (see also Exercise 5).
Now the probability that the liberals will have a majority is the probability that there
are at least five liberals on the committee. It is easiest to use the marginal distribution in
formula (8) for the number of liberals, and then to let Mathematica do the computation:
P[liberals have majority] = Σ_{i=5}^{8} C(45, i) C(55, 8 − i) / C(100, 8)
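In Mathematica (a sketch of the computation):

N[Sum[Binomial[45, i] Binomial[55, 8 - i]/Binomial[100, 8], {i, 5, 8}]]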
0.251815
We see that the probability of a liberal majority occurring by chance is only about 1/4.
Activity 4 In the above example, find p(i), the marginal p.m.f. of the number of liberals
by setting k = 8 − i − j in the joint p.m.f. and then adding the joint probabilities over the
possible j values.
Example 3 If two random variables X and Y are independent and we observe instances of
them X1 , Y1 , X2 , Y2 , ... , Xn , Yn , what patterns would we expect to find?
We will answer the question by simulating the list of n pairs. For concreteness, let X
have the discrete p.m.f. p(1) = 1/4, p(2) = 1/4, p(3) = 1/2 and let Y have the discrete p.m.f.
q(1) = 5/16, q(2) = 5/16, q(3) = 3/8. The program below simulates two numbers at a time
from Mathematica's random number generator and converts them to observations of X and Y,
respectively. If the random number generator works as advertised, the values of X and Y that
are simulated should be (or appear to be) independent. We will then tabulate the number of
times the pairs fell into each of the 9 categories 1, 1, 1, 2, 1, 3, 2, 1, 2, 2, 2, 3,
3, 1, 3, 2, 3, 3, and study the pattern.
The first two commands simulate individual observations from the distributions
described in the last paragraph, the third uses them to simulate a list of pairs, and the last
tabulates frequencies of the nine states and presents them in a table.
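A sketch of SimX and SimY commands consistent with this description (each converts a uniform random number to a state by comparing it with the cumulative probabilities; the book's own definitions may differ):

SimX := With[{u = RandomReal[]}, Which[u < 1/4, 1, u < 1/2, 2, True, 3]];
SimY := With[{u = RandomReal[]}, Which[u < 5/16, 1, u < 5/8, 2, True, 3]];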
SimXYPairs[numpairs_] :=
  Table[{SimX, SimY}, {numpairs}]

XYPairFrequencies[numpairs_] :=
  Module[{simlist, freqtable, nextpair, x, y},
   simlist = SimXYPairs[numpairs]; (* generate data *)
   freqtable = Table[0, {i, 1, 3}, {j, 1, 3}]; (* initialize table *)
   Do[nextpair = simlist[[i]]; (* set up next {x, y} pair *)
     x = nextpair[[1]]; y = nextpair[[2]];
     freqtable[[x, y]] = freqtable[[x, y]] + 1, (* increment the frequency table *)
    {i, 1, Length[simlist]}];
   (* output the table *)
   TableForm[{{" ", "Y1", "Y2", "Y3"},
     {"X1", freqtable[[1, 1]], freqtable[[1, 2]], freqtable[[1, 3]]},
     {"X2", freqtable[[2, 1]], freqtable[[2, 2]], freqtable[[2, 3]]},
     {"X3", freqtable[[3, 1]], freqtable[[3, 2]], freqtable[[3, 3]]}}]]
SeedRandom[165481];
XYPairFrequencies[1000]
Above is a sample run. There is some variability in category frequencies, even when
the sample size is 1000, but a few patterns do emerge. In each X row, the Y = 1 and Y = 2
frequencies are about equal, and the Y = 3 frequency is a bit higher. Similarly, in each Y
column, the X = 1 frequency is close to that of X = 2, and their frequencies are only around
half that of X = 3. If you look back at the probabilities assigned to each state, you can see
why you should have expected this. In general, if independence holds, the distribution of
data into columns should not be affected by which row you are in, and the distribution of
data into rows should not be affected by which column you are in.
Activity 5 What characteristics of simulated contingency tables like the one in the last
example would you expect in the case where the random variables are not independent?
Example 4 Suppose that a project is composed of two tasks that must be done one after the
other, and the task completion times are random variables X and Y. The p.m.f. of X is
discrete uniform on the set 3, 4, 5 , and the conditional probability mass functions for Y
given each of the three possible values for X are as in the table below. Find the probability
distribution of the total completion time for the project, and find the probability that the
project will be finished within 6 time units.
         Y = 1    Y = 2    Y = 3
X = 3     1/2      1/4      1/4
X = 4     1/3      1/3      1/3
X = 5     1/4      1/4      1/2
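One way to carry out the computation (a sketch, not the book's own code) is to apply the law of total probability to the total time T = X + Y:

px = {1/3, 1/3, 1/3};                                        (* P[X = 3], P[X = 4], P[X = 5] *)
condY = {{1/2, 1/4, 1/4}, {1/3, 1/3, 1/3}, {1/4, 1/4, 1/2}};  (* rows X = 3, 4, 5; columns Y = 1, 2, 3 *)
totalpmf = Table[{t, Sum[If[1 <= t - x <= 3, px[[x - 2]] condY[[x - 2, t - x]], 0], {x, 3, 5}]}, {t, 4, 8}]
Total[Cases[totalpmf, {t_, pr_} /; t <= 6 :> pr]]   (* P[project finished within 6 time units]; gives 23/36 *)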
Exercises 2.5
1. In the memory experiment example at the beginning of the section, find the conditional
p.m.f. of Y given X = 0, and the conditional p.m.f. of X given Y = 2.
2. Suppose that X and Y have the joint distribution in the table below. Find the marginal
distributions of X and Y. Are X and Y independent random variables?
               Y
          1      2      3      4
     1   1/16   1/16   1/16   1/16
X    2   1/32   1/32   1/16   1/16
     3   1/16   1/16   1/32   1/32
     4   1/8    1/8    1/16   1/16
3. Argue that two discrete random variables X and Y cannot be independent unless their
joint state space is a Cartesian product of their marginal state spaces, i.e., E = E_x × E_y = {(x, y) : x ∈ E_x, y ∈ E_y}.
[Diagram for Exercise 4: the equally weighted integer grid points]
4. Suppose that a joint p.m.f. f x, y puts equal weight on all the integer grid points marked
in the diagram. Find: (a) the marginal distribution of X; (b) the marginal distribution of Y;
(c) the conditional distribution of X given Y = 1; (d) the conditional distribution of Y given
X = 1.
5. In Example 2, find the conditional distribution of the number of centrists on the commit-
tee given that the number of liberals is 3.
6. Show that if two discrete random variables X and Y are independent, then their joint cumulative distribution function, defined by F(x, y) = P[X ≤ x, Y ≤ y], factors into the product of the marginal c.d.f.'s F_X(x) = P[X ≤ x] and F_Y(y) = P[Y ≤ y].
7. In the contingency table below, subjects were classified according to their age group and
their opinion about what should be done with a government budget surplus. One subject is
drawn at random; let X denote the age category and let Y denote the opinion category for
this subject.
(a) Compute the marginal distribution of X ;
(b) Compute the marginal distribution of Y;
(c) Compute the conditional distribution of Y given that X is the 21-35 age category;
(d) Compute the conditional distribution of X given that Y is the Lower Taxes opinion
category;
(e) Considering the data in the table to be a random sample of American voters, do the age
group and opinion variables seem independent?
           Save Social   Reduce National   Lower   Increase Defense
           Security       Debt              Taxes   Spending
21-35          22             10              63        15
36-50          46             20              85        60
50-65          89             54              70        41
over 65       106             32              15        20
13. The number of failures until the first win in a particular slot machine has the geometric
distribution with parameter 1/5. If successive plays on the machine are independent of one
another, use reasoning similar to Example 1 to compute the probability mass function of the
number of failures until the second win.
14. The joint probability mass function of many discrete random variables X1 , X2 , ... , Xk is
the function
f(x1, x2, ..., xk) = P[X1 = x1, X2 = x2, ..., Xk = xk]
Joint marginal distributions of subgroups of the X's can be found by summing out over all
values of the states x for indices not in the subgroup. If a joint mass function f(x1, x2, x3) puts equal probability on all corners of the unit cube [0,1] × [0,1] × [0,1], find the joint
marginals of X1 and X2 , X2 and X3 , and X1 and X3 . Find the one variable marginals of X1 ,
X2 , and X3 .
15. Discrete random variables X1 , X2 , ... , Xk are mutually independent if for any subsets B1 ,
B2 , ... , Bk of their respective state spaces
P[X1 ∈ B1, X2 ∈ B2, ..., Xk ∈ Bk] = P[X1 ∈ B1] P[X2 ∈ B2] ⋯ P[Xk ∈ Bk]
(a) Argue that if the entire group of random variables is mutually independent, then so is any
subgroup.
(b) Show that if X1 , X2 , ... , Xk are mutually independent, then their joint p.m.f. (see
Exercise 14) factors into the product of their marginal p.m.f.'s.
(c) Show that if X1 , X2 , ... , Xk are mutually independent, then
P[X1 ∈ B1, X2 ∈ B2 | X3 ∈ B3, ..., Xk ∈ Bk] = P[X1 ∈ B1] P[X2 ∈ B2]
For several random variables X1 , X2 , ... , Xn the joint p.m.f. is defined analogously:
f(x1, x2, ..., xn) = P[X1 = x1, X2 = x2, ..., Xn = xn]    (1)
In the single variable case we defined the expectation of a function g of a random variable X
as the weighted average of the possible states g(x), weighted by their probabilities f(x). The
analogous definition for many random variables follows.
Definition 1. The expected value of a function g(X1 , X2 , ..., Xn ) of random variables whose
joint p.m.f. is as in (1) is:
E[g(X1, X2, ..., Xn)] = Σ_{x1} ⋯ Σ_{xn} g(x1, x2, ..., xn) f(x1, x2, ..., xn)    (2)

where the multiple sum is taken over all possible states (x1, x2, ..., xn).
Example 1 Let X be the total number of heads in two flips of a fair coin and let Y be the
total number of heads in two further flips, which we assume are independent of the first two.
Then X + Y is the total number of heads among all flips. Each of X and Y has the binomial distribution with parameters n = 2 and p = 1/2, and by the independence assumption their joint p.m.f. is the product of the two marginal p.m.f.'s.
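A short sketch of the computation of E[X + Y] from this joint p.m.f.:

f[x_, y_] := PDF[BinomialDistribution[2, 1/2], x] PDF[BinomialDistribution[2, 1/2], y];
Sum[(x + y) f[x, y], {x, 0, 2}, {y, 0, 2}]   (* gives 2 *)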
Notice that the result is just 1 + 1, that is, the expected value of X plus the expected value of
Y. This is not a coincidence, as you will see in the next theorem.
Theorem 1. If X and Y are discrete random variables with finite means, and c and d are
constants, then
E[cX + dY] = c E[X] + d E[Y]    (3)

Proof. Let f(x, y) be the joint p.m.f. of the two random variables and let p(x) and q(y) be the marginals of X and Y. By formula (2), the expectation on the left is

E[cX + dY] = Σ_x Σ_y (cx + dy) f(x, y)
           = c Σ_x x Σ_y f(x, y) + d Σ_y y Σ_x f(x, y)
           = c Σ_x x p(x) + d Σ_y y q(y)
           = c E[X] + d E[Y]
Activity 1 What does Theorem 1 imply about the simple arithmetical average of two
random variables X and Y?
Theorem 2. If X and Y are independent discrete random variables with finite variances, and
c and d are constants, then
Var(cX + dY) = c² Var(X) + d² Var(Y)    (4)
Proof. By the definition of variance, Var(cX + dY) = E[(cX + dY − (cμx + dμy))²]. Group the terms involving X together, and group those involving Y, to get

Var(cX + dY) = E[(c(X − μx) + d(Y − μy))²]
             = c² E[(X − μx)²] + d² E[(Y − μy)²] + 2cd E[(X − μx)(Y − μy)]    (5)

But the last term in the sum on the right equals zero (see Activity 2), which proves (4).
Activity 2 Use the independence assumption in the last theorem to show that
E[(X − μx)(Y − μy)] = 0. Generalize: show that if X and Y are independent, then E[g(X) h(Y)] = E[g(X)] E[h(Y)].
Theorem 2 also extends easily to the case of linear combinations of many random
variables.
Example 2 Let X1 , X2 , ..., Xn be a random sample drawn in sequence and with replace-
ment from some population with mean Μ and variance Σ2 . The mean of the sample is the
simple arithmetical average of the sample values
X̄ = (X1 + X2 + ⋯ + Xn) / n    (6)
If the sample is in a Mathematica list then Mathematica can calculate the sample mean with
the command Mean[datalist] as follows.
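For instance, with a small made-up data list (the notebook's own data are hypothetical here):

datalist = {2, 5, 3, 7, 3};
Mean[datalist]   (* gives 4 *)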
In the study of statistics, one of the most fundamental ideas is that because a random
sample is subject to chance influences, so is a statistic based on the sample such as X . So,
X is a random variable, and you can ask what its mean and variance are. The previous
theorems allow us to answer the question. By linearity of expectation,
E[X̄] = E[(X1 + X2 + ⋯ + Xn)/n]
      = (1/n) (E[X1] + E[X2] + ⋯ + E[Xn])    (7)
      = (1/n) (nμ) = μ
So X has the same mean value as each sample value Xi . It is in this sense an accurate
estimator of Μ. Also, by Theorem 2,
Var(X̄) = Var((X1 + X2 + ⋯ + Xn)/n)
        = (1/n²) (Var(X1) + ⋯ + Var(Xn))    (8)
        = (1/n²) (nσ²) = σ²/n
As the sample size n grows, the variance of X becomes small and, in this sense, it becomes a
more precise estimator of Μ as n increases.
Let us use Mathematica to see these properties in action. First, we will build a
simulator of a list of sample means from a given discrete distribution. The arguments of the
next command are the number of sample means we want to simulate, the distribution from
which we are simulating, and the size of each random sample. The RandomInteger function
is used to create a sample of the given size, then Mean is applied to the sample to produce a
sample mean.
SimulateSampleMeans[nummeans_, distribution_, sampsize_] :=
  Table[Mean[RandomInteger[distribution, sampsize]], {nummeans}]
Now we will simulate and plot a histogram of 100 sample means of random samples of size
20 from the geometric distribution with parameter 1/2.
Needs"KnoxProb7`Utilities`";
SeedRandom4532;
sample SimulateSampleMeans
100, GeometricDistribution0.5, 20;
Histogramsample, 10, ChartStyle Gray
Figure 2.14 - Sample histogram of 100 means of samples of size 20 from geometric(.5)
This histogram is the observed distribution of the sample mean X in our simulation.
Recall that the mean of the underlying geometric distribution is (1 − 1/2)/(1/2) = 1, and we do see that the center of the frequency histogram is roughly at x = 1.
To see the effect of increasing the sample size of each random sample, let's try
quadrupling the sample size from 20 to 80.
SeedRandom[7531];
sample = SimulateSampleMeans[100, GeometricDistribution[.5], 80];
Histogram[sample, 10, ChartStyle -> Gray]
Figure 2.15 - Sample histogram of 100 means of samples of size 80 from geometric(.5)
Again the center point is around 1, but whereas the range of observed values in the case that
the sample size was 20 extended from around .4 to 1.8, now it only extends from around .7
to 1.4, about half as wide. The theory explains this phenomenon. According to Theorem 2, since the variance of the geometric(.5) distribution is (1 − .5)/(.5)² = 2, the standard deviation of X̄ is √(2/n), which falls from √(2/20) ≈ .32 to √(2/80) ≈ .16 when the sample size is quadrupled from 20 to 80.
Activity 3 Try simulating 100 sample means of samples of size 10 from the Poisson(4)
distribution. Predict the histogram you will see before actually plotting it. Then try
samples of size 90. What happens to the standard deviation of the sample mean?
Definition 2. If X and Y are discrete random variables with means μx and μy, then the covariance between X and Y is

σxy = Cov(X, Y) = E[(X − μx)(Y − μy)]    (9)

Furthermore, if X and Y have finite standard deviations σx and σy, then the correlation between X and Y is

ρxy = Corr(X, Y) = Cov(X, Y) / (σx σy)    (10)
Example 3 To get an idea of how these two expectations measure dependence between X
and Y, consider the two simple discrete distributions in Figure 16.
Figure 2.16 - (a) Equal weights of 1/4 on the four corners (±1, ±1); (b) equal weights of 1/3 on (−1, −1), (0, 0), and (1, 1)
In part (a) of the figure, we put equal probability weight of 1/4 on each of the corners (1, 1), (1, −1), (−1, 1), and (−1, −1). It is easy to check that both μx and μy are zero. The covariance for the distribution in (a) is therefore

σxy = E[XY] = (1/4)(1)(1) + (1/4)(1)(−1) + (1/4)(−1)(1) + (1/4)(−1)(−1) = 0
and so the correlation is 0 as well. If you look at the defining formula for covariance, you
see that it will be large and positive when Y tends to exceed its mean at the same time X
exceeds its mean. But for this joint distribution, when X exceeds its mean (i.e., when it has
the value 1), Y still will either equal 1 or −1 with equal probability, which is responsible for
the lack of correlation. However, in Figure 16(b), suppose we put equal probability of 1/3 on each of the states (−1, −1), (0, 0), and (1, 1). Again, it is easily checked that both μx and μy are zero. This time the covariance is

σxy = (1/3)(−1)(−1) + (1/3)(0)(0) + (1/3)(1)(1) = 2/3
The random variables X and Y actually have the same marginal distribution (check this), which puts a probability weight of 1/3 on each of −1, 0, and 1, and so their common variance is also easy to compute as σ² = 2/3. Thus, the correlation between X and Y is

ρ = σxy / (σx σy) = (2/3) / (√(2/3) · √(2/3)) = 1
For the distribution in Figure 16(b), X and Y are perfectly (positively) correlated. (Notice
that when X is bigger than its mean of 0, namely when X = 1, Y is certain to be greater than its mean; in fact it can only have the value 1.) It turns out that this is the most extreme case. In Exercise 12 you are led through a proof of the important fact that:

If ρ is the correlation between random variables X and Y, then |ρ| ≤ 1,    (11)
and in addition it is true that Ρ equals positive 1 or negative 1 if and only if, with certainty, Y
is a linear function of X.
Remember also the result of Activity 2. If X and Y happen to be independent, then E[(X − μx)(Y − μy)] = E[X − μx] E[Y − μy] = 0. This case represents the other end of
the spectrum. Statistical independence implies that the covariance (and correlation) are zero.
Perfect linear dependence implies that the correlation is 1 (or −1 if Y is a decreasing linear
function of X).
Another set of results in the exercises (Exercise 11) about covariance and correlation
is worth noting. If two random variables are both measured in a different system of units,
that is, if X is transformed to aX + b and Y is transformed to cY + d, where a and c are positive, then

Cov(aX + b, cY + d) = ac Cov(X, Y),    Corr(aX + b, cY + d) = Corr(X, Y)    (12)
In particular, the covariance changes under the change of units, but the correlation does not.
This makes the correlation a more standardized measure of dependence than the covariance.
One way of understanding this is to note that

ρxy = Cov(X, Y) / (σx σy) = E[(X − μx)(Y − μy)] / (σx σy) = E[ ((X − μx)/σx) · ((Y − μy)/σy) ]

so that the correlation is the expected product of standardized differences of the two random variables. You can check that (X − μx)/σx and (Y − μy)/σy each have mean zero and variance 1, and
each measures the number of standard deviation units by which the random variables differ
from their respective means.
Finally, expansion of the expectation that defines the covariance yields the useful simplifying formula:

Cov(X, Y) = E[(X − μx)(Y − μy)]
          = E[XY] − μx E[Y] − μy E[X] + μx μy    (13)
          = E[XY] − μx μy
Example 4 Recall the psychology experiment on memory from the beginning of Section
2.5. If once again we let X be the number of orally presented numbers that were remem-
bered, and Y be the number of visually presented numbers that were remembered, then we
can compute the covariance and correlation of X and Y. The arithmetic is long, so we will
make good use of Mathematica. First we define the marginal distributions of the two random
variables, which are the marginal totals divided by the overall sample size. We also intro-
duce the list of states for the two random variables.
The marginal means are the dot products of the marginal distributions and the state lists:
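A sketch of that step (the names xmarginal, ymarginal, xstates, and ystates are assumed to come from the definitions just described):

{mux, muy} = {xmarginal.xstates, ymarginal.ystates}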
{39/19, 430/171}
The variances can be found by taking the dot product of the marginal mass lists with the
square of the state lists minus the means:
{sigsqx, sigsqy} = {xmarginal.((xstates - mux) (xstates - mux)),
   ymarginal.((ystates - muy) (ystates - muy))}

{2030/1083, 38084/29241}
By computational formula (13), it now suffices to subtract the product of marginal means
from the expected product. For the latter, we need the full dot product of the joint probabili-
ties with the product of X and Y states, which is below.
943/171
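Subtracting the product of the marginal means then gives the covariance; a sketch of the step:

covar = N[943/171 - (39/19) (430/171)]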
0.353032
corr = N[covar/Sqrt[sigsqx sigsqy]]
0.225946
Var(R) = E[(R − μR)²]
       = E[(.5 R1 + .5 R2 − .5 μ1 − .5 μ2)²]
       = E[(.5 (R1 − μ1) + .5 (R2 − μ2))²]
       = (.5)² E[(R1 − μ1)²] + (.5)² E[(R2 − μ2)²] + 2(.5)(.5) E[(R1 − μ1)(R2 − μ2)]
       = (.5)² Var(R1) + (.5)² Var(R2) + 2(.5)(.5) Cov(R1, R2)
Notice that we have computed in a similar way to the derivation leading to formula (5).
Since CovR1 , R2 Ρ Σ1 Σ2 , we compute that the variance and standard deviation of the
total rate of return are:
0.000475
0.0217945
It is a very interesting fact that the negative correlation between the asset rates of return has
allowed the overall rate of return to have a smaller standard deviation, about .022, than
either of the individual assets. This illustrates the value of diversification.
It is time to turn back to the distributional properties of the sample mean in the case
of sampling without replacement. To set up this study, look at the variance of a linear
combination of two random variables which are not assumed to be independent. By formula
(5),
Var(cX + dY) = c² Var(X) + d² Var(Y) + 2cd Cov(X, Y)    (14)
Extend this result to three random variables by doing the next activity.
The proof of Theorem 2 can be generalized (see Exercise 6) to give the formula

Var(Σ_{i=1}^n c_i X_i) = Σ_{i=1}^n c_i² Var(X_i) + 2 Σ_{i=1}^n Σ_{j=1}^{i−1} c_i c_j Cov(X_i, X_j)    (15)

In particular, for the sample mean X̄ = (1/n) Σ_{i=1}^n X_i, each coefficient c_i equals 1/n and, if the X_i's are identically distributed with variance σ², then we obtain

Var(X̄) = (1/n²) Σ_{i=1}^n σ² + (2/n²) Σ_{i=1}^n Σ_{j=1}^{i−1} Cov(X_i, X_j)
        = σ²/n + (2/n²) Σ_{i=1}^n Σ_{j=1}^{i−1} Cov(X_i, X_j)    (16)
Since Σ2 n is the variance of X in the independent case, we can see that if the pairs (Xi , X j )
are all positively correlated (i.e., their covariance is positive), then the variance of X in the
dependent case is greater than the variance in the independent case and, if the pairs are all
negatively correlated, the dependent variance is less than the independent variance.
Example 6 Suppose that a population can be categorized into type 0 individuals and type 1
individuals. This categorization may be done on virtually any basis: male vs. female, at risk
for diabetes vs. not, preferring Coke vs. Pepsi, etc. We draw a random sample of size n in
sequence of these 0's and 1's: X1 , X2 , ... , Xn . Notice that in this case, since the X's only take
on the values 0 and 1, the sample mean X is also the sample proportion of 1's, which should
be a good estimator of the population proportion of 1's. This is because each Xi has the
Bernoulli(p) distribution, whose mean is p and whose variance is p(1 − p) (formulas (2) and (3) of Section 2.2). Thus, by formula (7),

E[X̄] = p

and so X̄ is an accurate estimate of p. If the sample is drawn with replacement, then the X's will be independent, hence formula (8) gives in the independent case

Var(X̄) = p(1 − p)/n
since we require Xi = 1, and once that is given, there are L − 1 type 1's remaining to select from the M − 1 population members who are left which can make Xj = 1. Hence,

E[Xi Xj] = 1 · P[Xi = 1, Xj = 1] = (L/M) · ((L − 1)/(M − 1))

and by computational formula (13),

Cov(Xi, Xj) = (L/M)((L − 1)/(M − 1)) − (L/M)(L/M) = (L/M) · ((L − M)/(M(M − 1)))

after simplification. This can be rewritten as

Cov(Xi, Xj) = −p(1 − p)/(M − 1)

if we use the facts that p = L/M and 1 − p = (M − L)/M. Notice that we are in one of the
cases cited above, where all of the X pairs are negatively correlated. Finally, substitution
into (16) gives
Var(X̄) = p(1 − p)/n + (2/n²) Σ_{i=1}^n Σ_{j=1}^{i−1} ( −p(1 − p)/(M − 1) )
        = p(1 − p)/n − (2/n²) · (n(n − 1)/2) · p(1 − p)/(M − 1)    (17)
        = (p(1 − p)/n) (1 − (n − 1)/(M − 1))
One interesting thing about formula (17) is that as the sample size n approaches the popula-
tion size M, the variance of X̄ decreases to zero, which is intuitively reasonable since
sampling is done without replacement, and the more of the population we sample, the surer
we are of X̄ as an estimator of p.
Figure 2.17 - Variance of the sample mean as a function of sample size, with (thin) and without (thick) replacement
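A plot in the spirit of Figure 2.17 can be produced directly from the with-replacement formula and formula (17); the population size, p, and range of sample sizes below are illustrative choices, not necessarily the ones used for the figure.

(* illustrative parameters; formula (17) is varWithout, p(1-p)/n is varWith *)
M = 1000; p = 1/2;
varWith[n_] := p (1 - p)/n;
varWithout[n_] := p (1 - p)/n (1 - (n - 1)/(M - 1));
Plot[{varWith[n], varWithout[n]}, {n, 100, 500},
 PlotStyle -> {Thin, Thick}, AxesLabel -> {"n", "Var"}]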
Conditional Expectation
We would like to introduce one more notion involving expectation: conditional expectation
of a random variable given (a) an event; and (b) the value taken on by another random
variable. Conditional expectation has come to occupy an important place in applications of
probability involving time-dependent random processes, such as queues and financial
objects, which are discussed in Chapter 6.
Just as the conditional probability of an event given another is a revised probability
that assumes the second event occurred, the conditional expectation of a random variable is a
revised average value of that random variable given some information. Beginning with the
idea of conditional expectation given an event, recall that Exercise 8 of Section 1.5 estab-
lished that if A is an event of positive probability, then the function Q(B) = P[B | A] is a
probability measure as a function of events B. If Y is a random variable on the same sample
space in which A and B reside, we can define the conditional expectation of Y given A as
follows:
Definition 3. Let Y be a discrete random variable mapping a sample space Ω to a state space
E, and let A be a fixed event of positive probability. Denote by P[· | A] the conditional probabil-
ity measure on Ω given A described above. Then the conditional expectation of Y given A is

E[Y | A] = Σ_{y ∈ E} y · P[Y = y | A]    (18)
Suppose for example that Y is the random variable that we looked at near the
beginning of Section 2.1, which takes two possible values 0, 1 on six possible equally
likely outcomes a, b, c, d, e, f, as shown in Figure 18. Consider the conditional expecta-
tion of Y given the event A = {d, e, f}. Since the original outcomes were equally likely, the
conditional probability measure P[B | A] = P[A ∩ B]/P[A] = P[A ∩ B]/(1/2) will place equal likelihood of 1/3
on each of the outcomes in A = {d, e, f}. (Why?) Therefore,

E[Y | A] = 0 · (1/3) + 1 · (2/3) = 2/3.
Notice that the unconditional expectation of Y is 1/3, so knowledge that A occurred changes
the average value of Y. This computation should help to clarify the intuition that the
conditional expectation of a random variable given an event is the average value that the
random variable takes on over the outcomes that are in the event.
[Figure 2.18 - The six outcomes a, b, c, d, e, f and the values 0 and 1 taken on by Y]
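A computation like the one above is easy to check numerically. The sketch below hard-codes the six equally likely outcomes and an assignment of Y-values that is consistent with E[Y] = 1/3 and E[Y | A] = 2/3; which two outcomes of A carry the value 1 is an illustrative assumption that does not affect the answer.

(* six equally likely outcomes; Y = 1 on two of the outcomes in A *)
outcomes = {"a", "b", "c", "d", "e", "f"};
yValue = {"a" -> 0, "b" -> 0, "c" -> 0, "d" -> 0, "e" -> 1, "f" -> 1};
A = {"d", "e", "f"};
Mean[A /. yValue]   (* equal conditional weights of 1/3 give 2/3 *)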
Then,

E[Y · I_A] = E[Y | A] · P[A]    (20)
Activity 5 Use formula (20) to show that conditional expectation is linear, i.e.,
E[c1 Y1 + c2 Y2 | A] = c1 E[Y1 | A] + c2 E[Y2 | A].
In Exercise 16 you are asked to prove a new version of the law of total probability
for conditional expectation, as follows: If sets B1 , B2 , ... , Bn form a partition of the sample
space, then
E[Y] = Σ_{i=1}^n E[Y | Bi] P[Bi]    (21)
So the expected value of a random variable can be found by conditioning its expectation on
the occurrence of Bi, then un-conditioning by weighting by P[Bi] and summing.
Most often we want to compute conditional expectations of random variables given
observed values of other random variables, that is E[Y | X = x]. As long as both X and Y are
defined on the same sample space Ω, the set A = {ω ∈ Ω : X(ω) = x} is a perfectly good
event, and the definition of E[Y | A] above applies, so that we do not need to give a new
definition. But it is more common to give an equivalent definition using conditional mass
functions. The following theorem does that a bit more generally, giving a formula for the
conditional expectation of functions of Y.
Theorem 4. Let X and Y be discrete random variables with joint p.m.f. f(x, y), and let
q(y | x) be the conditional p.m.f. of Y given X = x. Then the conditional expectation of a
function g of Y given X = x is

E[g(Y) | X = x] = Σ_y g(y) q(y | x)    (22)
Reorder the terms in the last sum so that all outcomes ω for which Y(ω) = y are grouped
together, and add over the possible states y of Y. For such outcomes, g(Y(ω)) = g(y), and
Σ_{ω : Y(ω) = y} I_A(ω) P[ω] = P[X = x, Y = y]. Therefore,

E[g(Y) | X = x] = Σ_y (g(y)/P[X = x]) Σ_{ω : Y(ω) = y} I_A(ω) P[ω]
               = Σ_y (g(y)/P[X = x]) P[X = x, Y = y]
               = Σ_y g(y) q(y | x)
σ_Y(x)² = Var(Y | X = x) = Σ_y (y − μ_Y(x))² q(y | x)    (24)
Notice that it is the conditional mean μ_Y(x), not the marginal mean μ_Y, that appears in the
square in the conditional variance. The following is an important activity to check your
understanding of the definition of conditional variance.
Activity 6 Show that σ_Y(x)² is not in general the same as E[(Y − μ_Y)² | X = x]. Show
also the computational rule σ_Y(x)² = E[Y² | X = x] − μ_Y(x)².
Example 7 Let random variables X and Y have the joint mass function f(x, y) shown in the
table below. Find the conditional mean and variance μ_Y(0) = E[Y | X = 0] and Var(Y | X = 0).
Compute also the conditional mean and variance of X given Y = 0.
                 Y
              0      1
      −1    1/6      0
 X     0    1/3     1/3
       1    1/6      0
The conditional p.m.f. of Y given X = 0 places probability (1/3)/(2/3) = 1/2 on each of
y = 0 and y = 1, so μ_Y(0) = E[Y | X = 0] = 1/2 and

Var(Y | X = 0) = (0 − 1/2)² · (1/2) + (1 − 1/2)² · (1/2) = 1/4
For the second part of the question, observe that the marginal probability
q(0) = P[Y = 0] = 1/6 + 1/3 + 1/6 = 2/3. Therefore, the conditional mass function of X
given Y = 0 is

p(−1 | 0) = f(−1, 0)/q(0) = (1/6)/(2/3) = 1/4,
p(0 | 0) = f(0, 0)/q(0) = (1/3)/(2/3) = 1/2,
p(1 | 0) = f(1, 0)/q(0) = (1/6)/(2/3) = 1/4
Thus,

E[X | Y = 0] = (−1) · (1/4) + 0 · (1/2) + 1 · (1/4) = 0

Var(X | Y = 0) = (−1 − 0)² · (1/4) + (0 − 0)² · (1/2) + (1 − 0)² · (1/4) = 1/2.
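These conditional quantities are easy to verify numerically; the sketch below encodes the joint p.m.f. table and recomputes E[X | Y = 0] and Var(X | Y = 0] (all variable names here are illustrative).

xstates = {-1, 0, 1};
joint = {{1/6, 0}, {1/3, 1/3}, {1/6, 0}};     (* rows indexed by x, columns by y *)
q0 = Total[joint[[All, 1]]];                   (* marginal P[Y = 0] = 2/3 *)
pCond = joint[[All, 1]]/q0;                    (* conditional p.m.f. of X given Y = 0 *)
condMean = xstates.pCond                       (* 0 *)
((xstates - condMean)^2).pCond                 (* 1/2 *)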
The activity below lists another important property of conditional expectation,
analogous to the relationship between independence and conditional probability.
Example 8 Recall Example 1.5-3, in which a Xerox machine could be in one of four states
of deterioration labeled 1, 2, 3, and 4 on each day. The matrix of conditional probabilities of
machine states tomorrow given today is reproduced here for your convenience.
                     tomorrow
               1      2      3      4
          1   3/4    1/8    1/8     0
 today    2    0     3/4    1/8    1/8
          3    0      0     3/4    1/4
          4    0      0      0      1
Now that we know about random variables and conditional distributions, we can introduce
X1 , X2 , and X3 to represent the random states on Monday through Wednesday, respectively.
Now let us suppose that the state numbers are proportional to the daily running costs when
the machine is in that state. Thus, we can find for example the expected total cost through
Wednesday, given that X1 = 1 was the initial state on Monday. The conditional distribution
of X2 given X1 = 1 is, from the table, the first row of the matrix.
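One way to organize the computation in Mathematica is sketched below, assuming (as stated above) that the daily cost equals the state number; the names are illustrative. The row of the matrix for state 1 gives the distribution of X2, multiplying that row by the matrix again gives the distribution of X3, and the costs are then weighted and summed.

trans = {{3/4, 1/8, 1/8, 0},
         {0, 3/4, 1/8, 1/8},
         {0, 0, 3/4, 1/4},
         {0, 0, 0, 1}};
states = {1, 2, 3, 4};
distX2 = trans[[1]];          (* distribution of X2 given X1 = 1 *)
distX3 = distX2.trans;        (* distribution of X3 given X1 = 1 *)
1 + states.distX2 + states.distX3   (* expected total cost: 263/64, about 4.11 *)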
To prove this, note that if p(x) denotes the marginal mass function of X, then by Theorem 4,
This property appears similar to, and is in fact referred to as, the law of total probability for
expectation. We can find the expected value of a random variable Y by conditioning on X ,
then "un-conditioning," which here means taking the expected value of the conditional
expectation with respect to X.
Example 9 Let us check formula (25) in the setting of the two random variables X and Y of
Example 7. Given X = −1, Y can have only one value, namely 0, hence E[Y | X = −1] = 0.
Given X = 0, the values Y = 0 and Y = 1 occur with equal probability of 1/2, so
E[Y | X = 0] = 1/2, as we already knew from Example 7. Given X = 1, the only possible
value for Y is Y = 0, hence E[Y | X = 1] = 0. The expression E[E[Y | X]] is therefore

E[E[Y | X]] = 0 · p(−1) + (1/2) · p(0) + 0 · p(1) = (1/2)(2/3) = 1/3,

which agrees with E[Y] = 0 · (2/3) + 1 · (1/3) = 1/3.
Exercises 2.6
1. Rederive the results for the mean and variance of the b(n, p) distribution using another
method, namely by observing that the number of successes Y is equal to the sum of random
variables X1 + X2 + ··· + Xn where Xi = 1 or 0, respectively, according to whether the ith trial is
a success or failure.
2. Suppose that the pair of random variables (X, Y) has a distribution that puts equal
probability weight on each of the points in the integer triangular grid shown in the figure.
Find (a) E[X Y]; (b) E[2 X Y].
[Figure for Exercise 2 - triangular grid of integer points with coordinates 1, 2, 3]
6. Prove formula (15), the general expression for the variance of a linear combination
Var(Σ_i ci Xi). To what does the formula reduce when the Xi's are independent?
7. (Mathematica) Write Mathematica commands which take as input a finite list
x1, x2, ..., xm of states of a random variable X, a list y1, y2, ..., yn of states of a random
variable Y, and a matrix {{p11, p12, ..., p1n}, ..., {pm1, pm2, ..., pmn}} of joint probabili-
ties of states (i.e., pij = P[X = xi, Y = yj]), and return: (a) the mean of X; (b) the mean of Y;
(c) the variance of X; (d) the variance of Y; (e) the covariance of X and Y; (f) the correlation
of X and Y.
8. Find the covariance and correlation between the random variables X and Y in Exercise 2.
9. If X and Y, respectively, are the numbers of successes and failures in a binomial experi-
ment with n trials and success probability p, find the covariance and correlation between X
and Y.
10. (Mathematica) We saw in the section that the probability distribution of the sample
mean X of a random sample of size n clusters more and more tightly around the distribu-
tional mean μ as n → ∞. Here let us look at a different kind of convergence, studying a
time series of the sample mean as a function of n, X̄(n) = (X1 + X2 + ··· + Xn)/n, as we add
more and more observations.
Write a Mathematica command that successively simulates a desired number of
observations from a discrete distribution, updating X̄ each time m new observations have
been generated (where m is also to be an input parameter). The command should display a
connected list plot of that list of means. Run it several times for several distributions and
larger and larger values of n. Report on what the sequence of means seems to do.
11. Show that if a, b, c, and d are constants with a, c > 0, and if X and Y are random
variables, then
Cov(aX + b, cY + d) = a c Cov(X, Y)
Corr(aX + b, cY + d) = Corr(X, Y)
12. Show that the correlation ρ between two random variables X and Y must always be less
than or equal to 1 in magnitude, and furthermore, show that if X and Y are perfectly linearly
related, then |ρ| = 1. (The latter implication is almost an equivalence, but we don't have the
tools to see that as yet.) To show that |ρ| ≤ 1, carry out the following plan. First reduce to
the case where X and Y have mean 0 and variance 1 by considering the random variables
(X − μx)/σx and (Y − μy)/σy. Then look at the discriminant of the non-negative valued
quadratic function of t: E[(W + t Z)²], where W and Z are random variables with mean 0 and
variance 1.
13. (Mathematica) If the population in Example 6 is of size 1000, and sampling is done
without replacement, use a graph to find out how large must the sample be to guarantee that
the standard deviation of X̄ is no more than .03. (Hint: you do not know a priori what p is,
but at most how large can p(1 − p) be?)
14. (Mathematica) Consider the problem of simulating random samples taken from the
finite population 1, 2, ..., 500, and computing their sample means. Plot histograms of the
sample means of 50 random samples of size 100 in both the cases of sampling with replace-
ment and sampling without replacement. Comment on what you see, and how it relates to
the theoretical results of this section.
15. Suppose that random variable X maps eight outcomes in a sample space to the integers
as follows: on outcomes ω1, ω2 X has the value 0; on outcomes ω3, ω4, ω5 X has the value
1; and on outcomes ω6, ω7, ω8 X has the value 2. The probability measure on the sample
space gives equal likelihood to outcomes ω1, ω2, ω3, ω4, which are each twice as likely as
each of ω5, ω6, ω7, ω8. Find the conditional expectation of X given the event
B = {ω1, ω2, ω8}.
16. As in Example 8, compute the expected total cost through Wednesday given that the
initial state on Monday was 2.
17. Find E[Y | X = 1] and E[Y | X = 2] for the random variables X and Y whose joint
distribution is given in Exercise 2.5-2.
18. Compute h(x) = E[Y² | X = x] for each possible x, if X and Y have the joint distribution
in Exercise 2. Use this, together with formula (20), to find E[Y²].
19. Suppose that random variables X and Y have the joint density shown in the table below.
Compute the conditional mean and variance μ_Y(1) = E[Y | X = 1] and Var(Y | X = 1).
Demonstrate the equation E[E[Y | X]] = E[Y] for these random variables.

                  Y
              0      1      2
        0    1/6     0      0
        1    1/6    1/8     0
 X      2    1/6    1/8    1/24
        3     0     1/8    1/24
        4     0      0     1/24
20. Theorem 4 can be extended slightly to include the case where g is a function of both X
and Y; simply replace the random variable X by its observed value x as follows:
E[g(X, Y) | X = x] = E[g(x, Y) | X = x] = Σ_y g(x, y) q(y | x). Use this to show that
E[h(X) g(Y) | X = x] = h(x) E[g(Y) | X = x], that is, given the value of X, functions of X
may be factored out of the conditional expectation as if they were constants.
21. Show that if Y is a discrete random variable and B1 , B2 , ... , Bn is a partition of the
sample space, then E[Y] = Σ_{i=1}^n E[Y | Bi] P[Bi].
Sample Problems from Actuarial Exam P
22. An insurance policy pays a total medical benefit consisting of two parts for each claim.
Let X represent the part of the benefit that is paid to the surgeon, and let Y represent the part
that is paid to the hospital. The variance of X is 5000, the variance of Y is 10,000, and the
variance of the total benefit X Y is 17,000. Due to increasing medical costs, the company
that issues the policy decides to increase X by a flat amount of 100 per claim and to increase
Y by 10% per claim. Calculate the variance of the total benefit after these revisions have
been made.
23. Let X denote the size of a surgical claim and let Y denote the size of the associated
hospital claim. An actuary is using a model in which E[X] = 5, E[X²] = 27.4, E[Y] = 7,
E[Y²] = 51.4, and Var(X + Y) = 8. Let C1 = X + Y denote the size of the combined claims
before the application of a 20% surcharge on the hospital portion of the claim, and let C2
denote the size of the combined claims after the application of that surcharge. Calculate
Cov(C1, C2).
24. An actuary determines that the annual numbers of tornadoes in counties P and Q are
jointly distributed as follows:
                        County Q
                   0      1      2      3
             0    .12    .06    .05    .02
 County P    1    .13    .15    .12    .03
             2    .05    .15    .10    .02
Calculate the conditional variance of the annual number of tornadoes in county Q, given that
there are no tornadoes in county P.
CHAPTER 3
CONTINUOUS PROBABILITY
which has 9 elements. Each of the 2⁹ = 512 subsets of the sample space is an event. The
analogous problem in the continuous domain of sampling two real numbers has all of ℝ² as
its sample space. The set of all subsets of ℝ² is a large breed of infinity indeed, and it also
includes some pathological sets whose properties are incompatible with the way we would
like to formulate continuous probability. So in the rest of this book we will restrict our
attention to a smaller, more manageable class of events: the so-called Borel sets, which are
described shortly.
In working with events previously, we were accustomed to taking unions, intersec-
tions, and complements of events and expecting the resulting set to be an event. This is the
idea behind the following definition.
Definition 1. A σ-algebra ℱ of a set Ω is a family of subsets of Ω which contains Ω itself
and is closed under countable union and complementation; i.e., if A1, A2, A3, ... ∈ ℱ then
∪_i Ai ∈ ℱ, and if A ∈ ℱ then Aᶜ ∈ ℱ.
The definition does not specifically refer to closure under countable intersections, but
the next activity covers that case.
Activity 1 Use DeMorgan's laws to argue that if ℱ is a σ-algebra then ℱ is also closed
under countable intersection.
Example 1 Although the σ-algebra concept is most appropriate for continuous probability
models, it sheds light on the idea to construct one in the discrete case. Consider a sample
space Ω = {1, 2, 3, 4, 5, 6}. Find (a) the smallest σ-algebra containing the event {1, 2}; and
(b) the smallest σ-algebra containing both of the events {1, 2} and {3, 4, 5}.
For part (a) observe that the power set, that is the family of all subsets of Ω, is
certainly closed under countable union and intersection, and it contains the whole space Ω.
Therefore it is a σ-algebra containing {1, 2}, but it may not be the smallest one. Since Ω and
{1, 2} are in the σ-algebra that we would like to construct, so must be their complements,
which are ∅ and {3, 4, 5, 6}. Complementation of any of the four subsets of Ω that we have
listed so far does not give us any new candidates, so we turn to unions. The empty set
unioned with any set is that set, and Ω unioned with any set is just Ω, so we find no new
candidates for the σ-algebra that way. But also {1, 2} ∪ {3, 4, 5, 6} = Ω, which is already in
our σ-algebra, so no new subsets can be found. The smallest σ-algebra containing {1, 2} is
therefore

ℱa = {∅, Ω, {1, 2}, {3, 4, 5, 6}}.
In part (b) there is a second subset of Ω to consider. Both {3, 4, 5} and its complement
{1, 2, 6} must be adjoined to the previous family ℱa. Are any more subsets necessary to
include? To maintain closure under union, the set {1, 2} ∪ {3, 4, 5} = {1, 2, 3, 4, 5} must be
included, as must be its complement {6}. The smallest σ-algebra containing both events
{1, 2} and {3, 4, 5} must at least contain all of the following subsets:

ℱb = {∅, Ω, {1, 2}, {3, 4, 5, 6}, {3, 4, 5}, {1, 2, 6}, {1, 2, 3, 4, 5}, {6}}.
Here we have listed out the subsets by pairs, with complementary sets grouped together, so
clearly the complement of each set in ℱb is also in ℱb. The following activity verifies that
ℱb is indeed a σ-algebra.
Activity 2 Check that the union of any pair of events in ℱb above is again in ℱb. Why
is this enough to show that ℱb is closed under countable union for this example?
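Activity 2 can also be carried out by brute force; the sketch below lists the eight sets in the family and checks that every pairwise union, and every complement relative to Ω, is again in the family (all of the names used here are illustrative).

omega = {1, 2, 3, 4, 5, 6};
Fb = {{}, {1, 2, 3, 4, 5, 6}, {1, 2}, {3, 4, 5, 6},
      {3, 4, 5}, {1, 2, 6}, {1, 2, 3, 4, 5}, {6}};
inFamily[s_] := MemberQ[Sort /@ Fb, Sort[s]];
unionsClosed = And @@ Flatten[Table[inFamily[Union[a, b]], {a, Fb}, {b, Fb}]];
complementsClosed = And @@ (inFamily[Complement[omega, #]] & /@ Fb);
{unionsClosed, complementsClosed}    (* both should evaluate to True *)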
So, in cases where the sample space is the real line ℝ, the family of events will be
the collection of Borel subsets of ℝ. If the sample space is a subset of ℝ, then the events are
the Borel sets contained in that subset. The main advantage to limiting attention to the Borel
sets is that we can compute Riemann integrals over these sets in working with continuous
probability models.
Next we must look at how to characterize the probability of events in the continuous
case. Whatever we do should be consistent with the definition of a probability measure in
Section 1.2, which requires: (i) P[Ω] = 1; (ii) for all A ∈ ℱ, P[A] ≥ 0; and (iii) if
A1, A2, ... is a sequence of pairwise disjoint events then P[A1 ∪ A2 ∪ ···] = Σ_i P[Ai]. If
these axioms are satisfied, Propositions 1-5 in Section 1.2, which give the main properties of
probability, still hold without further proof, because they were proved assuming only the
axioms, not the finiteness of Ω. We will discover that many of the other properties of
probability, random variables, and distributions will carry over as well.
Continuous probabilities can often be thought of as limits of discrete probabilities
where outcomes are measured to higher and higher degrees of accuracy. To motivate the
idea, consider a sample space of numerical outcomes in [0, 1) rounded down to the
nearest tenth. The histogram might look as in Figure 1(a) if the distribution of probability
among outcomes is taken to be uniform for simplicity. Each of the 10 outcomes receives
equal probability of .1. Now suppose that outcomes are rounded down to the nearest .05.
The possible outcomes are now 0, .05, .1, .15, ..., .9, .95, of which there are 20, hence
each would receive probability 1/20 = .05 under a uniform model. This is sketched in Figure
1(b). One can imagine continuing the process; if outcomes were rounded down to the nearest
.01, then each of the 100 resulting outcomes 0, .01, .02, ... , .99 would receive probability
.01.
Figure 3.1 - Two discrete uniform probability measures
Figure 3.2 - Two discrete probability density histograms
Figure 3.3 - Binomial branch process
Suppose that a binomial branch process is normalized so that x = 0, p = 1/2, and there are n
time steps between time 0 and time 1. In each step the value either goes up by a step size of
1/√n or down by the same amount. After n steps, if there have been k ups and n − k downs, the value at time 1 is (2 k − n)/√n.
AssetPriceDistribution[numsteps_] :=
 Module[{timestep, stepsize,
   stateprobs, statelist, densitylist},
  timestep = 1/numsteps;
  stepsize = Sqrt[timestep];
  stateprobs =
   Table[PDF[BinomialDistribution[numsteps, .5], k],
    {k, 0, numsteps}];
  densitylist = stateprobs/(2 stepsize);
  statelist = Table[
    (2 k - numsteps) stepsize, {k, 0, numsteps}];
  ProbabilityHistogram[statelist, densitylist,
   Ticks -> {statelist, Automatic}, BarColor -> Gray]]
The Mathematica command above lets you plot the density histogram for this discrete
probability model. First the time step 1/n and the asset price step size 1/√n are com-
puted. The binomial probabilities are assembled into a list, and divided by Δx = 2/√n
to get the densities. The states (2 k − n)/√n are assembled into a list, and the probability
density histogram is computed. Let us see how these density histograms behave as the
number of time steps n becomes large. The closed cell below contains the graph of a
continuous function called the standard normal density function (about which we will have
much to say later) that superimposes well onto the density histogram. The graphics output
for the plot is named g5. In the electronic file you can manipulate the value of the number of
steps (n = 12 is shown here) in Figure 4 to see that the densities converge to this function.
The fact that probabilities of outcomes are areas of rectangles in the density histogram
suggests that as the number of steps approaches infinity, one should be able to find probabili-
ties involving the asset price at time 1 by finding areas under this standard normal density
function.
Manipulate[Show[
  AssetPriceDistribution[n], g5, BaseStyle -> 8],
 {n, 2, 20, 1}, ControlPlacement -> Top]
[Figure 3.4 - Probability density histogram for n = 12 time steps, with the standard normal density superimposed]
Example 4 Let f(x) = (3/4)(1 − x²) and let the sample space Ω be the interval [−1, 1]. Let us
argue that the following defines a probability measure on the Borel subsets of Ω and find
several example event probabilities.

P[E] = ∫_E f(x) dx    (1)

P[[a, b]] = ∫_a^b f(x) dx    (2)
First, here is a Mathematica definition of our function f , and a computation and display of
the probability associated with the interval (0,.5], which is the shaded area under the density
function in Figure 5.
f[x_] := 0.75 (1 - x^2);
Integrate[f[x], {x, 0, .5}]
0.34375
Figure 3.5 - Shaded area = P[(0, .5]]
in KnoxProb7`Utilities`, which takes the density function, a plot domain, and another pair
called between which designates the endpoints of the shaded interval. Notice that the
probability weight of every singleton point {a} is 0 in this model, since ∫_a^a f(x) dx = 0. Only
sets that have length will receive positive probability. As a further example, the probability
of the event [−.3, .2] is
Integrate[f[x], {x, -.3, .2}]
0.36625
Activity 3 Why is ∫_a^b f(x) dx also equal to P[(a, b)], P[[a, b)], and P[(a, b]]?
Integrate[f[x], {x, -1, 1}]
1.
It is clear from the fact that f(x) = (3/4)(1 − x²) ≥ 0 that the integrand in the defining expression for
P[E] is non-negative on Ω, hence P[E] ≥ 0 for all events E, establishing axiom 2 of
probability. The third axiom of probability, that is, the additivity over countable disjoint
unions, is actually tricky to show in general, because of the fact that the Borel events can be
quite complicated. We will omit the proof. But at least for unions of finitely many disjoint
intervals the additivity property of the integral, shown in calculus, yields the conclusion of
the third axiom without difficulty, as follows.
P[[a1, b1] ∪ [a2, b2] ∪ [a3, b3]] = ∫_{[a1,b1] ∪ [a2,b2] ∪ [a3,b3]} f(x) dx
   = ∫_{a1}^{b1} f(x) dx + ∫_{a2}^{b2} f(x) dx + ∫_{a3}^{b3} f(x) dx
   = P[[a1, b1]] + P[[a2, b2]] + P[[a3, b3]]
Example 5 Suppose that an electronic device on which there is a maintenance plan survives
for a uniformly distributed time (see Section 1.3) in the real interval [0, 4]. It will be
replaced at the earlier of time 3 and its death time. Find a probability measure appropriate for
the experiment of observing the time of replacement.
The possible outcomes ω of the experiment belong to the sample space Ω = [0, 3].
But the actual death time of the device is assumed to be such that the probability that it lies
in a Borel subset E of [0, 4] is the length of the subset divided by 4, which is the same as
∫_E (1/4) dx. There is a non-zero probability that the replacement occurs exactly at time 3, namely:

P[replace at time 3] = P[device survives at least 3 time units] = ∫_3^4 (1/4) dx = 1/4    (3)
Hence the outcome ω = 3 should receive positive probability weight 1/4. For Borel subsets
E ⊂ [0, 3) we have

P[E] = P[replace at some time t ∈ E] = P[device dies at some time t ∈ E]
     = ∫_E (1/4) dx = (1/4)·length(E)    (4)
Together, formulas (3) and (4) define our probability measure. Axiom 1 of probability is
satisfied, since
P[Ω] = P[{3}] + P[[0, 3)] = 1/4 + (1/4)·length([0, 3)) = 1/4 + 3/4 = 1.
Axiom 2 is clearly satisfied, since the point 3 has positive probability, and for Borel subsets
of [0, 3) formula (4) must always give a non-negative result. Do Activity 4 below to help
convince yourself that the third axiom holds.
Activity 4 Argue that P in Example 5 does satisfy the additivity axiom of probability for
two disjoint events A1 and A2 . (Hint: since the two events are disjoint, at most one of
them can contain the point 3.)
Example 6 Consider the sample space Ω = {(x, y) ∈ ℝ² : x² + y² ≤ 4}, that is, the disk of
radius 2 in the plane. Let a set function be defined on the Borel subsets of Ω by
P[E] = ∫∫_E c (4 − x² − y²) dx dy
Find the value of c that makes this a valid probability measure and compute the probability
of the unit square P[[0, 1] × [0, 1]].
In this continuous model, we are defining probabilities as volumes bounded by the
surface graph of a two-variable function and the x-y coordinate plane. Notice that the
condition x² + y² ≤ 4 forces the integrand to be non-negative over Ω as long as c ≥ 0. Hence
axiom 2 of probability will hold, and axiom 3 will also follow from the additivity of the
double integral over disjoint sets. What we simply need to do is to choose c so that the
double integral over all of Ω is 1. By symmetry this double integral is 4 times the integral
over the piece of the disk in the first quadrant:
P[Ω] = 4 ∫_0^2 ∫_0^{√(4 − x²)} c (4 − x² − y²) dy dx = 8 c π
Setting this equal to 1 yields c = 1/(8π). The probability of the square is the volume
underneath the surface shown in Figure 6, which is computed below as 5/(12π).
P[[0, 1] × [0, 1]] = ∫_0^1 ∫_0^1 (1/(8π)) (4 − x² − y²) dy dx = 5/(12π)
Plot3D[(1/(8 Pi)) (4 - x^2 - y^2), {x, 0, 1}, {y, 0, 1},
 PlotStyle -> Gray, Lighting -> "Neutral",
 BaseStyle -> {8, FontFamily -> "Times"}]
Figure 3.6 - Probability modeled as volume
Exercises 3.1
1. Devise an example of a random experiment and associated probability space in which a
countable number of points receive positive probability and as well an interval of real
numbers receives probability through integration. (Hint: a randomized experiment can be
constructed which randomly chooses between two other experiments.) Carefully describe
the action of the probability measure.
2. Let Ω = {1, 2, 3, 4, 5} be a sample space. Find (a) the smallest σ-algebra containing the
events {2, 3} and {1, 5}; and (b) the smallest σ-algebra containing both of the events {1, 2}
and {1, 2, 3}.
3. Explain why the set of all integers is a Borel set. Explain why any at most countable set
of points on the real axis is a Borel set.
4. Prove that an arbitrary intersection (not necessarily a countable one) of σ-algebras is a σ-
algebra. Conclude that the intersection of all σ-algebras containing a collection of sets is
the smallest σ-algebra containing that collection.
5. What is the smallest σ-algebra containing a singleton set {x}, for x ∈ ℝ?
6. Show that the intervals (a, b], [a, b), (−∞, b], and [a, b] are all Borel sets for any real
numbers a < b.
7. Find a constant c such that P[E] = ∫_E c x⁻² dx is a valid probability measure on
[1, ∞). Then find P[[2, 4] ∪ [8, ∞)].
8. Find a constant c such that P[E] = ∫_E c log(x) dx is a valid probability measure on
[1, e], and then compute P[[2, e]] and P[[1, 2]].
9. Find a constant c such that P[E] = ∫∫_E (c/(x y)) dx dy is a valid probability measure on
[1, 2] × [1, 2], and then compute P[[1.5, 2] × [0, 1.5]].
10. (Mathematica) Consider a binomial branch process as in Example 3 with 6 time steps,
normalized initial price 0 and p = 1/2. Each time step, the price changes by ±1/√6. Find
and plot the probability mass function of the asset value at time 1, and use it to find the exact
probability that the asset value at time 1 is at least 1. Then superimpose the graph of the
following function onto a probability density histogram and use numerical integration to find
the approximate area underneath that function corresponding to the interval [1, ∞) in order
to continuously approximate the same probability. Does your picture suggest a better interval
over which to integrate in order to improve the approximation?
P[X ∈ B] = Σ_{x ∈ B} p(x)
The continuous case will come out similarly, with an integral replacing the discrete
sum. But why are we concerned with continuous random variables? There are many
interesting experiments which give rise to measurement variables that are inherently
continuous, that is, they take values in intervals on the line rather than at finitely many
discrete points. For instance, the pressure on an airplane wing, the amount of yield in a
chemical reaction, the measurement error in an experimental determination of the speed of
light, and the time until the first customer arrives to a store are just a few among many
examples of continuous variables. To make predictions about them, and to understand their
central tendency and spread, we must have a way of modeling their probability laws.
We have already set up the machinery to characterize the probability distribution of a
continuous random variable in Section 3.1: integration of a continuous function (thought of
as a density of probability) over a set is the means for calculating the probability of that set.
But now we consider probability density functions on the state spaces E of random variables
rather than on the sample space Ω. From here on, Ω will be pushed to the background.
Definition 1. A random variable X with state space E is said to have the probability
density function (abbr. p.d.f.) f if, for all Borel subsets B of E,
P[X ∈ B] = ∫_B f(x) dx    (1)
In order for the axioms of probability to be satisfied, to be a valid density function f must be
non-negative and ∫_E f(x) dx = 1. Figure 7 gives the geometric meaning of formula (1). The
probability that X will fall into set B is the area under the density curve on the subset B of the
real axis.
Needs["KnoxProb7`Utilities`"]
Figure 3.7 - Shaded area is P[X ∈ [.5, 1.5]]    Figure 3.8 - Shaded area is F(.7) = P[X ≤ .7]
It totals all probability weight to the left of and including point x (see Figure 8). By the
Fundamental Theorem of Calculus, differentiating both sides of (2) with respect to x gives
the relationship
F′(x) = f(x)    (3)
Thus, the c.d.f. also characterizes the probability distribution of a continuous random
variable, since if it is given then the density function is determined.
Activity 1 From calculus, the differential equation F′(x) = f(x) only determines F up to
an additive constant, given f. In our setting, what further conditions are present which
guarantee that each density f has exactly one c.d.f.?
From past work we know one named continuous density function so far: the continu-
ous uniform density on [a, b] has value 1/(b − a) on the interval [a, b], and vanishes outside
the interval. For example, if f is the continuous uniform density on [0, 4], then

f(x) = 1/4 if x ∈ [0, 4], and 0 otherwise
f[x_] := Which[x < 0, 0, 0 <= x <= 4, 1/4, x > 4, 0];
F[x_] := Which[x < 0, 0, 0 <= x <= 4, 0.25 x, x > 4, 1];
Plot[{f[x], F[x]}, {x, -1, 5},
 PlotStyle -> {Black, Gray}]
Figure 3.9 - Uniform(0,4) density (black), and c.d.f. (gray)
which is the continuous uniform distribution on [a, b]. As in the discrete case, the functions
PDF[distribution, x] and CDF[distribution, x]
174 Chapter 3 Continuous Probability
can be applied to distribution objects. PDF returns the density function, CDF returns the
cumulative distribution function, and x is the desired argument of each function.
For example, for the uniform distribution on [0, 4] we can define the density and
c.d.f. as follows:
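One possible pair of definitions, consistent with the calculations below and using the built-in uniform distribution object, is:

f[x_] := PDF[UniformDistribution[{0, 4}], x];
F[x_] := CDF[UniformDistribution[{0, 4}], x];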
Then we can calculate a probability like P[X ∈ [1, 3]] in either of two ways in Mathematica:
by integrating the density from 1 to 3, or by computing
P[1 ≤ X ≤ 3] = P[X ≤ 3] − P[X ≤ 1] = F(3) − F(1).
Integrate[f[x], {x, 1, 3}]
F[3] - F[1]
1/2
1/2
SeedRandom[63471]
uniflist1 =
  RandomReal[UniformDistribution[{0, 10}], 100];
uniflist2 = RandomReal[
   UniformDistribution[{0, 10}], 100];
GraphicsRow[{Histogram[uniflist1,
   10, ChartStyle -> Gray],
  Histogram[uniflist2, 10, ChartStyle -> Gray]}]
Activity 2 Do several more simulations like the one above, changing the sample size to
200, 500, and then 1000 sample values. What do you observe? Do you see any changes
if you change the distribution to uniform(0, 1)?
Example 2 Consider the triangular density function sketched in Figure 11. (It could be
appropriate for example to model the distribution of a random variable X that records the
amount of roundoff error in the computation of interest correct to the nearest cent, where X
is measured in cents and takes values between negative and positive half a cent.) Verify that
this function is a valid density and find a piecewise defined formula for its cumulative
distribution function.
[Figure 3.11 - Triangular density function on (−.5, .5)]
From the graph, f is clearly non-negative on its state space (−.5, .5) and its formula is

f(x) = 2 + 4x  if −.5 ≤ x < 0
       2 − 4x  if 0 ≤ x ≤ .5
       0       otherwise
The area under f corresponding to all of the state space is the area of the triangle, which is
(1/2)·1·2 = 1. Therefore f is a valid density function. For values of x between −.5 and 0
the c.d.f. is

F(x) = ∫_{−.5}^x (2 + 4t) dt = [2t + 2t²]_{−.5}^x = 2x² + 2x + 1/2
Notice that F(0) = 1/2, which is appropriate because the density is symmetric about 0, and
therefore half of its overall probability of 1 lies to the left of x = 0. For values of x between
0 and .5,

F(x) = 1/2 + ∫_0^x (2 − 4t) dt = 1/2 + [2t − 2t²]_0^x = 1/2 + 2x − 2x²
In the interest rounding example then, the probability that the rounding error is between −.2
and .2 is F(.2) − F(−.2). This turns out to be .82 − .18 = .64 after calculation.
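As a quick numerical check of this example, the piecewise density can be entered directly and integrated; the names fTri and FTri below are illustrative.

fTri[x_] := Piecewise[{{2 + 4 x, -.5 <= x < 0}, {2 - 4 x, 0 <= x <= .5}}, 0];
FTri[x_] := Integrate[fTri[t], {t, -.5, x}];
{Integrate[fTri[x], {x, -.5, .5}], FTri[.2] - FTri[-.2]}   (* approximately {1, 0.64} *)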
Example 3 The Weibull distribution is a familiar one in reliability theory, because it is used
as a model for the distribution of lifetimes. It depends on two parameters called α and β,
which control the graph of its density function. The state space of a Weibull random variable
X is the half line [0, ∞), and the formula for the c.d.f. is

F(x) = P[X ≤ x] = 1 − e^{−(x/β)^α},  x ≥ 0    (4)
Figure 3.12 - Weibull(2, 3) p.d.f. (black), c.d.f. (gray)
In the electronic version of the text, you can experiment with various combinations of
parameter values α and β using the Manipulate command below to see the effect on the
Weibull density (see also Exercise 8).

Manipulate[
 Plot[g[x, α, β], {x, 0, 8}, PlotRange -> {0, 1.6},
  AxesOrigin -> {0, 0}], {α, 1, 4}, {β, 1, 4}]
If the lifetimes (in thousands of miles) of auto shock absorbers of a particular kind
have a Weibull distribution with parameters α = 2, β = 30, then for example the proportion
of them that last at least 30,000 miles is P[X ≥ 30] = 1 − F(30), which is about 37%, as
shown below.
0.367879
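One way to obtain this value, assuming the built-in Weibull distribution object follows the same (α, β) parameterization as formula (4), is

1 - CDF[WeibullDistribution[2, 30], 30] // N    (* 0.367879, i.e. Exp[-1] *)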
Activity 3 Find the equation of the Weibull density function in general. Use Mathemat-
ica to plot it in the case where α = 2, β = 30 as in the last example, and write a short
paragraph about the shape and location of the density, and what the graph implies about
the distribution of shock absorber lifetimes.
P[(X, Y) ∈ B] = ∫∫_B f(x, y) dx dy    (5)

The most familiar case of (5) occurs when B is a rectangular set, B = [a, b] × [c, d].
Then (5) reduces to

P[a ≤ X ≤ b, c ≤ Y ≤ d] = ∫_a^b ∫_c^d f(x, y) dy dx    (6)
For more complicated Borel subsets of 2 , the techniques of multivariable calculus can be
brought to bear to find the proper limits of integration. One final comment before we do an
example computation: the extension of (5) and (6) to more than two jointly distributed
random variables is simple. For n random variables X1 , X2 , ... , Xn , the density f becomes
a function of the n-tuple (x1 , x2 , ... , xn ) and the integral is an n-fold iterated integral. We
will not deal with this situation very often, except in the case where the random variables are
independent, where the integral becomes a product of single integrals.
Example 4 Consider the experiment of randomly sampling a pair of numbers X and Y from
the continuous uniform(0, 1) distribution. Suppose we record the pair (U, V), where
U = min(X, Y) and V = max(X, Y). First let us look at a simulation of many replications of
this experiment. Then we will pose a theoretical model for the joint distribution of U and V
and ask some questions about them.
The next Mathematica command is straightforward. Given the distribution to be
sampled from and the number of pairs to produce, it repeatedly simulates an X, then a Y, and
then appends the min and max of the two to a list, which is plotted.
SimPairs[dist_, numpairs_] :=
 Module[{X, Y, UVlist}, UVlist = {};
  Do[X = RandomReal[dist]; Y = RandomReal[dist];
   AppendTo[UVlist, {Min[X, Y], Max[X, Y]}],
   {i, 1, numpairs}];
  ListPlot[UVlist, AspectRatio -> 1, AxesLabel -> {u, v},
   PlotStyle -> {Black, PointSize[0.02]}]]
Here is a sample run of SimPairs; in the e-version of the text you should try running
it again a few times, and increasing the number of pairs to see that this run is typical.
Figure 3.13 - 500 simulated sorted uniform(0, 1) pairs
Notice that points seem to be randomly spread out within the triangle whose vertices
are (0, 0), (0, 1), and (1, 1), which is described by the inequalities 0 ≤ u ≤ v ≤ 1. No
particular region appears to be favored over any other. A moment's thought about how U and
V are defined shows that indeed both can take values in the interval (0, 1), but the minimum
U must be less than or equal to the maximum V; hence the state space of the pair (U, V) is
the aforementioned triangle. But both of the data points from which this pair originated were
uniformly distributed on (0, 1), which combined with the empirical evidence of Figure 13
suggests to us that a good model for the joint distribution of U and V would be a constant
density over the triangle. In order for that density to integrate to 1, its value must be 1
divided by the area of the triangle, i.e., 1/(1/2) = 2. So we model the joint density of U and
V as

f(u, v) = 2 if 0 ≤ u ≤ v ≤ 1
and f = 0 otherwise. (We will deal with the problem of finding the distribution of trans-
formed random variables analytically later in the book.)
We can ask, for example, for the probability that the larger of the two sampled values
is at least 1/2. This is written as P[V ≥ .5] and is computed by integrating the joint density f
.5 and is computed by integrating the joint density f
over the shaded region in Figure 14. The easiest way to do this is to integrate first on u, with
fixed v between .5 and 1, and u ranging from 0 to v. (You could also integrate on v first, but
that would require splitting the integral on u into two parts, since the boundaries for v
change according to whether u is less than .5 or not.)
Integrate[2, {v, .5, 1}, {u, 0, v}]
0.75
Figure 3.14 - Region of integration for P[V ≥ .5]    Figure 3.15 - Region of integration for P[V ≥ 2U]
As another example, let us find the probability that the maximum exceeds twice the
minimum, i.e., P[V ≥ 2U]. The boundary line v = 2u for the inequality cuts through the
state space as shown in Figure 15, and the solution of the inequality is the shaded area. This
time the limits of integration are easy regardless of the order you use. Here we integrate v
first, from 2u to 1, then u from 0 to .5.

Integrate[2, {u, 0, .5}, {v, 2 u, 1}]
0.5
Activity 4 Give geometric rather than calculus based arguments to calculate the two
probabilities in Example 4.
Marginal probability density functions are defined in the continuous case much as
marginal p.m.f.'s are in the discrete case. Instead of summing the joint mass function, we
integrate the joint density over one of the variables.
Definition 3. Let X and Y be continuous random variables with joint density f(x, y). The
marginal density function of X is

p(x) = ∫ f(x, y) dy

where the integral is taken over all Y states that are possible for the given x. (We may
integrate over the entire state space E_Y of Y under the understanding that f may vanish for
some y values, depending on x.) Similarly the marginal density of Y is

q(y) = ∫ f(x, y) dx

where the integral is taken over all X states that are possible for the given y.
Notice that the marginal densities completely characterize the individual probability
distributions of the two random variables. For example, the probability that X lies in a Borel
subset B of its state space is

P[X ∈ B] = ∫_B p(x) dx = ∫_B ∫_{E_Y} f(x, y) dy dx

where E_Y is the state space of Y. You can do a similar derivation for Y to conclude that
P[Y ∈ B] = ∫_B q(y) dy. So, p(x) and q(y) are the probability density functions of X and Y in
the one-variable sense.
In Example 4, for instance, where the joint density f(u, v) was constantly equal to 2
on the triangle, the marginal density of the smaller sampled value U is

p(u) = ∫_u^1 2 dv = 2(1 − u),  0 ≤ u ≤ 1

and the marginal density of the larger value V is

q(v) = ∫_0^v 2 du = 2v,  0 ≤ v ≤ 1
(Why does it make intuitive sense that p should be a decreasing function and q an increasing
function?)
Example 5 Let X and Y have a joint density of the form
f(x, y) = c (x² + y²),  x, y ∈ [0, 1]

In order for f to be a valid joint density we need

1 = ∫_0^1 ∫_0^1 c (x² + y²) dy dx = c ∫_0^1 ∫_0^1 (x² + y²) dy dx
hence c is the reciprocal of the last double integral. We compute this integral below in
Mathematica.
Integrate[x^2 + y^2, {x, 0, 1}, {y, 0, 1}]
2/3
Figure 3.16 - Joint density f(x, y) = (3/2)(x² + y²)
P[X ≥ 1/2, Y ≥ 1/2] = ∫_{1/2}^1 ∫_{1/2}^1 (3/2)(x² + y²) dy dx = 7/16

P[X ≥ 1/2] = ∫_{1/2}^1 (3/2)(x² + 1/3) dx = 11/16
q(y) = ∫_{E_Z} ∫_{E_X} f(x, y, z) dx dz
made perfect sense from basic definitions of probability. When X and Y are continuous
random variables with joint density f x, y, the ratio of probabilities in the third expression
above no longer makes sense, because both numerator and denominator are 0. But the ratio
of joint density function to marginal density function in the fourth term does make sense,
and we will take that as our definition of the conditional density, by analogy with the
discrete case. (Actually, in Exercise 23 you will be led to make a better argument for the
legitimacy of this definition by taking a conditional c.d.f. as the starting point.)
Definition 4. Let X and Y be continuous random variables with joint density f(x, y).
Denote by p(x) and q(y) the marginal density functions of X and Y, respectively. Then the
conditional density of Y given X = x is given by

q(y | x) = f(x, y)/p(x)    (9)
Example 6 Consider again the random variables of Example 5. The joint density is

f(x, y) = (3/2)(x² + y²),  x, y ∈ [0, 1]

and the two marginal densities are

p(x) = (3/2)(x² + 1/3), x ∈ [0, 1]   and   q(y) = (3/2)(y² + 1/3), y ∈ [0, 1]
Therefore, by formula (9) the conditional density of y given x is
q(y | x) = f(x, y)/p(x) = (x² + y²)/(x² + 1/3),  y ∈ [0, 1]
The density of Y does depend on what x value is observed. For example, here is a
plot of the marginal density of Y, together with two of the conditionals, with X = 1/4 and
X = 3/4.
q[y_] := 3/2 (y^2 + 1/3);
qygivenx[y_, x_] := (x^2 + y^2)/(x^2 + 1/3);
Plot[{q[y], qygivenx[y, 1/4], qygivenx[y, 3/4]}, {y, 0, 1},
 PlotStyle -> {Black, Gray, Dashing[{0.01, 0.01}]}]
In the electronic text you can use the Manipulate command below to see how the conditional
density changes as x changes.
P[Y ≥ 1/2 | X = 1/2] = ∫_{1/2}^1 q(y | 1/2) dy
                     = ∫_{1/2}^1 ((1/2)² + y²)/((1/2)² + 1/3) dy
                     = 5/7
The last idea that we will take up in this section is the independence of continuous
random variables. Here the definition expressed by formula (6) of Section 2.5 serves the
purpose again without change. We repeat it here for your convenience:
Definition 5. Two continuous random variables X and Y are independent of each other if
and only if for all Borel subsets A and B of their respective state spaces,
P[X ∈ A, Y ∈ B] = P[X ∈ A] · P[Y ∈ B]    (11)
All of the comments made in Section 2.5 about independent random variables have
their analogues here. An equivalent definition of independence would be P[Y ∈ B | X ∈ A] =
P[Y ∈ B] for all Borel subsets A and B such that P[X ∈ A] is not zero. X and Y are also
independent if and only if the joint probability density function f(x, y) factors into the
product of the marginal densities. In fact, X and Y are independent if and only if the joint
c.d.f. F(x, y) = P[X ≤ x, Y ≤ y] factors into the product P[X ≤ x] · P[Y ≤ y]. The correspond-
ing definition of independence for many random variables would amend formula (11) in the
natural way: the simultaneous probability that all of the random variables fall into sets is the
product of the marginal probabilities. Again in the many variable context, independence is
equivalent to both the factorization of the joint density, and the factorization of the joint
c.d.f. into the product of the marginals.
Activity 7 Show that if the joint p.d.f. of two continuous random variables factors into
the product of the two marginal densities, then the random variables are independent.
Thus,
SimPairs[dist_, numpairs_] :=
 Module[{X, Y, pairlist},
  pairlist = {};
  Do[X = RandomReal[dist];
   Y = RandomReal[dist];
   AppendTo[pairlist, {X, Y}], {i, 1, numpairs}];
  ListPlot[pairlist, AspectRatio -> 1,
   PlotStyle -> {Black, PointSize[0.015]}]]
In Figure 18 are the results of simulating 2000 such independent and identically distributed
(Xi, Yi) pairs using the SimPairs command.
Figure 3.18 - 2000 simulated Weibull(2, 3) pairs Figure 3.19 - 2-dimensional Weibull(2, 3) joint density
The fact that X and Y have the same distribution shows up in the fact that the point cloud
looks about the same in the orientation of Figure 18 as it would if the roles of X and Y were
interchanged and the graph was stood on its side. The independence of X and Y is a little
harder to see, but you can pick it out by observing that regardless of the vertical level, the
dots spread out horizontally in about the same way, with a heavy concentration of points
near 2, and a few points near 0 and out at large values between 6 and 8. Similarly, regard-
less of the horizontal position, the points spread out from bottom to top in this way. We
expect this to happen because under independence, the observed value of one of the
variables should not affect the distribution of the other.
The graph of the joint density in Figure 19 sheds even more light on this phe-
nomenon. Our point cloud in Figure 18 is most dense exactly where the joint density in
Figure 19 reaches its maximum, around (2, 2). Also the point cloud is sparse where the
density is low, and the long right tail extending to 6 and beyond is visible in both graphs.
But the joint density graph shows the independence property a little better. By examining the
grid lines you can see that the relative density of points in the x direction is independent of
the y value, and the relative density of points in the y direction is independent of the x value.
The identical distribution of the X and Y variables is seen in Figure 19 by the fact that the
cross-sections look the same regardless of whether you slice the surface perpendicularly to
the x-axis or to the y-axis.
Exercises 3.2
1. Show that f(x) = 1/(π(1 + x²)) is a valid probability density function on ℝ. (It is referred to
f(x) = C e^{−k x} if x ≥ 0, and 0 otherwise,

where C and k are positive constants. Find C such that f is a valid probability density
function. Find the cumulative distribution function associated to this distribution.
4. Find the cumulative distribution function of the uniform distribution on [a, b]. If X has
this distribution, find P[X ≥ (a + b)/2].
5. For each of the following random variables, state whether it is best modeled as discrete or
continuous, and explain your choice.
(a) The error in measuring the height of a building by triangulation
(b) The number of New Year's Day babies born in New York City next year
(c) The relative humidity in Baltimore on a certain day at a certain time
6. The time T (in days) of a server malfunction in a computer system has the probability
density function
f(t) = (1/100) t e^{−t/10}  if t ≥ 0,  and 0 otherwise
Verify that this is a valid density and compute the probability that the malfunction occurs
after the tenth day.
7. The c.d.f. of a continuous random variable X is
0 if x 1
1 4 x 1 4 if 1x 2
F x 1 2 x 3 4 if 2x 3
1 4 x if 3x 4
1 if x
4
Find a piecewise expression for the density function f x and graph both f and F. Explain
why it does not matter what value you set for f at the break points 1, 2, 3, and 4. If X has this
probability distribution, find P1.5 X 3.5.
8. (Mathematica) By setting several different values for α and fixing β, graph several
versions of the Weibull(α, β) density on the same set of axes. Describe the dependence of
the density on the α parameter. Do the same for the β parameter.
9. Find the location of the maximum value of the Weibull(α, β) density in terms of α and β.
10. If a random variable X has the uniform(0, 1) distribution and Y = 2X + 1, what possible
values can Y take on? Find P[Y ≤ y] (i.e., the c.d.f. of Y) for all such values y by reexpress-
ing the inequality in terms of X. Use this to find the probability density function of Y.
11. (Mathematica) Suppose that the time D (in years) until death of an individual who holds
a new 25 year $40,000 life insurance policy has the Weibull distribution with parameters
α = 1.5, β = 20. The policy will expire and not pay off anything if the policy holder
survives beyond 25 years; otherwise, if the holder dies within 25 years it pays the $40,000
face value to his heirs. What is the probability that the heirs will receive the $40,000? What
is the expected value of the amount that the insurance company will pay?
12. (Mathematica) The value V of an investment at a certain time in the future is a random
variable with probability density function
f(v) = (1/(v √(2π))) e^{−(1/2)(log v − 4)²},  v ≥ 0
Find the probability that the value will be at least 40. Find the median v of the value
distribution, that is, the point such that PV v .5.
13. If X, Y, and Z are three independent uniform(0, 1) random variables, find
P[X + Y + Z ≤ 1/2].
14. For the random variables U and V of Example 4, find P[V − U ≥ 1/4].
19. Explain why the random variables X and Y of Example 6 are not independent of each
other. However, is there any case in which the marginal density q(y) equals the conditional
density q(y | x)?
20. (Mathematica) If X and Y have joint density of the form
c
f x, y , x, y 1, 2
x y
5 3 3 6
find the marginal densities of X and Y, and compute PY
X 2 and PX Y 5 .
4 2
21. If X1 , X2 , and X3 are a random sample (in sequence and with replacement) from the
uniform(0, 3) distribution, find (a) PX1
1, X2 1, X3
2; (b) PminX1 , X2 , X3
1
22. (Mathematica) Does the following random sample appear to have been sampled from
the distribution whose density function is
f(x) = (27/2) x² e^{−3 x},  x ≥ 0 ?
Use graphics to explain why or why not.
6.21, 3.97, 6.34, 5.31, 6.52, 4.57, 4.03, 5.99, 4.99, 5.54, 4.69, 5.09, 4.13, 5.32, 3.16, 4.16,
4.49, 5.88, 6.23, 7.42, 3.98, 5.29, 5.22, 6.4, 5.25, 5.75, 4.4, 1.25, 5.45, 3.55, 5.83, 3.48,
3.57, 4.95, 5.06, 4.55, 3.06, 5.36, 6.6, 4.57, 6.39, 5.61, 5.72, 4.55, 3.67, 3.79, 4.84, 6.05,
5.99, 6.37, 6.58, 4.48, 5.52, 5.89, 7.19, 2.9, 4.1, 5.96, 5.56, 5.6, 4.72, 6.19, 4.14, 3.76, 4.17,
4.13, 4.05, 6.44, 5.23, 5.88, 5.4, 7.23, 2.82, 5.05, 5.57, 4.74, 4.73, 3.78, 6.0, 2.7, 6.35, 4.07,
4.98, 3.81, 4.6, 4.54, 4.39, 2.85, 5.13, 5.11, 4.83, 4.91, 4.14, 6.19, 3.64, 4.33, 4.54, 4.55,
5.33, 4.27
23. We can take as the definition of the conditional c.d.f. of Y given X = x the limit of
P[Y ≤ y | x ≤ X ≤ x + Δx] as Δx → 0.
Let X and Y have the joint density function f(x, y) = 2(x + y) on the region where the
density is positive. Given that 10% of the employees buy the basic policy, what is the
probability that fewer than 5% buy the supplemental policy?
26. An insurance policy is written to cover a loss X where X has density function
f(x) = (3/8) x²  for 0 ≤ x ≤ 2,  and 0 otherwise

The time (in hours) to process a claim of size x, where 0 < x < 2, is uniformly distributed on
the interval from x to 2x. Calculate the probability that a randomly chosen claim on this
policy is processed in three hours or more.
The idea was to express the average of states x that could be assumed by the random variable
X, using the probability mass function p(x) as a system of weights to give states of higher
likelihood more influence on the average value. Now if we have a continuous random
variable with a state space that is a bounded interval for instance, we could think of approxi-
mating the average value by discretizing the state space into a finite number of states xi,
i = 1, ..., n, separated by a common spacing Δx. Because of the meaning of the density
function f, the product f(xi) Δx is the approximate probability weight for state xi, and
therefore, the approximate expected value in the discretized state space would be

Σ_{xi} xi f(xi) Δx
(The integral is taken over the entire state space of X.) We also use the term mean of X and
the notation μ for E[X]. The expected value of a function g of X is
In formula (3), we will again be most interested in the moments of the distribution,
i.e., expected powers of X and expected powers of X − μ. Among the moments of chief
interest are the mean itself, which measures the center of the probability distribution, and the
variance:

σ² = Var(X) = E[(X − μ)²] = ∫_{E_X} (x − μ)² f(x) dx    (4)

which measures the spread of the distribution around μ. The square root σ of the variance
σ² is the standard deviation of the distribution.
In this section we will discuss the meaning and computation of certain expectations,
paralleling the discussion that began in Section 2.1 and continued in Section 2.6 for discrete
random variables. We will feature the same basic results on means, variances, covariance
and correlation. Most of the proofs of the results are entirely similar to the discrete case with
sums replaced by integrals and, therefore, they will usually either be omitted or left to the
exercises.
Example 1 As a first example, let us find the mean of the uniform(a, b) distribution.
The p.d.f. of the distribution is f(x) = 1/(b − a) for x ∈ [a, b] and zero otherwise.
Therefore the mean is

μ = E[X] = ∫_a^b x · (1/(b − a)) dx = (1/(b − a)) · (x²/2) |_a^b = (b² − a²)/(2(b − a)) = (a + b)/2    (5)
It is very intuitive that the mean of the uniform(a, b) distribution should come out to be the
midpoint of the interval [a, b], since all states in the interval contribute equally to the
continuous average that is the mean. You are asked to compute the variance of X in Exercise
1. This computation will be easier if you use the formula in the next activity, which you
should derive now.
Example 2 Exercise 10 in Section 2.6 asked for a simulation of sample means for larger
and larger sample sizes, to show the convergence of the sample mean to the theoretical mean
μ. This is a good time to reconsider that investigation in the context of continuous expecta-
tion. We now write a command (essentially the same one that would be the solution of that
exercise) to successively simulate sample values from a given distribution, pause every m
sample values to recompute the sample mean, and update the list of sample means to date.
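One way such a command could be written, consistent with the call and the plot shown below, is sketched here; the body is an illustration rather than the definitive definition, and the argument names numobs and m stand for the total number of observations and the batch size.

SimMeanSequence[dist_, numobs_, m_] :=
 Module[{sample = {}, means = {}},
  Do[
   sample = Join[sample, RandomReal[dist, m]];   (* add m new observations *)
   AppendTo[means, Mean[sample]],                (* record the updated sample mean *)
   {numobs/m}];
  ListLinePlot[means]]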
Figure 20 shows the results of executing the command on the uniform(2, 6) distribu-
tion, whose mean is the midpoint 4. We simulate 1000 sample values, and update the
sample mean every 10 values. The sample mean seems to be converging to the distributional
mean of 4. You should try repeating the command several times, without resetting the seed,
to see this phenomenon recurring, and also see the amount of variability in the sample means
early on in the sequence. This and other similar experiments provide empirical evidence for
the claim that the mean of a probability distribution is that value to which the mean of a
random sample converges as more and more observations are taken. This result is a pivotal
theorem in probability and statistics, called the Strong Law of Large Numbers. It will be
discussed more fully in Chapter 5.
SeedRandom[439873];
SimMeanSequence[
 UniformDistribution[{2, 6}], 1000, 10]
Figure 3.20 - Simulated sample mean as a function of sample size/10
Example 3 Recall from Section 3.2 the Weibull distribution with parameters α and β. Its
p.d.f. is the following:

f(x) = α β^{−α} x^{α−1} e^{−(x/β)^α},  x ≥ 0

Its mean is

E[X] = ∫_0^∞ x f(x) dx = ∫_0^∞ α β^{−α} x^α e^{−(x/β)^α} dx
It turns out that a substitution u = (x/β)^α expresses the integral in terms of a standard
mathematical function denoted Γ(r), which is defined by

Γ(r) = ∫_0^∞ u^{r−1} e^{−u} du    (6)

(See Exercise 4.) The gamma function will also play an important role in later development.
Among its many interesting properties is that if n is an integer, Γ(n) = (n − 1)!. More
generally, for all α > 1, Γ(α) = (α − 1) Γ(α − 1), and also it can be shown that
Γ(1/2) = √π. In Exercise 4 you will be asked to finish the derivation that the mean of the
Weibull(α, β) distribution is

μ = β Γ(1 + 1/α)    (7)
If, as in Example 3 of Section 3.2 on shock absorber lifetimes, α = 2 and β = 30, then we can use Mathematica to compute the mean lifetime as

μ = 30 Gamma[3/2]
N[μ]

15 √π
26.5868
The sketch of the Weibull density with these parameters in Figure 21 shows that 26.59 is a
reasonable estimate of the center of the distribution.
Figure 3.21 - Weibull(2,30) density function
Mathematica has functions Mean and Variance that return the mean and variance of any distribution in the package. For example, for the uniform distribution on [a, b],

{Mean[UniformDistribution[{a, b}]], Variance[UniformDistribution[{a, b}]]}

{(a + b)/2, (b − a)²/12}
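Assuming that the built-in WeibullDistribution[α, β] object uses the same parameterization as the p.d.f. in Example 3, formula (7) for the shock absorber example can also be checked directly:

Mean[WeibullDistribution[2, 30]] // N

26.5868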
Example 4 Compute the variance of the random variable X whose probability density function is f(x) = c/x⁴ if x ≥ 1, and 0 otherwise.
By the computational formula in Activity 1, we must compute the first moment μ = E(X) and the second moment E(X²). But first we should determine the constant c. In order for the density to integrate to 1, we must have

1 = ∫_1^∞ c/x⁴ dx = c · (−x⁻³/3) |_1^∞ = c/3,  so that c = 3.
Then,

μ = ∫_1^∞ x · 3/x⁴ dx = ∫_1^∞ 3/x³ dx = −(3/2) x⁻² |_1^∞ = 3/2

E(X²) = ∫_1^∞ x² · 3/x⁴ dx = ∫_1^∞ 3/x² dx = −3 x⁻¹ |_1^∞ = 3

so that Var(X) = E(X²) − μ² = 3 − (3/2)² = 3/4.
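A quick Mathematica check of this computation (a sketch using Integrate):

Integrate[c/x^4, {x, 1, Infinity}]              (* c/3, so c = 3 *)
mu = Integrate[x*(3/x^4), {x, 1, Infinity}]     (* 3/2 *)
ex2 = Integrate[x^2*(3/x^4), {x, 1, Infinity}]  (* 3 *)
ex2 - mu^2                                      (* 3/4 *)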
Activity 2 For densities of the form f x c x p , x 1, and 0 otherwise, for what
powers p does the variance exist and for what powers does it not exist? How about the
mean?
For the first investment, whose return R1 has the uniform(0, .1) distribution,

μ₁ = E(R1) = (0 + .1)/2 = .05  and  σ₁² = Var(R1) = (.1 − 0)²/12 ≈ .000833
For the second investment, we compute μ₂ = E(R2) and σ₂² = Var(R2) in Mathematica below.

μ2 = N[Integrate[r*20 E^(-20 r), {r, 0, Infinity}]]
σ2sq = N[Integrate[(r - μ2)^2*20 E^(-20 r), {r, 0, Infinity}]]

0.05
0.0025
So we see that the mean returns on the two investments are the same, but investment 1 has a
smaller variance than investment 2. Since it is therefore less risky, the investor should prefer
investment 1.
We can take our cue from the single-variable case in order to formulate a version of
expectation in the context of jointly distributed random variables.
Definition 2. Let X₁, X₂, ..., Xₙ be continuous random variables with joint density function f(x₁, x₂, ..., xₙ). Then the expected value of a real-valued function g of the random variables is

E[g(X₁, X₂, ..., Xₙ)] = ∫ ⋯ ∫ g(x₁, x₂, ..., xₙ) f(x₁, x₂, ..., xₙ) dx₁ ⋯ dxₙ    (8)

where the integral is taken over all possible joint states x = (x₁, x₂, ..., xₙ).
Most of the time, except when the random variables are independent and the multiple integral simplifies to a product of single integrals, we will consider the bivariate case, in which formula (8) reduces to

E[g(X, Y)] = ∫ ∫ g(x, y) f(x, y) dx dy    (9)

where again f is the joint density of X and Y. Perhaps the most interesting such expectation is the covariance of X and Y:

Cov(X, Y) = E[(X − μx)(Y − μy)] = ∫ ∫ (x − μx)(y − μy) f(x, y) dx dy    (10)
which is as before a measure of the degree of association between X and Y. When X and Y
tend to be large together and small together, the integrand will be large with high probabil-
ity, and so the double integral that defines the covariance will be large. The definition of the
correlation ρ between X and Y is also the same as before:

ρ = Corr(X, Y) = Cov(X, Y)/(σx σy)    (11)
Example 6 Let U and V be, respectively, the smaller and the larger of two independent uniform(0, 1) random variables, so that their joint density is f(u, v) = 2 for 0 < u < v < 1 and zero otherwise. Then the expected distance between the smaller and the larger is

E(V − U) = ∫_0^1 ∫_u^1 2 (v − u) dv du
         = ∫_0^1 2 (v²/2 − v u) |_{v=u}^{v=1} du
         = ∫_0^1 2 (1/2 − u + u²/2) du = 2/6 = 1/3

Alternatively, in the right side of the first line above we could have noted that the double integral breaks into a difference of two double integrals:

∫_0^1 ∫_u^1 2 v dv du − ∫_0^1 ∫_u^1 2 u dv du
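Either way, Mathematica can confirm the value; for instance:

Integrate[2 (v - u), {u, 0, 1}, {v, u, 1}]

1/3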
Example 7 Find the covariance and correlation of the random variables X and Y from
Example 5 of Section 3.2, whose joint density is
f(x, y) = (3/2)(x² + y²),  x, y ∈ (0, 1)

There are quite a few ingredients in the computation, but we will let Mathematica do the tedious integrations for us. It is helpful that X and Y have the same marginal distribution, hence the same mean and standard deviation. First let us find μx = μy. We can do this by integrating the function g(x, y) = x times the density, which we define below, to get E(X).

f[x_, y_] := 3/2 (x^2 + y^2)
mux = Integrate[x f[x, y], {x, 0, 1}, {y, 0, 1}]; muy = mux

5/8

To get the variance of X, which equals the variance of Y, we can integrate to find E[(X − μx)²]:

varx = Integrate[(x - mux)^2 f[x, y], {x, 0, 1}, {y, 0, 1}]; vary = varx

73/960

The covariance and correlation can be done by using formulas (10) and (11):

covarxy = Integrate[(x - mux) (y - muy) f[x, y], {x, 0, 1}, {y, 0, 1}]
ρ = covarxy/Sqrt[varx vary]

-1/64
-15/73
Notice that X and Y have a slight negative correlation, and that, as usual, the correlation is
less than or equal to 1 in magnitude.
Here are the main theorems about expectation, listed in the continuous context. You
will observe that they are the same as the ones we proved for discrete random variables.
First is the linearity of expectation, which extends (see Exercise 13) to many random
variables.
Theorem 1. If X and Y are continuous random variables with finite means, and c and d are
constants, then
E(c X + d Y) = c E(X) + d E(Y)    (12)

Theorem 2. If X and Y are independent continuous random variables with finite variances, and c and d are constants, then

Var(c X + d Y) = c² Var(X) + d² Var(Y)    (13)

If you check back to the proof of the analogue of Theorem 2 for discrete random variables, you will find that it still works word-for-word here, because it relies only on the linearity of expectation. Theorem 2 also extends readily to many independent random variables; constant coefficients, if present, come out as squares and the variance of the sum breaks into the sum of the variances.
When continuous random variables are independent, expected products factor into
products of expectations. We will include the proof of this theorem. It can be extended to
many random variables as well (see Exercise 14).
Theorem 3. If X and Y are independent continuous random variables, then E[g(X) h(Y)] = E[g(X)] · E[h(Y)], provided these expectations exist.
Proof. If f is the joint density of X and Y, then by the independence assumption, f(x, y) = p(x) q(y), where p and q are the marginal densities of X and Y, respectively. Therefore,

E[g(X) h(Y)] = ∫ ∫ g(x) h(y) f(x, y) dy dx
             = ∫ ∫ g(x) h(y) p(x) q(y) dy dx
             = ∫ g(x) p(x) dx · ∫ h(y) q(y) dy
             = E[g(X)] · E[h(Y)]
In the case where the random variables are not independent, the formula for the
variance of a linear combination includes covariance terms as well, as indicated in the next
theorem.
Theorem 4. If X₁, X₂, ..., Xₙ are jointly distributed random variables, then

Var(∑_{i=1}^{n} cᵢ Xᵢ) = ∑_{i=1}^{n} cᵢ² Var(Xᵢ) + 2 ∑_{i=1}^{n} ∑_{j=1}^{i−1} cᵢ cⱼ Cov(Xᵢ, Xⱼ)    (14)
A couple of familiar miscellaneous results about covariance are in the next theorem.
You should read about another interesting property in Exercise 15. Formula (15) is an often
useful computational formula for the covariance, and formulas (16) and (17) show that
whereas the covariance is dependent on the units used to measure the random variables, the
correlation is not. Therefore the correlation is a standardized index of association.
Theorem 5. Let X and Y be jointly distributed random variables with means μx and μy, and let a, b, c, and d be positive constants. Then

Cov(X, Y) = E(X Y) − μx μy    (15)
Cov(a X + b, c Y + d) = a c Cov(X, Y)    (16)
Corr(a X + b, c Y + d) = Corr(X, Y)    (17)
So, to find the average value of g(Y) given an observed value of X, we take the weighted average of all possible values g(y), weighted by the conditional density of y for the observed x:

E[g(Y) | X = x] = ∫ g(y) q(y | x) dy    (18)

The properties are similar to the discrete case; for instance, it is very easy to show that conditional expectation is linear, and if X and Y are independent, then E(Y | X = x) = E(Y). Again the law of total probability for expectation holds:

E[E(g(Y) | X)] = E[g(Y)]    (19)
Since the joint density was f(u, v) = 2 for 0 < u < v < 1, the two conditional densities are

q(v | u) = f(u, v)/p(u) = 2/(2(1 − u)) = 1/(1 − u)  for v ∈ (u, 1)

p(u | v) = f(u, v)/q(v) = 2/(2v) = 1/v  for u ∈ (0, v)

For instance, the conditional moments of U given V = v are

E(U | V = v) = v/2,  E(U² | V = v) = v²/3,  Var(U | V = v) = v²/3 − v²/4 = v²/12
Exercises 3.3
1. Derive an expression for the variance of the uniform(a, b) distribution.
2. If X is a random variable with probability density function

f(x) = 1/(π(1 + x²)),  −∞ < x < ∞
4. By carrying out the integral by hand, verify the expression in (7) for the mean of the
Weibull(α, β) distribution.
5. (Mathematica) Use Mathematica to find the first three moments E(T), E(T²), and E(T³)
6. Let μ be the mean of the uniform(0, 4) distribution, and let σ be the standard deviation. If X is a random variable with this distribution, find: (a) P(|X − μ| ≤ σ); (b) P(|X − μ| ≤ 2σ); (c) P(|X − μ| ≤ 3σ).
7. (Mathematica) Use simulation to approximate the mean of the Weibull(4, 6) distribution.
Then use Mathematica and formula (7) to calculate the mean analytically. Graph the density
function, noting the position of the mean on the x-axis.
8. (Mathematica) Recall the insurance policy problem of Exercise 3.2-11, in which there is
a policy that pays $40,000 to the holder's heirs if he dies within the next 25 years. The death
time (in years beginning from the present) has the Weibull(1.5, 20) distribution. Now let us
take into account the premiums paid by the policy holder, which for simplicity we suppose
are collected at a constant rate of $1800 per year until the time of death. Find the expected
profit for the insurance company.
9. Suppose that X and Y have joint density
13. Prove that if X₁, X₂, ..., Xₙ are continuous random variables with finite means and c₁, c₂, ..., cₙ are constants, then

E(c₁ X₁ + c₂ X₂ + ⋯ + cₙ Xₙ) = c₁ E(X₁) + c₂ E(X₂) + ⋯ + cₙ E(Xₙ)

14. Prove that if X₁, X₂, ..., Xₙ are continuous random variables with finite means and if the X's are independent of each other, then

E(X₁ X₂ ⋯ Xₙ) = E(X₁) E(X₂) ⋯ E(Xₙ)

15. Show that if X and Y are random variables and a, b, c, and d are constants, then

Cov(a X + b Y, c X + d Y) = a c Var(X) + (b c + a d) Cov(X, Y) + b d Var(Y)
17. If random variables X and Y are uncorrelated (i.e., ρ = 0), are the random variables 2X − 1 and 3Y + 1 also uncorrelated? Explain.
18. Show that if X and Y are continuous random variables with joint density f(x, y), then E[g(X) | X = x] = g(x).
Needs"KnoxProb7`Utilities`"
Figure 4.1 - (a) Empirical distribution of National League batting averages; (b) Distribution of SAT scores
Observed distributions like this are pervasive in the social and physical sciences.
Another example is the data set charted in Figure 1(b). (The first closed cell above the
figure contains the list of raw data.) This time 167 SAT math scores for entering students at
my college show the same sort of distribution, with a center around 600, fewer observations
near the tails, and data distributed more or less symmetrically about the center (though there
is a small left skew, created mainly by a few observations under 450).
Random sampling gives other instances of data distributed in this hill shape. For
instance, below we take random samples of size 40 from the uniform(0, 2) distribution,
compute the sample means for 100 such random samples, and plot a histogram of the
resulting sample means. Almost all of the sample means are within about .25 of the mean
Μ 1 of the uniform distribution that we sampled from. This histogram shape is no coinci-
dence, because a theorem called the Central Limit Theorem, discussed in Chapter 5,
guarantees that the probability distribution of the sample mean is approximately normal.
SeedRandom[98996];
sampmeans = Table[
   Mean[RandomReal[UniformDistribution[{0, 2}], 40]],
   {i, 1, 100}];
g1 = Histogram[sampmeans, {.1}, "ProbabilityDensity",
  ChartStyle -> Gray, BaseStyle -> 8]
Activity 1 What is the standard deviation of the uniform(0, 2) distribution? What is the standard deviation of the sample mean X̄ of a random sample of size 40 from this distribution? How many standard deviations of X̄ does .25 represent?
The probability density function that provides a good fit to distributions such as the
three above is the following.
Definition 1. The normal distribution with mean parameter μ and variance parameter σ² is the continuous distribution whose density function is

f(x) = 1/√(2πσ²) · e^(−(x − μ)²/(2σ²)),  −∞ < x < ∞    (1)
Figure 4.3 - Normal probability P(a ≤ X ≤ b) is the shaded area under the normal density
Mathematica provides the distribution object

NormalDistribution[μ, σ]

which represents the N(μ, σ²) distribution. Note that the second argument is the standard deviation σ, which is the square root of the variance σ². The functions PDF, CDF, and RandomReal can therefore be applied to this distribution object in the same way as before.
To illustrate, suppose that a random variable X has the normal distribution with mean μ = 0 and variance σ² = 4, hence standard deviation σ = 2. The commands below define the density function in general and the particular c.d.f., integrate the density function to find P(0 ≤ X ≤ 2), then find the same probability by using the c.d.f.
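A sketch of commands along those lines (the notebook's own input cell may differ in detail) produces the two values shown below:

f[x_, μ_, σ_] := PDF[NormalDistribution[μ, σ], x];
F[x_] := CDF[NormalDistribution[0, 2], x];
NIntegrate[f[x, 0, 2], {x, 0, 2}]
N[F[2] - F[0]]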
0.341345
0.341345
(If you remove the numerical evaluation function N from the last command and reexecute,
you will probably get a closed form that involves a function called Erf, which is defined in
terms of a normal integral.)
By using a clever technique to change from a single to a double integral, followed by
a substitution, it is possible (see Exercise 3) to show that the normal density in formula (1)
integrates to 1 over the real line, as it should. Here we check this fact in Mathematica for
the N(0, 4) density we used above. You should try changing the mean and standard devia-
tion parameters several times to make sure that this property still holds.
Integrate[f[x, 0, 2], {x, -Infinity, Infinity}]

1
Figure 3, as well as the density formula itself, suggests that the normal density function is symmetric about its mean μ. Figure 4 gives you another look. We plot in three dimensions the normal density as a function of x and the mean μ for constant standard deviation σ = 2. In the foreground we see the symmetry of f[x, -2, 2] about μ = -2, and as μ increases to 2 we see the density curve cross section receding into the background in such a way that its center of symmetry is always at μ.
Figure 4.4 - Standard normal density as function of x and μ
Activity 2 Show that the points μ ± σ are inflection points of the normal density with mean μ and standard deviation σ. (Use either hand computation or Mathematica.)
If you think about what Activity 2 says, you will reach the conclusion that the normal
density curve spreads out as the standard deviation Σ increases. You may run the animation
above to confirm this visually. We fix the mean at 0 and vary Σ from 1 to 4 to observe that
the larger is Σ, the more probability weight is distributed to the tails of the distribution.
Example 1 In Activity 1 we asked some questions about the scenario of taking repeated
random samples of size 40 from the uniform(0, 2) distribution and finding their sample
means. Figure 2 was a histogram of 100 such sample means. Let us fill in the details now
and see how an appropriate normal distribution fits the data.
The mean of the uniform(0, 2) distribution is 1, and the variance is (2 − 0)²/12, i.e. 1/3. Recall that if X̄ is the mean of a random sample of size n from a distribution whose mean is μ and whose variance is σ², then the expected value of X̄ is also μ and the variance of X̄ is σ²/n. So in our situation, X̄ has mean 1 and variance (1/3)/40 = 1/120, hence, its standard deviation is √(1/120) ≈ .09. The observed maximum distance of data points from μ = 1 of about .25 that we noted in Figure 3 translates into a distance of around 3 standard deviations of X̄. Now we superimpose the graph of the N(1, 1/120) density function on the histogram of Figure 3. Figure 5 shows a fairly good fit, even with our small sample size of 40, and only 100 repetitions of the experiment.

g2 = Plot[f[x, 1, Sqrt[1/120]], {x, 0.75, 1.25},
  PlotStyle -> Black, BaseStyle -> Gray];
Show[g1, g2, Ticks -> {{0.75, 1, 1.25}, Automatic}]
Figure 4.5 - N(1, 1/120) density and histogram of 100 sample means of size 40 from uniform(0, 2)
Activity 3 Use the RandomReal command to simulate 1000 observations from the N(0, 1) distribution, and plot a scaled histogram of this sample together with the N(0, 1) density.
Example 2 Assume that demands for a manufactured item are normally distributed with
unknown mean and variance. Suppose that there are fixed costs of $10,000 plus a cost of $5
per item to manufacture them. If the plan is to sell the items for $10 apiece, estimate the net
profit and the probability of breaking even if a historical random sample of demands is as
follows:
The total cost of manufacture for x items is 10,000 + 5x, and the revenue for selling these x items is 10x. So the net profit is 10x − (5x + 10,000) = 5x − 10,000. The breakeven point is where net profit is zero, which happens at the point x = 2000. But demands are random, so if X is the random variable that models the number of items demanded, we can assume that X is normally distributed, but we are not given the mean and standard deviation parameters of the distribution. Let us use the sample mean X̄ of the twelve observed random sample values to estimate μ.
N[Mean[demands]]

2086.42

The random profit is 5X − 10,000, and by linearity of expectation we see that the expected profit E(5X − 10,000) is approximately:

5*2086 - 10000

430
With a selling price of $10 per item, this particular venture is not a great money-maker. To find the break-even probability we will also have to estimate the variance σ² and corresponding standard deviation σ. We can estimate σ² by the sample variance, which is an average squared deviation of the data values from the sample mean. The exact formula for the sample variance is

S² = (1/(n − 1)) ∑_{i=1}^{n} (Xᵢ − X̄)²    (3)
The so-called sample standard deviation S is the square root S = √(S²) of the sample variance, and serves as an estimate of σ. Mathematica can compute these statistics by the functions

Variance[datalist] and StandardDeviation[datalist]

which are contained in the kernel. Here are the sample variance and standard deviation for our list of demands.
214 Chapter 4 Continuous Distributions
N[{Variance[demands], StandardDeviation[demands]}]
Rounding to the nearest integer, we estimate σ by 256. The event of breaking even (or better) is the event that X ≥ 2000. With our estimates of μ and σ we can compute P(X ≥ 2000):
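(A sketch using the rounded estimates μ ≈ 2086 and σ ≈ 256; the exact input may have used slightly different figures.)

1 - CDF[NormalDistribution[2086, 256], 2000]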
0.631541
As we have seen before, a large portion of the time a random variable will take on a value
within 2 standard deviations of its mean, and so we expect the demand for the item to be
within 2(256) = 512 of the true mean demand, estimated by the sample mean of 2086. Here
are the profits for the demands that are 2 standard deviations below and 2 standard devia-
tions above the estimated mean:
{5 (2086 - 2*256) - 10000, 5 (2086 + 2*256) - 10000}

{-2130, 2990}
With high probability, the profit will be between -$2130 and $2990.
We have been calling the two parameters of the normal distribution by the names "mean" and "variance," but we have not yet justified that this is what they are. Here are Mathematica computations of the integrals that would define the mean and variance of the N(0, 1) distribution. The second integral is actually E(X²), but this is the same as the variance in light of the fact that the first integral shows that E(X) = 0.

μ = Integrate[x (1/Sqrt[2 Pi]) E^(-x^2/2), {x, -Infinity, Infinity}]
sigsq = Integrate[x^2 (1/Sqrt[2 Pi]) E^(-x^2/2), {x, -Infinity, Infinity}]

0
1
As expected, the mean is the same as the parameter μ, that is 0, and the variance is σ² = 1. You should try to integrate by hand to check these results; it will help to note that the first integrand is an odd function and the second one is an even function. We now state and prove the general result.

Theorem 1. If X is a random variable with the N(μ, σ²) distribution, then

E(X) = μ  and  Var(X) = σ²    (4)

Proof. To show that E(X) = μ, it is enough to show that E(X − μ) = 0. This expectation is

∫ (x − μ) · 1/√(2πσ²) · e^(−(x − μ)²/(2σ²)) dx

The substitution z = (x − μ)/σ changes the integral to

∫ σ z · 1/√(2π) · e^(−z²/2) dz

which equals zero from above, because it is a constant times the expected value of a N(0, 1) random variable. For the second part, to show that Var(X) = E[(X − μ)²] = σ², it is enough to show that E[(X − μ)²/σ²] = 1. The expectation on the left is

∫ (x − μ)²/σ² · 1/√(2πσ²) · e^(−(x − μ)²/(2σ²)) dx

As mentioned above, it can be verified that this integral equals 1, which completes the proof.
The next theorem, usually called the standardization theorem for normal random variables, shows that by subtracting the mean from a general N(μ, σ²) random variable and dividing by the standard deviation, we manage to center and rescale the random variable in such a way that it is still normally distributed, but the new mean is 0 and the new variance is 1. The N(0, 1) distribution therefore occupies an important role in probability theory and applications, and it is given the special name of the standard normal distribution.
Theorem 2. If the random variable X has the N(μ, σ²) distribution, then the random variable defined by

Z = (X − μ)/σ    (5)

has the N(0, 1) distribution.
Proof. We will use an important technique that will be discussed systematically in Section 4.3, called the cumulative distribution function method. In it we write an expression for the c.d.f. of the transformed random variable, take advantage of the known distribution of the original random variable to express that c.d.f. conveniently, and then differentiate it to get the density of the transformed variable.
The c.d.f. of Z can be written as

F_Z(z) = P(Z ≤ z) = P((X − μ)/σ ≤ z) = P(X ≤ μ + σ z)    (6)

But we know that X has the N(μ, σ²) distribution. If G(x) is the c.d.f. of that distribution, then formula (6) shows that F_Z(z) = G(μ + σ z). Differentiating, the density of Z is

f_Z(z) = F_Z′(z) = G′(μ + σ z) · σ = g(μ + σ z) · σ

where g is the N(μ, σ²) density function. Substituting into formula (1) we obtain

f_Z(z) = g(μ + σ z) · σ = σ · 1/√(2πσ²) · e^(−(μ + σz − μ)²/(2σ²)) = 1/√(2π) · e^(−z²/2)
Activity 4 Use the c.d.f. technique of the proof of Theorem 2 to try to show a converse result: If Z ~ N(0, 1), then the random variable X = σZ + μ has the N(μ, σ²) distribution.
Example 3 It has been a tradition in probability and statistics to go on at some length about
standardizing normal random variables, and then to use tables of the standard normal c.d.f.
to find numerical answers to probability questions. The availability of technology that can
quickly give such numerical answers without recourse to standardization has reduced the
necessity for doing this, but I cannot resist the temptation to show a couple of quick example
computations.
Suppose that the National League batting averages in the introductory discussion are
indeed normally distributed. Let us estimate their mean and variance by the sample mean and
sample variance of the list of averages.
μ = Mean[averages]
sigsq = Variance[averages]
σ = Sqrt[sigsq]

0.248585
0.00338447
0.0581762
Rounding to three significant digits, we are assuming that a generic randomly selected batting average X has the N(.249, .00338) distribution, so the values of μ and σ will be taken to be .249 and .0582, respectively. What is the probability that this random batting average exceeds .300?
Standardizing by subtracting the mean and dividing by the standard deviation yields:

P(X > .300) = P((X − μ)/σ > (.300 − μ)/σ) = P(Z > (.300 − .249)/.0582) = P(Z > .876)
Thus we have converted a non-standard normal probability involving X to a standard normal probability involving Z. Now many statistical tables of the standard normal c.d.f. are set up to give, for various values of z, the area under the standard normal density that is shaded in Figure 6(a), which is the probability P(0 ≤ Z ≤ z).
Such a table would tell us that the area in Figure 6(b), which is P(0 ≤ Z ≤ .876), is .3095.
How do we find P(Z > .876) from this? Because of the symmetry of the standard normal density about 0, the area entirely to the right of 0 must be exactly .5. This breaks up into the shaded area in Figure 6(b) plus the unshaded area to the right of .876. Thus, .5 = .3095 + P(Z > .876); hence, P(Z > .876) = .5 − .3095 = .1905. To obtain this result from Mathematica is easier. We can either call for the complement of the standard normal c.d.f. at .876:
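(A call along these lines, sketched here, produces the value shown.)

1 - CDF[NormalDistribution[0, 1], .876]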
0.190515
or before standardizing we can ask for the complement of the normal c.d.f. with μ = .249 and σ = .0582 at the point .300:
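(Again a sketch of such a call.)

1 - CDF[NormalDistribution[.249, .0582], .300]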
0.190437
(The slight difference in the numerical answers is due to the fact that the argument .876 in
the first call to the standard normal c.d.f. was rounded.)
As a second example computation, what is the probability that a randomly selected
batting average is between .244 and .249? Standardizing, we obtain
P(.244 ≤ X ≤ .249) = P((.244 − .249)/.0582 ≤ Z ≤ (.249 − .249)/.0582) = P(−.086 ≤ Z ≤ 0)
Most standard normal tables do not give probabilities for intervals that lie to the left of 0, so you would usually not be able to read this off directly. But again by symmetry (refer to Figure 6(a)) the area between −.086 and 0 equals the area between 0 and .086, which can be read from the standard normal table, or computed in Mathematica as follows. (Why are we subtracting .5 from this c.d.f. value to obtain the probability?)
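(A sketch of such a command.)

CDF[NormalDistribution[0, 1], .086] - .5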
0.0342668
Again, with Mathematica we have no need for tables or for standardizing in this kind of
computation, because we can compute the original probability by subtracting c.d.f. values
directly:
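(For instance, a sketch:)

CDF[NormalDistribution[.249, .0582], .249] - CDF[NormalDistribution[.249, .0582], .244]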
0.0342313
Again the small difference between the two answers is due to the rounding of .086.
and the standard normal probability on the far right does not depend on the particular values of μ and σ. Mathematica computes this probability as:

N[CDF[NormalDistribution[0, 1], 1] - CDF[NormalDistribution[0, 1], -1]]

0.682689

N[CDF[NormalDistribution[0, 1], 2] - CDF[NormalDistribution[0, 1], -2]]

0.9545
Example 5. The pth percentile of the probability distribution of a random variable X is the point x_p such that P(X ≤ x_p) = p. (a) Find a general relationship between the pth percentile x_p of the N(μ, σ²) distribution and the pth percentile z_p of the standard normal distribution. Then (b), defining the ith smallest member X_(i) of a random sample X₁, X₂, …, Xₙ as the pth percentile of the sample, where p = i/(n + 1), discuss what a plot of X_(i) vs. z_p would look like if the random sample came from some normal distribution. (A plot of the pairs (z_p, X_(i)) described in part (b) is called a normal scores, or normal quantile plot.)
(a) By standardizing in the defining equation for the pth percentile of a distribution, we get

p = P(X ≤ x_p) = P(Z ≤ (x_p − μ)/σ) = P(Z ≤ z_p)

Thus, since the pth percentile of the standard normal distribution is clearly a unique value, we must have that

z_p = (x_p − μ)/σ.    (9)

(b) We leave unproved the intuitively reasonable fact that the p = i/(n + 1)th percentile of the random sample should estimate the corresponding percentile x_p of the distribution being sampled from. That is, X_(i) ≈ x_p. But by formula (9) above, if the sample was taken from a normal distribution, then x_p is in turn related to the standard normal percentile by x_p = μ + σ z_p. Hence we have X_(i) ≈ μ + σ z_p. A normal quantile plot should show an approximately linear relationship between the standard normal percentiles and the ordered sample values, whose slope is roughly σ, and whose y-intercept is roughly μ.
Let us illustrate this conclusion in three ways: (1) for a simulated sample that does
come from a normal distribution; (2) for a simulated sample that does not; and (3) for a real
data set, specifically the SAT scores from the start of this section. In the Mathematica input cell below, we first initialize the random seed, and then we create random samples of size 30 from the N(0, 2²) and the uniform(−4, 4) distributions and sort them into increasing order.
SeedRandom[17351];
normsample = Table[RandomReal[NormalDistribution[0, 2]], {30}];
sortednormsample = Sort[normsample];
unifsample = Table[RandomReal[UniformDistribution[{-4, 4}]], {30}];
sortedunifsample = Sort[unifsample];
normquants = Table[Quantile[NormalDistribution[0, 1], i/31], {i, 1, 30}];
normpairs = Transpose[{normquants, sortednormsample}];
unifpairs = Transpose[{normquants, sortedunifsample}];
normplotnorm = ListPlot[normpairs,
  AxesLabel -> {"zp", "Xi"}, PlotStyle -> Black];
normplotunif = ListPlot[unifpairs,
  AxesLabel -> {"zp", "Ui"}, PlotStyle -> Black];
GraphicsRow[{normplotnorm, normplotunif}]
Figure 4.7 - (a) Normal scores plot for simulated N(0, 2²) sample; (b) Normal scores plot for simulated unif(−4, 4) sample
The difference is rather subtle, and is further clouded by the occurrence of an outlying value
of more than 6 in the simulated normal random sample. In part (a) of Figure 7, note that
except for this extreme value, the points are relatively linear with a slope of around 2, and an
intercept of around 0, as predicted by our analysis. Part (b) shows the normal scores plot for
a sample from the uniform distribution with a roughly equivalent range of values. Note that
there is curvature reminiscent of the cube root graph, or the inverse tangent graph. This can
be explained by the fact that the standard normal percentiles increase quickly away from the
origin, and slowly near the origin, owing to the heavy distribution of probability weight near
0. But the uniform percentiles will increase at a constant rate throughout the interval (−4, 4).
So the rate at which the uniform percentiles change with respect to the normal percentiles
will be greatest near zero, and will be smaller on the tails, as the simulated sample bears out.
The main benefit of normal scores plots, however, is to diagnose departures from
normality in unknown data sets. Figure 1(b) was a histogram of a set of 167 SAT scores that
were entered as the Mathematica list SATscores. The histogram seemed to suggest that a
normal distribution of scores was reasonable to assume. Let us construct a normal scores
plot and check this. Below we use the same approach to construct the plot. First we sort the
scores, next we table the 167 percentiles with p = i/(167 + 1) of the standard normal
distribution, then we transpose the quantiles and sorted scores to a list of pairs, and finally
distribution, then we transpose the quantiles and sorted scores to a list of pairs, and finally
we ListPlot that data.
sortedSAT = Sort[SATscores];
normquants167 = Table[Quantile[NormalDistribution[0, 1], i/168], {i, 1, 167}];
datapoints = Transpose[{normquants167, sortedSAT}];
ListPlot[datapoints, AxesLabel -> {"zp", "Xi"}, PlotStyle -> Black]
Figure 4.8 - Normal scores plot for SAT scores
The linearity here is quite good, so we should have very little discomfort about the assump-
tion of normality.
Activity 6 Use the graph in Figure 8 to visually estimate the mean and standard
deviation of the distribution of SAT scores, and then calculate the sample mean and
standard deviation in Mathematica.
Exercises 4.1
1. (Mathematica) The list below consists of 110 Nielsen television ratings of primetime
shows for the CBS, NBC, ABC, and Fox networks selected from the time period of July
7-July 11, 2008. Does the distribution of ratings seem approximately normal? Regardless
of the apparent normality or lack of it, estimate the normal parameters, and superimpose the
corresponding density function on a scaled histogram of the ratings.
ratings = {5.8, 6.4, 7.0, 7.9, 7.2, 6.9, 3.5, 3.5, 5.9,
  4.5, 5.6, 5.7, 2.9, 3.4, 3.3, 3.6, 3.7, 4.2, 4.1, 4.5,
  4.7, 5.4, 5.7, 6.4, 4.2, 3.6, 4.5, 4.7, 5.6, 6.2,
  5.0, 5.6, 4.4, 4.3, 4.9, 4.9, 7.5, 8.2, 5.0, 4.7,
  3.0, 3.6, 5.2, 5.5, 2.7, 2.6, 2.6, 2.9, 4.0, 4.5,
  4.7, 5.2, 4.9, 5.1, 5.7, 6.1, 3.1, 3.0, 4.3, 4.5,
  3.4, 3.0, 5.1, 5.6, 5.7, 5.8, 2.2, 2.1, 2.1, 2.3,
  3.7, 3.9, 6.1, 6.2, 6.5, 6.8, 3.9, 3.7, 2.4, 2.3,
  3.1, 3.0, 3.2, 3.5, 4.0, 4.7, 5.2, 5.3, 2.1, 2.0,
  2.2, 2.4, 3.3, 3.8, 3.6, 3.8, 4.3, 4.5, 5.3, 5.4,
  2.9, 3.3, 4.2, 4.7, 4.5, 4.3, 1.7, 1.9, 2.1, 2.3};
2. (Mathematica) The data below are unemployment rates (in %) in a sample of 65 Illinois
counties in June, 2008. Does the distribution of unemployment rates seem approximately
normal? Estimate the normal parameters, and superimpose the corresponding density
function on a scaled histogram of the unemployment rates.
unemployment = {
  5.4, 7.1, 8.5, 9.0, 6.6, 6.6, 7.0, 8.0, 7.1, 7.1, 6.0,
  7.6, 7.4, 7.6, 7.6, 7.2, 5.7, 6.5, 6.0, 8.2, 8.4, 8.2,
  7.1, 6.2, 6.7, 6.2, 7.0, 6.4, 5.5, 6.2, 6.7, 6.4, 5.5,
  5.5, 7.1, 6.6, 6.4, 5.2, 8.7, 7.5, 6.4, 5.4, 6.5, 8.4,
  6.3, 5.7, 6.5, 6.2, 7.6, 5.6, 6.5, 6.9, 7.0, 6.5, 8.3,
  7.6, 6.5, 7.4, 6.5, 8.1, 7.4, 7.0, 5.7, 7.8, 6.9};
3. Show that the N(0, 1) density integrates to 1 over all of ℝ; therefore, it is a good density. (Hint: To show that ∫ f(x) dx = 1 it is enough to show that the square of the integral is one. Then express the square of the integral as an iterated two-fold integral, and change to polar coordinates.)
4. Show that the N(μ, σ²) density function is symmetric about μ.
5. Use calculus to show that the N(μ, σ²) density function reaches its maximum value at μ, and compute that maximum value.
6. (Mathematica) Recall that if X₁, X₂, ..., Xₙ is a random sample of size n, then the sample variance is S² = (1/(n − 1)) ∑_{i=1}^{n} (Xᵢ − X̄)². Simulate 100 samples of size 20 from the N(0, 1) distribution, the N(2, 1) distribution, and the N(4, 1) distribution and, in each case, sketch a histogram of the 100 observed values of S². Discuss the dependence of the empirical distribution of S² on the value of μ. Conduct a similar experiment and discuss the dependence of the empirical distribution of S² on the value of σ².
7. (Mathematica or tables) If X is a N(μ, σ²) random variable, compute (a) P(|X − μ| ≤ σ); (b) P(|X − μ| ≤ 2σ); (c) P(|X − μ| ≤ 3σ).
8. (Mathematica or tables) Suppose that compression strengths of human tibia bones are normally distributed with mean μ = 15.3 (units of kg/mm2 1000) and standard deviation σ = 2.0. Among four independently sampled tibia bones, what is the probability that at least
three have strengths of 17.0 or better?
9. (Mathematica or tables) Assume that in a certain area the distribution of household
incomes can be approximated by a normal distribution with mean $30,000 and standard
deviation $6000. If five households are sampled at random, what is the probability that all
will have incomes of $24,000 or less?
10. (Mathematica) Simulate 500 values of Z = (X − μ)/σ where X ~ N(μ, σ²) for several cases of the parameters μ and σ. For each simulation, superimpose a scaled histogram of
observed z values and a graph of the standard normal density curve. What do you see, and
why should you have expected to see it?
11. Suppose that a table of the standard normal distribution tells you that P(0 ≤ Z ≤ 1.15) = .3749, and P(0 ≤ Z ≤ 2.46) = .4931. Find (a) P(1.15 ≤ Z ≤ 2.46); (b) P(−1.15 ≤ Z ≤ 2.46); (c) P(Z ≥ 2.46); (d) P(Z ≤ −1.15).
12. Assume that X has the N(1.8, 0.7²) distribution. Without using Mathematica or tables, find P(1.65 ≤ X ≤ 1.85) if a table of the standard normal distribution tells you that P(0 ≤ Z ≤ .07) = .0279 and P(0 ≤ Z ≤ .21) = .0832.
13. (Mathematica) For each of 200 simulated random samples of size 50 from the uniform(4, 6) distribution, compute the sample mean. Then plot a scaled histogram of the sample
mean together with an appropriate normal density that you expect to be the best fitting one to
the distribution of X . Repeat the simulation for the Weibull(2, 30) distribution that we
encountered in Example 3 of Section 3.3.
14. Suppose that the times people spend being served at a post office window are approxi-
mately normal with mean 1.5 minutes and variance .16 minutes. What are the probabilities
that:
(a) a service requires either more than 1.9 minutes or less than 1.1 minutes?
(b) a service requires between .7 minutes and 2.3 minutes?
15. (Mathematica) The deciles of the probability distribution of a random variable X are the
points x.1, x.2, x.3, ..., x.9 such that

P(X ≤ x.1) = .1, P(X ≤ x.2) = .2, P(X ≤ x.3) = .3,

etc. Find the deciles of the N(0, 1) distribution.
16. (a) Produce a normal scores plot for 40 simulated values from the Weibull(2,3) distribu-
tion. Does it appear to be about linear? (b) Produce a normal scores plot for the unemploy-
ment data in Exercise 2. Are you confident in an assumption of normality?
17. (Mathematica) Suppose that an insurance company determines that the distribution of
claim sizes on their auto insurance policies is normal with mean 1200 and standard deviation
300. When a claim is submitted, the company pays the difference between the amount
claimed and the "deductible" amount listed on the particular policy. Half of the company's
policies have deductible amounts of $250, 1/4 have $500 deductibles, and the remaining 1/4
have $1000 deductibles. What is the expected payout of a random claim?
18. (Mathematica or tables) The rate of return on investment is defined as (final value − initial value)/(initial value). Suppose that the rate of return on a particular investment is
normal with mean .10 and standard deviation .2. Find: (a) the probability that the invest-
ment loses money; (b) the probability that an investment of $1000 has a final value of at
least $1050.
4.2 Bivariate Normal Distribution

Example 1 I recently taught an elementary statistics class and became interested in the
accuracy with which the final exam score could be predicted by other classwork such as
homework and quizzes. Above is the data set of homework percentages, quiz percentages,
and final exam scores (out of 200 points) for the 23 students in the class, in the form of three
Mathematica lists called hwk, quiz, and final. The dependence of the final exam score on
both of the predictor variables of homework and quiz percentage is illustrated in the plots in
Figure 9. There is a strong tendency for the final score to increase, following a roughly
linear function, with both variables. The points exhibit variability, falling in a cloud rather
than perfectly along a line.
g1 = ListPlot[Transpose[{hwk, final}],
  AxesLabel -> {"hwk", "final"}, PlotStyle -> Black];
g2 = ListPlot[Transpose[{quiz, final}],
  AxesLabel -> {"quiz", "final"}, PlotStyle -> Black];
GraphicsRow[{g1, g2}]
Figure 4.9 - Final exam score vs. homework percentage (left) and quiz percentage (right)
In many (but not all) cases of bivariate data in which the pairs plot as they do in
Figure 9, the individual variables also display a roughly normal shape. This is a rather small
data set, but the dot plots in Figure 10 for the quiz and final variables are reasonably normal.
(A dot plot of a single variable data set simply graphs each data point on an axis, stacking
dots for values that are the same or nearly the same. The DotPlot command in Knox-
Prob7`Utilities` sketches such plots. It takes only one argument, the list of data that it is
plotting, but has several options including VariableName to put a text name on the axis.)
Needs"KnoxProb7`Utilities`"
g3 DotPlotquiz, VariableName "quiz";
g4 DotPlotfinal, VariableName "final";
GraphicsRowg3, g4
Figure 4.10 - Empirical distributions of quiz percentage and final exam score
Activity 1 Produce a dot plot of the homework percentage data. Does it look normally
distributed? Can you speculate on the reason for the shape that you see?
Example 2 As another example, consider the data set below, obtained by the Centers for
Disease Control (CDC). For the year 2007, they estimated for each state plus the District of
Columbia the percentage of children in the state aged 19-35 months who had been immu-
nized with three different types of common vaccine: polio, MMR, and hepatitis. Each triple
corresponds to one state, and we have used Mathematica's Transpose function to obtain
three lists of data, one for each vaccine.
As Figures 11 and 12 show, the individual distributions of the MMR and hepatitis variables
look reasonably normal, and the joint distribution of the two again shows a roughly linear
increasing pattern, with variability inherent in the way the points form a cloud around a line.
A couple of outlying data points around (90.5,85) and (87,94) seem to fit the elliptically
shaped cloud of points less well.
Figure 4.11 - Empirical distributions of MMR and hepatitis immunization coverage
ListPlot[Transpose[{MMR, hep}],
  AxesLabel -> {"MMR", "hepatitis"}, PlotStyle -> Black]
Figure 4.12 - Empirical joint distribution of immunization coverage for MMR and hepatitis
Activity 2 Produce a dot plot of the polio data in Example 2 to see whether it also
appears normally distributed. Construct scatter plots like Figure 12 to see whether the
joint behavior is similar for the other pairs of variables: MMR and polio, and hepatitis
and polio.
The bivariate normal distribution with parameters μx, μy, σx², σy², and ρ (where −1 < ρ < 1) has joint density function

f(x, y) = 1/(2π σx σy √(1 − ρ²)) · exp{ −1/(2(1 − ρ²)) [ (x − μx)²/σx² − 2ρ (x − μx)(y − μy)/(σx σy) + (y − μy)²/σy² ] }    (1)
Activity 3 Let the parameter ρ = 0 in formula (1). What do you notice about the joint density f(x, y)?
As usual with joint densities, the probability that the pair (X, Y) falls into a set B is

P((X, Y) ∈ B) = ∫∫_B f(x, y) dx dy    (2)

which is the volume bounded below by the set B in the x-y plane and above by the surface z = f(x, y).
Mathematica knows about this distribution. In the standard package MultivariateStatistics` is the object

MultinormalDistribution[meanvector, covariancematrix]

where the argument meanvector is the list {μx, μy} and the argument covariancematrix is the 2 × 2 matrix

( σx²      ρ σx σy )
( ρ σx σy  σy²     )

(written in Mathematica list form as {{σx², ρ σx σy}, {ρ σx σy, σy²}}). We will see shortly that these parameters are aptly named. The parameters μx and μy are the means of X and Y, σx² and σy² are the variances of X and Y, and ρ is the correlation between X and Y, which means that the matrix entry ρ σx σy is the covariance. (You will show that Corr(X, Y) = ρ in Exercise 12.)
Here for example is a definition of the bivariate normal density with μx = 0, μy = 2, σx² = 1, σy² = 4, and ρ = .6. Then ρ σx σy = .6 · 1 · 2 = 1.2. Note the use of the list {x, y} as the argument for the two-variable form of the PDF function.

Needs["MultivariateStatistics`"]
meanvector = {0, 2};
covariancematrix = {{1, 1.2}, {1.2, 4}};
f[x_, y_] := PDF[MultinormalDistribution[meanvector, covariancematrix], {x, y}]
We can understand the graph of f in two ways: by just plotting the density surface or by
looking at its contour graph, i.e., the curves of equal probability density. (The closed cell
shows the code for generating Figure 13.)
Figure 4.13 - Surface plot and contour plot of the bivariate normal density
The probability mass is concentrated most densely in elliptical regions around the point (0, 2), which is the mean. The major and minor axes are tilted, and there is more spread in
the y-direction than in the x-direction. (Look at the tick marks carefully.) Simulated data
from this distribution show the same tendency. Here are 200 points randomly sampled from
the same distribution and plotted.
simlist = RandomReal[MultinormalDistribution[meanvector, covariancematrix], 200];
ListPlot[simlist, PlotStyle -> {Black, PointSize[0.015]}]
Figure 4.14 - 200 simulated points from the bivariate normal distribution
Note the similarity between the simulated data in Figure 14 and the real data in
Figures 9 and 12. It appears that the bivariate normal distribution could be a good model for
such real data. The role of the mean vector (μx, μy) in determining the center of the
distribution and the point cloud is clear. Try the next activity to investigate the exact role of
the variance parameters and the correlation parameter.
Activity 4 Sketch contour graphs of the bivariate normal density, holding μx and μy at 0, σx and σy at 1, and varying ρ between −.9 and .9. Try it again with σx = σy = 2 and σx = σy = 3. What seems to be the effect of changing ρ when the standard deviations are held fixed and equal? Next fix ρ = .7, and plot contours for the cases: σx = 1, σy = 2 and σx = 1, σy = 3 and σx = 1, σy = 4. For fixed ρ, what is the effect of changing the ratio of standard deviations?
Exercise 10 asks you to show from the density formula (1) that the contours of the bivariate normal density are indeed ellipses centered at (μx, μy), rotated at an angle α with the x-axis satisfying

cot 2α = (σx² − σy²)/(2 ρ σx σy)    (3)

Therefore, when the variances are equal, the cotangent is 0, hence 2α = π/2, and so the angle of rotation is α = π/4, as you probably noticed in Activity 4. You may have also noticed that the variance parameters affected the lengths of the two axes of symmetry of the ellipses: as σx² increases, the length of the axis parallel to the rotated x-axis grows, and as σy² increases the rotated y-axis grows. As in the one-dimensional case, these parameters control the
spread of the distribution. In the electronic book, you can execute the cell below which enables you to fix the means μx and μy, the variance σx², and the correlation ρ, and manipulate the variance σy² to see the changes in the elliptical contours.

Clear[g];
g[x_, y_, μx_, μy_, σx_, σy_, ρ_] :=
  PDF[MultinormalDistribution[{μx, μy},
    {{σx^2, ρ σx σy}, {ρ σx σy, σy^2}}], {x, y}];
Manipulate[ContourPlot[g[x, y, 0, 0, 1, σy, .8],
  {x, -3, 3}, {y, -3, 3}, PlotRange -> All], {σy, 1, 4}]
The role of the parameters σx² and σy² is easiest to see in a very special case. When you did Activity 3 you should have deduced that when ρ = 0 the joint density f(x, y) simplifies to the product p(x) q(y) of a N(μx, σx²) density for X and a N(μy, σy²) density for Y. It is clear in this case that X and Y are independent, and the μ and σ² parameters determine the centers and spreads of their marginal distributions. It is not a general rule that lack of correlation implies independence, but it is true when the two random variables have the bivariate normal distribution.
Activity 5 In the closed cell below is a data list of pairs of atmospheric ozone levels
collected by the Economic Research Service of the United States Department of
Agriculture for a number of counties in the East Coast in the year 1987. The first
member of each pair is an average spring 1987 ozone level, and the second is the
summer ozone for the same county. Plot the pairs as points in the plane to observe the
relationship between the two variables: spring and summer ozone level. Also, plot a
histogram of the individual season levels. Do the underlying assumptions of the model
for random bivariate normal pairs seem to hold for this data? What anomalies do you
see, and can you guess the reasons for them?
Example 3 Let us estimate for the statistics data in Example 1 the probability that a student
receives at least 80% on both quizzes and final exam (i.e., a final exam score of at least 160
points).
Have faith for a moment in the earlier claims that for a bivariate normal pair (X, Y), X ~ N(μx, σx²), Y ~ N(μy, σy²), and ρ = Corr(X, Y). Assume that the quiz percentage X and the final exam score Y of a randomly selected student form a bivariate normal pair. We can estimate the means of X and Y by the component sample means below.

μx = N[Mean[quiz]];
μy = N[Mean[final]];
meanvector = {μx, μy}

{71.813, 134.304}
Here are the sample variances and standard deviations, which estimate the variances and
standard deviations of X and Y.
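(Commands of roughly this form produce them; the names varx, vary, σx, σy are chosen to match the cells below.)

{varx, vary} = N[{Variance[quiz], Variance[final]}]
{σx, σy} = Sqrt[{varx, vary}]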
175.443, 812.767
13.2455, 28.5091
The correlation ρ must also be estimated. Recall that its defining formula is

ρ = E[(X − μx)(Y − μy)] / (σx σy)

The expectation in the numerator is an average product of differences between X and its mean with differences between Y and its mean. Given n randomly sampled pairs (Xᵢ, Yᵢ), a sensible estimate of the correlation is the sample correlation coefficient R, defined by

R = ∑_{i=1}^{n} (Xᵢ − X̄)(Yᵢ − Ȳ) / ((n − 1) Sx Sy) = ∑_{i=1}^{n} (Xᵢ − X̄)(Yᵢ − Ȳ) / √( ∑_{i=1}^{n} (Xᵢ − X̄)² · ∑_{i=1}^{n} (Yᵢ − Ȳ)² )    (4)
R = Correlation[quiz, final]
covariancematrix = {{varx, R σx σy}, {R σx σy, vary}};
MatrixForm[covariancematrix]

0.810708

( 175.443  306.137 )
( 306.137  812.767 )
Actually, we need not have gone to this trouble of writing the estimated covariance matrix
out longhand, because there is another command
Covariance[datalist]
which does the whole job of computing the estimated covariance matrix, as you can see
below. Its argument is the full data list of pairs, which we create using Transpose.
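Following that description, the call would look something like:

Covariance[Transpose[{quiz, final}]] // MatrixForm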
( 175.443  306.137 )
( 306.137  812.767 )
Now we can define the bivariate normal density function using the parameters we have estimated. The question asks for P(X ≥ 80, Y ≥ 160), which would be the following integral.
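(A sketch of the corresponding input.)

f[x_, y_] := PDF[MultinormalDistribution[meanvector, covariancematrix], {x, y}];
NIntegrate[f[x, y], {x, 80, Infinity}, {y, 160, Infinity}]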
0.144642
In this particular statistics course, it was very difficult to do well on both items. The
estimated joint density of quiz and final scores is in Figure 15.
Figure 4.15 - Approximate joint density of quiz % and final exam score
A bit of algebra (completing the square) shows that the bracketed expression in the exponent of formula (1) can be rewritten as

(1/(2(1 − ρ²))) [ (x − μx)²/σx² − 2ρ (x − μx)(y − μy)/(σx σy) + (y − μy)²/σy² ]
  = (x − μx)²/(2σx²) + ( y − μy − (ρσy/σx)(x − μx) )² / (2 σy²(1 − ρ²))    (5)

The negative of this expression is just the exponent in the density formula. To shorten notation write

μ_{y|x} = μy + (ρσy/σx)(x − μx)  and  σ_{y|x}² = σy²(1 − ρ²)    (6)

Substituting the right side of formula (5) into formula (1), we see that the bivariate normal joint density can be written in factored form as

f(x, y) = [ 1/(√(2π) σx) · exp( −(x − μx)²/(2σx²) ) ] · [ 1/(√(2π) σ_{y|x}) · exp( −(y − μ_{y|x})²/(2σ_{y|x}²) ) ]    (7)
The marginal density of X can be found by computing the integral ∫ f(x, y) dy. But the two leading factors in (7) come out of this integral, and the integral of the product of the two rightmost factors is 1, because they constitute a normal density with mean μ_{y|x} and variance σ_{y|x}². Hence the marginal density of X is

p(x) = 1/(√(2π) σx) · exp( −(x − μx)²/(2σx²) )

where p(x) is the marginal density of X. Therefore, by the definition of conditional density,

q(y | x) = f(x, y)/p(x) = 1/(√(2π) σ_{y|x}) · exp( −(y − μ_{y|x})²/(2σ_{y|x}²) )    (8)

Hence, the conditional density of Y given X = x is also normal, with parameters μ_{y|x} and σ_{y|x}² given by formula (6).
The formula for μ_{y|x} in (6) is particularly interesting, because it gives a predictive linear relationship between the variables; if X = x is known to have occurred, then the conditional expected value of Y is μy + (ρσy/σx)(x − μx). It is rather surprising however that the conditional variance σy²(1 − ρ²) of Y does not depend on the particular value that X takes on. But it is intuitively reasonable that the largest value of the conditional variance is σy², in the case ρ = 0 where the variables are uncorrelated, and the conditional variance decreases to 0 as |ρ| increases to 1. The more strongly that X and Y are correlated, the less is the variability of Y when the value of X is known.
Symmetric arguments yield the marginal distribution of Y and the conditional p.d.f. of X given Y = y, in the theorem below, which summarizes all of our results.
Theorem 1. Let (X, Y) have the bivariate normal density in (1) with parameters μx, μy, σx², σy², and ρ. Then,
(a) the marginal distribution of X is N(μx, σx²);
(b) the marginal distribution of Y is N(μy, σy²);
(c) ρ is the correlation between X and Y;
(d) the curves of constant probability density are ellipses centered at (μx, μy) and rotated at an angle α with the x-axis satisfying cot 2α = (σx² − σy²)/(2ρσxσy);
(e) if ρ = 0, then X and Y are independent;
(f) the conditional density of Y given X = x is normal with conditional mean and variance

μ_{y|x} = μy + (ρσy/σx)(x − μx)  and  σ_{y|x}² = σy²(1 − ρ²)

(g) the conditional density of X given Y = y is normal with conditional mean and variance

μ_{x|y} = μx + (ρσx/σy)(y − μy)  and  σ_{x|y}² = σx²(1 − ρ²)
Activity 6 Finish the details of the proof that Y has a normal distribution, and that the conditional distribution of X given Y = y is normal.
Example 4 You may have heard of the statistical problem of linear regression, which involves fitting the best linear model to a data set of pairs (xᵢ, yᵢ). The presumption is that the data are instances of a model

Y = a + b X + ε    (9)

where X is a random variable that has the N(μx, σx²) distribution, and ε is another normally distributed random variable with mean 0 and some variance σ², whose correlation with X is zero. The game is to estimate the coefficients a and b and to use them to predict new y values for given x values. Let us see that the bivariate normal model implies a linear regression model, and also gives information about the coefficients.
Suppose that X and Y are bivariate normal. Define a random variable ε by:

ε = Y − μy − (ρσy/σx)(X − μx)

Since X and Y are normally distributed, and ε is a linear combination of them, it can be shown that ε is normally distributed. (We will show this fact in the independent case in the next section, though we omit the proof in the correlated case.) It is also easy to see that E[ε] = 0 (why?).
The last line follows because Cov(X, Y) = ρσxσy. Therefore, Y and X are related linearly by

Y − μy = (ρσy/σx)(X − μx) + ε,  that is,  Y = μy + (ρσy/σx)(X − μx) + ε    (10)

and X and ε satisfy the distributional assumptions of linear regression. In particular, the slope coefficient is b = ρσy/σx, and the predicted value of Y given X = x is the conditional mean μ_{y|x} = μy + (ρσy/σx)(x − μx). Notice that the farther x is from μx, the more the predicted y differs from μy.
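As a quick illustration (a sketch only, reusing the estimates R, σx, σy, μx, and μy computed for the quiz and final exam data in Example 3), the estimated slope and intercept of the regression of final exam score on quiz percentage would be:

b = R σy/σx;
a = μy - b μx;
{a, b}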
Let us close this section with an example of a different set of data on which Theorem
1 sheds light.
Example 5 The data above have to do with highway death rates in fatalities per million
vehicle miles over a forty-year span beginning in 1945. The first component is an index to
the year, the second component is the fatality rate in the state of New Mexico in that year,
and the third is the death rate in the entire U.S. in that year. We wonder how highway death
rates in New Mexico relate to those in the U.S. as a whole. Is a bivariate normal model
appropriate? Can we make predictions about highway fatalities in New Mexico (or in the
U.S.)?
The scatter plot in Figure 16 shows a very strong linear dependence between the two
variables, which supports a bivariate normal model. But the histograms following in Figure
17 show right skewed marginal distributions.
ListPlot[Transpose[{newmexico, us}],
  AxesLabel -> {"New Mex", "U.S."},
  PlotStyle -> {Black, PointSize[0.02]}]
Figure 4.16 - U.S. vs. New Mexico highway fatality rates, 1945-1984
GraphicsRow[{Histogram[newmexico, 5, ChartStyle -> Gray, BaseStyle -> 8],
  Histogram[us, 5, ChartStyle -> Gray, BaseStyle -> 8]}]
Figure 4.17 - Marginal distributions of New Mexico (left) and U.S. (right) fatality rates
At times, data analysts can apply stabilizing transformations such as x^p, p < 1, and log(x) to knock down large data values and make the transformed variables look a bit more symmetric. Let us work from here onward with the logarithms of our two variables. You can check that the scatterplot still looks good after the log transformation. The individual histograms for the logged variables show somewhat more symmetry than those for the original variables, although they are by no means perfect.
Now let us estimate the bivariate normal parameters. We use the Mean, StandardDeviation, and Correlation commands as before.

{μ1, μ2, σ1, σ2, ρ} = {Mean[lognewmex], Mean[logus],
  StandardDeviation[lognewmex], StandardDeviation[logus],
  Correlation[lognewmex, logus]}
Notice the very high value ρ ≈ .96 which indicates the strength of the relationship. By Theorem 1(a), since the logged New Mexico death rate X has approximately the N(2.05947, .346194²) distribution, we can find the probability P(1.7 ≤ X ≤ 2.3) for example as

CDF[NormalDistribution[μ1, σ1], 2.3] - CDF[NormalDistribution[μ1, σ1], 1.7]
0.606852
Since X is the log of the actual death rate, say X = log(R), .606852 is also the probability that R ∈ (e^1.7, e^2.3) ≈ (5.47, 9.97).
We can also estimate the linear regression relationship between the logged variables. Let Y be the log of the U.S. death rate and write using Theorem 1(f) the conditional mean of Y given X = x as:

Y[x_] := μ2 + (ρ σ2/σ1) (x - μ1);
Y[x]

Then the predicted value of the logged U.S. death rate in a year in which the logged New Mexico death rate was 2.2 is

Y[2.2]

1.78059
Notice how the occurrence of an x value somewhat above the mean of X led to a predicted Y value above the mean of Y. This predicted Y is actually the mean of the conditional distribution of Y given X = 2.2. The variance of that distribution is

σ2^2 (1 - ρ^2)

0.00974876

Therefore, given X = 2.2 we have for instance that the conditional probability that Y is at least as large as μy is

1 - CDF[NormalDistribution[1.78059, Sqrt[.00974876]], μ2]
0.92043
Activity 8 How would you predict the actual (not logged) U.S. death rate in the last
example given the actual New Mexico death rate?
Exercises 4.2
1. (Mathematica) Consider the set of pairs below, which are sale prices of homes (in
dollars), and square footage of the home for single family homes up for sale in Pigeon Forge,
Tennessee in 2008. The first cell is actually a dummy cell to give you an idea of the
structure of the data set; the full data set is in the closed cell beneath it, and it consists of 115
such pairs. Investigate whether this data set seems to obey the assumptions of the bivariate
normal model. Does a log transformation on one or both variables help?
2. (Mathematica) Two company stocks have weekly rates of return X and Y which are
bivariate normal with means .001 and .003, variances .00002 each, and correlation .2.
Simulate 500 observations of the pair X , Y and make a scatterplot of your simulated data.
Compute the probability that simultaneously the rate of return X exceeds .0015 and the rate
of return Y is less than .002.
3. (Mathematica) The Bureau of Transportation Statistics reports the pairs below, which are
percentages of late arrivals and departures of U.S. airline flights from 19952008. Check
that the assumptions of the bivariate normal model are reasonable. Estimate the parameters
of that model. Using the estimated parameters, compute (a) the probability that a year has no
more than 20% late arrivals and no more than 18% late departures; (b) the probability, given
that a year has 21% late arrivals, that it has at least 19% late departures; (c) the predicted
percentage of late departures in a year that has 21% late arrivals.
4. (Mathematica) The data below are final 2007–8 team statistics for the 30 NBA teams for
the variables: field goal percentage, free throw percentage, assists per game, and points per
game (all for home games only). Which of the individual variables seem to be about
normally distributed? Which of the first three variables seems to predict points per game
best? Assuming bivariate normal models, predict (a) the points per game for a team that
shoots a 44% field goal percentage; (b) the points per game for a team that shoots a 75% free
throw percentage; and (c) the points per game for a team that averages 22 assists per game.
5. (Mathematica) The following data from the Bureau of Labor Statistics are incidence rates
of work-related non-fatal injuries and illnesses per 100 workers per year for a sample of
business and industry categories in 2005. The first element of the pair X is the rate of such
incidents in which work time was lost and the second element Y is the rate of incidents that
did not result in lost work time. Do you find a bivariate normal model to be reasonable?
Estimate the functional form of the relationship of X to Y, and use it to predict the work loss
rate X for an industry in which the rate of non-work loss incidents Y is 3.0. Compare your
answer to the sample mean of X and comment.
injuries =
  {{2.8, 2.5}, {3.5, 3.1}, {3.3, 2.5}, {1.6, 3.1},
   {3.1, 3.9}, {7.7, 4.3}, {5.6, 3.5}, {4.8, 2.5},
   {3.4, 3.0}, {2.6, 1.6}, {3.5, 1.6}, {1.4, 1.4},
   {2.3, 1.4}, {2.2, 1.4}, {2.1, 1.5}, {2.5, 1.3},
   {2.1, 1.0}, {2.8, 2.5}, {3.1, 2.5}, {3.6, 3.2},
   {3.1, 3.6}, {3.6, 2.6}, {3.1, 2.2}, {4.7, 3.1},
   {4.4, 3.0}, {6.1, 3.2}, {6.1, 3.0}, {3.6, 3.1},
   {4.3, 2.0}, {3.6, 2.2}, {7.2, 3.0}, {2.3, 2.1},
   {2.8, 2.0}, {1.5, 1.8}, {3.8, 2.8}, {5.2, 4.2},
   {2.5, 1.9}, {2.3, 1.8}, {1.8, 1.2}, {1.8, 1.4},
   {2.2, 1.5}, {4.1, 3.0}, {4.9, 3.2}, {4.7, 4.4},
   {3.6, 2.8}, {3.8, 4.2}, {4.6, 4.8}, {3.8, 4.0},
   {3.0, 3.5}, {2.9, 4.0}, {2.6, 2.8}, {1.0, 1.0},
   {1.2, 1.0}, {2.6, 2.7}, {4.6, 3.7}, {2.4, 1.9},
   {4.0, 3.3}, {1.6, 1.4}, {2.2, 1.9}, {2.1, 2.7}};
{offday, nooffday} = Transpose[injuries];
6. (Mathematica) The pairs below are, respectively, the age-adjusted mortality rate and
sulfur dioxide pollution potential in 60 cities (data from the U.S. Department of Labor
Statistics). Do you find evidence that these variables can be modeled by a bivariate normal
distribution? Try a log transformation and check again. Estimate the parameters for the log
transformed data. Does it seem as if mortality is related to the amount of sulfur dioxide
present? What is the conditional variance and standard deviation of log mortality given the
log sulfur dioxide level?
7. (Mathematica) Simulate 200 data pairs from the bivariate normal distribution with
μx = μy = 0, σx² = σy² = 1, and each of ρ = 0, .1, .3, .5, .7, .9. For each value of ρ, produce
individual histograms of the two variables, and a scatterplot of the pairs. Report on the key
features of your graphs and how they compare for changing Ρ.
8. (Mathematica) Consider again the vaccination coverage data of Example 2. (a) Find the
approximate probability that a state with 90% coverage on polio has at least 90% coverage
on MMR; (b) Find the approximate probability that a state with 95% coverage on MMR has
at least 95% coverage on polio.
9. (Mathematica) As in Exercise 2, compute the conditional probability that the second stock
has a rate of return of at least .005 given that the first earns a rate of return of .002.
10. Verify that the contours of the bivariate normal density are ellipses centered at (μx, μy), rotated at an angle α with the x-axis satisfying formula (3).
11. (Mathematica) In the statistics data of Example 1, estimate (a) the unconditional
probability that a student scores at least 120 on the final; and (b) the conditional probability
that a student scores at least 120 on the final given that he averages at least 80% on the
quizzes.
12. Show that the parameter ρ in the bivariate normal density is indeed the correlation between X and Y. (Hint: simplify the computation by noting that
ρ = E[ ((X − μx)/σx) · ((Y − μy)/σy) ].)
15. Mathematica is capable of using its RandomReal command to simulate from the bivariate normal distribution with given μx, μy, σx, σy, and ρ. Explain how you could
simulate such data yourself using the linear regression model in equation (9).
16. If X and Y are bivariate normal, write the integral for P[X ≤ x, Y ≤ y], then change
variables in the integral as in the hint for Exercise 12. What density function is now in the
integrand? Use this to generalize an important result from Section 4.1.
17. A generalization of the bivariate normal distribution to three or more jointly distributed
normal random variables is possible. Consider three random variables X1 , X2 , and X3 , and
let
μ = (μ1, μ2, μ3)^t   and   Σ = ( σ1²        ρ12 σ1 σ2   ρ13 σ1 σ3
                                 ρ12 σ1 σ2   σ2²        ρ23 σ2 σ3
                                 ρ13 σ1 σ3   ρ23 σ2 σ3   σ3² )
be, respectively, the column vector of means of the X's and the matrix of variances and
paired covariances of the X's. Here, ρij is the correlation between Xi and Xj. Consider the
trivariate normal density in matrix form:
f(x) = [1/((2π)^(3/2) √(det Σ))] e^(−(1/2)(x − μ)^t Σ^(−1) (x − μ))
in which x is a column vector of variables x1, x2, and x3, the notation "det" stands for determinant, Σ^(−1) is the inverse of the covariance matrix, and the notation y^t means the transpose of the vector y. Show that if all paired correlations are 0 then the three random variables X1, X2, and X3 are mutually independent.
18. Referring to the formula in Exercise 17 for the multivariate normal density function f(x) in matrix form, show that in the two-variable case f(x) agrees with the formula for the bivariate normal density function in (1). (In place of (2π)^(3/2) in the denominator, put (2π)^(2/2) for the two-variable case. In general, it would be (2π)^(n/2) for n variables.)
Activity 1 Look up and reread the bulleted references above. Try to find other exam-
ples from earlier in the book of transformed random variables.
The idea of the cumulative distribution function technique is to write down the c.d.f. F_Y(y) = P[Y ≤ y] of Y, then substitute Y = g(X) in, and express the probability P[g(X) ≤ y] as a function of y by using the known distribution of X. We can do this in either the discrete
or continuous case, but we will focus mostly on continuous random variables in this section.
In the case that X and Y are continuous, once we have the c.d.f. of Y, differentiation produces
the density function. We illustrate in the next example.
Example 1 Let X be a continuous random variable with the p.d.f. f(x) = 2x, x ∈ (0, 1), f(x) = 0 otherwise. Find the probability density function of Y = X².
Since X takes on only non-negative values here, we have
F_Y(y) = P[Y ≤ y] = P[X² ≤ y] = P[0 ≤ X ≤ √y]   (2)
P[0 ≤ X ≤ √y] = ∫_0^√y 2x dx = x² |_0^√y = y   (3)
and this computation is valid for all y ∈ (0, 1). So, the c.d.f. of Y is F_Y(y) = y for these y's, and clearly F_Y vanishes for y ≤ 0 and F_Y = 1 for y ≥ 1. (Why?) Differentiation of the c.d.f. of Y gives the density of Y as f_Y(y) = 1 for y ∈ (0, 1). Our conclusion is that if X has the distribution given in the problem, then Y = X² has the uniform(0,1) distribution.
Equations (2) and (3) in the last example illustrate beautifully the heart of the c.d.f.
method; make sure you understand how they are obtained and why we are looking at them.
The next activity should serve as a good check.
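Not a cell from the text, but a quick numerical check of Example 1: since F(x) = x² here, X can be simulated by the inverse c.d.f. as √U (a preview of Theorem 1 below), and the squared values should then behave like uniform(0,1) observations.
SeedRandom[1];
xvals = Sqrt[RandomReal[{0, 1}, 10000]];  (* X has c.d.f. F(x) = x^2, so X = Sqrt[U] *)
yvals = xvals^2;                          (* Y = X^2 *)
{Mean[yvals], Variance[yvals]}            (* should be near 1/2 and 1/12 *)
The simulated mean and variance of Y land close to the uniform(0,1) values 1/2 and 1/12, consistent with the conclusion of the example.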
Example 2 Let X and Y be independent uniform(0,1) random variables, and let U be the smaller and V the larger of X and Y. We will show that the pair (U, V) has the constant joint density function f_{U,V}(u, v) = 2, 0 < u < v < 1. (This fills in the analytical gap in Example 4 in Section 3.2.)
Because U is the smaller and V is the larger of X and Y, the state space is clearly the set of pairs (u, v) indicated in the statement: 0 < u < v < 1. Now just as it is true in the single variable case that a density f(x) is the derivative F′(x) of the c.d.f., it is also true in the two variable case that the joint density is the second order mixed partial of the joint c.d.f., that is:
f(x, y) = ∂²F/(∂x ∂y)   (4)
You may see this by applying the Fundamental Theorem of Calculus twice to the double integral that defines the joint c.d.f. In the case at hand we have that the joint c.d.f. of U and V is
F_{U,V}(u, v) = P[U ≤ u, V ≤ v]
             = P[X ≤ u, Y ≤ v, X < Y] + P[X ≤ v, Y ≤ u, X > Y],
by separately considering the two cases where X is the smaller and where Y is the smaller. Now to find the total probability, we would integrate the joint density f(x, y) = 1 over the shaded region in Figure 19, but that is just the area of the shaded region. This area, by adding two rectangle areas, is
F_{U,V}(u, v) = u·v + u(v − u) = 2uv − u²
The partial derivative of this with respect to v is 2u, and the second order mixed partial is
f_{U,V}(u, v) = ∂²F_{U,V}/(∂u ∂v) = ∂(2u)/∂u = 2
as desired.
Figure 4.19 - Region of integration for U ≤ u, V ≤ v, when u < v
The c.d.f. technique is particularly useful in the proof of the next important theorem
on simulation.
Theorem 1. Let U ~ uniform(0,1), and let F be a continuous, strictly increasing c.d.f. Then X = F^(−1)(U) is a random variable with the distribution associated with F.
Proof. Since F is strictly increasing and continuous, F^(−1) exists and is strictly increasing and continuous. The c.d.f. of X = F^(−1)(U) is
P[X ≤ x] = P[F^(−1)(U) ≤ x] = P[F(F^(−1)(U)) ≤ F(x)] = P[U ≤ F(x)] = F(x)
since the c.d.f. of U is F_U(u) = u, u ∈ (0, 1). Thus, X has the distribution function F.
Theorem 1 completes the story of simulation of continuous random variables. By the
random number generating methods discussed in Chapter 1, one can simulate a pseudo-
random observation U from uniform(0,1). To obtain a simulated X that has a given distribu-
tion characterized by F, just apply the transformation X = F^(−1)(U).
Example 3 An important continuous distribution that we will look at later in this chapter is the exponential(λ) distribution, whose p.d.f. is
f(t) = λ e^(−λt),   t > 0   (5)
This turns out to be the distribution of the amount of time Tn − Tn−1 that elapses between successive arrivals of a Poisson process. The Poisson process itself can be simulated if we can simulate an exponential random variable S. By Theorem 1, S = F^(−1)(U) has this distribution, where U ~ unif(0, 1) and F is the c.d.f. associated with f. It remains only to
find F^(−1). First, the c.d.f. associated with the density in formula (5) is
F(t) = ∫_0^t λ e^(−λs) ds = 1 − e^(−λt),   t ≥ 0   (6)
Hence,
t = F(F^(−1)(t)) = 1 − e^(−λ F^(−1)(t))   ⟹   F^(−1)(t) = −(1/λ) log(1 − t)
So, we can simulate an exponential random variable by
S = −(1/λ) log(1 − U)   (7)
Below is a Mathematica command that uses what we have done to simulate a list of n
exponential(Λ) observations. Observe that the data histogram of the simulated values has the
exponentially decreasing shape that one would expect, given the form of the density
function.
SimulateExp[n_, λ_] := Table[-Log[1 - RandomReal[]]/λ, {n}]
Needs["KnoxProb7`Utilities`"]
SeedRandom[13645]
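The cell that produced Figure 4.20 is not reproduced here; a minimal sketch of the idea, using the built-in Histogram and Plot rather than the package's own graphics commands, might look like the following (the bin and plotting-range choices are guesses):
expdata = SimulateExp[200, 1/2];
Show[Histogram[expdata, Automatic, "PDF", ChartStyle -> Gray],
  Plot[(1/2) E^(-x/2), {x, 0, 15}, PlotStyle -> Black]]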
Figure 4.20 - Histogram of 200 simulated exp(1/2) observations with exp(1/2) density function
which is not expressible in closed form, nor is its inverse. But recall that Mathematica has
the function
Quantile[distribution, p]
in the kernel, which returns the inverse c.d.f. evaluated at p for the given distribution. For
example, for the N(0, 2²) distribution,
CDF[NormalDistribution[0, 2], 1.5]
0.773373
Quantile[NormalDistribution[0, 2], .773373]
1.5
These computations are telling us that if F is the N(0, 2²) c.d.f., then F(1.5) = .773373, and correspondingly F^(−1)(.773373) = 1.5. The following command simulates a list of n normal
observations, using Theorem 1, i.e., by computing F^(−1)(U) for the desired number of pseudo-
random uniform random variables U. The dotplot of the observations in Figure 21 illustrates
that the shape of the empirical distribution is appropriate.
SimNormal[n_, μ_, σ_] := Table[Quantile[NormalDistribution[μ, σ], RandomReal[]], {n}]
Figure 4.21 - 200 simulated N(0, 2²) observations
Moment-Generating Functions
In many interesting applications of probability we need to find the distribution of the
sum of independent random variables:
Y = X1 + X2 + ⋯ + Xn
In Example 2 of Section 2.6 we simulated such sample means, and also observed by properties of expectation that E[X̄] = μ and Var(X̄) = σ²/n. But can we draw conclusions analytically about the entire distribution of X̄? Remember that at least for large sample sizes, it appeared from the simulation experiment that this distribution was approximately normal.
A new probabilistic device, defined below, is very useful for problems involving the
distributions of sums of random variables.
Definition 1. The moment-generating function (m.g.f.) of a probability distribution is the
function
M(t) = E[e^(tX)]   (8)
valid for all real numbers t such that the expectation is finite.
Notice that M is just a real-valued function of a real variable t. It would be calculated as
M(t) = Σ_{k ∈ E} e^(tk) q(k)   (9)
in the discrete case, where q is the p.m.f. and E is the state space of X, or as
M(t) = ∫_E e^(tx) f(x) dx   (10)
in the continuous case, where f is the p.d.f. of X.
For a random variable X ~ N(μ, σ²), write Z = (X − μ)/σ, which is standard normal. Then
M(t) = E[e^(tX)] = e^(μt) E[e^(σt·(X − μ)/σ)] = e^(μt) E[e^(σt Z)] = e^(μt) M_Z(σt)
The standard normal m.g.f. M_Z(t) = e^(t²/2) is obtained by completing the square on z in the exponent of the defining integral; the remaining integral is that of a normal density, which equals 1. Therefore the general N(μ, σ²) m.g.f. is
M(t) = e^(μt) M_Z(σt) = e^(μt) e^(σ²t²/2) = e^(μt + σ²t²/2),   for all real t   (12)
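Formula (12) is easy to confirm symbolically in Mathematica (a check, not a cell from the text):
Integrate[E^(t x) PDF[NormalDistribution[μ, σ], x], {x, -Infinity, Infinity},
  Assumptions -> σ > 0 && Element[t, Reals]]
which returns E^(t μ + t² σ²/2), in agreement with (12).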
etc.; in general, the nth derivative of the m.g.f. evaluated at 0 is M^(n)(0) = E[X^n]. So, the function M(t) generates all of the moments of the distribution of X. You will prove this in Exercise 10.
As interesting as the previous fact might be, it takes second place to the following
theorem, whose proof is beyond the level of this book: the m.g.f. is unique to the distribu-
tion, meaning that no two distributions share the same m.g.f. This implies that if we can
recognize the m.g.f. of a transformed random variable Y by knowing the m.g.f.'s of those
random variables that Y depends on, then we have the distribution of Y. This line of
reasoning is known as the moment-generating function technique. We use it in two exam-
ples below to prove important results about the normal distribution.
Example 7 Let X ~ N(μ, σ²). Use the moment-generating function technique to show that Y = aX + b is normally distributed.
From formula (12), the m.g.f. of X is M(t) = exp(μt + (1/2)σ²t²). Therefore, the m.g.f. of Y is
M_Y(t) = E[e^(tY)]
       = E[e^(t(aX + b))]
       = e^(tb) E[e^(taX)]
       = e^(tb) M(ta)
       = e^(tb) e^(μta + (1/2)σ²(ta)²)
       = e^((aμ + b)t + (1/2)σ²a²t²)
The last expression is the m.g.f. of the N(aμ + b, a²σ²) distribution; hence by the uniqueness of m.g.f.'s, Y must have this distribution.
Activity 5 Use the moment-generating function technique to show the normal standardization theorem: if X ~ N(μ, σ²), then Z = (X − μ)/σ ~ N(0, 1).
For Y = a1 X1 + a2 X2 + ⋯ + an Xn, where the Xi are independent with Xi ~ N(μi, σi²), the m.g.f. of Y is
M_Y(t) = E[e^(tY)]
       = E[e^(t(a1 X1 + a2 X2 + ⋯ + an Xn))]
       = E[∏_{i=1}^n e^(t ai Xi)]
       = ∏_{i=1}^n E[e^(t ai Xi)]
       = ∏_{i=1}^n e^(t ai μi + (1/2) σi² ai² t²)
       = e^(t Σ ai μi + (1/2)(Σ σi² ai²) t²)
The latter is the m.g.f. of N(Σ ai μi, Σ σi² ai²), which completes the proof.
A special case of the last example deserves our attention. Let X1, X2, ..., Xn be a random sample from the N(μ, σ²) distribution. In particular, the Xi's are independent. Consider the sample mean X̄. It can be rewritten as
X̄ = (Σ Xi)/n = (1/n) X1 + (1/n) X2 + ⋯ + (1/n) Xn
so that, applying the last example with each ai = 1/n, μi = μ, and σi² = σ²,
X̄ ~ N(μ, σ²/n)   (15)
This is one of the most important results in probability and statistics. The sample mean of a normal random sample is a random variable with a normal distribution also, whose mean is the same as the mean μ of the distribution being sampled from, and whose variance σ²/n is 1/n times the underlying variance. We have seen these results about the mean and variance before, but we have just now proved the normality of X̄. Let us close this section with an illustration of the power of this observation.
Example 9 A balance is suspected of being in need of calibration. Several weight measure-
ments of a known 1 gram standard yield the data below. Give a probability estimate of the
likelihood that the balance needs recalibration. (Assume that the standard deviation of
measurements is σ = .0009 grams.)
datalist = {1.0010, .9989, 1.0024, 1.0008, .9992, 1.0015,
   1.0020, 1.0004, 1.0018, 1.0005, 1.0013, 1.0002};
We treat the situation like a court case: the balance is presumed to be correctly calibrated (μ = 1) unless it is proven guilty beyond a reasonable doubt. If the 12 observations together are unlikely to have occurred if the balance was correctly calibrated, then we convict the balance of the crime of miscalibration. We can summarize the information contained in the raw data by the sample mean X̄; then if the particular x̄, a possible value of the random variable X̄, that we observe is a very unlikely one when μ = 1, we would conclude that μ is most probably not 1.
First, we let Mathematica give us the observed value x̄ of the sample mean.
Mean[datalist]
1.00083
So the observed sample mean is a bit bigger than 1. But is it big enough to constitute
"beyond reasonable doubt?" As a random variable, under the "assumed innocence" of the
balance, X̄ would have the N(1, .0009²/12) distribution by formula (15). Now the likelihood that X̄ comes out to be identically equal to 1.00083 is 0 of course, since X̄ is continuously distributed. But that is not a fair use of the evidence. It is more sensible to ask how large is the probability that X̄ ≥ 1.00083 if the true μ = 1. If that is very small then we have a lot of probabilistic evidence against μ = 1. We can compute:
1 - CDF[NormalDistribution[1, .0009/Sqrt[12]], 1.00083]
0.00069995
We now see that if μ = 1 were true, it would only be about .0007 likely for us to have observed a sample mean as large or larger than we did. This is very strong evidence that the true μ is greater than 1, and the balance should be recalibrated. We might say that in deciding this, our risk of an erroneous decision is only about .0007.
It would be a good idea for you to reread this example a couple of times, paying
particular attention to the use of statistical evidence to draw a conclusion. It illustrates the
kind of thinking that you will see frequently if you take a course in statistics.
Activity 6 How sensitive is the decision we made in Example 9 to the largest observa-
tion 1.0024? Delete that and rerun the problem. Do we still prove the balance guilty
beyond a reasonable doubt? What is the new error probability?
Exercises 4.3
1. Use the c.d.f. technique to find the density function of (a) Y = X²; (b) Y = ∛X, if X is a continuous random variable with density function f(x) = 3x², x ∈ (0, 1).
2. Use the c.d.f. technique to find the density function of U = Z² if Z ~ N(0, 1). This density is called the chi-square density, and will be of interest to us in a later section.
3. Explain, with justification, how to simulate a uniform(a, b) observation using a uniform(0, 1) observation.
4. Prove this converse of the simulation theorem. If F is the strictly increasing, continuous c.d.f. of the random variable X, then the random variable U = F(X) has the uniform(0,1) distribution.
5. (Mathematica) Write a Mathematica command to simulate a list of n observations from the distribution whose density function is f(x) = 3x², x ∈ (0, 1). Run the command several times, and compare histograms of data lists to the density function.
6. If a random temperature observation in Fahrenheit is normally distributed with mean 70
and standard deviation 5, find the probability that the temperature recorded in degrees
Celsius exceeds 20.
7. Derive the moment-generating function of the uniform(a, b) distribution.
8. Derive the moment-generating function of the geometric(p) distribution.
9. A residence has three smoke alarms. The owners test them every three months to be sure
that they are functioning; let random variables X1 , X2 , and X3 be 1 if alarm 1, 2, or 3
respectively is still functioning and 0 if not. Assume that the alarms operate independently
and each functions with probability 2/3. Find the moment-generating function of the product
X1 X2 X3 . What does this product random variable represent, and what p.m.f. corresponds to
the m.g.f. of the product?
10. Prove that if X is a random variable with moment-generating function M(t), then the nth derivative of the m.g.f. at 0 is the nth moment, i.e., M^(n)(0) = E[X^n].
11. Derive the moment-generating function of the Bernoulli(p) distribution (see Activity 4), and use it to show that the sum of n i.i.d. Bernoulli(p) random variables has the binomial(n, p) distribution.
12. Derive the moment-generating function of the Poisson(μ) distribution, and use it to show that if X1 ~ Poisson(μ1), X2 ~ Poisson(μ2), …, Xn ~ Poisson(μn) are independent random variables, then X1 + X2 + ⋯ + Xn has a Poisson distribution with parameter μ1 + μ2 + ⋯ + μn.
13. Recall from Section 4.1 the example data set on National League batting averages.
Accepting as correct the estimates of the mean batting average (.249) and the standard
deviation (.0582), what is the probability that a sample of 20 such batting averages will have
a sample mean of less than .250? Greater than .280?
14. As in Exercise 4.1.14, suppose that the times people spend being served at a post office
window are normal with mean 1.5 minutes and variance .16 minutes. What are the probabili-
ties that
(a) the average service time among ten people requires more than 1.7 minutes;
(b) the total service time among ten people requires less than 16 minutes?
Let X represent the combined losses from the three cities. Calculate E[X³].
Definition 1. The ith order statistic X_(i) of a random sample X1, X2, …, Xn from a population that is characterized by a p.d.f. (or p.m.f.) f(x) is the ith smallest member of the sample.
The smallest and largest order statistics X_(1) and X_(n) estimate the smallest and largest possible states of the random variable X whose density function (or mass function) is f(x).
The sample range, which is the difference:
X_(n) − X_(1)   (3)
estimates the maximum spread of values of X. One of the most important statistics based on the order statistics is the sample median M of the random sample, defined by
M = X_((n+1)/2)                          if n is odd
    (X_(n/2) + X_(n/2 + 1)) / 2          if n is even   (4)
i.e., the middle value of the ordered sample in the odd case, and the average of the two
middle values in the even case. The sample median estimates the median of the probability
distribution of X , which is the point to the left of which is half of the total probability. So
order statistics and functions of them play an important role in statistical estimation, and we
need to know about their probability distributions.
Activity 1 If the observed values of a random sample are 23, 4, 16, 7, and 8, what are
the sample median and sample range? If an additional sample value of 9 is included,
what are the new median and range?
Our goal in this section is to derive formulas for the joint density, single variable
marginal densities, and two-variable joint marginal densities of order statistics. We can then
use these to make inferences about populations based on observed values of the order
statistics; for instance, we can answer questions such as: if a population has the characteris-
tics that it is claimed to have, how likely is it that the order statistics would come out as they
did? If the answer is: not very likely, then the underlying assumptions about the population
are probably wrong. We will see other applications of order statistics in examples and
exercises, especially involving the minimum and maximum.
To avoid the sticky problem of sample values possibly being equal, meaning that there is not a unique way of mapping sample values X1, X2, …, Xn to order statistics
X_(1), X_(2), …, X_(n), we make the simplifying assumption that the sample is taken from a continuous distribution. Then with probability 1, strict inequality prevails:
X_(1) < X_(2) < ⋯ < X_(n)   (5)
We will not give a rigorous derivation of the joint distribution of order statistics,
which requires knowledge of how joint substitutions in multiple integrals work for non-
invertible transformations. An intuitive argument in the next paragraph actually sheds more
light on why the formula for which we will argue is true. To set it up, recall that the meaning
of a probability density is that the approximate probability that the random variable takes a
value in the infinitesimally small interval x, x d x around the point x is f x d x, where f
is the density of the random variable.
Consider the case of a random sample of size 3, X1 , X2 , X3 from a continuous
distribution with density function f x. The joint density of the three sample variables is
f(x1, x2, x3) = f(x1) f(x2) f(x3)
Now except for a set of outcomes of probability 0 (where X values may be tied), the sample
space can be partitioned into 3! = 6 disjoint events:
A1 = {ω : X1(ω) < X2(ω) < X3(ω)}
A2 = {ω : X1(ω) < X3(ω) < X2(ω)}
A3 = {ω : X2(ω) < X1(ω) < X3(ω)}
A4 = {ω : X2(ω) < X3(ω) < X1(ω)}
A5 = {ω : X3(ω) < X1(ω) < X2(ω)}
A6 = {ω : X3(ω) < X2(ω) < X1(ω)}
On each individual event Ai, the values of the order statistics relate directly to the sample values; for instance for outcomes ω in A3, the values of the order statistics are X_(1)(ω) = X2(ω), X_(2)(ω) = X1(ω), X_(3)(ω) = X3(ω). Consequently, if we try to write an expression for the joint probability that the three order statistics take values in infinitesimally small intervals (y_i, y_i + dy_i), where y1 < y2 < y3, we have for outcomes in A3:
P[X_(1) ∈ (y1, y1 + dy1), X_(2) ∈ (y2, y2 + dy2), X_(3) ∈ (y3, y3 + dy3), A3]
  = P[X2 ∈ (y1, y1 + dy1), X1 ∈ (y2, y2 + dy2), X3 ∈ (y3, y3 + dy3)]   (6)
  ≈ f(y1) dy1 · f(y2) dy2 · f(y3) dy3
  = f(y1) f(y2) f(y3) dy1 dy2 dy3
A derivation similar to formula (6) can be done for each of the 6 disjoint events Ai, and the right side would be the same in each case (try at least one other case yourself). Thus the total probability that the order statistics jointly take values in the respective intervals (y_i, y_i + dy_i) is:
P[X_(1) ∈ (y1, y1 + dy1), X_(2) ∈ (y2, y2 + dy2), X_(3) ∈ (y3, y3 + dy3)] ≈ 6 f(y1) f(y2) f(y3) dy1 dy2 dy3   (7)
This suggests that the joint density function of X_(1), X_(2), and X_(3) is 6 f(y1) f(y2) f(y3), non-zero on the set of states for which y1 < y2 < y3. Generalizing to a random sample of size n motivates the following theorem.
Theorem 1. The joint density of the order statistics of a random sample of size n from a continuous distribution with p.d.f. f(x) is
f(y1, y2, …, yn) = n! f(y1) f(y2) ⋯ f(yn);   y1 < y2 < ⋯ < yn   (8)
For instance, for a random sample of size 4 from the distribution with density f(x) = 2x on (0, 1), Theorem 1 gives the joint density 24 (2y1)(2y2)(2y3)(2y4) on y1 < y2 < y3 < y4, and
P[X_(1) < .1, X_(4) > .9] = ∫_0^.1 ∫_.9^1 ∫_y1^y4 ∫_y2^y4 24 (2y1)(2y2)(2y3)(2y4) dy3 dy2 dy4 dy1
                          = ∫_0^.1 ∫_.9^1 ∫_y1^y4 24 (2y1)(2y2)(2y4)(y4² − y2²) dy2 dy4 dy1
                          = 0.0185368
The probability that the smallest order statistic is less than .1 and the largest exceeds .9 is about .0185.
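Not from the text: a quick Monte Carlo confirmation of the value .0185, assuming (as in the computation above) a sample of size 4 from the density f(x) = 2x, simulated through its inverse c.d.f. X = √U.
SeedRandom[7];
samples = Table[Sqrt[RandomReal[{0, 1}, 4]], {100000}];  (* 100,000 samples of size 4 *)
N[Count[samples, s_ /; Min[s] < .1 && Max[s] > .9]/Length[samples]]
The resulting proportion should be near 0.0185.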
Activity 2 In the example above, use the c.d.f. technique to find P[X_(1) < .1] and P[X_(4) > .9] separately. Do you expect the product of these to equal P[X_(1) < .1, X_(4) > .9]?
Next we show the formulas for the marginal densities of single order statistics and
pairs of order statistics. We could derive these formally by setting up multiple integrals of
the joint density with respect to all of the other order statistics, but again it is instructive to
take the more intuitive approach to motivating the formulas.
Theorem 2. (a) The marginal density of the ith order statistic in a random sample of size n from a continuous distribution with p.d.f. f(x) and c.d.f. F(x) on state space (a, b) is:
f_i(y_i) = [n!/((i − 1)!(n − i)!)] F(y_i)^(i−1) [1 − F(y_i)]^(n−i) f(y_i);   a < y_i < b   (9)
(b) The joint marginal density of the jth and kth order statistics (j < k) in a random sample of size n from a continuous distribution with p.d.f. f(x) and c.d.f. F(x) on state space (a, b) is:
f_{jk}(y_j, y_k) = [n!/((j − 1)!(k − j − 1)!(n − k)!)] F(y_j)^(j−1) [F(y_k) − F(y_j)]^(k−j−1) [1 − F(y_k)]^(n−k) f(y_j) f(y_k);   a < y_j < y_k < b   (10)
Proof. (a) (Informal) In order for the ith smallest order statistic X_(i) to be in the infinitesimal interval (y_i, y_i + dy_i), one of the sample values X_j must be in that interval, exactly i − 1 of them must be less than y_i, and the remaining n − i of them must exceed y_i + dy_i. A multinomial experiment is generated with n trials corresponding to the sample values, with three success categories (sample value < y_i), (sample value in (y_i, y_i + dy_i)), and (sample value > y_i + dy_i). The success probabilities for the categories are approximately F(y_i), f(y_i) dy_i, and 1 − F(y_i), respectively. The appropriate multinomial probability of i − 1 type 1 successes, 1 type 2 success, and n − i type 3 successes is therefore:
P[X_(i) ∈ (y_i, y_i + dy_i)] ≈ [n!/((i − 1)! 1! (n − i)!)] F(y_i)^(i−1) (f(y_i) dy_i)^1 (1 − F(y_i))^(n−i)
On slight rearrangement, this equation implies the density in formula (9) for the random variable X_(i).
(b) Do this argument yourself in the next Activity.
Activity 3 Intuitively justify the form of the joint density of X_(j) and X_(k) in formula (10). (Hint: In order for both X_(j) ∈ (y_j, y_j + dy_j) and X_(k) ∈ (y_k, y_k + dy_k) to occur, how many sample values must fall into what intervals, and with what probabilities?)
sample median and superimpose a scaled histogram of simulated medians on a graph of the
p.d.f. of X3 . Lastly, if the observed values of the random sample were .45, .76, .82, .92,
and .68, does the value of the sample median give you cause to seriously doubt the hypothe-
sis that the sample was taken from the uniform(0,1) distribution?
At the heart of all of the questions posed in this example is the p.d.f. of X_(3). Since we have n = 5, i = 3, f(x) = 1, and F(x) = x for x ∈ (0, 1), formula (9) of Theorem 2 yields:
f_3(y3) = [5!/(2!·2!)] y3² (1 − y3)² · 1 = 30 (y3² − 2 y3³ + y3⁴);   y3 ∈ (0, 1)
Notice that 1/2 is the median of the distribution of each sample value, so it is interesting to know if the distribution of the sample median X_(3) puts half of its weight to the left of the distributional median m = 1/2 and half to the right. We compute:
P[X_(3) ≤ 1/2] = ∫_0^.5 30 (y3² − 2 y3³ + y3⁴) dy3 = 0.5
E[X_(3)] = ∫_0^1 30 y3 (y3² − 2 y3³ + y3⁴) dy3 = 1/2
Therefore, considered as an estimator of the distributional median m = .5, the sample median
has the desirable property that its expected value is m.
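Both integrals are quick to confirm in Mathematica (a check, not a cell from the text):
Integrate[30 (y^2 - 2 y^3 + y^4), {y, 0, 1/2}]
Integrate[y 30 (y^2 - 2 y^3 + y^4), {y, 0, 1}]
Each returns 1/2, matching the two computations above.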
The program below simulates a desired number of sample medians of a desired
sample size n from a given population distribution. The sample itself is taken and sorted,
then the median of that sample is computed and added to a list. In computing the sample
median, the Mod function is used to tell the difference between even and odd sample size,
and the appropriate case in formula (4) is employed.
SimulateMedians[nummeds_, n_, dist_] :=
  Module[{medlist = {}, sample, sortedsample, med, medindex},
    If[Mod[n, 2] == 1,
      (* odd case: the median is the middle order statistic *)
      medindex = (n + 1)/2;
      Do[sample = RandomReal[dist, n];
         sortedsample = Sort[sample];
         med = sortedsample[[medindex]];
         AppendTo[medlist, med], {nummeds}],
      (* even case: average the two middle order statistics *)
      medindex = n/2;
      Do[sample = RandomReal[dist, n];
         sortedsample = Sort[sample];
         med = (sortedsample[[medindex]] + sortedsample[[medindex + 1]])/2;
         AppendTo[medlist, med], {nummeds}]];
    medlist]
Now we apply the function to our question, simulating 200 random samples, each of size 5.
The histogram of sample medians and the plot of the density function of the sample median
follow.
Needs["KnoxProb7`Utilities`"]
SeedRandom[451623]
medianlist = SimulateMedians[200, 5, UniformDistribution[{0, 1}]];
g1 = Histogram[medianlist, 8, "ProbabilityDensity", ChartStyle -> Gray];
g2 = Plot[30 (y^2 - 2 y^3 + y^4), {y, 0, 1}, PlotStyle -> Black];
Show[g1, g2, BaseStyle -> 8]
Figure 4.22 - Histogram of 200 simulated medians of samples of size 5 from unif(0,1)
Visually it does appear that the distribution of the sample median is symmetric about the
distributional median of 1/2 in this case, and the histogram of the simulated medians follows
the density function fairly well. (Try to check the symmetry mathematically.)
Finally, if the observed sample values are as given in the problem, then the sample
median is the middle value of the sorted sample, which is .76. We ask the question: if the
uniform(0,1) distribution is the correct one, what is the probability that the sample median is
.76 or more? This probability is computed in the integral below.
P[X_(3) ≥ .76] = ∫_.76^1 30 (y3² − 2 y3³ + y3⁴) dy3 = 0.0932512
So it is only about 9.3% likely that a sample of size 5 would produce such a large median.
This is suspicious, but it does not provide conclusive evidence against the hypothesis that
the underlying distribution is uniform(0,1).
Example 3 Using the same assumptions as Example 2, find the p.d.f. of the sample range X_(5) − X_(1) and find P[X_(5) − X_(1) ≤ .5].
By formula (10) of Theorem 2, the joint p.d.f. of X_(1) and X_(5) is
f_{15}(y1, y5) = [5!/(0!·3!·0!)] F(y1)⁰ (F(y5) − F(y1))³ (1 − F(y5))⁰ f(y1) f(y5)
              = 20 (y5 − y1)³,   0 < y1 < y5 < 1
Then if we denote the sample range by Y = X_(5) − X_(1), the graph of the region of (y1, y5) pairs satisfying Y ≤ y in Figure 23 indicates how the c.d.f. of Y can be computed.
Figure 4.23 - Region of integration for Y = X_(5) − X_(1) ≤ y
Using our formula for f_{15}, the c.d.f. of the range is:
F(y) = ∫_0^y ∫_0^y5 20 (y5 − y1)³ dy1 dy5 + ∫_y^1 ∫_{y5−y}^{y5} 20 (y5 − y1)³ dy1 dy5
     = y⁵ + 5 (1 − y) y⁴
Simplifying a bit, the c.d.f. of Y is F(y) = 5y⁴ − 4y⁵, and the p.d.f. of the range is its derivative, f(y) = 20y³ − 20y⁴, y ∈ (0, 1). Substituting directly into the c.d.f. formula,
P[X_(5) − X_(1) ≤ .5] = F(.5) = 5(.5)⁴ − 4(.5)⁵ = .1875.
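A simulation sanity check (not a cell from the text): estimate P[X_(5) − X_(1) ≤ .5] by simulating many ranges of samples of size 5 from uniform(0,1).
SeedRandom[3];
ranges = Table[With[{s = RandomReal[{0, 1}, 5]}, Max[s] - Min[s]], {100000}];
N[Count[ranges, r_ /; r <= 0.5]/Length[ranges]]
The proportion should be close to the exact value .1875.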
Example 4 Find the probability distribution of the sample median M of a sample of size 4 from the distribution with p.d.f. f(x) = 3x²; x ∈ (0, 1). Find also the expected value of M.
Since we are in the case of even sample size, M = (X_(2) + X_(3))/2. Integrating f from 0 to x gives easily that the underlying c.d.f. is F(x) = x³; x ∈ (0, 1). The joint density of X_(2) and X_(3) is, by formula (10) of Theorem 2,
f_{23}(y2, y3) = 216 y2⁵ (y3² − y3⁵);   0 < y2 < y3 < 1
The c.d.f. of the median is then
G(m) = P[M ≤ m] = P[X_(2) + X_(3) ≤ 2m]   (11)
Figure 4.24 - Regions of integration for X_(2) + X_(3) ≤ 2m: (a) the case m ≤ 1/2; (b) the case m > 1/2
The region of integration for the last probability in formula (11) has a different shape depending on whether or not 2m ≤ 1, that is, m ≤ 1/2. Part (a) of Figure 24 shows the case where m ≤ 1/2, in which the boundary line has slope −1 and intercept 2m ≤ 1. In that case we can just integrate out y3 first between the two slanted boundaries, and let y2 range from 0 to m. The c.d.f. of M in this case is:
Case m ≤ 1/2
G1[m_] := Integrate[216 y2^5 (y3^2 - y3^5), {y2, 0, m}, {y3, y2, 2 m - y2}];
G1[m]
(74 m^9)/7 - (1024 m^12)/77
gdensity1[m_] := D[G1[m], m];
gdensity1[m]
In the case where m > 1/2 illustrated by part (b) of Figure 24, the region of integration is the more complicated polygonal region that is shaded. But it is easier to complement and integrate over the small triangle in the upper right corner as follows:
Case m > 1/2
G2[m_] := 1 - Integrate[216 y2^5 (y3^2 - y3^5), {y3, m, 1}, {y2, 2 m - y3, y3}];
G2[m]
1 - (162 m)/11 + (648 m^2)/7 - 320 m^3 + 648 m^4 - (5184 m^5)/7 + 384 m^6 - (438 m^9)/7 + (1024 m^12)/77
gdensity2[m_] := D[G2[m], m]
gdensity2[m]
-(162/11) + (1296 m)/7 - 960 m^2 + 2592 m^3 - (25920 m^4)/7 + 2304 m^5 - (3942 m^8)/7 + (12288 m^11)/77
To compute the expected value of M we must integrate m times the density separately over the two regions and add. The command below shows that E[M] ≈ .771429.
Integrate[m gdensity1[m], {m, 0, .5}] + Integrate[m gdensity2[m], {m, .5, 1}]
0.771429
module failures occur independently. When a failure of any module occurs, the whole
system must be inspected and repaired or replaced. Find the expected time T at which this
happens.
We may consider the three failure times of modules, X1, X2, X3, as a random sample from the given distribution. The time T is then just the first order statistic X_(1). For the sake of variety and review, this time let us use first principles to find the c.d.f. and density function of X_(1), and then check to see that the result agrees with formula (9).
First we need to determine the value of the constant c, by forcing the integral of the density function over the given state space to equal 1:
Solve[Integrate[c x (40 - x), {x, 0, 40}] == 1, c]
{{c -> 3/32000}}
F[t_] := Integrate[(3/32000) x (40 - x), {x, 0, t}];
F[t]
(3 t^2)/1600 - t^3/32000
The c.d.f. of the system failure time T = X_(1) is, by independence of the modules,
G(t) = P[T ≤ t] = 1 − P[T > t]
     = 1 − P[X1 > t] P[X2 > t] P[X3 > t]
     = 1 − (1 − F(t))³
Defining the density of a single module's failure time in Mathematica:
f[t_] := (3/32000) t (40 - t)
Differentiating G gives the density of T as 3(1 − F(t))² f(t), so the expected time to system failure is
E[T]:
N[Integrate[t 3 (1 - F[t])^2 f[t], {t, 0, 40}]]
12.2857
Exercises 4.4
1. The sample median is less sensitive to outlying data values than the sample mean is, and
for this reason some statisticians prefer the median as an estimator of the center of a distribu-
tion. If a random sample consists of the observed values 6.1, 3.5, 4.8, 5.5, and 4.1 explain
how one data value can be added to this list that changes the median by less than .5 but
changes the mean by 10.
2. Let X1, X2, X3 be a random sample from the distribution with density function f(x) = |x|; x ∈ (−1, 1). Find (a) P[X_(1) ≤ .5]; (b) P[X_(3) ≤ .5]; (c) P[X_(1) ≤ .5, X_(3) > .5].
3. Suppose that X1 , X2 is a random sample of size 2 from the distribution with density
function f(x) = 1/2; x ∈ (−1, 1). Find by direct probabilistic methods the joint c.d.f. F(y1, y2) = P[X_(1) ≤ y1, X_(2) ≤ y2] of the order statistics of the sample, and take the partial
derivatives with respect to both y1 and y2 to obtain the joint density. Check that the result is
the same as in Theorem 1.
4. Find the p.d.f. of the sample range of a random sample of size 4 from the uniform distribution on (−1, 1).
5. (Mathematica) Write a Mathematica command to simulate a desired number of sample
medians of a random sample of n observations from the distribution whose density function
is f(x) = 2x, x ∈ (0, 1). What is the median of the underlying distribution? Use the command to produce a scaled histogram of 200 sample medians in the case n = 11, and superimpose on that graph the graph of the p.d.f. of the sample median. Finally estimate E[M] and Var(M) using simulation in the case of a sample of size 11.
6. On a particularly uninspiring day in Mathematical Statistics class, students get up and
leave independently of one another at random, uniformly distributed times in the interval
0, 50 minutes. The class begins with 8 students. The dogged professor will continue class
until the last student leaves. Find the probability that the class lasts no longer than 40
minutes.
7. (Mathematica) Approximate the probability that the minimum value in a random sample
of size 5 from the standard normal distribution is positive.
where x is the amount of a claim in thousands. Suppose that three such claims will be made.
What is the expected value of the largest of the three claims?
4.5 Gamma Distributions
Main Properties
The family of gamma distributions is one of the most important of all in probability
and statistics. In this section we will learn about the distributional properties of the general
gamma distribution, then we will discuss some of the special features and applications of an
important instance: the exponential distribution. Another instance of the gamma family, the
chi-square distribution, will be introduced in the next section.
Definition 1. A continuous random variable X has the gamma distribution if its probability density function is:
f(x; α, β) = [1/(β^α Γ(α))] x^(α−1) e^(−x/β),   x > 0   (1)
where Γ(α) denotes the gamma function Γ(α) = ∫_0^∞ t^(α−1) e^(−t) dt, which can be obtained by integration by parts for many values of α. We will consider its properties in more detail in a moment.
Activity 1 Use the substitution u = x/β to show that the integral of the gamma density is 1.
Needs"KnoxProb7`Utilities`";
fx_ : PDFGammaDistribution2, 3, x;
PlotContsProbfx, x, 0, 18, 1, 5,
Ticks 1, 3, 5, 7, 9, 11, 13, 15, Automatic,
ShadingStyle Gray
Figure 4.25 - The Γ(2,3) density function
The area shaded is the probability P[1 ≤ X ≤ 5], which can be found by integrating the density function over the interval (1, 5).
N[Integrate[f[x], {x, 1, 5}]]
0.451707
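The same number can be obtained from the built-in c.d.f., without integrating the density by hand (a check, not a cell from the text):
N[CDF[GammaDistribution[2, 3], 5] - CDF[GammaDistribution[2, 3], 1]]
This also returns 0.451707.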
Activity 2 Try to approximate the median m of the Γ(2,3) distribution by using the CDF function. (Remember that the median of a distribution is such that exactly half of the total probability lies to its right.) Then check your guess by using the Quantile command.
Figure 4.26 - (a) Gamma densities, α = .5 (dashed), 1 (gray), 2 (thin), 3 (thick) and β = 3; (b) Gamma densities, β = .5 (dashed), 1 (gray), 1.5 (thin), 2 (thick) and α = 2
The graphs in Figure 26(a) illustrate the dependence of the gamma density on the α parameter for α = .5, 1, 2, and 3. In all cases we fix β at 3. The two cases where α ≤ 1 show a markedly different behavior: for α = .5 the density is asymptotic to the y-axis, and for α = 1 there is a non-zero y-intercept. (What is it?) You can check that in fact α = 1 is a break point, beyond which the density graph takes on the humpbacked shape of the other two curves. Note that the probability weight shifts to the right as α increases.
In Figure 26(b) we examine the dependence of the shape of the gamma density on the β parameter for β = .5, 1, 2, and 3. In each case α is set at 2. As β increases, notice the
can execute the Manipulate command below to see the effect of changes to the parameters on
the density graph.
The gamma function upon which the gamma density depends has some interesting properties that permit the density to be written more explicitly in some cases. First,
Γ(1) = ∫_0^∞ e^(−t) dt = 1.   (3)
Second, by using integration by parts, we can derive the following recursive relationship:
Γ(n) = ∫_0^∞ t^(n−1) e^(−t) dt
     = [−t^(n−1) e^(−t)]_0^∞ + ∫_0^∞ (n − 1) t^(n−2) e^(−t) dt   (4)
     = 0 + (n − 1) ∫_0^∞ t^(n−2) e^(−t) dt
     = (n − 1) Γ(n − 1)
Iterating gives Γ(n) = (n − 1)!; defining 0! to be 1 as is the custom, this formula gives values to the gamma function for all positive integers n. It also turns out that Γ(1/2) = √π, as Mathematica shows below.
Integrate[t^(-1/2) E^(-t), {t, 0, Infinity}]
Sqrt[π]
The first two moments of the gamma distribution depend on the parameters α and β in an easy way.
Theorem 1. If X ~ Γ(α, β) then
(a) E[X] = αβ;
(b) Var(X) = αβ².
Proof: (a) We have
E[X] = ∫_0^∞ t · [1/(β^α Γ(α))] t^(α−1) e^(−t/β) dt.
But by multiplying and dividing by constants this integral can be rearranged into
E[X] = β [Γ(α + 1)/Γ(α)] ∫_0^∞ [1/(β^(α+1) Γ(α + 1))] t^α e^(−t/β) dt.
The integral on the right is that of a Γ(α+1, β) density; hence it reduces to 1. Therefore,
E[X] = β Γ(α + 1)/Γ(α) = β · αΓ(α)/Γ(α) = αβ.
(b) We leave the proof of this part as Exercise 3.
The next result on sums of independent gamma random variables is crucial to much
of statistical inference and applied probability. It says that if the random variables share the
same β parameter, then the sum also has the gamma distribution. This will yield a very
fundamental property of the Poisson process in Chapter 6, and it will loom large in the study
of inference on the normal variance parameter in Section 4.6.
Theorem 2. (a) The moment-generating function of the Γ(α, β) distribution is
M(t) = E[e^(tX)] = 1/(1 − βt)^α   (6)
(b) If X1, X2, ..., Xn are independent random variables with Xi ~ Γ(αi, β), then Y = X1 + X2 + ⋯ + Xn has the Γ(α1 + α2 + ⋯ + αn, β) distribution.
Proof. (a) By definition,
M(t) = E[e^(tX)] = ∫_0^∞ e^(tx) [1/(β^α Γ(α))] x^(α−1) e^(−x/β) dx = ∫_0^∞ [1/(β^α Γ(α))] x^(α−1) e^(−x(1/β − t)) dx.
In the integral on the right, for fixed t we make the change in parameter δ = β/(1 − βt). Notice that δ = β · 1/(1 − βt) and 1/δ = 1/β − t. Then,
M(t) = [1/(β^α Γ(α))] ∫_0^∞ x^(α−1) e^(−x/δ) dx
     = (δ^α/β^α) ∫_0^∞ [1/(δ^α Γ(α))] x^(α−1) e^(−x/δ) dx
     = (δ/β)^α · 1
     = 1/(1 − βt)^α.
(b) If Y = X1 + X2 + ⋯ + Xn, then by independence and part (a),
M_Y(t) = E[e^(tX1)] E[e^(tX2)] ⋯ E[e^(tXn)] = ∏_{i=1}^n 1/(1 − βt)^(αi) = 1/(1 − βt)^(α1 + ⋯ + αn),
which is the m.g.f. of the Γ(α1 + ⋯ + αn, β) distribution.
Since the β parameters for the two gamma distributions agree, Theorem 2(b) implies that T1 + T2 ~ Γ(1 + 3, 2) = Γ(4, 2). Thus, the probability P[T1 + T2 > 9] is about .34, as computed below in Mathematica.
1 - CDF[GammaDistribution[4, 2], 9]
0.342296
Activity 3 How would you cope with the problem in the last example if T2 had the Γ(2, 3) distribution?
f[x_, λ_] := PDF[ExponentialDistribution[λ], x]
The exponential distribution has a property that is a bit peculiar. Consider the
picture in Figure 27 of the exp(1.5) density, and two typical times s = .7 and t = 1.1, hence t + s = 1.8. In the first graph, the axis origin is set at (0,0) as usual, and we shade the area to the right of s = .7, and on the second graph we move the axis origin to (t, 0) = (1.1, 0) and shade the area to the right of t + s = 1.8. Though the vertical scales are different, the graphs
look identical. A more careful way of saying this is that the share that the first shaded region
has of the total area of 1 appears to be the same as the share that the second shaded region
has of the total area to the right of t = 1.1. That is,
P[X > s] = P[X > t + s] / P[X > t].
The right side of the equation above is also a conditional probability, which implies that
P[X > s] = P[X > t + s | X > t].   (8)
The intuitive interpretation of equation (8) is as follows. Suppose that we are waiting
for our friend Godot, who will arrive at an exponentially distributed time X. We wait for t
units of time and Godot does not come. Someone else comes by, and asks how likely it is
that it will be s more time units (total time t + s) until Godot gets here. Thus, we are asked
about the conditional probability on the right. We must answer the unconditional probability
P[X > s], which is the same as if we had just begun waiting for Godot at time instant t. Our
waiting has done no good, probabilistically. This characteristic of the exponential distribu-
tion is called memorylessness. It is a great help in analyzing customer service situations in
which times between successive arrivals are exponentially distributed. We will have more to
say about this in Chapter 6.
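Equation (8) is also easy to verify symbolically from the exponential survival function P[X > x] = e^(−λx) given by the c.d.f. below (a check, not a cell from the text):
surv[x_] := E^(-λ x);
Simplify[surv[t + s]/surv[t] == surv[s]]
which returns True for any λ, s, and t.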
Some of the basic properties of the exponential(λ) distribution are below. Three of them, the mean, variance, and moment-generating function, follow directly from its representation as the Γ(1, 1/λ) distribution. The c.d.f. in (10) is a direct integration of the exponential density. You should be sure you know how to prove all of them.
If X ~ exp(λ), then E[X] = 1/λ and Var(X) = 1/λ²   (9)
If X ~ exp(λ), then the c.d.f. of X is F(t) = 1 − e^(−λt), t ≥ 0   (10)
If X ~ exp(λ), then the m.g.f. of X is M(t) = E[e^(tX)] = 1/(1 − t/λ)   (11)
Example 2 If the waiting time for service at a restaurant is exponentially distributed with parameter λ = .5/min, then the expected waiting time is 1/.5 = 2 minutes, and the variance of the waiting time is 1/.5² = 4, so that the standard deviation is 2 minutes also. Using the Mathematica c.d.f. function, the probability that the waiting time will be between 1 and 4 minutes is computed as:
CDF[ExponentialDistribution[.5], 4] - CDF[ExponentialDistribution[.5], 1]
0.471195
Of course, we would get the same result by integrating the density between 1 and 4, as
below.
Integrate[f[x, .5], {x, 1, 4}]
0.471195
Example 3 A cellular phone has a 2 year warranty so that if the phone breaks prior to the
end of the warranty period the owner will be paid a prorated proportion of the original
purchase price of $120 based upon the fraction of time remaining until the 2 year mark.
After 2 years, nothing will be paid. The phone breaks at an exponentially distributed time
with mean 4 years. Find the expected amount that the warranty will pay.
The warranty pays an amount X = 120 (2 − T)/2 if the breaking time T ≤ 2, and zero otherwise. Since the mean of 4 years is the reciprocal of the exponential parameter λ, T ~ exp(1/4). Then the expected value of X is:
N[Integrate[120 ((2 - t)/2) (1/4) E^(-t/4), {t, 0, 2}]]
25.5674
The company offering the warranty will have to build a margin of at least $25.57 into the
purchase price in order to expect to cover their costs.
Exercises 4.5
1. (Mathematica) Suppose that X is a random variable with the Γ(1.5, 1.8) distribution. Evaluate: (a) P[X ≤ 3]; (b) P[X > 5]; (c) P[3 ≤ X ≤ 4].
2. Use the moment-generating function of the gamma distribution in Theorem 2 to derive the
formulas for the mean and variance of the distribution.
3. Derive the formula Var(X) = αβ² for the variance of a gamma random variable by direct
integration.
4. A project requires two independent phases to be completed, and the phases must be
performed back-to-back. Each phase completion time has the exp(4.5) distribution. Find the
probability that the project will not be finished by time 0.5.
5. Redo Exercise 4 under the assumption that the two phases may be performed concurrently.
6. Use calculus to show that if X ~ Γ(2, β) then P[X > 3] is an increasing function of β.
7. If we are waiting for a bus which will arrive at an exponentially distributed time with
mean 10 minutes, and we have already waited for 5 minutes, what is the probability that we
will wait at least 10 more minutes? What is the expected value of the remaining time
(beyond the 5 minutes we have already waited) that we will wait?
8. (Mathematica) If X ~ Γ(4, 1/6), find: (a) a number b such that P[1 ≤ X ≤ b] = .15; (b) a
number c such that P[X > c] = .2.
9. Find the c.d.f. of the Γ(2, 1/λ) distribution analytically.
10. (Mathematica) Gary operates his own landscape business. It is now 12:00 noon, and he
has three lawns scheduled to mow. Suppose that the amounts of time T1 , T2 , and T3 (in
hours) that it takes to complete the mowing jobs have respectively the Γ(1.5, 1), Γ(2.5, 1), and Γ(1, 1) distributions. If he has a 7:00 p.m. date and needs an hour to get ready for it,
what is the probability that he will be on time? Suppose that before he leaves he also
receives a phone call from his sister that lasts an exponentially distributed amount of time
with mean 10 minutes. What now is the probability of being on time (if he accepts the call)?
11. (Mathematica) For what value of β does the Γ(3, β) distribution have median equal to 2?
12. Find a general expression for the pth percentile of the exp(λ) distribution, i.e., a value x_p such that P[X ≤ x_p] = p.
One of the most important areas of application for the exponential distribution is the
Poisson process. Exercises 13–15 preview some of the ideas that are fleshed out in Section
6.2. A constructive definition of the Poisson process is that it is a collection of random variables {N_t}, t ≥ 0, such that for each outcome ω the function mapping time t to state N_t(ω) is a step function starting at 0 and increasing by jumps of size 1 only, which occur at a sequence of times T1, T2, T3, ... such that the times between jumps S_i = T_i − T_{i−1} are independent, identically distributed exp(λ) random variables. Here we take T0 = 0 so that S1 = T1. The constant λ is called the rate parameter of the process.
13. If the first four times between jumps are 1.1, 2.3, 1.5, and 4.2, what are the first four
jump times T1 , T2 , T3 , and T4 ? What are the values of N at times .8, 2.6, 4.1, and 5.9? What
would be a reasonable estimate of the rate parameter of the process?
14. (Mathematica) What is the probability distribution of the nth jump time T_n? For a Poisson process with rate λ = .56, find the probability that T3 ≤ 6.1.
15. (Mathematica) The event N_t ≥ n is equivalent to what event involving T_n? For a Poisson process with rate λ = 2, compute P[N_3.1 ≥ 4] in two different ways.
Sample Problems from Actuarial Exam P
16. An insurance policy reimburses dental expense, X, up to a maximum benefit of 250. The
probability density function for X is: f(x) = c e^(−.004x), x ≥ 0, where c is a constant. Calculate
the median benefit for this policy.
17. An insurance company sells two types of auto insurance policies: Basic and Deluxe. The
time until the next Basic Policy claim is an exponential random variable with mean two
days. The time until the next Deluxe Policy claim is an independent exponential random
variable with mean three days. What is the probability that the next claim will be a Deluxe
Policy claim?
μ = 1. Statistical inference is the problem of using random samples of data to draw conclu-
sions in this way about population parameters like the normal mean Μ.
There are other important inferential situations in statistics that depend on the three
distributions that we will cover in this section. We will only scratch the surface of their
statistical application; the rest is best left for a full course in statistics. But you will complete
this section with a sense of how the chi-square distribution gives information about σ² in N(μ, σ²), how the Student's t-distribution enables inference on μ in N(μ, σ²) without the necessity of knowing the true value of σ², and how the F-distribution applies to problems where two population variances σ1² and σ2² are to be compared. So our plan in this section
will be to define each of these distributions in turn, examine their main properties briefly,
and then illustrate their use in statistical inference problems.
Chi-Square Distribution
If you observe the sample variance S² = Σ_{i=1}^n (X_i − X̄)²/(n − 1) of a random sample X1, X2, ..., Xn from a normal distribution many times, you will find the observed s² values
distributing themselves in a consistent pattern. The command below simulates a desired
number of sample variances in a straightforward way. The argument numvars refers to the
number of sample variances we wish to simulate, sampsize is the common sample size n for
each sample variance, and μ and σ are the normal distribution parameters. In the electronic
version of the text you should try running it yourself several times. The right-skewed shape
of the empirical distribution in Figure 28 should remind you of the gamma distribution. In
fact, we argue in a moment that a rescaled version of S 2 has a probability distribution which
is a special case of the gamma family, defined below.
Needs"KnoxProb7`Utilities`";
SimSampleVariancesnumvars_, sampsize_, Μ_ , Σ_ :
TableVarianceRandomRealNormalDistributionΜ, Σ,
sampsize, i, 1, numvars
SeedRandom984 521;
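The cell that drew Figure 4.28 is not reproduced here; a minimal sketch of the idea using the built-in Histogram with the "PDF" height specification might look like this:
samplevars = SimSampleVariances[200, 20, 0, 1];
Histogram[samplevars, Automatic, "PDF", ChartStyle -> Gray]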
Figure 4.28 - Histogram of 200 sample variances of samples of size 20 from N(0, 1)
Figure 4.29 - χ²(r) densities; r = 5 dashed, r = 6 gray, r = 7 thin, r = 8 thick
Activity 1 Why is the mean of the χ²(r) distribution equal to r? What is the variance of the distribution?
Theorem of Calculus to differentiate with respect to u. Note that Γ(1/2) = √π.) This is the key to understanding what the χ²-distribution has to do with the sample variance S².
Recall that by the normal standardization theorem,
Z_i = (X_i − μ)/σ ~ N(0, 1)   (2)
for each sample member X_i; hence, by the result mentioned above,
U_i = Z_i² = ((X_i − μ)/σ)² ~ χ²(1)   (3)
for each i = 1, 2, ..., n. Moreover, in Exercise 2 of this section you are asked to use moment-generating functions to show that the sum of independent chi-square random variables is also chi-square distributed, with degrees of freedom equal to the sum of the degrees of freedom of the terms. So, the random variable
U = Σ_{i=1}^n U_i = Σ_{i=1}^n ((X_i − μ)/σ)² ~ χ²(n)   (4)
Now consider V = Σ_{i=1}^n ((X_i − X̄)/σ)². Observe that V differs from U only in the sense that in V, μ has been estimated by X̄. Since U has a χ²-distribution, we might hope that V does also, especially in view of the simulation evidence. The following theorem gives the important result.
Theorem 1. If X1, X2, ..., Xn is a random sample from the N(μ, σ²) distribution, and S² is the sample variance, then
(n − 1) S²/σ² ~ χ²(n − 1)   (6)
and, moreover, S² and X̄ are independent.
Recalling that the square of a standard normal random variable is χ²(1), it follows that for the case n = 2,
(n − 1) S²/σ² = (2 − 1) S²/σ² = (X1 − X2)²/(2σ²) ~ χ²(1)   (7)
When we talk about the t-distribution in the next subsection, the second part of Theorem 1 becomes important: S² is independent of X̄. At first glance this almost seems ludicrous, because both S² and X̄ depend functionally on the sample values X1, X2, ..., Xn. To see that it is possible for S² and X̄ to be independent, consider again the case n = 2, where S² is a function of X1 − X2 by formula (7), while X̄ is a function of X1 + X2. We will use the joint moment-generating function of two random variables Y1 and Y2,
M(t1, t2) = E[e^(t1 Y1 + t2 Y2)]   (8)
It has the same basic properties as the single variable m.g.f.; most importantly, it is unique to
the distribution. In particular, if we find that a joint m.g.f. happens to match the joint m.g.f.
of two independent random variables (which is the product of their individual m.g.f.'s), then
the original two random variables must have been independent. This observation plays out
very nicely with Y1 = X1 − X2 and Y2 = X1 + X2, where X1 and X2 are independent N(μ, σ²) random variables. In this case by formula (8), the joint m.g.f. of Y1 and Y2 is
M(t1, t2) = E[e^(t1 Y1 + t2 Y2)]
          = E[e^(t1(X1 − X2) + t2(X1 + X2))]
          = E[e^((t1 + t2) X1 + (t2 − t1) X2)]
          = E[e^((t1 + t2) X1)] E[e^((t2 − t1) X2)]
          = e^(μ(t1 + t2) + (1/2) σ²(t1 + t2)²) · e^(μ(t2 − t1) + (1/2) σ²(t2 − t1)²)
          = e^(0·t1 + (1/2)(2σ²) t1²) · e^(2μ t2 + (1/2)(2σ²) t2²)
In the first line we use the defining formulas for Y1 and Y2, we gather like terms and then use independence of X1 and X2 to factor the expectation in line 3, we use the known formula for the normal m.g.f. in line 4, and the fifth line is a straightforward rearrangement of terms in the exponent of the fourth line, which you should verify. But the final result is that the last formula is the product of the N(0, 2σ²) m.g.f. and the N(2μ, 2σ²) m.g.f. So, Y1 and Y2 must be independent with these marginal distributions. It follows that the sample mean X̄ and the sample variance S² are independent when n = 2, and the result can be extended to higher n.
Example 1 The data below are percentage changes in population between 2000 and 2007 in
a sample of 29 towns in west central Illinois.
Assume that these values form a random sample of population changes from all towns in the
region, which we suppose are approximately normally distributed. Can we obtain a reason-
able estimate of the variability of population changes, as measured by Σ? Is there a way of
measuring our confidence in the estimate?
DotPlot[popchanges]
Figure 4.30 - Distribution of a sample of percentage changes in population
This rather small data set is displayed in the form of a dot plot in Figure 30. There
are a few rather large observations indicating a right skewness in the empirical distribution,
but this is possible in a small sample and there are at least no gross departures from normal-
ity that are visible here; so we will proceed under the assumption that percentage population
changes are normally distributed. The sample variance and standard deviation are:
Ssquared = Variance[popchanges]
S = StandardDeviation[popchanges]
5.07384
2.25252
Hence, the observed s = 2.25252 is a point estimate of the population variability σ. But as
you have probably experienced in such contexts as opinion polls, a more informative
estimate would be equipped with a margin of error, so that we can say with high confidence
that the true Σ is in an interval. This gives us information about the precision of the point
estimate s.
This is where Theorem 1 comes in. The transformed random variable Y = (n − 1) S²/σ² has the χ²(28) distribution for this sample of size 29, so that we can find two endpoints a and b such that the interval (a, b) encloses Y with a desired high probability. Then we will be able to say something about σ² itself, and in turn about σ. Let us demand 90% confidence that Y lands in (a, b). Even under this specification, a and b can be chosen
in infinitely many ways, but a natural way to choose them is to set PY b .05 and
PY a .05, i.e., to split the 10% = .10 error probability into two equal pieces. Then we
can find a and b by computing
a = Quantile[ChiSquareDistribution[28], .05]
b = Quantile[ChiSquareDistribution[28], .95]

16.9279
41.3371
Figure 4.31 - 90% of the area under the χ²(28) density
Manipulating the inequalities inside the probability gives

.90 = P[a ≤ (n − 1) S²/σ² ≤ b] = P[(n − 1) S²/b ≤ σ² ≤ (n − 1) S²/a]    (9)

Taking square roots gives a 90% confidence interval for σ, computed for our data below:

{Sqrt[28] S/Sqrt[b], Sqrt[28] S/Sqrt[a]}

{1.85386, 2.89699}
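The steps above are easy to collect into a single command. Here is a minimal sketch, assuming a normally distributed sample; the function name varianceCI and its argument conventions are ours, not the text's:

(* 100(1 - alpha)% confidence interval for the population standard deviation sigma *)
varianceCI[data_List, alpha_: 0.10] :=
  Module[{n = Length[data], s2 = Variance[data], a, b},
    a = Quantile[ChiSquareDistribution[n - 1], alpha/2];
    b = Quantile[ChiSquareDistribution[n - 1], 1 - alpha/2];
    Sqrt[{(n - 1) s2/b, (n - 1) s2/a}]]

Applied to the population change data, varianceCI[popchanges] should reproduce the interval {1.85386, 2.89699} found above.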
Student's t-Distribution
In 1908, statistician W. S. Gosset, an employee of the Guinness brewery, studied the distribution of the sample mean. Recall that the expected value of X̄ is the population mean μ, and the variance of X̄ is σ²/n, but Gosset considered standardizing X̄ by an estimate S/√n of its standard deviation, rather than the usually unknown σ/√n. He was led to
discover a continuous probability distribution called the t-distribution, which we study now.
But for proprietary reasons he was unable to publish his result openly. In the end he did
publish under the pseudonym "Student," and for this reason the distribution is sometimes
called the Student's t-distribution.
In Figure 4.32 is the result of a simulation experiment in which 200 observations of the random variable

T = (X̄ − μ)/(S/√n)    (10)

are obtained and plotted, where in each of the 200 sampling experiments a random sample of size n = 20 is drawn from the normal distribution with mean 10 and standard deviation 2.
The closed cell below this paragraph contains the code, but do not open it until you have
tried Exercise 13, which asks you to write such a simulation program. This is an important
exercise for your understanding of sampling for inference about a normal mean.
The t-distribution with r degrees of freedom is defined to be the distribution of the ratio

T = Z/√(U/r)    (11)

where Z ~ N(0, 1), U ~ χ²(r), and Z and U are independent. We abbreviate the distribution as t(r). The parameter r is called the degrees of freedom of the distribution.
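To see the definition in action, you can build T directly from its ingredients Z and U and compare the simulated values with Mathematica's built-in t-distribution object, which is introduced just below. A minimal sketch, with our own arbitrary choices of r = 5 and 10000 repetitions:

SeedRandom[1];
r = 5;
(* simulate T = Z/Sqrt[U/r] from independent Z ~ N(0,1) and U ~ chi-square(r) *)
tsim = Table[RandomReal[NormalDistribution[0, 1]]/
     Sqrt[RandomReal[ChiSquareDistribution[r]]/r], {10000}];
(* the simulated upper decile should be close to the theoretical t(5) quantile *)
{Quantile[tsim, .9], Quantile[StudentTDistribution[r], .9]}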
Mathematica has an object called
StudentTDistribution[r]
in its kernel, to which we can apply functions like PDF, CDF, Quantile, and RandomReal in
the usual way. Here for instance is a definition of the density function, and an evaluation of
it for general x so that you can see the rather complicated form. (The standard mathematical
function Beta[a,b] that you see in the output is defined as Γ(a) Γ(b)/Γ(a + b).)
(r/(r + x²))^((1 + r)/2) / (Sqrt[r] Beta[r/2, 1/2])
Figure 4.33 - t-densities for several degrees of freedom, compared with the N(0, 1) density
It can be shown that the t-density converges to the standard normal density as r → ∞. The cell above Figure 4.33 contains code for generating graphs of the t-density and the N(0, 1) density for degrees of freedom r = 2, ..., 25. In the electronic file you can execute the command to observe the convergence of the t-density to the standard normal density as r becomes large. Observe that the t-density is a little heavier in the tails, but the probability weight shifts toward the middle as r increases. And the two distributions are very hard to tell apart for even such moderate r as 20.
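The convergence described in this paragraph is easy to watch interactively. Here is a minimal sketch of such a comparison plot; it uses Manipulate with our own plotting choices, and is not the code of the closed cell mentioned above:

(* overlay the t(r) density (solid) and the N(0,1) density (dashed) as r varies *)
Manipulate[
 Plot[{PDF[StudentTDistribution[r], x], PDF[NormalDistribution[0, 1], x]},
  {x, -3.5, 3.5}, PlotRange -> {0, 0.45}, PlotStyle -> {Black, Dashed}],
 {r, 2, 25, 1}]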
Activity 3 Use Mathematica to find out how large r must be so that the probability that a t(r) random variable lies between −1 and 1 is within .01 of the probability that a standard normal random variable lies between −1 and 1.
How is the formula for the tr density obtained? Using the c.d.f. technique of
Section 4.3, you can write an integral for the c.d.f. P[T ≤ t] = P[Z ≤ t √(U/r)], then
differentiate with respect to t. You are asked to do this in Exercise 8. But the density
formula is of secondary importance to what we turn to now.
To see the connection of the t-distribution to statistical inference on the mean, consider the sample mean X̄ and sample variance S² of a random sample of size n from the N(μ, σ²) distribution. We know from previous work that

X̄ ~ N(μ, σ²/n)  and  (n − 1) S²/σ² ~ χ²(n − 1)    (12)

and also X̄ and S² are independent. So, we can standardize X̄ and use the result as Z in the defining formula (11) for the t-distribution, and we can take U = (n − 1) S²/σ² and r = n − 1 in that formula. The result is that the random variable in formula (13) has the t(n − 1) distribution:
T = [(X̄ − μ)/(σ/√n)] / √[((n − 1) S²/σ²)/(n − 1)] ~ t(n − 1)    (13)
Theorem 2. If X̄ and S² are the sample mean and sample variance of a random sample of size n from the N(μ, σ²) distribution, then the random variable

T = (X̄ − μ)/(S/√n)    (14)

has the t(n − 1) distribution.
Example 2 Suppose, for instance, that a claim is made that the mean mileage μ for a certain population is 28, and that a random sample of 25 mileages turns out to have sample mean 26.5 and sample standard deviation 2.7. The statistic T in formula (14) would have the t(24) distribution, since the sample size is 25. If we observe a T of unusually
large magnitude, then doubt is cast on the claim that μ = 28. And we can quantify how much doubt by computing the probability that a t(24) distributed random variable is as large or larger in magnitude than the actual value we observe. The smaller that probability is, the less we believe the claim. Our particular data give an observed value
t = (26.5 - 28)/(2.7/Sqrt[25])

-2.77778

CDF[StudentTDistribution[24], -2.77778]

0.00522692
So if truly μ = 28, we would only observe such an extremely small T with probability about .005. It is therefore very likely that the true value of μ is less than 28.
Activity 4 The sample average mileage of 26.5 in the last example was within one S of the hypothesized μ = 28. In view of this, why was the evidence so strong against the hypothesis? Is there strong evidence against the hypothesis that μ = 27?
F-Distribution
The last of the three special probability distributions for statistical inference is
defined next.
Definition 3. Let U ~ χ²(m) and V ~ χ²(n) be two independent chi-square distributed random variables. The F-distribution with degree of freedom parameters m and n (abbreviated F(m, n)) is the distribution of the transformed random variable

F = (U/m)/(V/n)    (15)
Mathematica's version of the F-distribution goes by the name
FRatioDistribution[m, n]
and is contained in the kernel. We define the density function below and give a typical
graph in Figure 4.34 for degrees of freedom m = n = 20. Its shape is similar to that of the gamma density. Note in particular that the state space is the set of all x > 0.
Figure 4.34 - F(20, 20) density
Activity 5 Hold the m parameter fixed at 10, and use the Manipulate command to plot
the F-density for integer values of the n parameter ranging from 5 to 25 and observe the
effect of changing the n parameter on the shape of the density. Similarly, hold n at 10
and let m range from 5 through 25 to see the effect on the density of changes to m.
We use Mathematica to show the exact form of the F(m, n) density below, which can again be obtained by the c.d.f. technique, but that is less interesting than the use of the F-distribution in doing inference on two normal variances, which we discuss in the next paragraph.

f[x, m, n]

m^(m/2) n^(n/2) x^(m/2 − 1) (n + m x)^(−(m + n)/2) / Beta[m/2, n/2]
U = (m − 1) Sx²/σx² ~ χ²(m − 1)  and  V = (n − 1) Sy²/σy² ~ χ²(n − 1)

and also, by the independence of the two samples, Sx² and Sy² are independent; hence U and V are independent. Using this U and V in the definition of the F-distribution, and dividing by their respective degrees of freedom yields the following result.
Theorem 3. If X1, X2, ..., Xm is a random sample from the N(μx, σx²) distribution with sample variance Sx², and Y1, Y2, ..., Yn is an independent random sample from the N(μy, σy²) distribution with sample variance Sy², then the random variable

F = (Sx²/σx²)/(Sy²/σy²)    (16)

has the F(m − 1, n − 1) distribution.
Exercise 19 asks you to simulate a number of F-ratios as in (16) and compare the
empirical distribution to the F(m − 1, n − 1) density. You should see a good fit.
An important special case of the use of the F-random variable in (16) occurs when
we want to make a judgment about whether it is feasible that two population variances σx² and σy² are equal to one another. Under the hypothesis that they are equal, they cancel one another out in (16), leaving F = Sx²/Sy². If F is either unreasonably large or unreasonably
close to 0, i.e., if we observe an F too far out on either the left or right tail of the distribution
in Figure 34, then we disbelieve the hypothesis of equality of the two population variances.
The probability of observing such an F quantifies our level of disbelief. The next example
illustrates the reasoning, and recalls the idea of confidence intervals from earlier in the
section.
Example 3 I am the coordinator for a mathematics placement examination given every year
to entering students at my college. Not only is the average level of mathematics preparation
of students of concern in teaching, but also the variability of their preparation. Below are
samples of placement test scores from two years. Assuming that the population of all scores
is normally distributed for both years, let us see if there is significant statistical evidence that
the variability of scores is different from one year to another based on this data.
year1 = {9, 14, 16, 8, 19, 7, 14, 17, 16, 7, 13, 16,
   11, 9, 30, 14, 9, 9, 9, 11, 13, 16, 6, 17, 18, 11,
   12, 18, 3, 12, 20, 8, 13, 14, 21, 11, 27, 26, 5};
year2 = {16, 21, 12, 19, 13, 14, 9, 29, 21, 5, 7,
   11, 4, 8, 17, 5, 10, 8, 13, 19, 14, 16, 15, 8,
   6, 8, 15, 9, 6, 23, 19, 8, 19, 7, 16, 27, 13,
   13, 28, 9, 7, 18, 12, 9, 10, 7, 8, 20, 19};
Length[year1]
Length[year2]
VarX = N[Variance[year1]]
VarY = N[Variance[year2]]

39
49
35.4103
39.5323
The first sample has size m = 39 and the second has size n = 49, and so the F-ratio in (16) has the F(38, 48) distribution. The two sample variances of 35.4103 and 39.5323 seem to be close, but are they sufficiently close for us to believe that σx² for year 1 is the same as σy² for year 2?
There are two bases upon which we can give an answer, which are somewhat
different statistical approaches, but which turn out to be equivalent. The first approach is to
give an interval estimate of the ratio σx²/σy² in which we are highly confident that this ratio falls. Under the hypothesis of equal population variances, the ratio is 1, and thus 1 should fall into our confidence interval. If it doesn't, then we disbelieve that the population variances are the same, and the level of the confidence interval measures our probability of being correct. To find a confidence interval for σx²/σy² of level 90% for example, we can use Mathematica to find, for the F(38, 48) distribution, two numbers a and b such that P[F ≤ a] = .05 and P[F ≥ b] = .05; then P[a ≤ F ≤ b] = .90.
0.594734
1.65219
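The two numbers just displayed presumably come from quantile commands of the following kind; the cell itself is not shown in the text, so this is our sketch:

a = Quantile[FRatioDistribution[38, 48], .05]
b = Quantile[FRatioDistribution[38, 48], .95]

which return the values 0.594734 and 1.65219 shown above.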
You might say that a ≈ .59 and b ≈ 1.65 are our extremes of reasonableness. It is not often
that an F-random variable with these degrees of freedom takes a value beyond them. Then
we can write
.90 = P[a ≤ F ≤ b] = P[a ≤ (Sx²/σx²)/(Sy²/σy²) ≤ b] = P[(1/b)(Sx²/Sy²) ≤ σx²/σy² ≤ (1/a)(Sx²/Sy²)]    (17)
after rearranging the inequalities. So the two endpoints of the confidence interval are formed
by dividing the ratio of sample variances by b and a, respectively, which we do below.
{VarX/(b VarY), VarX/(a VarY)}

{0.542148, 1.5061}
Since 1 is safely inside this 90% confidence interval, we have no statistical evidence against
the hypothesis σx² = σy².
The second approach is to specify a tolerable error probability for our decision as to
whether the population variances are equal, such as 10%. Specifically, we want a rule for
deciding, such that if the two variances are equal, we can only err by deciding that they are
F = VarX/VarY

0.895729
and since .895729 lies between a and b, we do accept the hypothesis σx² = σy². Notice that

a ≤ Sx²/Sy² ≤ b  if and only if  Sx²/(b Sy²) ≤ 1 ≤ Sx²/(a Sy²)
Referring to (17), we see that our decision rule for accepting the hypothesis σx² = σy² using
this second approach is exactly the same as the first approach using confidence intervals.
Activity 6 In the last example check to see whether 1 is inside confidence intervals of
level .80 and .70. If it is, why is this information more helpful than merely knowing that
1 is inside a 90% confidence interval?
Exercises 4.6
1. (Mathematica) If X has the χ²(10) distribution, find:
(a) P[8 ≤ X ≤ 12];
(b) P[X > 15];
(c) the 25th percentile of the distribution, i.e., the point q such that P[X ≤ q] = .25;
(d) two points a and b such that P[a ≤ X ≤ b] = .95.
2. Find the moment-generating function of the χ²(r) distribution. Use it to show that if X1 ~ χ²(r1), X2 ~ χ²(r2), ..., Xn ~ χ²(rn) are independent, then the random variable Y = X1 + X2 + ··· + Xn has the χ²(r1 + r2 + ··· + rn) distribution.
3. (Mathematica) Derive a general formula for a 95% confidence interval for the variance σ² of a normal distribution based on a random sample of size n. Then write a Mathematica command to simulate such a confidence interval as a function of the normal distribution parameters and the sample size n. Run it 20 times for the case μ = 0, σ² = 1, n = 50. How many of your confidence intervals contain the true σ² = 1? How many would you have expected to contain it?
4. (Mathematica) In Example 2 of Section 4.1 we worked with a data set of demands for an
item, repeated below. Assuming that the measurements form a random sample from a
normal distribution, find a 90% confidence interval estimate of the population standard
deviation σ.
6. (Mathematica) If X and S 2 are the sample mean and sample variance of a random sample
of size 20 from the NΜ, Σ2 distribution, find P[ X Μ, S 2 Σ2 ].
7. (Mathematica) For the population change data in Example 1, do you find statistically
significant evidence that the mean percentage change among all towns in the region is less
than -6%? Explain.
8. Derive the formula for the t-density using the approach suggested in the section.
9. (Mathematica) If T
t23, find: (a) PT 2.1 ; (b) PT 1.6 ; (c) a point t such that
Pt T t .80.
10. (Mathematica) In Exercise 4 of Section 4.2 we saw a data set of points per game scored
by the 30 NBA teams in 2007-2008. The data is repeated here for your convenience. If over
the last ten years the average points scored per game has been 98.7, do you find statistically
significant evidence that scoring has increased? Explain.
11. Derive the form of a 90% confidence interval for the mean Μ of a normal distribution
based on a sample of size n and the t-distribution.
12. Show that the t-distribution is symmetric about 0.
13. (Mathematica) Simulate a list of 200 T random variables as in formula (14), where
samples of size 20 are taken from the N0, 1 distribution. Superimpose a scaled histogram
of the data on a graph of the appropriate t-density to observe the fit.
14. In Exercise 6 of Section 4.2 were data on sulfur dioxide pollution potentials for a
sample of 60 cities. They are reproduced below for your convenience. You should have
found in that exercise that the variable X = log(SO2) had an approximate normal distribution. Comment as a probabilist on the reasonableness of the claim that the mean of X is 3.
SO2 = {59, 39, 33, 24, 206, 72, 62, 4, 37, 20, 27,
   278, 146, 64, 15, 1, 16, 28, 124, 11, 1, 10, 5,
   10, 1, 33, 4, 32, 130, 193, 34, 1, 125, 26, 78,
   8, 1, 108, 161, 263, 44, 18, 89, 48, 18, 68, 20,
   86, 3, 20, 20, 25, 25, 11, 102, 1, 42, 8, 49, 39};
15. (Mathematica) Find the smallest value of the degree of freedom parameter r such that
the t(r) density is within .01 of the N(0, 1) density at every point x.
16. (Mathematica) If a random variable F has the F12, 14 distribution, find: (a) PF 1 ;
(b) P1 F 3 ; (c) numbers a and b such that PF a .05 and PF b .05.
17. (Mathematica) In Example 1 of Section 4.2 was a data set of scores in a statistics class
on three items: homework, quiz, and final exam, as shown above. Comment first on whether
it would make sense to test whether there is a significant difference in variability between the
quiz and final scores. Then consider the homework and quiz scores. View them as random
samples from the universe of possible scores that students could have earned. Using the F
ratio that was discussed in the section, do you find significant statistical evidence that there
is a difference in variability of these item scores? Comment on the appropriateness of doing
this.
18. Discuss how you would go about testing statistically whether one normal variance σ1² is two times another σ2², or whether alternatively σ1² is more than 2 σ2².
19. (Mathematica) Write a Mathematica command to simulate a desired number of F-ratios
as in formula (16). Then superimpose a histogram of 200 such F-ratios on the appropriate F-
density, where the two random samples used to form Sx² and Sy² are both of size 20 and are taken from N(0, 4) and N(0, 9) distributions, respectively.
20. (Mathematica) A cable television network is being pressured by its sponsors to show
evidence that its Nielsen ratings across its program offerings show the same consistency as
the ratings for the same time period last year. Thus, they are interested not in average
Nielsen rating (which may be strongly affected by a hit program that rates highly, but carries
only a small amount of the sponsors' advertising), but variability of the ratings over the range
of shows. Find a 90% confidence interval for the ratio of population variances, assuming that
the ratings data below are independent random samples from normal populations. Conclude
from your interval whether the data are consistent with the hypothesis of equality of the two
variances.
21. If a random variable F has the F(m, n) distribution, what can you say about the random variable 1/F, and why?
Linear Transformations
If you did Exercises 17-18 in Section 4.2 you encountered the multivariate generalization of the normal distribution, called the multivariate normal distribution. This was a relatively straightforward extension of the bivariate normal distribution, couched in vector and matrix terms. The state space of a multivariate normal random vector X = (X1, X2, …, Xn) consists of vectors x = (x1, x2, …, xn) in ℝⁿ, and the joint density function can be written in matrix form as
f(x) = (1/((2π)^(n/2) √(det Σ))) e^(−(1/2)(x − μ)^t Σ^(−1) (x − μ))    (1)
where we use "det" to denote the determinant of a matrix, the superscript -1 to denote the
inverse of a matrix, and the superscript t to indicate transpose of a vector or matrix. The two
parameters governing the distribution are:
μ = (μ1, μ2, ..., μn)^t  and

Σ = [ σ1²         ρ12 σ1 σ2   ρ13 σ1 σ3   ...   ρ1n σ1 σn ]
    [ ρ12 σ1 σ2   σ2²         ρ23 σ2 σ3   ...   ρ2n σ2 σn ]
    [ ρ13 σ1 σ3   ρ23 σ2 σ3   σ3²         ...   ρ3n σ3 σn ]    (2)
    [  ...                                               ]
    [ ρ1n σ1 σn   ρ2n σ2 σn   ρ3n σ3 σn   ...   σn²       ]
respectively, the vector of means of the Xi 's and the matrix of variances and paired covari-
ances of the Xi 's. We refer to these as the mean vector and covariance matrix of the random
vector X, terms that have general usage outside of the domain of normal random variables.
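Mathematica bundles both parameters into a single distribution object, MultinormalDistribution, which is used later in this section. Here is a minimal sketch with made-up numbers for a bivariate case (in version 7 the MultivariateStatistics package must be loaded first, as is also done later in the section):

Needs["MultivariateStatistics`"]
dist = MultinormalDistribution[{0, 1}, {{2, .3}, {.3, 1}}];
PDF[dist, {x1, x2}]   (* the joint density, in the form of formula (1) *)
RandomReal[dist, 3]   (* three simulated vectors from this distribution *)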
Activity 1 Check in the case n = 1 that the multivariate normal density reduces to the single variable normal density from Section 4.1.
Let us now compute the m.g.f. of the multivariate normal distribution with mean vector μ and covariance matrix Σ. The derivation will involve completing the square in a similar vein to the single variable case. The two facts from linear algebra that we need for this derivation are:

If A and B are matrices, then (A B)^t = B^t A^t    (4)

If A is an invertible matrix, then A A^(−1) = A^(−1) A = I, where I is an identity matrix    (5)
We write out the n-fold integral representing the expectation, and make the simple change of variables zi = xi − μi for each i to get:
M(t) = E[e^(t^t X)] = e^(t^t μ) E[e^(t^t (X − μ))]

     = e^(t^t μ) ∫ ⋯ ∫ e^(t^t (x − μ)) (1/((2π)^(n/2) √(det Σ))) e^(−(1/2)(x − μ)^t Σ^(−1) (x − μ)) dxn ⋯ dx1

     = e^(t^t μ) ∫ ⋯ ∫ e^(t^t z) (1/((2π)^(n/2) √(det Σ))) e^(−(1/2) z^t Σ^(−1) z) dzn ⋯ dz1
In the integrand we can bring the two exponentials together; the exponent of e can then be
written
t^t z − (1/2) z^t Σ^(−1) z = −(1/2)(z^t Σ^(−1) z − 2 t^t z)

   = −(1/2)(z^t Σ^(−1) z − 2 t^t z + t^t Σ t) + (1/2) t^t Σ t

   = −(1/2)(z − Σ t)^t Σ^(−1) (z − Σ t) + (1/2) t^t Σ t
In line 2 we added and subtracted (1/2) t^t Σ t from the expression, and in line 3 we noticed that the expression in parentheses in line 2 factored. (Check this by multiplying out (z − Σ t)^t Σ^(−1) (z − Σ t) in line 3 and using properties (4) and (5); note that the product t^t z
The last line follows because the integral is taken over all states of a valid multivariate normal p.d.f. with mean vector Σ t and covariance matrix Σ. We have therefore proved:
Lemma 1. If X has the multivariate normal distribution with parameters μ and Σ, then the moment-generating function of X is

M(t) = e^(t^t μ + (1/2) t^t Σ t)    (6)
It now becomes very easy to prove our first main theorem.
Theorem 1. If X has the multivariate normal distribution with parameters μ and Σ, and A is a matrix for which the product A X is defined, then A X has the multivariate normal distribution with parameters A μ and A Σ A^t.
Proof. The m.g.f. of A X is

M_{AX}(t) = E[e^(t^t (A X))] = E[e^((A^t t)^t X)] = M_X(A^t t)

          = e^((A^t t)^t μ + (1/2)(A^t t)^t Σ (A^t t))

          = e^(t^t (A μ) + (1/2) t^t (A Σ A^t) t)

The last expression is the moment-generating function of the multivariate normal distribution with parameters A μ and A Σ A^t, hence A X must have that distribution.
Example 1 Consider a random vector X with the bivariate normal distribution with mean vector μ = (0, 0)^t and covariance matrix

Σ = [ 1    .5 ]
    [ .5   1  ]

which means that each of X1 and X2 have variance and standard deviation equal to 1. The correlation between them is therefore
ρ = Cov(X1, X2)/(1 · 1) = .5.

Let a new random vector Y be formed by Y = A X where

A = [ 1    1 ]
    [ 1   −1 ]

Then Y1 = X1 + X2 and Y2 = X1 − X2. Clearly A μ = 0, hence Theorem 1 implies that Y is multivariate normal with mean vector 0 and covariance matrix A Σ A^t, which is

[ 3.   0. ]
[ 0.   1. ]
We have the surprising result that even though the original variables X1 and X2 were
correlated, the new variables Y1 and Y2 are not. Their variances are 3 and 1, respectively, the
diagonal elements of the matrix covY above.
Let us check this by simulation. Here we simulate 200 vectors X, transform each by
multiplying by A, and then plot them in the plane. We show the contour plot of the density
of Y next to the scatter plot of simulated points, and we see good agreement.
Needs"MultivariateStatistics`"
Xsimlist RandomRealMultinormalDistribution
0, 0, 1, .5, .5, 1, 200;
A 1, 1, 1, 1;
Ysimlist TableA.Xsimlisti, i, 1, 200;
g1 ListPlotYsimlist,
PlotStyle Black, PointSize0.015;
fy1_, y2_ : PDFMultinormalDistribution
0, 0, 3, 0, 0, 1, y1, y2;
g2 ContourPlotfy1, y2, y1, 4, 4, y2, 2, 2,
AspectRatio .6, ContourStyle Black,
ContourShading GrayLevel0, GrayLevel.1,
GrayLevel.2, GrayLevel.3, GrayLevel.4,
GrayLevel.5, GrayLevel.6, GrayLevel.7,
GrayLevel.8, GrayLevel.9, GrayLevel1;
GraphicsRowg1, g2
Activity 2 In the last example, use the elementary rules for variance and covariance to
check the form of covY obtained by Mathematica.
Example 2 One of the reasons for the importance of Theorem 1 is the fact that linear
combinations of correlated normal random variables do come up in applications. For
example, suppose that there are three risky assets whose rates of return are R1 , R2 , and R3 ,
which together have a trivariate normal distribution with mean vector and covariance matrix:
μ = ( .04 )          Σ = [  .02   −.02    .001 ]
    ( .07 ),             [ −.02    .04   −.01  ]
    ( .09 )              [  .001  −.01    .08  ]
Find the distribution, mean and variance of the rate of return on the portfolio that places half
of the wealth into the first asset, and splits the remaining wealth evenly between the others.
What is the probability that the rate of return on the portfolio of assets will be at least .07?
Let A be the 1×3 matrix (1/2, 1/4, 1/4), so that if we denote by R the column vector of the three rate of return variables, then the product A R becomes the portfolio rate of return Rp = (1/2) R1 + (1/4) R2 + (1/4) R3. By Theorem 1, Rp is normally distributed. The mean of Rp is A μ = (1/2)(.04) + (1/4)(.07) + (1/4)(.09) = .06. The variance of Rp is A Σ A^t, computed in Mathematica below.
0.0065
Hence the rate of return on the portfolio, Rp, has the N(.06, .0065) distribution. The probability that Rp ≥ .07 is about .45 as shown in the computation below.
0.450644
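The two outputs above presumably come from matrix computations of roughly the following form; the actual cells are closed in the electronic version, so this is a sketch using the mean vector and covariance matrix of the example:

A = {1/2, 1/4, 1/4};
mu = {.04, .07, .09};
Sigma = {{.02, -.02, .001}, {-.02, .04, -.01}, {.001, -.01, .08}};
A.mu            (* mean of the portfolio return, .06 *)
A.Sigma.A       (* variance of the portfolio return, 0.0065 *)
1 - CDF[NormalDistribution[.06, Sqrt[.0065]], .07]   (* P[Rp >= .07], about .45 *)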
In formula (7) I_r denotes an r × r identity matrix, and the 0's indicate blocks of zeros that fill out matrix A to give it dimension n × n. By multiplying on the left by N and on the right by N^t, we obtain the equivalent relationship:

Q = N A N^t  where  A = [ I_r  0 ]    (8)
                        [ 0    0 ]
Example 3 A very simple example of a quadratic form is the sum of squares of independent
standard normal random variables Z1 , Z2 , … , Zn . To see this, consider the identity matrix I
of dimension n and the product
Z^t I Z = (Z1 Z2 ⋯ Zn) I (Z1, Z2, ..., Zn)^t = (Z1 Z2 ⋯ Zn)(Z1, Z2, ..., Zn)^t = Σ_{i=1}^n Zi²
By earlier results, this quadratic form has the χ²(n) distribution, since the Zi² variables are independent and chi-square(1) distributed. Notice that I² = I, so that I is idempotent, and also I has rank n. A similar computation shows that if Xi, i = 1, 2, ..., n are independent normal random variables with means μi and common variance σ², then the quadratic form in the random vector X − μ whose components are Xi − μi, defined by (X − μ)^t I (X − μ)/σ², equals Σ_{i=1}^n (Xi − μi)²/σ², and it also has the χ²(n) distribution. As a third example, let 1 denote an n × n matrix whose every entry is 1, and consider the matrix Q = (1/n)·1. You can check that
Q² = ((1/n)·1)((1/n)·1) = (1/n²)(1·1) = (1/n²)(n·1) = (1/n)·1 = Q, so that Q is a symmetric, idempotent matrix. The rows of Q are identical and so the rank of Q is 1. For simplicity let Zi = (Xi − μi)/σ, and consider
the quadratic form
(X − μ)^t Q (X − μ)/σ² = (1/n)(Z1 Z2 ⋯ Zn) [all-ones matrix] (Z1, Z2, ..., Zn)^t

                       = (1/n)(Z1 Z2 ⋯ Zn)(Σ_{i=1}^n Zi, Σ_{i=1}^n Zi, ..., Σ_{i=1}^n Zi)^t

Each of the entries in the column vector on the right is the same as n Z̄, and so completing the matrix multiplication gives that

(X − μ)^t Q (X − μ)/σ² = (1/n)(Σ_{i=1}^n Zi)(n Z̄) = n Z̄² = n (Σ_{i=1}^n (Xi − μi)/(n σ))²    (9)
In the special case when all the means μi are equal to a common value μ, the right side is the square of the standard normally distributed random variable (X̄ − μ)/(σ/√n), which we know has the χ²(1) distribution. So these examples are pointing at a theorem that says under some conditions that quadratic forms in normal random variables have the chi-square distribution, and the rank of the matrix in the form determines the degrees of freedom. That theorem is next.
Theorem 2. Assume that X = (X1, X2, …, Xn) has the multivariate normal distribution with mean parameter μ and diagonal covariance matrix σ² I.

(a) Suppose that Q is a symmetric, n × n idempotent matrix of rank r. Then the quadratic form

(X − μ)^t Q (X − μ)/σ²    (10)

has the χ²(r) distribution.

(b) Suppose that Q1 and Q2 are symmetric n × n matrices such that Q1 Q2 = 0. Then the quadratic forms (X − μ)^t Q1 (X − μ)/σ² and (X − μ)^t Q2 (X − μ)/σ² are independent.
Proof. (a) Because the determinant of σ² I is (σ²)ⁿ and the inverse of σ² I is (1/σ²) I, the m.g.f. of the real random variable Y = (X − μ)^t Q (X − μ)/σ² is the multiple integral
M(t) = E[e^(t Y)]

     = ∫ ⋯ ∫ e^(t (x − μ)^t Q (x − μ)/σ²) (1/((2π)^(n/2) (σ²)^(n/2))) e^(−(1/2)(x − μ)^t I (x − μ)/σ²) dxn ⋯ dx1

     = ∫ ⋯ ∫ (1/(2π σ²)^(n/2)) e^(−(1/2)(x − μ)^t (I − 2 t Q)(x − μ)/σ²) dxn ⋯ dx1

     = det[(I − 2 t Q)^(−1)]^(1/2) ∫ ⋯ ∫ (1/((2π σ²)^(n/2) det[(I − 2 t Q)^(−1)]^(1/2))) e^(−(1/2) z^t (I − 2 t Q) z/σ²) dzn ⋯ dz1
In the last line we made the substitution zi = xi − μi for each i, and we multiplied and divided by det[(I − 2 t Q)^(−1)]^(1/2). The integral is 1, because the integrand is the multivariate normal density with mean 0 and covariance matrix σ² (I − 2 t Q)^(−1). Thus we have reduced the m.g.f. of Y to the expression det[(I − 2 t Q)^(−1)]^(1/2). Moreover, it is also true that the determinant of the inverse of a matrix is the reciprocal of the determinant of the matrix, hence we can rewrite the expression in exponential form as det(I − 2 t Q)^(−1/2). By the diagonalization formula (8), we can rewrite the m.g.f. as
In lines 3 and 5 we used the fact that N and N^t are inverses, hence I = N I N^t and also det(N) and det(N^t) are reciprocals. Notice that in block form the matrix in the last determinant is

I − 2 t A = [ I_r   0       ] − 2 t [ I_r  0 ] = [ (1 − 2 t) I_r   0       ]
            [ 0     I_{n−r} ]       [ 0    0 ]   [ 0               I_{n−r} ]
The determinant of this diagonal matrix, that is the product of the diagonal elements, is (1 − 2 t)^r. Therefore, the m.g.f. of the quadratic form Y = (X − μ)^t Q (X − μ)/σ² simplifies to:

M_Y(t) = (1 − 2 t)^(−r/2)

We recognize this as the m.g.f. of the χ²(r) distribution, which proves part (a).
Now suppose that the hypotheses of part (b) are in force. Let Y1 and Y2 be the two quadratic forms in question. We will show that the joint m.g.f. of Y1 and Y2 factors into the product of their individual m.g.f.'s, which is enough to show independence of the forms. Note that in the proof of part (a), we arrived at the formula det(I − 2 t Q)^(−1/2) for the m.g.f. of a quadratic form in X − μ without using the hypothesis of idempotency, which we are not making in part (b). Taking t = 1 and replacing Q by t1 Q1 + t2 Q2, this allows us to write the following formula for the joint m.g.f. of Y1 and Y2:

M(t1, t2) = det(I − 2 t1 Q1 − 2 t2 Q2)^(−1/2)

Because Q1 Q2 = 0,

(I − 2 t1 Q1)(I − 2 t2 Q2) = I − 2 t1 Q1 − 2 t2 Q2 + 4 t1 t2 Q1 Q2 = I − 2(t1 Q1 + t2 Q2),

so that M(t1, t2) = det(I − 2 t1 Q1)^(−1/2) det(I − 2 t2 Q2)^(−1/2), which is the product of the m.g.f.'s of Y1 and Y2. This completes the proof.
Example 4 It now remains to apply the previous theorem to our theorem about X̄ and (n − 1) S²/σ². We assume now that X1, X2, …, Xn is a random sample from the N(μ, σ²) distribution, so the first hypothesis of Theorem 2 holds, and the mean vector μ has equal entries of μ. We already noted in Example 3 that if Z = (X̄ − μ)/(σ/√n), then Z² can be written as a quadratic form (X − μ)^t Q1 (X − μ)/σ² where Q1 = (1/n)·1 is a symmetric, idempotent matrix of rank 1. Hence Z² ~ χ²(1) by part (a) of Theorem 2. Now consider the matrix Q2 = I − (1/n)·1 and the quadratic form (X − μ)^t Q2 (X − μ)/σ². Written out in full,

Q2 = [ 1 − 1/n   −1/n       −1/n      ⋯   −1/n    ]
     [ −1/n      1 − 1/n    −1/n      ⋯   −1/n    ]
     [ −1/n      −1/n       1 − 1/n   ⋯   −1/n    ]    (12)
     [  ⋮                                          ]
     [ −1/n      −1/n       −1/n      ⋯   1 − 1/n ]
Since the rows of Q2 sum to zero, they cannot be linearly independent, hence the rank of Q2 can be no more than n − 1. Actually, in Exercise 10 you will show that the rank is exactly n − 1. Also, Q2 is idempotent, because:
Q2 Q2 = (I − (1/n)·1)(I − (1/n)·1)
      = I − (2/n)·1 + (1/n²)(1·1)
      = I − (2/n)·1 + (1/n²)(n·1)
      = I − (1/n)·1 = Q2

Moreover, when all of the entries of the mean vector equal the common value μ, a direct computation like the one in Example 3 gives

(X − μ)^t Q2 (X − μ)/σ² = (n − 1) S²/σ²    (13)
So we have the distributional result that we wanted. The last item is the independence of X̄ and S². Theorem 2(b) establishes the independence of Z² and (n − 1) S²/σ², because

Q1 Q2 = (1/n)·1 (I − (1/n)·1) = (1/n)·1 − (1/n²)(1·1) = (1/n)·1 − (1/n²)(n·1) = 0.

You will complete the argument that X̄ must also be independent of S² in Exercise 12.
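The matrix facts used in this example are easy to confirm numerically. Here is a minimal sketch, with n = 5 as an arbitrary choice:

n = 5;
ones = ConstantArray[1, {n, n}];
Q1 = ones/n;
Q2 = IdentityMatrix[n] - ones/n;
{Q1.Q1 == Q1, Q2.Q2 == Q2, MatrixRank[Q2], Q1.Q2 == ConstantArray[0, {n, n}]}
(* {True, True, 4, True}: both idempotent, Q2 has rank n - 1, and Q1 Q2 = 0 *)

x = RandomReal[NormalDistribution[10, 2], n];
{(x - 10).Q2.(x - 10), (n - 1) Variance[x]}
(* the two entries agree up to rounding, illustrating formula (13) with sigma^2 canceled from both sides *)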
Exercises 4.7
1. (Mathematica) Write out the expression for the three-variable normal density in which the
means of X1 , X2 , and X3 are, respectively, 2.5, 0, and 1.8; the variances are, respectively, 1,
1, and 4; and the correlations are as follows: between X1 and X2 : 0; between X1 and X3 : 0;
and between X2 and X3 : .2.
2. Simplify the expression for the multivariate normal density when the covariance matrix is
diagonal.
3. If X and Y are independent N(0, 1) random variables, find the joint distribution of U = X + Y and V = X − Y.
4. (Mathematica) Let X be a random vector with the trivariate normal distribution with
mean vector Μ 1, 0, 1t and covariance matrix
4 .6 .4
.6 9 1.5
.4 1.5 1
Q1 = (1/2) [ 1  1  0  0 ]
           [ 1  1  0  0 ]
           [ 0  0  1  1 ]
           [ 0  0  1  1 ]
8. (Mathematica) Under the same setup as Exercise 7, define the quadratic form Y2 = X^t Q2 X/4 where the matrix Q2 is below. Are Y1 and Y2 independent? What is the probability distribution of Y2? Jointly simulate 200 observations of Y1 and Y2 using the same underlying Xi simulated values for each, and construct a Mathematica list plot of the pairs to see whether the simulated pairs seem to obey independence. Finally, let 1_4 denote the 4×4 matrix of 1's. What can be said about the forms Y3 = X^t Q3 X/4 and Y4 = X^t Q4 X/4, where Q3 = Q1 − (1/4)·1_4 and Q4 = Q2 − (1/4)·1_4?
Q2 = (1/2) [ 1  0  1  0 ]
           [ 0  1  0  1 ]
           [ 1  0  1  0 ]
           [ 0  1  0  1 ]
12. This finishes the argument in Example 4 that since the quadratic forms Z² and (n − 1) S²/σ² are independent of each other then so are X̄ and S². (a) Show that if random variables X and Y are independent, so are the random variables a X + b and c Y + d. (b) Show that if random variables X² and Y are independent and the distribution of X is symmetric about 0, then X and Y are independent.
CHAPTER 5
ASYMPTOTIC THEORY
very well.
The mathematician Gerolamo Cardano (1501-1576) was one of the earliest to study
problems of probability. Cardano was a brilliant and eccentric character: an astrologer,
physician, and mathematician, who became Rector of the University of Padua. He did early
work on the solution of the general cubic equation, and was involved in a dispute over what
part was actually his work. But for us it is most interesting that he wrote a work called Liber de ludo aleae, or Book on Gambling, in which among other things he introduced the idea of
characterizing the probability of an event as a number p between 0 and 1, showed understand-
ing in the context of dice of the theoretical concept of equiprobable outcomes, and antici-
pated that if the probability of an event is p and you perform the experiment a large number
of times n, then the event will occur about n p times. Thus, he predicted important results
about binomial expectation and limiting probabilities that were more fully developed by
others much later. But Book on Gambling was only published posthumously in 1663, by
which time another very important event had already happened.
It is widely acknowledged that there is a specific time and place where probability as
a well-defined mathematical area was born. Blaise Pascal (1623-1662) was a prodigy who at
the age of 12 mastered Euclid's Elements. He was never in great health and died at the age
of 39, but before that he gained acclaim not only as a mathematician, but as an experimental
physicist, religious philosopher, and activist. In 1654 he began a correspondence with the
great mathematician Pierre DeFermat (1601-1665) from which probability theory came. In
those days in France, it was common for noblemen to dabble in academic matters as a part of
their social duties and pleasures. One Antoine Gombaud, the Chevalier de Méré, was such a
person. He was acquainted with Pascal and many other intellectuals of his day, and he
enjoyed dicing. The problem he posed to Pascal was how to fairly divide the stakes in a
game which is terminated prematurely. For instance, if a player has bet that a six will appear
in eight rolls of a fair die, but the game is ended after three unsuccessful trials, how much
should each player take away from the total stake? Pascal reported this to Fermat, and their
interaction shows a clear understanding of repeated independent trials, and the difference
between conditional and unconditional probability. Pascal reasoned that the probability of
winning on throw i is (1/6)(5/6)^(i − 1), so that adding these terms together for i = 4, ..., 8 gives the total probability p of winning on trials 4 through 8, which is how the stake should be divided: a proportion p to the bettor under concern and 1 − p to the other. But Fermat
responded that this analysis is only correct if you do not know the outcome of the first three
rolls. If you do, then the chance that the bettor wins on the next roll is 1/6 (note this is a
conditional probability), so that the bettor should take 1/6 of the stake in exchange for
quitting at trial 4, in fact at any trial. Pascal was happy with Fermat's analysis. Later on as
they studied independent trials problems, they both noticed the importance of the binomial
coefficient C(n, k) (not written in this notation in those days) to the computation of probabilities
involving independent trials. It should be added, though (Cooke (p. 201)), that combina-
torics itself had its origin long before. At least as early as 300 B.C., Hindu scholars were
asking such questions as how many philosophical systems can be formed by taking a certain
number of doctrines from a larger list of doctrines, which is clearly our basic problem of
sampling without order or replacement.
The second half of the seventeenth century was a very fertile one for probability, as
many of the most famous mathematicians and scientists were making contributions. Christi-
aan Huygens (1629-1695) was a great Dutch scholar who wrote the first book on probability:
De ratiociniis in ludo aleae, or On Reasoning in a Dice Game that compiled and extended
the results of Pascal and Fermat. In particular, Huygens developed ways to compute
multinomial probabilities, and crystallized the idea of expectation of a game. But it was
James Bernoulli (1654-1705) who wrote the seminal work Ars Conjectandi or The Art of
Conjecturing, released after his death in 1713, whose work would set new directions for the
burgeoning subject. In The Art of Conjecturing, Bernoulli set down the combinatorial
underpinnings of probability in much the form as they exist today. He is not the original
source of most of these results, as previously mentioned, and probably benefited in this
regard by the work of his mentor Gottfried Wilhelm Leibniz (1646-1716), who made
contributions to combinatorics as well as co-inventing calculus. But the most important
result to the development of probability was Bernoulli's proof of a special case of the
theorem that we now describe.
Probability would never be able to have a firm foundation without an unambiguous
and well-accepted definition of the probability of an event. And, this probability ought to
have meaningful predictive value for future performances of the experiment to which the
event pertains. One needs to know that in many repetitions of the experiment, the proportion
of the time that the event occurs converges to the probability of the event. This is the Law of
Large Numbers (a phrase coined by Simeon-Denis Poisson (1781-1840) for whom the
Poisson distribution is named).
The Law of Large Numbers actually takes two forms, one called the Weak Law and
the other called the Strong Law. Here are the modern statements of the two theorems.
Theorem 1. (Weak Law of Large Numbers) Assume that X1 , X2 , X3 , ... is a sequence of
independent and identically distributed random variables. Let
X̄n = (1/n) Σ_{i=1}^n Xi

be the sample mean of the first n X's, and let μ denote the mean of the distribution of the X's. Then for any ε > 0,

P[|X̄n − μ| ≥ ε] → 0 as n → ∞    (1)
Theorem 2. (Strong Law of Large Numbers) Under the assumptions of the Weak Law of
Large Numbers, for all outcomes ω except possibly some in a set of probability 0,

X̄n(ω) → μ as n → ∞    (2)
Both theorems talk about the convergence of the sequence of sample means to the
population mean, but the convergence modes in (1) and (2) are quite different. To illustrate
the difference let us use simulation.
Convergence mode (1), usually called weak convergence or convergence in probability, refers to an aspect of the probability distribution of X̄. Fixing a small ε > 0, as the sample size n gets larger, it is less and less likely for X̄ to differ from μ by at least ε. We can show this by looking at the empirical distribution of X̄ for larger and larger n, and observing that the relative frequencies above μ + ε and below μ − ε become smaller and smaller. Here is a command to simulate a desired number of sample means for a given sample size from a given distribution. You may remember that we used the same command earlier in Section 2.6. We apply the command to sample from the uniform(0,1) distribution, for which μ = .5, taking ε = .02 and sample sizes n = 200, 400, and 800.
Needs"KnoxProb7`Utilities`"
SimulateSampleMeans
nummeans_, distribution_, sampsize_ :
TableMeanRandomRealdistribution, sampsize,
nummeans
SeedRandom18 732
list1 SimulateSampleMeans
200, UniformDistribution0, 1, 200;
list2 SimulateSampleMeans200,
UniformDistribution0, 1, 400;
list3 SimulateSampleMeans200,
UniformDistribution0, 1, 800;
In Figure 5.1 the subintervals have been chosen so that the middle two categories range from .48 to .52, that is, from μ − ε to μ + ε. The empirical proportion of sample means that
fall outside of this interval is the total height of the outer rectangles. You see this proportion
becoming closer to 0 as the sample size increases. This is what weak convergence means.
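Figure 5.1 itself is drawn in a closed cell; here is a minimal sketch of how such relative-frequency histograms might be produced with the built-in Histogram command (our own bin choices, aligned so that bin edges fall at .48, .50, and .52):

GraphicsColumn[
 Histogram[#, {0.40, 0.60, 0.02}, "Probability"] & /@ {list1, list2, list3}]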
The mode of convergence characterized by (2), usually called strong, or almost sure
convergence, is more straightforward. Idealize an experiment that consists of observing an
infinite sequence X1 , X2 , X3 , ... of independent and identically distributed random vari-
ables. For fixed outcome ω, the partial sample means X̄1(ω), X̄2(ω), X̄3(ω), ... form an
ordinary sequence of numbers. For almost every outcome, this sequence of numbers
converges in the calculus sense to Μ. It turns out that in general strong convergence of a
sequence of random variables implies weak convergence, but weak convergence does not
imply strong convergence. Both modes of convergence occur for the sequence of partial
means.
We can illustrate the Strong Law using the command SimMeanSequence from
Exercise 10 of Section 2.6 and Example 2 of Section 3.3, which successively simulates
observations from a discrete distribution, updating X̄ each time m new observations have been generated. So we are actually just picking out a subsequence of (X̄n(ω)), which hits
every mth sample mean, but that will let us plot the subsequence farther out than the whole
sequence could have been plotted. The code for SimMeanSequence is repeated in the closed
cell below.
SeedRandom[44937];
SimMeanSequence[GammaDistribution[2, 3], 2000, 10]
The connected list plot of sample means in Figure 5.2 is the result of calling this command on the Γ(2, 3) distribution, whose mean is μ = 2·3 = 6, to simulate every tenth
member of the sequence of sample means for a total of 2000 observations. We see the
sequence of means converging to 6 as the Strong Law claims; however, the convergence is
not necessarily rapid or smooth.
In Exercises 2 and 3 you are led through a fairly easy proof of the Weak Law of
Large Numbers based on a lemma called Chebyshev's Inequality, which is interesting in
itself. This states that for any random variable X with finite mean μ and variance σ²,

P[|X − μ| ≥ k σ] ≤ 1/k²    (3)

for any k > 0. In words, Chebyshev's inequality says that the chance that X is more than k standard deviations away from its mean is no more than 1/k². The remarkable thing about
the result is that it is a bound on the tail probability of the distribution that does not depend
on what the distribution is.
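Chebyshev's bound is easy to compare with an exact tail probability for any particular distribution; here is a minimal sketch for the Γ(2, 3) distribution used earlier in the section (our own choice of example, with k = 2):

dist = GammaDistribution[2, 3];
mu = Mean[dist]; sigma = StandardDeviation[dist];
k = 2;
N[CDF[dist, mu - k sigma] + (1 - CDF[dist, mu + k sigma])]
(* the exact two-sided tail probability; Chebyshev only promises it is at most 1/k^2 = .25 *)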
The Russian mathematician who proved the inequality is worthy of special mention.
Pafnutii L'vovich Chebyshev (1821-1894) formally introduced the concept of random variables and their expectation in their modern forms. Besides his proof of the weak law for means, he is responsible for stating the Central Limit Theorem using random variables, and he was the teacher of Andrei Andreevich Markov (1856-1922) who introduced the depen-
dent trials analysis that was later to bear his name (see Section 6.1 on Markov chains).
James Bernoulli proved the weak law in The Art of Conjecturing in the following
special case. Let the sequence X1 , X2 , ... be a sequence of what we call today independent
Bernoulli random variables, taking the value 1 with probability p and 0 with probability 1 − p. Then μ = E[X] = p. But the sample mean X̄ = (Σ_{i=1}^n Xi)/n is just the proportion of successes in the first n trials, so in this case the weak law
says that as we repeat the experiment more and more often, the probability approaches 0 that
the sample success proportion differs by a fixed positive amount from the true success
probability p. The convergence of the sample proportion justifies the long-run frequency
approach to probability, in which the probability of an event is the limit of the empirical
proportion. It also is a precursor to the modern theory of estimation of parameters in
statistics.
The strong law is much more difficult to show than the weak law. In fact, it was not
until 1909 that Emile Borel (1871-1956) showed the strong law for the special case of a Bernoulli sequence described above. Finally in 1929, Andrei Nikolaevich Kolmogorov (1903-1987) proved the general theorem. Kolmogorov was one of the later members of the great Russian school of probabilists of the late 1800's and early 1900's, which included Chebyshev, Markov, and Aleksandr Mikhailovich Lyapunov (1857-1918) about whom we
will hear again in the next section. That tradition has continued in the latter part of the
twentieth century, as has the traditional role of the French in probability theory.
Exercises 5.1
1. (Mathematica) The Chevalier de Méré also posed the following problem to Pascal, which
he thought implied a contradiction of basic arithmetic. Pascal expeditiously proved him
wrong. How many rolls of two dice are necessary such that the probability of achieving at
least one double six is greater than 1/2?
2. Prove Chebyshev's inequality, formula (3), for a continuous random variable X with p.d.f.
f, mean μ, and standard deviation σ. (Hint: Write a formula for the expectation of (X − μ)²/(σ² k²), and split the integral into the complementary regions of x values for which |x − μ| ≥ k σ and |x − μ| < k σ.)
3. Use Chebyshev's inequality to prove the weak law of large numbers.
4. (Mathematica) To what do you think the sample variance S 2 converges strongly? Check
your hypothesis by writing a command similar to SimMeanSequence and running it for
several underlying probability distributions.
5. (Mathematica) Is the Chebyshev bound in formula (3) a tight bound? Investigate by
computing the exact value of P[|X − μ| ≥ 2 σ] and comparing to the Chebyshev bound when:
(a) X ~ Γ(3, 4); (b) X ~ uniform(0, 1); (c) X ~ N(2, 4).
6. Consider a sample space of an infinite sequence of coin flips in which we record
Xi 1 2i on the ith trial according to whether the ith trial is a head or tail, respectively.
Show that the sequence Xn converges strongly to 0. Show also that it converges weakly to
0.
7. Suppose that a random variable X has a finite mean μ and we have an a priori estimate of its standard deviation σ = 1.6. How large a random sample is sufficient to ensure that the sample mean X̄ will estimate μ to within a tolerance of .1 with 90% probability?
then we disbelieve that Μ. We saw these ideas again in Section 4.6 on the t-distribution. But
in each case we made the apparently restrictive assumption that the population being
sampled from is normally distributed. There is no theorem more useful in statistical data
analysis than the Central Limit Theorem, because it implies that approximate statistical
inference of this kind on the mean of a distribution can be done without assuming that the
underlying population is normal, as long as the sample size is large enough. So, it is our
purpose to explore this theorem in this section, and we will also take the opportunity to give
some of the history of the normal distribution, the Central Limit Theorem itself, and the
beginnings of the subject of statistics.
It was Abraham DeMoivre (1667-1754) who, in a 1733 paper, first called attention to
an analytical result on limits which enables the approximation of binomial probabilities C(n, k) p^k (1 − p)^(n−k) for large n by an expression which turns out to be a normal integral.
Although DeMoivre was born at the time when the great French probability tradition was
developing, the fact that his family was Protestant made life in France uncomfortable, and
even led to DeMoivre's imprisonment for two years. After he was freed in 1688 he spent the
rest of his life in England where Newton's new Calculus was being worked out.
While DeMoivre had confined his work mostly to the case p 1 2, Pierre de
Laplace (1749-1827), motivated by an effort to statistically estimate p given the observed
proportion of successes in a very large sample, generalized DeMoivre's work to arbitrary
success probabilities in 1774. He essentially showed that (Katz, p. 550)

lim_{n→∞} P[|p̂ − p| ≤ ε] = lim_{n→∞} (2/√(2π)) ∫_0^(ε/σ) e^(−u²/2) du

where p̂ is the sample proportion of successes and σ is the standard deviation of p̂, namely √(p(1 − p)/n). Leonhard Euler had already shown that ∫_0^∞ e^(−u²/2) du = √(π/2); so since σ is
approaching 0, the limit above is 1. Notice that with this result Laplace has a novel proof of
the Weak Law of Large Numbers by approximation, and is just inches from defining a
normal density function. Laplace published in 1812 the first of several editions of his
seminal work on probability, Theorie Analytique des Probabilites, which contained many
results he had derived over a period of years. In this book he proved the addition and
multiplication rules of probability, presented known limit theorems to date, and began to
apply probability to statistical questions in the social sciences. But Laplace was actually
more interested in celestial mechanics and the analysis of measurement errors in astronomi-
cal data than in social statistics. Because of the rise of calculus, celestial mechanics was a
hot topic in European intellectual circles at that time, and the contemporary advances in
probability combined with calculus formed the breeding ground for mathematical statistics.
Carl Friedrich Gauss (1777-1855), who rose from humble family beginnings in
Brunswick, Germany to become one of the greatest and most versatile mathematicians of all
time, took up Laplace's study of measurement errors in 1809. Gauss was interested in a
function f(x) which gave the probability (we would say probability density instead) of an error of magnitude x in measuring a quantity. Laplace had already given reasonable assumptions about such a function: it should be symmetric about 0 and the integral of f(x) over the whole line should be 1; in particular, f(x) should approach 0 as x approaches both +∞ and −∞. But it was Gauss who arrived at the formula for a normal density with mean
zero, by considering the maximization of the joint probability density function of several
errors, and making the further supposition that this maximum should occur at the sample
mean error. He was able to derive a differential equation for f in this way, and to show a
connection to the statistical problem of least squares curve fitting. Gauss' tremendous
contributions have led to the common practice of referring to the normal distribution as the
Gaussian distribution.
Let us return to the Central Limit Theorem, the simplest statement of which is next.
Theorem 1. (Central Limit Theorem) Let X1 , X2 , X3 , ... be a sequence of independent and
identically distributed random variables with mean μ and variance σ². Let X̄n be the mean of the first n X's. Then the cumulative distribution function Fn of the random variable

Zn = (X̄n − μ)/(σ/√n)    (1)
Figure 5.3 - Empirical and limiting distribution of X̄, n = 25, population distribution Γ(2, 1)
Thus, the Central Limit Theorem allows us to say (vaguely) that the distribution of X̄n is approximately N(μ, σ²/n) for "large enough" n, despite the fact that the sample members X1, X2, ..., Xn may not themselves be normal. How large is "large enough" is a very good question. In Figure 5.3 for example is a scaled histogram of 500 simulated sample means from samples of size n = 25 from a Γ(2, 1) distribution, with a normal density of mean μ = 2·1 = 2 and variance σ²/n = 2·1²/25 = 2/25 superimposed. (We used the
command SimulateSampleMeans from the last section; you should open up the closed cell
above the figure to see the code.) The fit of the empirical distribution of sample means to the
appropriate normal density is quite good, even for n as small as 25. You should try running
this experiment with the gamma distribution again several times (initializing the random seed
to a new value), to see whether this is a consistent result. Also, do the activity below.
Activity 1 Rerun the commands that produced Figure 5.3 using the uniform(0, 2) distribution instead. Experiment with sample sizes 25, 20, and 15. Does the histogram
begin to depart drastically from the normal density as the sample size decreases?
Your experimental runs probably indicated a good fit of the empirical distribution of X̄ to normality even for the smallest sample size of 15 for the uniform distribution. Experience has shown that n = 25 is fairly safe for most population distributions and, for some of the more symmetric population distributions, even fewer samples are necessary. This is
because there is a general result (Loève (p. 300)) that gives a bound on the difference between the actual c.d.f. Fn of Zn in (1) and the standard normal c.d.f. G:

|Fn(x) − G(x)| ≤ (c/n^(1/2)) E|X|³/σ³   for all x,    (2)
where c is a constant. You can see that the maximum absolute difference decreases as n
grows, as you would expect, but it increases as the third absolute moment E|X|³ of the distribution being sampled from increases. Actually, the ratio E|X|³/σ³ can be used to measure the lack of symmetry of a distribution, called its skewness. Thus, we have the qualitative result that the more symmetric the underlying distribution is, the better the fit of the distribution of X̄ to normality for fixed n; alternatively, the smaller n can be taken for an equally good fit.
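The ratio appearing in the bound can be computed directly for a specific distribution; here is a minimal sketch for the exponential(1) distribution (our own choice of example), taking the third absolute moment about the mean:

dist = ExponentialDistribution[1];
mu = Mean[dist]; sigma = StandardDeviation[dist];
NIntegrate[Abs[x - mu]^3 PDF[dist, x], {x, 0, Infinity}]/sigma^3
(* roughly 2.41 for this rather skewed distribution *)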
Activity 2 If one distribution has a skewness that is half of the skewness of a second
distribution, what is the relationship between the sample sizes required to have equal
upper bounds on the absolute difference of c.d.f.'s in formula (2)?
converges to the standard normal c.d.f. at every point x. (The latter is actually a subtle
theorem, whose proof we will omit.)
We can write the m.g.f. of Zn as:
E[e^(t Zn)] = E[e^(t (X̄n − μ)/(σ/√n))]

           = E[e^((t/(σ √n)) Σ_{i=1}^n (Xi − μ))]

           = E[Π_{i=1}^n e^((t/(σ √n))(Xi − μ))]    (3)

           = Π_{i=1}^n E[e^((t/(σ √n))(Xi − μ))]

Let M1(t) be the m.g.f. of every Xi − μ. Note that M1(0) = E[e^0] = 1, M1′(0) = E[Xi − μ] = 0, and M1″(0) = E[(Xi − μ)²] = Var(Xi) = σ². Expanding M1(t) in a second-order Taylor series about 0 we have, for some ξ ∈ (0, t),

M1(t) = M1(0) + M1′(0)(t − 0) + (1/2) M1″(0)(t − 0)² + (1/6) M1‴(ξ)(t − 0)³

      = 1 + (1/2) σ² t² + (1/6) M1‴(ξ) t³

Substituting t/(σ √n) for t and using (3), we therefore have

E[e^(t Zn)] = (M1(t/(σ √n)))^n = (1 + t²/(2 n) + (1/6) M1‴(ξ)(t/(σ √n))³)^n

If the third derivative of M1 is a well-behaved function near zero, then the third term in parentheses goes swiftly to zero as n → ∞. The limit of the exponential expression is therefore the same as that of (1 + t²/(2 n))^n, which from calculus is well known to be e^(t²/2). Thus
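The calculus fact just quoted can be confirmed symbolically in Mathematica; a one-line sketch:

Limit[(1 + t^2/(2 n))^n, n -> Infinity]

should return E^(t^2/2), the m.g.f. of the standard normal distribution.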
random variables are possible, if the restrictions of identical distributions, independence, and
finiteness of the variance are removed.
The next example shows how our form of the Central Limit Theorem covers the
special cases studied by DeMoivre and Laplace involving approximation of binomial
probabilities.
Example 1 Suppose that we are conducting a poll to find out the proportion of the voting
citizenry in a state that is in support of a proposition legalizing gambling. If we decide to
sample 500 voters at random and 315 are in support, give an approximate 95% confidence
interval for that proportion. How many voters should be sampled if we want to estimate the
proportion to within 2% with 95% confidence?
To form a mathematical model for the problem, we suppose that there is an unknown
proportion p of the voters who support the proposition. We sample n 500 voters in
sequence and with replacement, thereby forming 500 independent Bernoulli trials with
success probability p. We can denote their responses by X1 , X2 , ... , X500 where each Xi = 1
or 0, respectively, according to whether the voter supports the proposition or not. Then the
sample proportion p̂ who are in favor is the same as the sample mean X̄ of the random variables Xi. Therefore, the Central Limit Theorem applies to p̂. By properties of the Bernoulli distribution, the common mean of the Xi's is μ = p and the variance is σ² = p(1 − p). Thus, the appropriate mean and standard deviation with which to standardize p̂ in the Central Limit Theorem are p and √(p(1 − p)/n), respectively. We therefore have that

Zn = (p̂ − p)/√(p(1 − p)/n) ≈ N(0, 1)    (4)
1.95996
Then,
.95 ≈ P[−z ≤ Zn ≤ z] = P[−z ≤ (p̂ − p)/√(p(1 − p)/n) ≤ z]

     = P[p̂ − z √(p(1 − p)/n) ≤ p ≤ p̂ + z √(p(1 − p)/n)]

Hence to give an interval estimate of p we can start with the point estimate p̂ and then add and subtract the margin of error term z √(p(1 − p)/n). The only trouble with this is that the margin of error depends on the unknown p. But since our probability is only approximate anyway, we don't lose much by approximating the unknown margin of error as well, by replacing p by the sample proportion p̂. We let Mathematica finish the computations for the given numerical data.
p = 315/500
margin = z Sqrt[p (1 - p)/500]
{p - margin, p + margin}

63/100
0.0423189
{0.587681, 0.672319}
So the sample proportion is 63% and we are 95% sure that the true proportion of all voters
in support of the proposition is between about 59% and 67%.
The second question is important to the design of the poll. Our margin of error in the
first part of the question was about 4%; what should we have done to cut it to 2%? Leaving
n general for a moment, we want to satisfy

z √(p(1 − p)/n) ≤ .02

We know that the quantile z is still 1.96 because the confidence level is the same as before, but prior to sampling we again do not know p, hence we cannot solve for n. There are two things that can be done: we can produce a pilot sample to estimate p by p̂, or we can observe that the largest value that p(1 − p) can take on is 1/4, when p = 1/2. (Do the simple calculus to verify this.) Then z √(p(1 − p)/n) ≤ z √(1/(4 n)) and, as long as the right side is less than or equal to .02, then the left side will be as well. Thus,
z √(1/(4 n)) ≤ .02, that is, √n ≥ z/(2 (.02))

Squaring both sides of the last inequality gives the following threshold value for the sample size n:

(z/(2 (.02)))^2
2400.91
We would need to poll 2401 people in order to achieve the .02 margin with 95% confidence.
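The same calculation can be packaged as a small function of the desired margin of error and confidence level. This is a minimal sketch of my own (the function name SampleSizeForMargin is not from the text):

SampleSizeForMargin[margin_, conf_] :=
  Module[{z = Quantile[NormalDistribution[0, 1], 1 - (1 - conf)/2]},
   (* worst case p(1 - p) = 1/4 gives n >= (z/(2 margin))^2 *)
   Ceiling[(z/(2 margin))^2]]

For margin .02 and confidence .95 this returns 2401, agreeing with the computation above.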
Exercises 5.2
1. Use the Central Limit Theorem for means, Theorem 1, to formulate a similar limit
theorem for the partial sums S_n = Σ_{i=1}^n X_i.
2. The differential equation that Gauss arrived at for his density function of observed errors
is

f ′(x) / f(x) = k x

3. The theorem of DeMoivre and Laplace states that if X has the binomial(n, p) distribution,
then for large n the standardized variable (X − Μ)/Σ is approximately N(0, 1),
where Μ = n p and Σ² = n p(1 − p). Explain how this result follows as a special case of the
Central Limit Theorem.
4. The Central Limit Theorem has been extended to the case where X1 , X2 , X3 , ... are
independent but not necessarily identically distributed. Write what you think the theorem
statement would be in this case.
5. (Mathematica) The Central Limit Theorem allows the distribution being sampled from to
be discrete as well as continuous. Compare histograms of 300 sample means of samples of
sizes 20, 25, and 30 to appropriate normal densities for each of these two distributions: (a)
Poisson(3); (b) geometric(.5).
6. (Mathematica) Suppose that your probability of winning a game is .4. Use the Central
Limit Theorem to approximate the probability that you win at least 150 out of 400 games.
7. (Mathematica) Do you find significant statistical evidence against the hypothesis that the
mean of a population of SAT scores is 540, if the sample mean among 100 randomly
selected scores is 550 and the sample standard deviation is 40?
Sample Problems from Actuarial Exam P
8. An insurance company issues 1250 vision care insurance policies. The number of claims
filed by a policyholder under a vision care insurance policy during one year is a Poisson
random variable with mean 2. Assume the numbers of claims filed by distinct policyholders
are independent of one another. What is the approximate probability that there is a total of
between 2450 and 2600 claims during a one-year period?
9. In an analysis of healthcare data, ages have been rounded to the nearest multiple of 5
years. The difference between the true age and the rounded age is assumed to be uniformly
distributed on the interval from -2.5 years to 2.5 years. The healthcare data are based on a
random sample of 48 people. What is the approximate probability that the mean of the
rounded ages is within 0.25 years of the mean of the true ages?
CHAPTER 6
STOCHASTIC PROCESSES AND
APPLICATIONS
This chapter is meant to introduce you to a few topics in applied probability that have been
the focus of much effort in the second half of the 20th century, and which will continue to
flourish in the 21st century. To do this we will take advantage of many of the foundational
results of probability from Chapters 1–5.
Probability theory has ridden the wave of the technological and telecommunications
revolution. The analysis of modern systems for production, inventory, distribution, comput-
ing, and communication involves studying the effect of random inputs on complex opera-
tions, predicting their behavior, and improving the operations. Many of these applications
involve modeling and predicting random phenomena through time.
A stochastic process is a family of random variables {X_t}, t ∈ T, thought of as observations
of a random phenomenon through time. The random variable X_t is called the state of
the process at time t. The set of times T is typically the non-negative integers {0, 1, 2, ...} or
the non-negative real numbers [0, ∞). There are two points of view that can be taken
on stochastic processes. We might be interested in the probability distribution of the state of
the process at each particular time t, which emphasizes the role of the process as a collection
of random variables. Or we might be interested in properties of the path of the process, that
is, the function t ↦ X_t(ω) for fixed outcome ω and varying time. Is the process continuous,
does it reach a limit as t → ∞, what proportion of the time does it visit each state, etc.
You may also recall the random walk that we simulated in Section 1.3, which is a special
kind of Markov chain.
Activity 1 Try to think of more examples of time dependent random processes in the
world similar to 1–5 above.
A precise definition of a Markov chain is below. Note that the random states of the
system are modeled as random variables, and we impose an extra condition on the condi-
tional distributions that makes future states independent of the past, given the present.
Definition 1. A sequence X₀, X₁, X₂, ... of random variables is said to be a Markov chain if
for every n ≥ 0,

P(X_{n+1} = j | X₀ = i₀, X₁ = i₁, ..., X_n = i_n) = P(X_{n+1} = j | X_n = i_n)    (1)

In other words, at each time n, X_{n+1} is conditionally independent of X₀, X₁, ..., X_{n−1} given
X_n.
This conditional independence condition (1) makes the prediction of future values of
the chain tractable, and yet it is not unreasonably restrictive. Consider the chain of store
inventory levels in the third example problem above. The next inventory level will depend
on the current level, and on random sales activity during the current time period, not on the
past sequence of levels. There are many systems in which past behavior has no bearing on
the future trajectory, given the present state, and for those systems, Markov chain modeling
is appropriate.
Many interesting results have been derived about Markov chains. But for our
purposes we will be content to introduce you to three problems: (1) finding the probability
law of Xn ; (2) finding the limit as n of this probability law, in order to predict the long
term average behavior of the chain; and (3) for chains in which there are states that are able
to absorb the chain forever, the likelihood that each such absorbing state is the final destina-
tion of the chain.
Figure 6.1 - The transition diagram of a Markov chain
where each row refers to a possible state "now" (at time n) and each column corresponds to a
possible state "later" (at time n 1). For example, the row 3-column 2 entry is
PXn1 2 Xn 3 1 4
Furthermore, the discrete probability distribution of the initial state X0 , if known, can
be written as a row vector. For instance,
p₀ = (1/2, 1/2, 0, 0)

is the initial distribution for which X₀ is equally likely to be in states 1 and 2. Special row
vectors such as p₀ = (0, 1, 0, 0) apply when the initial state is certain; in this case it is state 2.
Activity 2 Try multiplying each row vector p0 above on the right by the transition
matrix T. Make note of not only the final answers for the entries of p0 T, but also what
they are composed of. Before reading on, try to guess at the meaning of the entries of
p0 T.
The Law of Total Probability is the tool to prove the main theorem about the
probability distribution of Xn . The nth power of the transition matrix gives the conditional
distribution of Xn given X0 , and you can pre-multiply by the initial distribution p0 to get the
unconditional distribution of Xn .
Theorem 1. (a) T^n(i, j) = P(X_n = j | X₀ = i), where T^n denotes the nth power of the
transition matrix.
(b) (p₀ T^n)(j) = P(X_n = j), where p₀ denotes the distribution of X₀ in vector form.
Proof. (a) We work by induction on the power n of the transition matrix. When n = 1 we
have for all i and j,

T¹(i, j) = P(X₁ = j | X₀ = i)

by the construction of the transition matrix. This anchors the induction. For the inductive
step we assume for a particular n > 1 and all states i and j that
T^{n−1}(i, j) = P(X_{n−1} = j | X₀ = i). By the Law of Total Probability,

P(X_n = j | X₀ = i) = Σ_k P(X_{n−1} = k | X₀ = i) P(X_n = j | X_{n−1} = k, X₀ = i)

where the sum is taken over all states k. The first factor in the sum is T^{n−1}(i, k) by the
inductive hypothesis, and the second is T(k, j) by the Markov property (1). Thus,

P(X_n = j | X₀ = i) = Σ_k T^{n−1}(i, k) T(k, j)

By the definition of matrix multiplication, the sum on the right is now the (i, j) component
of the product T^{n−1}·T = T^n, which finishes the proof of part (a).

(b) By part (a) and the Law of Total Probability,

(p₀ T^n)(j) = Σ_k p₀(k) T^n(k, j)
           = Σ_k P(X₀ = k) P(X_n = j | X₀ = k)
           = P(X_n = j)
Activity 3 For the Markov chain whose transition matrix is as in (2), find
P(X₂ = 3 | X₀ = 1). Compare your answer to the transition diagram in Figure 1 to see
that it is reasonable. If p₀ = (1/4, 1/4, 1/4, 1/4), find the probability mass function
of X₂.
Example 1 A drilling machine can be in any of five different states of alignment, labeled 1
for the best, 2 for the next best, etc., down to the worst state 5. From one week to the next, it
either stays in its current state with probability .95, or moves to the next lower state with
probability .05. At state 5 it is certain to remain there the next week. Let us compute the
probability distribution of the state of the machine at several times n, given that it started in
state 1. What is the smallest n (that is, the first time) such that it is at least 2% likely that the
machine has reached its worst state at time n?
By the assumptions of the problem, the chain of machine alignment states
X0 , X1 , X2 , ... has the transition matrix
MatrixForm[T]
0.95 0.05 0 0 0
0 0.95 0.05 0 0
0 0 0.95 0.05 0
0 0 0 0.95 0.05
0 0 0 0 1
By Theorem 1, PXn j X0 1 is row 1 of the transition matrix T raised to the nth power.
The conditional distribution of X1 given X0 1 is of course the first row .95, .05, 0, 0, 0
of the transition matrix itself. We compute these first rows for powers n 2, 3, and 4
below, using Mathematica's MatrixPower[mat, n] function. (You could also use the period
for matrix multiplication.)
MatrixPower[T, 2][[1]]
MatrixPower[T, 3][[1]]
MatrixPower[T, 4][[1]]
For n = 4, we notice that the likelihood is only 6.25 × 10^(−6) that the worst state 5 has been
reached yet. The following command allows us to inspect the (1, 5) element of some of the
powers T^n.
This output shows that it will be 22 weeks until it is at least .02 likely that the machine will
be in state 5, given that it started in state 1.
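The 22-week answer can be reproduced by scanning the (1, 5) entries of successive powers of T. The loop below is my own sketch, not necessarily the command used in the text; it restates the transition matrix from the example so that it is self-contained:

T = {{0.95, 0.05, 0, 0, 0},
     {0, 0.95, 0.05, 0, 0},
     {0, 0, 0.95, 0.05, 0},
     {0, 0, 0, 0.95, 0.05},
     {0, 0, 0, 0, 1}};
n = 1;
While[MatrixPower[T, n][[1, 5]] < 0.02, n++];  (* first n with P[Xn = 5 | X0 = 1] >= .02 *)
{n, MatrixPower[T, n][[1, 5]]}

This returns n = 22, with a probability of about .022, consistent with the conclusion above.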
The jth entry of this limiting vector Π has the interpretation of the long-run proportion of
time the chain spends in state j. We will not prove it here, but a sufficient condition under
which this Π exists is the regularity of the chain.
Definition 2. A Markov chain is regular if there is a power T n of the transition matrix that
has all non-zero entries.
So, regularity of the chain means that there is a common time n such that all states
can reach all other states by a path of length n with positive probability. The two chains
with diagrams in Figure 2(a) and 2(b) are not regular. To simplify the diagrams we have
suppressed the transition probabilities. The arrows display the transitions that have positive
probability. Also in (b) we have suppressed a self-loop on state 4 indicating state 4 must
return to itself. In (a), state 1 can only reach state 3 by a path of length 2, or 5, or 8, etc.,
while state 1 can only reach itself in a path of length 3, or 6, or 9, etc. There can be no
common value of n so that both 1 reaches 3 and 1 reaches itself in n steps. In (b), state 4
does not reach any of the other states at all, so the chain cannot be regular.
Figure 6.2 - Two non-regular Markov chains
The Markov chain whose transition diagram is in Figure 3 however is regular. You can
check that there is a path of length 3 of positive probability from every state to every other
state. (For example, paths 1,3,2,1; 1,3,3,2; and 1,3,3,3 are paths of length 3 from state 1 to
all of the states.)
Figure 6.3 - The transition diagram of a regular three-state Markov chain
Activity 4 Use Mathematica to check that the Markov chain with transition matrix in
(2) is regular.
Let us look at a few of the powers of the transition matrix of the chain corresponding
to Figure 3.
0 0.5 0.5
1 0 0
0 0.5 0.5
{MatrixPower[T, 2] // MatrixForm,
 MatrixPower[T, 3] // MatrixForm,
 MatrixPower[T, 4] // MatrixForm,
 MatrixPower[T, 5] // MatrixForm,
 MatrixPower[T, 6] // MatrixForm}
Rows 1 and 3 of T n actually are identical for each n (look at the transition diagram to see
why this is true), and row 2 is different from those rows. However all of the rows seem to be
in closer and closer agreement as the power grows. The matrix T n may be reaching a limit as
n → ∞, which has the form of a matrix of identical rows. This would suggest that
lim_{n→∞} T^n(i, j) = lim_{n→∞} P(X_n = j | X₀ = i) = Π(j)    (4)
independently of the initial state i. Let us find out what this limiting distribution must be, if
it does exist.
If Π is to be the limit in (4), then we must have

Π(j) = lim_{n→∞} T^{n+1}(i, j)
     = lim_{n→∞} Σ_k T^n(i, k) T(k, j)
     = Σ_k [lim_{n→∞} T^n(i, k)] T(k, j)    (5)
     = Σ_k Π(k) T(k, j)

By computation (5), the limiting distribution Π satisfies the system of matrix equations:

Π = Π T  and  Π · 1 = 1    (6)

Here 1 refers to a column vector of 1's of the same length as Π, so that the equation Π · 1 = 1
means that the sum of the entries of Π equals 1. We require this in order for Π to be a valid
discrete probability mass function.
We have motivated the following theorem.
Theorem 2. If a Markov chain X₀, X₁, X₂, ... is regular, then the limiting distribution Π(j)
in (4) exists and satisfies the system of equations Π = Π T, Π · 1 = 1.
In the next example we see how Theorem 2 can be used in the long-run prediction of
business revenue flow.
MatrixForm[T]
Example 2 A plumbing contractor services jobs of six types according to their time
demands, respectively 1 to 6 hours. The fee for the service is $40 per hour. Initially the
contractor believes that the durations of successive jobs are independent of one another, and
they occur with the following probabilities:
1 hr : .56; 2 hrs : .23; 3 hrs : .10; 4 hrs : .05; 5 hrs : .03; 6 hrs : .03
However, detailed data analysis suggests that the sequence of job times may instead be a
Markov chain, with slightly different column entries (next job duration probabilities) for
each row (current job duration) as in the matrix T above. By how much does the expected
long-run revenue per job change if the job times do form a Markov chain?
First, in the default case where job times are independent and identically distributed,
the average revenue per job is just Μ = E[40 X], where X is a discrete random variable
modeling the duration of the job, with the distribution given above. By the Strong Law of
Large Numbers, with probability 1 the actual average revenue per job X̄_n will converge to Μ.
We compute:
Μ = 40 (1 (.56) + 2 (.23) + 3 (.10) + 4 (.05) + 5 (.03) + 6 (.03))

74.
Now from Theorem 2 we can compute the long-run distribution of the Markov chain,
then make a similar computation of the long-run expected revenue per job. By Exercise 6,
the system of equations Π = Π T is a dependent system; therefore, we can afford to discard
one equation in it, and replace it with the condition that the sum of the entries of Π is 1. We
will discard the sixth equation. By setting Π = (x1, x2, x3, x4, x5, x6) and multiplying out Π T
we get the following.
system = {x1 == .52 x1 + .59 x2 + .5 x3 + .55 x4 + .6 x5 + .62 x6,
   x2 == .25 x1 + .22 x2 + .31 x3 + .24 x4 + .25 x5 + .28 x6,
   x3 == .11 x1 + .09 x2 + .12 x3 + .09 x4 + .08 x5 + .05 x6,
   x4 == .06 x1 + .03 x2 + .03 x3 + .07 x4 + .03 x5 + .03 x6,
   x5 == .03 x1 + .04 x2 + .02 x3 + .03 x4 + .02 x5 + .01 x6,
   x1 + x2 + x3 + x4 + x5 + x6 == 1};
Solve[system, {x1, x2, x3, x4, x5, x6}]
Hence, the long-run average revenue per job under the Markov chain model is
74.3679
So the more sophisticated Markov chain model did not make a great change in the revenue
flow prediction; it predicts only about an extra 37 cents per job. This is perhaps to be
expected, because the differences between the row probability distributions and the distribu-
tion assumed for X in the i.i.d. case are small.
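For completeness, here is one way the final number might be obtained from the Solve output; this is a sketch of my own, assuming the list system defined above, and the names sol and pilimit are mine. The last line dots the limiting distribution with the revenue vector 40·(1, 2, ..., 6):

sol = First[Solve[system, {x1, x2, x3, x4, x5, x6}]];
pilimit = {x1, x2, x3, x4, x5, x6} /. sol;    (* limiting distribution Π *)
pilimit . (40 Range[6])                       (* long-run expected revenue per job *)

According to the text, this long-run expected revenue evaluates to about 74.3679.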
Absorbing Chains
The last problem type that we will study involves cases where the Markov chain has one or
more states j such that T(j, j) = 1, so that once the chain enters that state it can never leave.
Such a state is called absorbing. The specific question that we would like to answer is this: if
a Markov chain has more than one absorbing state, how likely is it for the chain to be
absorbed by each such state, given that the chain begins in a non-absorbing state? There are
two types of states that we will consider here (not the only two types, by the way).
Definition 3. A state j in a Markov chain is called transient if, once the chain is there, there
is a positive probability that it will never be visited again. A state j is called absorbing if,
once the chain is there, with probability 1 the chain stays there forever.
Figure 6.4 - A simple absorbing Markov chain
Activity 5 Are any of the states 1, 2, or 3 in Figure 3 absorbing? Are any of them
transient?
The simple structure of the chain in Figure 4 makes it especially simple to analyze.
Starting from state 2, with certainty state 3 will absorb the chain. Starting from state 1,
either state 4 will immediately absorb the chain, which happens with probability 1/3, or
within two steps state 3 will absorb the chain, which will happen with total probability
1/3 + 1/3 = 2/3. Since there were just two absorbing states, this probability is complementary
to the state 4 absorption probability. Let V_j be the time (after time 0) of first visit to j;
then if j is absorbing, the event {V_j < ∞} occurs if and only if the chain is absorbed at j. If we
denote

f_{ij} = P(V_j < ∞ | X₀ = i)    (7)

for transient states i and absorbing states j, then for the chain in Figure 4 we have found that

f₂₃ = 1;  f₁₃ = 2/3;  f₁₄ = 1/3
It is relatively easy to use recursive thinking and the Law of Total Probability to
develop a system of linear equations using which we can compute the set of values fi j for i
transient and j absorbing. Activity 5 was attempting to get you to think about a third kind of
state, the recurrent state, which is a non-absorbing state that is certain to be visited again
and again by the chain. The states in a regular chain are recurrent. We consider chains that
have no such states in our next theorem.
Theorem 3. Let TR be the set of transient states of the Markov chain Xn whose transition
matrix is T. Assume that the state space of the chain consists only of transient states and
absorbing states. Let i ∈ TR, and let j be an absorbing state of the chain. Then

f_{ij} = Σ_{k∈TR} T(i, k) f_{kj} + T(i, j)    (8)
Proof. We will condition and uncondition on X1 . Let ABS denote the set of absorbing
states of the chain. If X0 i, then at time 1 the chain could move to another state in TR, or to
a state in ABS. If that state to which it moves in ABS is state j then, of course, the chain is
absorbed there, and if the state is any other absorbing state, it is impossible to be absorbed at
j. Thus, we have:
fi j PV j X0 i
kE PV j X1 k, X0 i PX1 k X0 i
kTR PV j X1 k, X0 i Ti, k kABS PV j X1 k, X0 i Ti, k
kTR fk j Ti, k 1 Ti, j .
The Markov property justifies the fourth line. This proves formula (8).
Formula (8) sets up a method for us to solve for the absorption probabilities f_{ij}. For
each transient state i there is a linear equation in which all of the absorption probabilities f_{kj}
are involved. We simply identify the coefficients and solve for the unknowns. It is particu-
larly easy for the chain in Figure 4; there we have two transient states 1 and 2, and for
absorbing state 3 we get the equations:
f₁₃ = T(1, 1) f₁₃ + T(1, 2) f₂₃ + T(1, 3) = 0 + (1/3) f₂₃ + 1/3
f₂₃ = T(2, 1) f₁₃ + T(2, 2) f₂₃ + T(2, 3) = 0 + 0 + 1 = 1
Substituting the value f₂₃ = 1 into the top equation gives f₁₃ = 2/3. (Try to write a similar
system for absorbing state 4.)
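These small systems can also be handed to Solve. The sketch below is my own, using the transition probabilities of the chain in Figure 4 as described above (from state 1 the chain moves to 2, 3, or 4 with probability 1/3 each, and from state 2 it moves to 3 with certainty):

Solve[{f13 == (1/3) f23 + 1/3, f23 == 1}, {f13, f23}]    (* absorbing state 3 *)
Solve[{f14 == (1/3) f24 + 1/3, f24 == 0}, {f14, f24}]    (* absorbing state 4 *)

The first command gives f13 = 2/3 and the second gives f14 = 1/3, consistent with the values found above.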
We close with a less obvious example.
Example 3 Consider the six state random walk in Figure 5, in which states 0 and 5 are
absorbing. Find the probability, starting at each of states 1 through 4 of being absorbed at
state 5, and the probability of being absorbed at state 0.
Figure 6.5 - A six-state random walk in which states 0 and 5 are absorbing; each interior state steps right with probability p and left with probability 1 − p
Formula (8) can be applied at each state to give the system of four equations below in
the unknowns f₁₅, f₂₅, f₃₅, f₄₅:

f₁₅ = (1 − p) f₀₅ + p f₂₅ = p f₂₅
f₂₅ = (1 − p) f₁₅ + p f₃₅
f₃₅ = (1 − p) f₂₅ + p f₄₅
f₄₅ = (1 − p) f₃₅ + p f₅₅ = (1 − p) f₃₅ + p

(Compare the equations to the diagram and make sure that you understand the intuition.) In
the first equation we have used the fact that f₀₅ = 0 since state 0 is absorbing, and similarly
in the last equation we note that f₅₅ = 1 since state 5 is absorbing. We let Mathematica
solve this system:
Solve[{f15 == p f25,
  f25 == (1 - p) f15 + p f35, f35 == (1 - p) f25 + p f45,
  f45 == (1 - p) f35 + p}, {f15, f25, f35, f45}]
{{f45 → (p − 2 p² + 2 p³)/(1 − 3 p + 4 p² − 2 p³ + p⁴),
  f35 → (p² − p³ + p⁴)/(1 − 3 p + 4 p² − 2 p³ + p⁴),
  f15 → p⁴/(1 − 3 p + 4 p² − 2 p³ + p⁴),
  f25 → p³/(1 − 3 p + 4 p² − 2 p³ + p⁴)}}
f35[p_] := (p^2 - p^3 + p^4)/(1 - 3 p + 4 p^2 - 2 p^3 + p^4);
Plot[f35[p], {p, 0, 1},
 AxesLabel -> {p, None}, PlotStyle -> Black]
Figure 6.6 - Probability of absorption at state 5 starting at state 3
Activity 6 In Example 3, how large must p be so that the probability that the random
walk is absorbed at state 5 given that it started at state 3 is at least .8? Approximate to at
least two decimal places.
Exercises 6.1
1. (Mathematica) For the Markov chain whose transition diagram is as in Figure 1, compute
the distribution of X3 if X0 is equally likely to equal each of the four states. Find the limiting
distribution.
2. (Mathematica) A store keeps a small inventory of at most four units of a large appliance,
which sells rather slowly. Each week, one unit will be sold with probability .40, or no units
will be sold with probability .60. When the inventory empties out, the store is immediately
able to order four replacements, so that from the state of no units, one goes next week to four
units if none sell, or three if one unit sells. If the store starts with four units, find the
distribution of the number of units in inventory at time 2. If the Markov chain of inventory
levels is regular, find its limiting distribution.
3. (Mathematica) Build a command in Mathematica which takes an initial state X0 , a
transition matrix T of a Markov chain, and a final time n, and returns a simulated list of
states X0 , X1 , ... , Xn . Assume that states are numbered consecutively: 1, 2, 3, ... .
4. Argue that an i.i.d. sequence of discrete random variables X0 , X1 , X2 , ... is a special case
of a Markov chain. What special structure would the transition matrix of this chain have?
5. (Mathematica) A communications system has six channels. When messages attempt to
access the system when all channels are full, they are turned away. Otherwise, because
messages neither arrive nor depart simultaneously, if there are free channels the next change
either increases the number of channels in use by 1 (with probability .3) or decreases the
number of channels in use by 1 (with probability .7). Of course, when all channels are free,
the next change must be an increase and when all channels are busy the next change must be
a decrease. Find the transition matrix for a Markov chain that models this situation. If the
chain is regular, find its limiting distribution. If the chain is not regular, explain why it is
not.
6. Show that for a regular Markov chain with transition matrix T, the system Π = Π T has
infinitely many solutions. (Hint: Consider a constant times Π.)
7. (Mathematica) Consider the Markov chain in Figure 3. Find the limiting distribution, and
find n large enough that each entry of T n is within .01 of the corresponding entry of the
limiting distribution.
8. Let X_n be a general two-state Markov chain such that T(1, 1) = p and T(2, 2) = q. Find
the distribution of X₃ if X₀ is equally likely to be each state. Find the limiting distribution in
terms of p and q, and find, if possible, a combination of p and q values such that the limiting
distribution is (1/3, 2/3).
9. (Mathematica) A short-haul rental moving van can be borrowed from and returned to
three sites in a city. If it is borrowed from site 1, then it is returned to sites 1, 2, and 3 with
probabilities 2/3, 1/6, and 1/6, respectively. If it is borrowed from site 2, then it is returned
to sites 1, 2, and 3 with probabilities 1/2, 1/4, and 1/4, respectively. And if it is borrowed
from site 3, then it is returned to sites 1, 2, and 3 with equal probabilities. Given that the van
starts at site 3, with what probability does it come to each site after four rentals? What is the
long-run proportion of times that it is returned to each site?
10. A Markov chain has states 1, 2, 3, and 4, and the transition matrix below.

T = ( 1    0    0    0
      .3   0    .7   0
      0    .6   0    .4
      0    0    0    1 )

Set up and solve a system of equations to find the absorption probabilities f_{ij} for each
transient state i and absorbing state j. Explain where the
equations in the system come from.
11. Let Xn be a Markov chain with the transition matrix below. Classify all states as
transient, absorbing, or neither.
( 1/3  2/3  0    0    0    0
  1/2  1/2  0    0    0    0
  0    1/2  0    1/6  1/3  0
  0    0    0    1/3  1/3  0
  0    0    0    0    1    0
  0    0    0    0    0    1 )
12. Consider the general four state absorbing random walk below. Find all probabilities of
absorption to state 4 in terms of p. For what p is the absorption probability to state 4 from
state 2 increasing most rapidly?
Figure for Exercise 12 - a four-state absorbing random walk (each interior state steps right with probability p, left with probability 1 − p)
13. A symmetric random walk is one in which steps to the right and steps to the left have
equal probability. If a 5-state symmetric random walk with an absorbing state on each end is
such that states are numbered sequentially 0, 1, 2, 3, 4 show that the absorption probability
at state 4 is a linear function of the state.
14. In a manufacturing process, items go through four stages of assembly before potentially
being sent out for sale. At each stage, the item is inspected, and either goes on to the next
stage with probability .9, or is disposed of because of defects. Model the situation as a
Markov chain. Find the proportion of items that complete all stages successfully and are sent
out for sale rather than being disposed of.
15. For the drilling machine in Example 1, compute the p.m.f. of the time to absorption at
state 5 given that the machine starts in state 1: g(n) = P(T₅ = n | X₀ = 1), where T₅ is the first
time that the chain hits state 5.
Figure 6.7 - Sample path of a Poisson process
Activity 1 Give some examples of physical phenomena for which a Poisson process
model might be appropriate.
Following our usual convention for stochastic processes, we typically write N_t for the
random variable N(t, ω) for fixed t. So, a Poisson arrival counting process begins at state 0,
waits for an exponentially distributed length of time S₁, then at time T₁ = S₁ jumps to state 1,
where it waits another exponential length of time S₂ until at time T₂ = S₁ + S₂ it jumps to
state 2, etc. Such processes are often used in the field of queueing theory to model arrivals of
customers to a service facility. The memorylessness of the exponential distribution means
that at any instant of time, regardless of how long we may have been waiting, we will still
wait an exponential length of time for the next arrival to come.
Since the inter-jump times determine the process completely, one would suppose that
it is now possible to derive the result stated earlier, namely that the probability distribution
of Nt , the total number of arrivals by time t, is Poisson. In fact, the following interesting
interplay between the exponential, gamma, and Poisson distributions comes about.
Theorem 1. (a) The distribution of the nth jump time T_n is Γ(n, 1/Λ).
(b) The distribution of N_t = number of arrivals in [0, t] is Poisson(Λ t).

Proof. (a) Since the exp(Λ) distribution is also the Γ(1, 1/Λ) distribution, Theorem 2(b) of
Section 4.5 implies that

T_n = S₁ + S₂ + ⋯ + S_n ~ Γ(n, 1/Λ)
(b) It is easy to see that the event {N_t ≥ n} is the same as the event {T_n ≤ t}, since both of
these are true if and only if the nth arrival has come by time t. Then,

P(N_t = n) = P(N_t ≥ n) − P(N_t ≥ n + 1)
           = P(T_n ≤ t) − P(T_{n+1} ≤ t).
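This relationship between the gamma and Poisson distributions is easy to check numerically. The sketch below, which is my own (the values Λ = 2, t = 3.5, n = 4 are arbitrary choices), compares P(T_n ≤ t) − P(T_{n+1} ≤ t) with the Poisson(Λ t) probability at n; Mathematica's GammaDistribution[α, β] uses shape α and scale β, matching Γ(n, 1/Λ):

With[{Λ = 2, t = 3.5, n = 4},
 {CDF[GammaDistribution[n, 1/Λ], t] - CDF[GammaDistribution[n + 1, 1/Λ], t],
  PDF[PoissonDistribution[Λ t], n]}]

Both entries of the output agree, as the theorem asserts.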
Example 1 Suppose that users of a computer lab arrive according to a Poisson process with
rate Λ = 15/hr. Then the expected number of users in a 2-hour period is Λt = 15(2) = 30
(recall that the Poisson parameter is equal to the mean of the distribution). The probability
that there are 10 or fewer customers in the first hour is P[N₁ ≤ 10], i.e.:

N[Sum[Exp[-15] 15^n/n!, {n, 0, 10}]]

0.118464

The probability that the second arrival comes before time .1 (i.e., in the first 6 minutes) is
P[T₂ ≤ .1], which is

N[Integrate[15^2 x^(2 - 1) Exp[-15 x]/1!, {x, 0, .1}]]

0.442175
Activity 2 For the computer lab example above, what is the standard deviation of the
number of users in a 2-hour period? What is the probability that there are 5 or fewer
customers in the first half hour? What is the probability that the third arrival comes
somewhere between the first 10 and the first 20 minutes?
Example 2 Again in the context of the computer lab, let us find the smallest value of the
arrival rate Λ such that the probability of having at least 5 arrivals in the first hour is at least
.95. Measuring time in hours, we want the smallest Λ such that
P(N₁ ≥ 5) ≥ .95    ⟺    P(N₁ ≤ 4) ≤ .05

by complementation. Since N₁ has the Poisson(Λ · 1) distribution, the probability that there
are four or fewer arrivals is the sum of the following terms:

P4orfewer[Λ_] := Sum[Exp[-Λ] Λ^k/k!, {k, 0, 4}]
To get some idea of where the appropriate value of Λ lies, we graph the function for values
of Λ between 5 and 15.
Plot[{P4orfewer[Λ], .05},
 {Λ, 5, 15}, PlotStyle -> {Black, Gray}]
Figure 6.8 - Probability of 4 or fewer arrivals in first hour as a function of arrival rate
The smallest arrival rate satisfying the requirement turns out to be Λ ≈ 9.15352.
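One way to obtain this value numerically, not necessarily the command used in the text, is to apply FindRoot to the function defined above, starting from a point read off the graph:

FindRoot[P4orfewer[Λ] == .05, {Λ, 9}]   (* starting point 9 taken from Figure 6.8 *)

which returns Λ → 9.15352.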
The Poisson process satisfies a pair of other conditions that are both interesting and
useful in computations. We introduce these now.
Definition 2. (a) A stochastic process {X_t}_{t≥0} is called stationary if for all t ≥ 0, s ≥ 0, the
distribution of X_{t+s} − X_t does not depend on t;
(b) A stochastic process {X_t}_{t≥0} is said to have independent increments if for all
t ≥ 0, s ≥ 0, X_{t+s} − X_t is independent of X_r for all r ≤ t.

The stationarity property says that the distribution of the change in value of the process over
a time interval does not depend on when that time interval begins, only on its duration.
Applying this at t = 0, the distribution of X_{t+s} − X_t is the same as that of X_s − X₀. For the
Poisson process, since N₀ = 0, it follows that N_{t+s} − N_t ~ Poisson(Λ s) for all s, t. The
independent increments property says that changes in value of the process on disjoint time
intervals are independent.
Theorem 2. The Poisson process has stationary, independent increments.
Now let N = N^e + N^w be the total number of customers by time t. Then its distribution is, by
the law of total probability,

g(n) = P(N = n) = Σ_{i=0}^{n} P(N^e = i, N^w = n − i)
     = Σ_{i=0}^{n} [e^(−2 t) (2 t)^i / i!] · [e^(−1.5 t) (1.5 t)^(n−i) / (n − i)!]
     = (e^(−3.5 t) / n!) Σ_{i=0}^{n} C(n, i) (2 t)^i (1.5 t)^(n−i)
     = (e^(−3.5 t) / n!) (2 t + 1.5 t)^n
     = e^(−3.5 t) (3.5 t)^n / n!
Note that the fourth line of the derivation follows from the binomial theorem. This shows
that N, the total number of arrivals, has the Poisson(3.5t) distribution. We have an indica-
tion, though not a full proof, that the sum of two independent Poisson processes is also a
Poisson process, whose rate is the sum of the rates of the two components. See the next
activity for more evidence. We will assume this for the second part of the problem.
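As a quick numerical check of the superposition claim (the particular values t = 2 and n = 7 below are arbitrary choices of mine), the convolution sum can be compared directly with the Poisson(3.5 t) probability:

With[{t = 2., n = 7},
 {Sum[PDF[PoissonDistribution[2 t], i] PDF[PoissonDistribution[1.5 t], n - i], {i, 0, n}],
  PDF[PoissonDistribution[3.5 t], n]}]

The two entries of the output are equal, as the derivation above predicts.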
Write the total number of customers by time t as N_t. We want to compute, for all
m ≥ n,

P(N_{t+s} = m | N_t = n) = P(N_{t+s} = m, N_t = n) / P(N_t = n)
Now in order for both N_{t+s} = m and N_t = n to occur, there must be exactly n arrivals during
[0, t] and m − n arrivals during (t, t + s]. Since these time intervals are disjoint, by the
independent increments property of the Poisson process the probability of the intersection of
the two events must factor. Therefore,
P(N_{t+s} = m | N_t = n) = P(N_{t+s} − N_t = m − n, N_t = n) / P(N_t = n)
                        = P(N_{t+s} − N_t = m − n) P(N_t = n) / P(N_t = n)
                        = P(N_{t+s} − N_t = m − n)
                        = e^(−3.5 s) (3.5 s)^(m − n) / (m − n)!,    m ≥ n
The last line of the computation follows from the stationarity property of the Poisson
process.
Activity 3 In Example 3, starting at a particular time t, find the distribution of the time
until the next customer from either of the two directions (i.e., the minimum of the
waiting time for the next customer from the east and the next customer from the west).
Explain how your answer helps prove that the process that counts total customers is a
Poisson process.
Therefore,
Note that in the sum, n must be at least k, else not enough cars arrived. There are exactly k
cars still parked in the garage if and only if exactly k of the events {T_i + V_i > t} occur. By
Theorem 3, conditioned on the number of arrivals n, the probability that exactly k of these
events occur is the same as the probability that exactly k of the events {U_i + V_i > t} occur,
where U₁, U₂, …, U_n are independent uniform random variables on [0, t], independent also
of the V_i's. We thereby form a binomial experiment with n trials and success probability:

p = P(U + V > t)    (1)

where U ~ uniform(0, t), V ~ exp(Μ), and U and V are independent. Therefore,
P(exactly k cars at time t) = Σ_{n≥k} P(k successes | N_t = n) P(N_t = n)
  = Σ_{n≥k} C(n, k) p^k (1 − p)^(n−k) e^(−Λ t) (Λ t)^n / n!
  = [e^(−Λ t) (Λ t)^k p^k / k!] Σ_{n≥k} (1 − p)^(n−k) (Λ t)^(n−k) / (n − k)!
  = [e^(−Λ t) (Λ t)^k p^k / k!] Σ_{l≥0} ((1 − p) Λ t)^l / l!
  = [e^(−Λ t) (Λ t)^k p^k / k!] e^((1−p) Λ t)
  = e^(−Λ t p) (Λ t p)^k / k!    (2)
In the third line we remove factors having to do with k only; in the fourth line we change the
variable of summation to l = n − k; and in the fifth line we use the well known Taylor series
formula for the exponential function. The algebraic simplification in line 6 shows that the
distribution of the number of cars remaining in the lot at time t is Poisson(Λ t p), where p is
the probability in formula (1).
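To make formula (2) concrete, the success probability p in (1) can be computed by conditioning on the uniform arrival time: p = (1/t) ∫₀ᵗ e^(−Μ(t − u)) du = (1 − e^(−Μ t))/(Μ t). The short sketch below is my own; the rates Λ = 10 arrivals per hour, mean parking time 1/Μ = 1/2 hour, and t = 3 hours are illustrative choices, not values from the text:

pstay[Μ_, t_] := (1 - Exp[-Μ t])/(Μ t);   (* P[U + V > t], U ~ uniform(0, t), V ~ exp(Μ) *)
With[{Λ = 10, Μ = 2, t = 3},
 Table[PDF[PoissonDistribution[Λ t pstay[Μ, t]], k], {k, 0, 5}]]

This tabulates the probabilities of 0 through 5 cars still being parked at time t under the Poisson(Λ t p) law in (2).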
Exercises 6.2
1. (Mathematica) If N = N(t, ω) is a Poisson process with rate 6, find:
(a) P[N_{1.2} = 3]; (b) P[T₄ ≤ .25].
2. For a Poisson process of rate Λ = 1, compute P(T₃ − T₁ ≤ 2, T₆ − T₄ ≤ 2).

3. For a Poisson process of rate Λ = 2, compute P(N_{1.2} = 1, N_{2.7} = 3, N_{4.1} = 5).
4. (Mathematica) Assume that cars pull up to a drive-in window according to a Poisson
process of rate Λ per minute. What is the largest value of Λ such that the probability that the
second car arrival time is less than one minute is no more than .5?
5. (Mathematica) Implement a Mathematica command called SimulatePoissonProcess to
simulate and graph a Poisson process.
6. (Mathematica) Use the SimulatePoissonProcess command of Exercise 5 to simulate
around 40 paths of a Poisson process with rate 2 between times 0 and 4. Count how many of
your paths had exactly 0, 1, and 2 arrivals by time 1. How well do your simulations fit the
theoretical probabilities of 0, 1, and 2 arrivals?
7. Customer arrivals forming a Poisson process of rate 2/minute come to a single server. The
server takes a uniformly distributed amount of time between 20 and 30 seconds for each
customer to perform service. Compute the expected number of new arrivals during a service
time. (Hint: condition and uncondition on the duration of the service time.)
8. Show that for a Poisson process, if you take the fact that N_t ~ Poisson(Λ t) for all t as a
primitive assumption, then it follows that the time of the first arrival must have the exponen-
tial distribution.
9. Show that given N_t = 1, T₁ is uniformly distributed on [0, t].
10. Cars arrive to a T-intersection according to a Poisson process at an overall rate of 5/min.
Each car turns left with probability 1/4 and right with probability 3/4, independently of other
cars. If N_t^l is the number of cars that have turned left by time t, find the probability distribution
of N_t^l.
11. Suppose that arrivals of customers to a discount store form a Poisson process with rate 5
per hour. Each customer buys $1, $2, $3, or $4 with probabilities 1/8, 1/8, 1/4, and 1/2,
respectively. Successive customer purchases are independent. Let Xt be the total purchase
amount up through time t. Find E[X_t].
6.3 Queues
The word "queue" is a word used mostly in Great Britain and the former empire to refer to a
waiting line. We have all been in them at stores, banks, airports and the like. We the
customers come to a facility seeking some kind of service. Most often, our arrival times are
not able to be predicted with certainty beforehand. There is some number of servers present,
and we either join individual lines before them (as in grocery store checkout lanes) or we
form a single line waiting for the next available server (as in most post office counters and
amusement park rides). Some rule, most commonly first-in, first-out, is used to select the
next customer to be served. Servers take some amount of time, usually random, to serve a
customer. When a customer finishes, that customer may be directed to some other part of
the queueing system (as in a driver's license facility where there are several stations to visit),
or the customer may depart the system for good.
Predicting the behavior of human queueing systems is certainly an interesting
problem, to which it is easy to relate. However, it should be noted that it is the study of very
large, non-human queueing systems, especially those involved with computer and telecommu-
nications networks, which gave probability great impetus in the latter 20th century. Instead
of human customers and servers, we could just as easily consider computer jobs or
messages in a queue waiting for processing by a server or communication channel.
There is a wide variety of problems in queueing theory, generated by variations in
customer arrival and facility service patterns, breakdowns of servers, changes in the size of
the waiting space for customers, rules other than first-in, first-out for selecting customers,
etc. Our small section will just touch on the most basic of all queues: the so-called M/M/1
queue.
Activity 1 Think of three particular human queueing systems you have been involved
in. What main features did they have; for example, how many servers were there, were
customer arrivals and services random, was the queue a first-in, first-out queue, etc.?
Then think of three examples of non-human queueing systems.
Here are the assumptions that identify a queueing system as M/M/1. Customers
arrive singly at random times T₁, T₂, T₃, ... such that the times between arrivals

T₁, T₂ − T₁, T₃ − T₂, ...    (1)

are i.i.d. exp(Λ) random variables. The parameter Λ is called the arrival rate of the queue.
Therefore, the process that counts the total number of customer arrivals by time t is a
Poisson process with rate Λ. There is a single server (the 1 in the notation M/M/1), and
customers form one first-come, first-served line in front of the server. Essentially infinite
waiting space exists to accommodate the customers in queue. The server requires an
exponentially distributed length of time with rate Μ for each service, independently of the
arrival process and other service times. Customers who finish service depart the system.
(By the way, the two "M" symbols in the notation stand for Markov, because the memoryless
nature of the exponential distribution makes the arrival and departure processes into
continuous-time versions of a Markov chain.)
One of the most important quantities to compute in queueing systems is the probabil-
ity distribution of the number of customers in the queue or in service, at any time t. Such
knowledge can help the operators of the system to know whether it will be necessary to take
action to speed up service in order to avoid congestion. From the customer's point of view,
another important quantity is the distribution of the customer's waiting time in the queue.
We will consider each of these questions in turn for the M/M/1 queue, producing exact
results for the long-run distributions. But exact computations become difficult even for
apparently simple queueing situations, so very often analysts must resort to simulating the
queue many times to obtain statistical estimates of such important quantities as these. Entire
languages, such as GPSS, exist to make it easy to program such simulations, but as we will
show, it is not so hard to write queueing simulation programs even in a very general purpose
language like Mathematica which has not been specifically designed for queueing simula-
tion.
Activity 2 How would you simulate the process of customer arrivals in an M/M/1
queue? Try writing a Mathematica command to do this, ignoring the service issue for the
moment.
If for each k, the limit as t → ∞ of f_k(t) can be found, then these limits f_k form the limiting
probability distribution of system size. Actually, the functions f_k(t) can be computed
themselves, which would give us the short run system size distribution, but this is much
more difficult than finding the limiting distribution and we will not attempt this problem
here.
The approach we take to the problem of finding the limiting distribution is to
construct a system of differential equations satisfied by the functions f_k(t). This illustrates
beautifully the power of blending apparently different areas of mathematics to solve a real
problem. In the interest of space we will leave a few details out and not be completely
rigorous. The interested reader can try to fill in some gaps.
We first would like to argue that the probability that two or more arrival or service
events happen in a very short time interval is negligibly small. Consider for example the
probability that there are two or more arrivals in a short time interval (t, t + h]. Because
arrivals to an M/M/1 queue form a Poisson process, we can complement, use the Poisson
p.m.f., and rewrite the exponential terms in a Taylor series as follows:
P(2 or more arrivals in (t, t + h]) = 1 − P(0 or 1 arrival in (t, t + h])
  = 1 − e^(−Λ h) (Λ h)⁰/0! − e^(−Λ h) (Λ h)¹/1!
  = 1 − e^(−Λ h) − Λ h e^(−Λ h)
  = 1 − (1 − Λ h + (Λ h)²/2 − ⋯) − Λ h (1 − Λ h + (Λ h)²/2 − ⋯)
  = (1/2)(Λ h)² + higher powers of h
So this probability is a small enough function of h such that if we divide by h and then send
h to zero, the limit is 0. Recall from calculus that such a function is generically called o(h).
We can argue similarly that if 2 or more customers are present in the system at time t, then
the probability that at least 2 customers will be served during (t, t + h] is an o(h) function.
(Try it.) The activity below leads you through another case of two queueing events in a short
time.

Activity 3 Suppose there is at least one customer in the queue and we look at a time
interval (t, t + h] again. Write one expression for the probability of exactly one arrival in
(t, t + h], and another for the probability of exactly one service in (t, t + h]. Use them to
argue that the probability of having both an arrival and a service in (t, t + h] is o(h).
Assuming that you are now convinced that the probability of two or more queueing
events in a time interval of length h is o(h), let us proceed to derive a differential equation
for f₀(t) = P(M_t = 0) by setting up a difference quotient

[f₀(t + h) − f₀(t)] / h
Now f₀(t + h) = P(M_{t+h} = 0), so let us consider how the system size could have come to be 0
at time t + h, conditioning on the system size at time t. There are two main cases, 1 and 2
below, and a collection of others described in case 3 that involve two or more queueing
events in (t, t + h], which therefore have probability o(h):
1. M_t = 0, and there were no arrivals in (t, t + h];
2. M_t = 1, and there was one service and no new arrivals in (t, t + h];
3. Neither 1 nor 2 hold, and all existing and newly arriving customers are served
in (t, t + h].
The total probability P(M_{t+h} = 0) is the sum of the probabilities of the three cases. But the
probability of no arrivals in (t, t + h] is e^(−Λ h) = 1 − Λ h + o(h) and the probability of exactly
one service is Μ h e^(−Μ h) = Μ h (1 − Μ h + o(h)) = Μ h + o(h). By independence, the probability
of one service and no new arrivals is therefore (1 − Λ h + o(h))(Μ h + o(h)) = Μ h + o(h).
Thus, by the Law of Total Probability,
f₀(t + h) = P(M_{t+h} = 0) = P(M_t = 0)(1 − Λ h) + P(M_t = 1) Μ h + o(h)
          = (1 − Λ h) f₀(t) + Μ h f₁(t) + o(h)    (3)

Subtracting f₀(t) from both sides, and then dividing by h and sending h to zero yields

[f₀(t + h) − f₀(t)] / h = −Λ f₀(t) + Μ f₁(t) + o(h)/h
    ⟹    f₀′(t) = −Λ f₀(t) + Μ f₁(t)    (4)
Equation (4) is one of the equations in the infinite system of equations we seek. You
are asked to show in Exercise 3 that the rest of the differential equations have the form:
f_k′(t) = −(Λ + Μ) f_k(t) + Λ f_{k−1}(t) + Μ f_{k+1}(t),  k ≥ 1    (5)

Intuitively, the first term on the right side of (5) represents the case of neither an arrival nor a
service during (t, t + h], the second term is for the case where there were k − 1 customers at
time t and one new arrival appeared in (t, t + h], and the third term corresponds to the case
where there were k + 1 customers at time t, and one customer was served in (t, t + h].
The system of differential equations (4) and (5) can be solved with a lot of effort (see
[Gross and Harris, p. 129]), but we will just use the system to derive and solve an ordinary
system of linear equations for the limiting probabilities f_k = lim_{t→∞} f_k(t) = lim_{t→∞} P(M_t = k).
Observe that if the f_k(t) functions are approaching a limit, there is reason to believe
that the derivatives f_k′(t) approach zero. Presuming that this is so, we can send t to ∞ in (4)
and (5) to obtain the system of linear equations:
0 = −Λ f₀ + Μ f₁
0 = −(Λ + Μ) f_k + Λ f_{k−1} + Μ f_{k+1},  k ≥ 1    (6)
We can adjoin to this system the condition that f₀ + f₁ + f₂ + ⋯ = 1 in order to force the
f_k's to form a valid probability distribution. Now the top equation in (6) implies

f₁ = (Λ/Μ) f₀

and then the k = 1 equation in (6) gives

Μ f₂ = (Λ + Μ) f₁ − Λ f₀ = (Λ + Μ)(Λ/Μ) f₀ − Λ f₀ = (Λ²/Μ) f₀,  so that  f₂ = (Λ/Μ)² f₀

Activity 4 Check that f₃ = (Λ/Μ)³ f₀.

Continuing in this way, we find in general that

f_k = (Λ/Μ)^k f₀    (7)
Clearly though, in order for the sum of the f_k's to be 1, or to be finite at all, the ratio Ρ = Λ/Μ
must be less than 1. This ratio Ρ is called the traffic intensity of the M/M/1 queue, a good
name because it compares the arrival rate to the service rate. If Ρ < 1, then Μ > Λ, and the
server works fast enough to keep up with the arrivals. I will leave it to you to show that
under the condition Σ_{k=0}^∞ f_k = 1, the limiting M/M/1 queue system size probabilities are

f_k = (1 − Ρ) Ρ^k,  k = 0, 1, 2, ...    (8)
So we see that the limiting system size has the geometric distribution with parameter
1 − Ρ (see formula (1) of Section 2.3). An interesting consequence is that the average
number of customers in the system is

L = Ρ / (1 − Ρ)    (9)
From this expression, notice that as the traffic intensity Ρ increases toward 1, the average
system size increases to ∞, which is intuitively reasonable.
Example 1 At a shopping mall amusement area, customers wanting to buy tokens for the
machines form a first-in, first-out line in front of a booth staffed by a single clerk. The
customers arrive according to a Poisson process with rate 3/min., and the server can serve at
an average time of 15 sec. per customer. Find the distribution and average value of the
number of customers awaiting service. Suppose that replacing the booth clerk by a machine
costs $2000 for the machine itself plus an estimated $.50 per hour for power and mainte-
nance, and with this self-service plan customers could serve themselves at an average time of
12 sec. apiece. The human server is paid $5/hr. Also, an estimate of the intangible cost of
inconvenience to customers is $.20 per waiting customer per minute. Is it worthwhile to
replace the clerk by the machine?
The given information lets us assume that the times between customer arrivals are
exponentially distributed with rate parameter Λ = 3/min. If we are willing to assume that
service times are independent exponential random variables, which are furthermore indepen-
dent of arrivals, then the M/M/1 assumptions are met. The service rate is the reciprocal of
the mean service time, which is 1/(1/4 min.) = 4/min. Thus the queue traffic intensity is
Ρ = Λ/Μ = 3/4. By formula (8) the long-run probability of k customers in the system is

f_k = (1/4)(3/4)^k,  k = 0, 1, 2, ...

and by formula (9) the average number of customers in the system is

L = (3/4) / (1 − 3/4) = (3/4) / (1/4) = 3
To answer the second question, we must compute the average cost per minute Ch for
the human server system and the average cost per minute of the machine system Cm . Note
that the total cost for x minutes of operations under the human system is just Ch x, but
because of the fixed purchase cost of the machine, the total cost for x minutes is Cm x 2000
under the machine scenario. For consistency let us express all monetary units in dollars and
all times in minutes. Then for instance the clerk's pay rate is ($5/hr.)/(60 min./hr.) =
$(1/12)/min., and the cost of operation of the machine is ($.50/hr.)/(60 min./hr) =
$(1/120)/min. Now in calculating Ch and Cm there are two aspects of cost to consider:
operating cost and the overall cost to waiting customers. The latter would be the average
number of customers waiting, times the cost rate of $.20/min. per customer. For the human
server system, the overall cost per minute would be
C_h = $(1/12)/min. + (3 customers)($.20/customer/min.) = $.683/min.
Since the machine average service time is 12 sec. = 1/5 min., its service rate is 5/min. and the
resulting queue traffic intensity is Ρ = Λ/Μ = 3/5. The average number of customers in the
system under the machine scenario is therefore

L = (3/5) / (1 − 3/5) = (3/5) / (2/5) = 3/2
Thus the overall cost per minute for the machine system is
C_m = $(1/120)/min. + (3/2 customers)($.20/customer/min.) = $.308/min.
The cost rate for the machine is less than half that of the human server. And notice that the
bigger factor in the improvement was not the running cost but the savings in customer
waiting cost. Taking into account the purchase cost, the use of the machine becomes
economical after it has been used for at least x minutes, where x satisfies
C_h x ≥ C_m x + 2000
.683 x ≥ .308 x + 2000
.375 x ≥ 2000
The solution value for x turns out to be about 5333 minutes, or about 89 hours. If the arcade
is open for 10 hrs. per day, then before 9 working days have elapsed the machine will have
begun to be less costly than the human server.
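The cost comparison is easy to script; this is a minimal sketch of my own reproducing the numbers above (the variable names are mine, and all costs are in dollars per minute):

Lsys[Ρ_] := Ρ/(1 - Ρ);                  (* mean number in system, formula (9) *)
ch = 5/60. + 0.20 Lsys[3/4]             (* clerk: wages plus customer waiting cost *)
cm = 0.50/60. + 0.20 Lsys[3/5]          (* machine: power plus customer waiting cost *)
2000/(ch - cm)                          (* minutes of operation needed to recoup $2000 *)

The outputs are about .683, .308, and 5333, matching the computation in the example.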
which we recognize as the exp(Μ(1 − Ρ)) distribution. It follows that the long-run mean
waiting time is E[W] = 1/(Μ(1 − Ρ)), which is intuitively reasonable. As the traffic intensity
Ρ increases toward 1, the equilibrium queue length increases, thus increasing the average
waiting time of the new customer. Also, as the service rate Μ increases, the average waiting
time decreases. Note also that Μ(1 − Ρ) = Μ(1 − Λ/Μ) = Μ − Λ, so that the closer Λ is to Μ, the
longer is the expected wait.
Example 2 Suppose that a warehouse processes orders for shipment one at a time. The
orders come in at a rate of 4 per hour, forming a Poisson process. Under normal working
conditions, a team of order fillers working on preparing the order can complete an order in
an exponential amount of time with rate 4.2 per hour. Hence the team can just barely keep
up with input. What is the average time for a newly arriving order to be completed? If more
people can be assigned to the team, the service rate can be increased. To what must the
service rate be increased so that orders require no more than one hour of waiting on average
from start to finish?
We are given that Λ = 4.0/hr., and the default service rate is Μ = 4.2/hr. Hence the
traffic intensity is Ρ = 4.0/4.2 ≈ .952, and the expected waiting time in equilibrium is
1/(4.2 (1 − .952)) ≈ 4.96 hours. This is an excessive amount of time for an order. To answer the
second question, we need to choose Μ such that

E[W] = 1/(Μ − Λ) = 1/(Μ − 4) ≤ 1    ⟹    Μ ≥ 5
So an increase in service rate from 4.2 to just 5 per hour is enough to cut the average waiting
time from about 5 hours to 1 hour.
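The required rate can also be found symbolically; a one-line sketch (the symbol name serviceRate is mine):

Solve[1/(serviceRate - 4) == 1, serviceRate]

which returns serviceRate → 5, in agreement with the conclusion above.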
Simulating Queues
Consider a single-server first-in, first-out queue as before except that we generalize: (1) the
distribution of customer interarrival times; and (2) the distribution of customer service times.
We have so far treated only the case where both of these distributions are exponential. It is
much harder, and sometimes impossible, to get analytical results about waiting times and
SimArrivals[arrdist_, numarrs_] :=
  Module[{arrtimelist, currtime, nextinterarr},
   arrtimelist = {};
   currtime = 0;
   Do[nextinterarr = RandomReal[arrdist];
    currtime = currtime + nextinterarr;
    AppendTo[arrtimelist, currtime], {numarrs}];
   arrtimelist]
SeedRandom[787632]
SimArrivals[ExponentialDistribution[3], 4]
It is even easier to simulate a list of service times for those customers. We just
simulate a table of the desired number of observations from the desired distribution, as in the
next command.
SimServiceTimes[servicedist_, numservices_] :=
  RandomReal[servicedist, numservices]

SimServiceTimes[ExponentialDistribution[4], 4]
All of the behavior of the queue is determined by these lists of arrival and service
times, but there is still the considerable problem of extracting information from the lists. As
an example let us consider the relatively easy problem of finding from a simulated queueing
process a list of customer waiting times.
Think of the queueing behavior for the simulated output above. Customer 1 arrives
at time .519352. It takes .429965 time units to serve this customer, who therefore departs at
time .519352 + .429965 = .949317. Customer 2 arrives at time .770897, at which time
customer 1 is still in the queue, so he must wait. Since his service time was .130504, he
departs at time .949317 + .130504 = 1.079821. Customer 3 arrives at time .933421 and
must line up behind customer 2, because at that point customer 1 is still not finished with
service. Customer 3 requires .250825 time units to serve, so this customer departs at time
1.079821 + .250825 = 1.330646. Meanwhile, customer 4 has come in at time 1.08503, so
he has to wait until time 1.330646 to begin service. The departure time of customer 4 is then
1.330646 + .53062 = 1.861266.
Activity 5 If five successive customer arrival times are: 2.1, 3.4, 4.2, 6.8, 7.4 and their
service times are: 1.0, 2.3, 1.7, 3.4, 0.6, find the departure times of each customer, and
find the amount of time each customer is in the system. What is the longest queue
length?
Notice that any customer's waiting time in the system is just his departure time minus
his arrival time, so that producing a list of waiting times will be simple if we can produce a
list of departure times. This too is not hard, because a customer's departure time is his
service time plus either his arrival time, or the departure time of the previous customer,
whichever is later. We can generate this list of departure times iteratively by this observa-
tion, beginning with the first customer, whose departure time is just his arrival time plus his
service time. The next function does this. It takes the arrival time and service time lists as
inputs, and returns the list of departure times using the strategy we have discussed.
DepartureTimes[arrtimelist_, servtimelist_] :=
  Module[{numcustomers, deptimelist, customer, nextdeptime},
   numcustomers = Length[servtimelist];
   deptimelist = {arrtimelist[[1]] + servtimelist[[1]]};
   Do[nextdeptime = servtimelist[[customer]] +
      Max[deptimelist[[customer - 1]], arrtimelist[[customer]]];
    AppendTo[deptimelist, nextdeptime],
    {customer, 2, numcustomers}];
   deptimelist]
Here we apply the function to the arrival and service lists in the example above.
Now waiting times are departure times minus arrival times; so we can build a
function which calls on the functions above, plots a histogram of waiting times, and com-
putes the mean and variance.
Needs"KnoxProb7`Utilities`";
SimulateWaitingTimes
arrdist_, servdist_, numcustomers_ :
Modulearrtimelist, servtimelist,
deptimelist, waitimelist,
arrtimelist SimArrivalsarrdist, numcustomers;
servtimelist
SimServiceTimesservdist, numcustomers;
deptimelist DepartureTimes
arrtimelist, servtimelist;
waitimelist deptimelist arrtimelist;
Print"Mean waiting time: ", NMeanwaitimelist,
" Variance of waiting times: ",
NVariancewaitimelist;
Histogramwaitimelist, 6, "Probability",
ChartStyle Gray, BaseStyle 8, PlotRange All
Here is an example run in a setting that we already know about: an M/M/1 queue
with arrival rate Λ = 3 and service rate Μ = 4. The distribution suggested by the histogram in
Figure 9(a) does look exponential, and the mean waiting time of 1.0601 did come out near
the theoretical value of 1/(Μ − Λ) = 1 in this run. You should try running the command
several more times to get an idea of the variability. Also try both increasing and decreasing
the number of customers. What do you expect to happen as the number of customers gets
larger and larger?
g1 = SimulateWaitingTimes[ExponentialDistribution[3],
   ExponentialDistribution[4], 1000];
g2 = SimulateWaitingTimes[UniformDistribution[{0, 2}],
   UniformDistribution[{0, 1.5}], 1000];
GraphicsRow[{g1, g2}]
Figure 6.9 - Histograms of simulated waiting times: (a) exponential interarrival and service times; (b) uniform interarrival and service times
Our command allows us to use any interarrival and service distribution we choose.
To see another example, we repeat the simulation for a uniform(0, 2) interarrival distribution
and a uniform(0, 1.5) service distribution. The result is shown in Figure 9(b). Note that the
service rate is the reciprocal of the mean service time, 1/.75, and the arrival rate is the reciprocal of
the interarrival mean, 1/1, so the traffic intensity is Ρ = .75. The distribution of waiting times
is rather right-skewed, reminiscent of a gamma distribution, and the mean time in system is
around 1.42; in particular, it is more than the mean time in the M/M/1 simulation even
though the traffic intensity is the same.
Exercises 6.3
1. Customers waiting in a post office line experience a frustration cost of 10 cents a minute
per customer while they wait. What target traffic intensity should the postal supervisors aim
for so that the average frustration cost per unit time of the whole queue is 20 cents a minute?
2. Customers arrive to a department store service desk according to a Poisson process with
rate 8 per hour. If it takes 5 minutes on average to serve a customer, what is the limiting
probability of having at least 2 people in the system? What is the limiting probability that an
arriving customer can go directly into service?
3. Derive the differential equation (5) for the short-run M/M/1 system size probabilities.
4. Verify by mathematical induction expression (7) for the limiting M/M/1 system size
probabilities.
5. What, in general, should be the relationship between the expected waiting time in the system E[W] and the expected waiting time in the queue E[Wq]?
6. Derive the long-run probability distribution of the time Wq that a customer spends in queue prior to entering service. Be careful: this random variable has a mixture of a discrete and a continuous distribution, because there is non-zero probability that Wq equals 0 exactly, but for values greater than zero there is a sub-density, which when integrated from 0 to ∞ gives the complementary probability to that of the event {Wq = 0}.
7. People come to a street vendor selling hot pretzels at a rate of 1 every 2 minutes. The
vendor serves them at an average time of 1 minute and 30 seconds. Find the long-run
probability that a customer will take at least 2 minutes to get a pretzel.
8. The operator of a queueing system must solve an optimization problem in which he must balance the competing costs of speeding up service and making his customers wait. There is a fixed arrival rate of λ, operating costs are proportional to the service rate, and customer inconvenience cost is proportional to the expected waiting time of a customer. Find the optimal service rate in terms of λ and the proportionality constants.
9. (Mathematica) Use the commands of the section to study the long-run waiting time
distribution and average waiting time when the interarrival and service time distributions are
approximately normal. What do you have to be careful of when you choose the parameters
of your normal distributions?
10. (Mathematica) Build a command that takes a list of arrival and service times and a fixed
time t, and returns the number of people in the system at time t. Combine your command
with the arrival and service time simulators of this section to produce a command that
simulates and plots the system size as a function of t.
11. One case in which it is possible to derive the short-run distribution of system size is the M/M/1/1 queue. The extra 1 in the notation stands for 1 waiting space for the customer in service; hence any new customer arriving when someone is already being served is turned away. Therefore, the system size Mt can only take on the values 0 or 1. Derive and solve a differential equation for f1(t) = P[Mt = 1] = 1 - f0(t) = 1 - P[Mt = 0]. (You will need to make the assumption that the system is empty at time 0.)
12. A self-service queueing system, such as a parking garage or a self-service drink area in a restaurant, can be thought of as an M/M/∞ queue, i.e., a queue with an unlimited number of servers. All customers therefore go into service immediately on arrival. If n customers are now in such a queue, what is the distribution of the time until the next customer is served? (Assume customers serve themselves at the same rate μ.) Obtain a system of differential equations analogous to (5) (case k ≥ 1 only) for the short-run system size probabilities. You need not attempt to solve either the differential equations or the corresponding linear equations for the limiting probabilities.
6.4 Brownian Motion
IBMpricefulldata = FinancialData["IBM",
   {"Feb. 1, 2006", "Feb. 28, 2006", "Week"}]
The ith price in the list would be referred to as the second member of the ith sublist, as below.
In this way we can strip off the date lists to create a list of only prices.
IBMpricedata =
  Table[IBMpricefulldata[[i, 2]], {i, 1, 5}]
IBMpricedata[[2]]
77.97
Now consider the series of weekly closing stock prices Pn for General Electric, and the ratios of successive prices Rn = Pn+1/Pn, graphed in Figures 10(a) and 10(b), respectively, for a period of three years beginning in January 2005. The two types of data show somewhat different characteristics, with the sequence of prices appearing to grow with time, perhaps superlinearly, and the ratios moving up and down in an apparently random way, centered near 1.
GEpricefulldata = FinancialData["GE",
   {"Jan. 1, 2005", "Dec. 31, 2007", "Week"}];
GEprices = Table[GEpricefulldata[[j, 2]],
   {j, 1, Length[GEpricefulldata]}];
GErates = Table[GEprices[[i + 1]]/GEprices[[i]],
   {i, 1, Length[GEprices] - 1}];
GraphicsRow[{ListPlot[GEprices,
   Joined -> True, PlotStyle -> Black],
  ListPlot[GErates, Joined -> True, PlotStyle -> Black]}]
(a) Weekly closing prices of GE, 2005-2007   (b) Weekly ratios of return on GE stock
Figure 6.10
Here we look for correlations between the ratios and their nearest neighbors, by forming a list GErate1 of all but the last ratio, and a lagged list GErate2 of all but the first ratio. So the GErate1 list would have price ratios R1, R2, ..., Rn-1 and the GErate2 list would have price ratios R2, R3, ..., Rn. This enables us to study the pairs (Ri, Ri+1), looking particularly for statistical dependence. The computation below shows that the correlation between ratios and their nearest neighbors is very small.
GErate1 = Table[GErates[[i]],
   {i, 1, Length[GErates] - 1}];
GErate2 = Table[GErates[[i]], {i, 2, Length[GErates]}];
Correlation[GErate1, GErate2]
0.00790478
Activity 1 Use ListPlot to look at the graphical relationship between the GErate1 and
GErate2 variables. Do you see any dependence?
In order to make a point below, notice from Figure 11 that the logs of the price ratios also
seem to be approximately normally distributed.
Needs"KnoxProb7`Utilities`"
HistogramLogGErates, .01, "Probability",
ChartStyle Gray, PlotRange All, BaseStyle 8
Figure 6.11 - Histogram of the logged GE price ratios
As above, denote by Pn the price at week n, and Rn = Pn+1/Pn the price ratio between weeks n and n + 1. Then we can telescope Pn multiplicatively as

Pn = P1 · (P2/P1)(P3/P2) ⋯ (Pn/Pn-1) = P1 · R1 R2 ⋯ Rn-1    (1)
If we plot the logs of the prices as in Figure 12, we get a stochastic process that shows signs
of linear increase on the average, with variability. By formula (1),
log(Pn) = log(P1) + log(R1 R2 ⋯ Rn-1) = log(P1) + Σ_{k=1}^{n-1} log(Rk)    (2)
So the logged price is a constant plus a total of random variables that empirically seem to be independent and normally distributed. The increment of the logged price process is log(Pn) - log(Pn-1) = log(Rn-1). Roughly stated, between successive weeks the log price seems to change by a random amount that is normally distributed, independently of other increments of the price.
ListPlot[Log[GEprices],
  Joined -> True, PlotStyle -> Black]
Figure 6.12 - Time series graph of logged GE prices
Activity 2 What does the definition of Brownian motion imply about the probability distribution of Xt+s - Xt?
This means that, beginning at a starting state x, we can simulate a sequence of new states by successively adding a random value sampled from the appropriate normal distribution.
Below is the algorithm. The input parameters are the starting state x, the final time t, the number of time intervals n, and the drift rate and variance rate. The function first computes the subinterval length, and then computes a list called times of all the partition points. It successively adjoins to a list called states the initial state, the initial state plus the first random normal increment, etc., by simply appending to the growing list the most recent state (i.e., Last[states]) plus a new normal increment.
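The displayed definition is not legible in this copy; a minimal sketch consistent with the description above (returning the path as a list of {time, state} pairs is an assumption) is:

SimulateBrownianMotion[x_, t_, n_, mu_, sigsq_] :=
  Module[{dt, times, states},
    dt = t/n;                          (* subinterval length *)
    times = Table[i*dt, {i, 0, n}];    (* partition points 0, dt, 2 dt, ..., t *)
    states = {x};                      (* simulated states, starting at x *)
    Do[AppendTo[states,
       Last[states] +
        RandomReal[NormalDistribution[mu*dt, Sqrt[sigsq*dt]]]], {n}];
    Transpose[{times, states}]]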
Figure 13 shows the result of one simulation for the case where x = 0, t = 1, n = 100 time intervals, and μ = 0, σ² = 1, that is, standard Brownian motion with initial state zero. In the electronic book, you should set the seed randomly and repeat the simulation to see how the paths behave. Sometimes variability can give the appearance of a trend, as in this particular picture. But for the standard Brownian motion, over many simulations you will see false downward trends about as often as false upward trends, since the distribution of Xt is symmetric about zero.
SeedRandom[567763];
path = SimulateBrownianMotion[0, 1, 100, 0, 1];
ListPlot[path, Joined -> True, PlotStyle -> Black]
Figure 6.13 - Simulated values of a standard Brownian motion with starting state 0
Example 2 Let {Xt}t≥0 be a standard Brownian motion starting at x = 0. Find (a) P(X1.2 ≥ .5); (b) P(X5.4 - X2.3 ≥ 1.6, X1 ≤ 0); (c) P(X2.6 ≥ 2 | X1.1 = 1).
For part (a), we simply use Definition 2(c) to observe that X1.2 ~ N(0, 1.2), since μ = 0 and σ² = 1 here. Consequently, P(X1.2 ≥ .5) is:
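The input cell is omitted in this copy; a computation along the following lines (using the built-in CDF and NormalDistribution, whose second argument is the standard deviation) reproduces the value:

N[1 - CDF[NormalDistribution[0, Sqrt[1.2]], 0.5]]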
0.324038
In part (b), the increment of the process X5.4 - X2.3 over the time interval [2.3, 5.4] is independent of previous values of the process, so that
Since X is a standard Brownian motion, P(X1 ≤ 0) = .5, and also by the stationarity property, X5.4 - X2.3 has the same distribution as X3.1, namely, the N(0, 3.1) distribution. Consequently, P(X5.4 - X2.3 ≥ 1.6, X1 ≤ 0) is:
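Again the input cell is omitted; by independence, the probability is the product of the two factors, which can be computed for instance as:

N[(1/2) (1 - CDF[NormalDistribution[0, Sqrt[3.1]], 1.6])]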
0.090872
Now in the conditional probability in part (c), subtract the given value X1.1 = 1 from both sides of the inequality and use stationarity and independent increments to get

P(X2.6 ≥ 2 | X1.1 = 1) = P(X2.6 - X1.1 ≥ 2 - 1 | X1.1 = 1)
                       = P(X2.6 - X1.1 ≥ 1 | X1.1 = 1)
                       = P(X2.6 - X1.1 ≥ 1)
                       = P(X1.5 ≥ 1)

Since X1.5 has the N(0, 1.5) distribution, this probability is:
1 - CDF[NormalDistribution[0, Sqrt[1.5]], 1]
0.207108
Example 3 Suppose that the number of people in a large community who are infected by a certain variety of the flu can be approximately modeled by a Brownian motion process {Xt}t≥0. If the initial state is 50, if the parameters are μ = 0 and σ² = 4, and if the flu disappears forever when the number who are infected reaches zero, find the probability that at time t = 20 there are at least 25 individuals still infected.
The problem statement indicates that we have a different version of the Brownian
motion here than usual: the flu infection process stops at 0 once the population is clear.
Define the Brownian motion Xt to be the unconstrained process with constant drift rate 0
and variance rate 4. It is convenient to introduce the time of first visit by the X process to state 0: T0 = min{s ≥ 0 : Xs = 0}. If we define a new stochastic process whose tth member is

X̃t = Xt if t < T0, and X̃t = 0 otherwise,    (3)
then the X̃ process captures the absorption property at 0. We are interested in finding P(X̃20 ≥ 25). Actually, we will compute more generally P(X̃t ≥ 25) for each t.
First, by complementation and the multiplication rule we have

P(X̃t ≥ 25) = P(Xt ≥ 25, T0 > t)
            = P(Xt ≥ 25) - P(Xt ≥ 25, T0 ≤ t)    (4)
            = P(Xt ≥ 25) - P(Xt ≥ 25 | T0 ≤ t) P(T0 ≤ t)
At time T0, the value of X is zero. Since the X process has no drift, given the event {T0 ≤ t}, X is as likely to go up 25 units in the next t - T0 time units as it is to go down 25 units. Thus, the conditional probability that Xt ≥ 25 in the second term in formula (4) can be replaced by the conditional probability that Xt ≤ -25. But then the event {Xt ≤ -25} is contained in the event {T0 ≤ t}, so that we can write

P(X̃t ≥ 25) = P(Xt ≥ 25) - P(Xt ≤ -25 | T0 ≤ t) P(T0 ≤ t)
            = P(Xt ≥ 25) - P(Xt ≤ -25, T0 ≤ t)    (5)
            = P(Xt ≥ 25) - P(Xt ≤ -25)

Applying formula (5) at time t = 20, and observing that X20 ~ N(50, 4 · 20), yields
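The input cell is omitted in this copy; the value can be reproduced along these lines:

N[(1 - CDF[NormalDistribution[50, Sqrt[4*20]], 25]) -
   CDF[NormalDistribution[50, Sqrt[4*20]], -25]]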
0.997406
Notice that with the given problem parameters, X20 has standard deviation a little less than 9,
so it is very likely to take a value within 18 units of the mean of 50. Hence it is highly likely
for X20 to exceed 25. In fact, we worked hard to cope with the possibility of absorption
prior to time 20, which is what introduced the extra term P(Xt ≤ -25) in formula (5), but that
possibility did not change the answer in any significant way, as shown in the computation
below.
N[CDF[NormalDistribution[50, Sqrt[4*20]], -25]]
2.5308 × 10^-17
As time wears on however, variability increases the chance of absorption, as you will
discover in the next activity.
Activity 3 Use the general expression for P(X̃t ≥ 25) in Example 3 to graph it as a function of t for values of t up to 200. On the same graph, show P(Xt ≥ 25).
Example 4 Assume that the logged GE prices as in Figure 12 do form a Brownian motion.
Estimate the parameters μ and σ², and use the estimates to compute the probability that the
price after 100 weeks is at least 35, given that the initial price is 32.
Let {Pt}t≥0 denote the process of prices, and let {Xt}t≥0 denote the process of logged prices, i.e., for each t ≥ 0, Xt = log(Pt). We are assuming that {Xt}t≥0 forms a Brownian motion with parameters μ and σ², hence the increments X2 - X1, X3 - X2, X4 - X3, ..., for which the length of the time interval is exactly 1 week, must have the N(μ · 1, σ² · 1) distribution, and they are independent of each other. We can therefore estimate the parameters by the sample mean and sample variance of these increments.
Xtprocess = Log[GEprices];
Increments =
  Table[Xtprocess[[i]] - Xtprocess[[i - 1]],
   {i, 2, Length[Xtprocess]}];
xbar = Mean[Increments]
var = Variance[Increments]
0.000738278
0.000315158
Now if the initial price is 32, then the log of the initial price is log(32), and at time t = 100 the log of the GE price has approximately the N(log(32) + xbar · 100, var · 100) distribution. Thus, the probability that the price is at least 35 is

P(Pt ≥ 35) = P(Xt = log(Pt) ≥ log(35))
1 - CDF[NormalDistribution[
   Log[32] + xbar*100, Sqrt[var*100]], Log[35]]
0.464576
Definition 3. A stochastic process {Yt}t≥0 is called a geometric Brownian motion with parameters μ and σ² if its log forms a Brownian motion {Xt}t≥0 with drift rate μ and variance rate σ². In other words, {Yt}t≥0 is a geometric Brownian motion if Yt = e^(Xt) for all t ≥ 0, where {Xt}t≥0 is a Brownian motion.
The data presented at the start of the section on the GE price process suggested that
the logged prices might be well modeled as Brownian motion, hence the price process itself
can be modeled as a geometric Brownian motion. We will say a bit more about this and give
another example in a moment.
Example 5 We know the mean and variance of a Brownian motion as a function of time, namely μ(t) = E(Xt) = x + μ t and σ(t)² = Var(Xt) = σ² t. What are the mean and variance functions for geometric Brownian motion?
Recall that the moment-generating function of a N(μ, σ²) random variable is

M(s) = e^(μ s + σ² s²/2),  for all real s.
Suppose that Yt = e^(Xt) is our geometric Brownian motion, where the drift and variance parameters are μ and σ². Then Xt ~ N(x + μ t, σ² t), where x is the state at time 0. The mean function for the Y process can be considered as an instance of the m.g.f. of Xt, with real parameter s = 1:

E(Yt) = E(e^(Xt)) = E(e^(1·Xt)) = M_Xt(1) = e^((x + μ t)·1 + σ² t·1²/2) = e^x e^((μ + σ²/2) t)
Similarly, to find the variance function we can compute E(Yt²) = E((e^(Xt))²) = E(e^(2 Xt)). Following along the lines of the previous calculation,

E(Yt²) = E(e^(2 Xt)) = M_Xt(2) = e^((x + μ t)·2 + σ² t·2²/2) = e^(2x) e^((2μ + 2σ²) t)

Hence, Var(Yt) = E(Yt²) - (E(Yt))² = e^(2x) e^((2μ + 2σ²) t) - e^(2x) e^((2μ + σ²) t).
It is easy to use the cumulative distribution function technique to find the p.d.f. of Yt .
It should be pointed out that even though Theorem 1 below gives a formula for the density,
the m.g.f. approach that we used in Example 5 to compute the moments is simpler than the
definition.
Theorem 1. If {Yt}t≥0 is a geometric Brownian motion, then the p.d.f. of Yt is

g(y) = (1/(y √(2 π σ² t))) e^(-(log(y) - x - μ t)²/(2 σ² t))    (6)

To see this, apply the c.d.f. technique: P(Yt ≤ y) = P(e^(Xt) ≤ y) = P(Xt ≤ log(y)) = F_Xt(log(y)), where F_Xt(x) is the c.d.f. of Xt. The p.d.f. of Yt is the derivative of this c.d.f., which is g(y) = f_Xt(log(y)) · (1/y), where f_Xt is the density of Xt. This density is the N(x + μ t, σ² t) density, hence the density of Yt is as displayed in (6).
Example 6 An investor has entered into a deal in which it becomes profitable for him to sell
1000 shares of Microsoft stock if the price exceeds $42 at time 10 (weeks); otherwise, he
will not sell. He is willing to assume that the stock price is in geometric Brownian motion.
The current price of the stock is $40 and the investor has historical data on weekly rates of
return as below (in the Mathematica list MicrosoftRofR, viewable in the electronic file) for a
three year period in order to estimate parameters. Here, the rate of return on an investment
over a time period refers to the ratio of the growth in value over that time period to the initial
value. What is the probability that it will be profitable for him to sell? What is the estimated
expected value of his sale proceeds (taking account of the chance that he will not sell at all)?
Microsoftfulldata = FinancialData["MSFT",
   {"Jan. 1, 2005", "Dec. 31, 2007", "Week"}];
Microsoftprices = Table[Microsoftfulldata[[j, 2]],
   {j, 1, Length[Microsoftfulldata]}];
MicrosoftRofR = Table[
   (Microsoftprices[[i + 1]] - Microsoftprices[[i]])/
    Microsoftprices[[i]],
   {i, 1, Length[Microsoftprices] - 1}];
GraphicsRow[{ListPlot[Microsoftprices,
   Joined -> True, PlotStyle -> Black],
  ListPlot[MicrosoftRofR, Joined -> True,
   PlotStyle -> Black]}]
(a) Weekly closing prices of Microsoft, 2005-2007   (b) Weekly rates of return on Microsoft
Figure 6.14
Figure 14 shows time series graphs of both the weekly prices themselves and the
weekly rates of return. The price graph shows signs of being an exponentiated Brownian
motion, and the rate of return graph suggests noise centered near zero. (Do the Activity
following this example to check that it is reasonable that the rates of return could be
normally distributed with no significant dependence on one another.) First let us take a look
back at the geometric Brownian motion model, and relate the model to rates of return on the
stock.
If the price process is related to the underlying Brownian motion by Yt = e^(Xt), then the rate of return Ri for week i is

Ri = (Yi+1 - Yi)/Yi = (e^(Xi+1) - e^(Xi))/e^(Xi) = e^(Xi+1 - Xi) - 1,  so that  log(1 + Ri) = Xi+1 - Xi    (7)
Thus, by the defining properties of Brownian motion, the collection of random variables log(1 + Ri) forms a random sample from the N(μ · 1, σ² · 1) distribution (the time interval is 1 week). Hence we can use the sample mean and variance of these data to estimate the drift rate μ and the variance rate σ², respectively:
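The estimation cell itself is not legible in this copy; given relation (7), the estimates can be computed as below (the names μ and sigsq are the ones the integration cells below refer to):

μ = Mean[Log[1 + MicrosoftRofR]]
sigsq = Variance[Log[1 + MicrosoftRofR]]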
0.00210681
0.000812501
We can now answer the questions posed in the problem. The stock will be profitable to sell if the price Y10 at time 10 is greater than 42, and remember that the current price is 40. Because of the latter assumption, the initial value x of the Brownian motion satisfies 40 = e^x, i.e., x = log(40). Using the density derived in Theorem 1, we obtain:
P(Y10 > 42)

1 - NIntegrate[
   (1/(y Sqrt[2 Pi sigsq 10]))*
    E^(-(Log[y] - (Log[40] + μ 10))^2/(2 sigsq 10)),
   {y, 0, 42}]
0.379213
The second question asks for the expected value of the sale proceeds. The investor earns
1000 Y10 if he sells, that is, if Y10 > 42, and 0 otherwise. Thus, the usual integral that defines the expectation must be restricted to the interval of integration [42, ∞), and we find:

Expected proceeds

1000 NIntegrate[
   y*(1/(y Sqrt[2 Pi sigsq 10]))*
    E^(-(Log[y] - (Log[40] + μ 10))^2/(2 sigsq 10)),
   {y, 42, Infinity}]
16 979.2
Activity 4 Use graphical and numerical techniques to check the assumption of independence of the weekly rates of return for Microsoft. Check also the assumption of normality of the distribution of the logs of the rates of return plus 1.
Exercises 6.4
1. Suppose that {Xt}t≥0 is a Brownian motion starting at x = 0, with drift rate μ = 1 and variance rate σ² = 1. Find: (a) P(0 ≤ X3.1 ≤ 2); (b) P(X3.1 ≤ 3 | X1.4 = 2); (c) P(X1.4 ≤ 2, X3.1 ≤ 3).
2. (Mathematica) In Example 1 we showed how to simulate paths of a Brownian motion Xt .
Modify the SimulateBrownianMotion command to study the simulated distribution of Xt for
fixed t. Specifically, for (a) a standard Brownian motion starting at x 0, and (b) the
Brownian motion in Exercise 1, simulate 500 replications of X2 and produce a histogram.
Comment on the relationship of the data histogram to the true distribution of X2 in each
case.
3. (Mathematica) A two-dimensional stochastic process X = {(Xt, Yt)}t≥0 is called a two-dimensional (independent) Brownian motion if each of the component processes {Xt}t≥0 and {Yt}t≥0 is a Brownian motion and all X observations are independent of all Y observations.
(a) Write a simulation program that simulates paths of a two-dimensional Brownian motion.
Your program should return a list in which each element is a triple (t, x, y) indicating the
time, the state of the Xt process, and the state of the Yt process. You may use the utility
function ShowLineGraph below to plot the path in three-dimensional space, formatted as a
list of triples.
(b) For the case where each component process is standard Brownian motion starting at 0,
write another program that simulates 200 paths of the two-dimensional process up to the time when the process exits the open disk S = {(x, y) : x² + y² < 10}. Let the program compute the proportion of replications in which the departure from the circle occurred in
each of the first, second, third, and fourth quadrants of the plane. What do you expect those
proportions to be?
ShowLineGraph[datalist_] := Graphics3D[Join[
    Table[Line[{datalist[[i]], datalist[[i + 1]]}],
     {i, 1, Length[datalist] - 1}],
    Table[Point[datalist[[i]]],
     {i, 1, Length[datalist]}]],
   Axes -> True, AxesLabel -> {"t", "x", "y"}];
4. Compute the covariance function of a Brownian motion {Xt}t≥0, that is, C(t, s) = Cov(Xt, Xt+s). (For simplicity, suppose that the starting state is x = 0. Does this matter?) Does this function depend on t?
5. Brownian motion can be viewed as a limit of discrete-time, discrete-state random walks as the time between jumps and the jump size approach zero together. Consider a sequence of random walks on the real line that start at 0, and such that the nth random walk has right-move probability p, moves every Δt = 1/n time units, and jumps either to the left or to the right of its current state by a distance Δx = c√Δt. Argue that the position of the random walk at time t is approximately normally distributed for large n, find the parameters of that distribution, and argue informally that each random walk has stationary, independent increments.
6. (Mathematica) A town whose population is currently 30,000 knows that when the
population reaches 35,000 they will need to upgrade their water distribution network. If the
population is in Brownian motion with drift rate 1,000/yr and variance rate 900/year, in how
many years is the expected population 35,000? What is the smallest time t such that the
probability that the population exceeds 35,000 at that time is at least .9?
7. In Example 3 we introduced the hitting time T0 of state 0 by a Brownian motion with no drift. Now consider a Brownian motion with initial state x = 0, drift rate μ = 0, and variance rate σ². Find the c.d.f. of Ta, the first time that the Brownian motion reaches a state a > 0. (Hint: relate the event {Xt ≥ a} to the event {Ta ≤ t}.)
6.5 Financial Mathematics
1. An investor must decide how to allocate wealth between a risky asset such as a stock, and
a non-risky asset such as a bond.
2. An option on a stock is a contract that can be taken out which lets the option owner buy a
share of the stock at a fixed price at a later time. What is the fair value of this option?
3. A firm must decide how to balance its equity (money raised by issuance of stock to
buyers, who then own a part of the company) and debt (money raised by selling bonds which
are essentially loans made by the bond-holders to the company). The main considerations
are the tax-deductibility of the interest on the bonds, and the chance of default penalties if
the company is unable to make payments on the bonds. The goal is to maximize the total
value of the money raised.
4. How should an individual consume money from a fund which appreciates with interest
rates that change randomly as time progresses, if the person wants a high probability of
consumption of at least a specified amount for a specified number of years?
Activity 1 The insurance industry has always been filled with financial problems in
which randomness is a factor. Write descriptions of at least two problems of this kind.
In the interest of brevity we will just look at problems 1 and 2 above, on finding an
optimal combination of two assets in a portfolio, and pricing a stock option.
Optimal Portfolios
All other things being equal, you would surely prefer an investment whose expected return is
6% to one whose expected return is 4%. But all other things are rarely equal. If you knew
that the standard deviation of the return on the first asset was 4% and that of the second was
1% it might change your decision. Chebyshev's inequality says that most of the time,
random variables take values within 2 standard deviations of their means, so that the first
asset has a range of returns in [-2%, 14%] most of the time, and the second asset will give a
return in [2%, 6%] most of the time. The more risk averse you are, the more you would tend
to prefer the surer, but smaller, return given by the second asset to the more variable return
on the first.
Let us pose a simple mathematical model for investment. There are just two times
under consideration: "now" and "later." The investor will make a decision now about what
proportion of wealth to devote to each of two potential holdings, and then a return will be
collected later. One of these two holdings is a bond with a certain rate of return r1 , meaning
that the total dollars earned in this single period is certain to be r1 times the dollars invested
in the bond. The other potential investment is a risky stock whose rate of return R2 is a
random variable with known mean μ2 and standard deviation σ2. The investor starts with wealth W and is to choose the proportion q of that wealth to invest in the stock, leaving the proportion 1 - q in the bond. The dollar return on this portfolio of assets is a combination of
a deterministic part and a random part:
W (1 - q) r1 + W q R2
Hence the rate of return on the portfolio of two assets, i.e., the dollar return divided by the
number of dollars W invested, is
R = (1 - q) r1 + q R2    (1)
Because R2 is random, the investor cannot tell in advance what values this portfolio
return R will take. So, it is not a well-specified problem to optimize the rate of return given
by (1). The investor might decide to optimize expected return, but this is likely to be a
trivial problem, because the risky stock will probably have a higher expected return
μ2 = E(R2) than the return r1 on the bond; so the investor should simply put everything into the stock if this criterion is used. Moreover, this optimization criterion takes no account of the riskiness of the stock, as measured by σ2², the variance of the rate of return.
There are various reasonable ways to construct a function to be optimized which
penalizes variability according to the degree of the investor's aversion to risk. One easy way
is based on the following idea. Suppose we are able to query the investor and determine
how many units of extra expected rate of return he would demand in exchange for an extra
unit of risk on the rate of return. Suppose that number of units of extra return is a (for
aversion). Then the investor would be indifferent between a portfolio with mean rate of return μ and variance σ² and another with mean μ + a and variance σ² + 1, or another with mean μ + 2a and variance σ² + 2, etc. Such risk-averse behavior follows if the investor attempts to maximize

μp - a σp²    (2)

where μp is the mean portfolio rate of return and σp² is the variance of the rate of return. This is because with objective (2), the value to this investor of a portfolio with mean μ + k a and variance σ² + k is

(μ + k a) - a (σ² + k) = μ - a σ²

which is the value of the portfolio of mean μ and variance σ². So we will take formula (2)
as our objective function.
From formula (1), the mean portfolio rate of return as a function of q is

μp = E[(1 - q) r1 + q R2] = (1 - q) r1 + q μ2    (3)

and since r1 is constant, the variance of the portfolio rate of return is

σp² = Var((1 - q) r1 + q R2) = q² σ2²    (4)

Combining (2), (3), and (4), an investor with risk aversion constant a will maximize as a function of q:

f(q) = (1 - q) r1 + q μ2 - a q² σ2² = r1 + (μ2 - r1) q - a σ2² q²    (5)

Setting the derivative f'(q) = μ2 - r1 - 2 a σ2² q equal to zero gives the optimal proportion

q* = (μ2 - r1)/(2 a σ2²)    (6)
Notice from (6) the intuitively satisfying implications that as either the risk aversion a or the stock variance σ2² increases, the fraction q* of wealth invested in the stock decreases, and as the gap μ2 - r1 between the expected rate of return on the stock and the certain rate of return on the bond increases, q* increases.
Example 1 Suppose that the riskless rate of return is r1 = .04, and the investor has the opportunity of investing in a stock whose expected return is μ2 = .07, and whose standard deviation is σ2 = .03. Let us look at the behavior of the optimal proportion q* as a function of the risk aversion a.
First, here is a Mathematica function that gives q* in terms of all of the problem parameters.
qstar[a_, r1_, μ2_, σ2_] := (μ2 - r1)/(2 a σ2^2)
The plot of this function in Figure 15 shows something rather surprising. The optimal proportion q* does not sink below 1 (= 100%) until the risk aversion becomes about 17 or greater. (Try zooming in on the graph to find more precisely where q* intersects the horizontal line through 1.) There are two ways to make sense of this: investors whose risk aversions are less than 17 hold all of their wealth in the stock and none in the bond, or, if it is possible to borrow cash (which is called holding a short position in the bond), then an investor with low risk aversion borrows an amount (q* - 1) W of cash and places his original wealth plus this borrowed amount in the stock. His overall wealth is still W, because he owns W + (q* - 1) W = q* W of stock and -(q* - 1) W of cash. He hopes that the stock grows enough to let him pay off his cash debt, and still profit by the growth of his holding in stock.
Figure 6.15 - Optimal proportion of wealth in risky asset as a function of risk aversion
Let us simulate the experiment of making this investment 200 times, and keep track
of the investor's profit under the optimal choice q*, and then under a less-than-optimal
choice. This will give us numerical evidence that this borrowing is a good thing for the
investor. We will plot a histogram, and compute a sample mean and standard deviation of
profit in each case. The distribution of the stock rate of return will be assumed normal. The
rate of return information will be as described in the problem, and we will take an initial
$100 investment and an investor with risk aversion parameter a = 10. First we define a
simulator of total returns given initial wealth W, portfolio coefficient q, the rates of return r1
and Μ2 , the standard deviation of the stock return Σ2 , and the number of simulations desired.
Needs"KnoxProb7`Utilities`"
Here is a typical result using the optimal strategy. The first output is the optimal q*, and the
sample mean and standard deviation of total returns corresponding to it. Then we let
Mathematica compute the sample mean and standard deviation for the portfolio which puts
100% of wealth in the stock, without borrowing. Beneath these are histograms of total
returns for each simulation.
1.66667
{8.47672, 4.83981}
q = 1;
returns2 = TotalReturn[100, q, .04, .07, .03, 200];
{Mean[returns2], StandardDeviation[returns2]}
{6.73963, 3.03797}
(a) optimal strategy q* = 1.6667, a = 10, W = 100   (b) q = 1 (no borrowing), a = 10, W = 100
Figure 6.16 - Distribution of profits under two strategies
Since the first run began with more money in the stock, it is to be expected that the mean
total return of about 8.5 exceeds the mean total return for the second scenario of about 6.7.
It is also to be expected that the standard deviation of the return is a little higher in the first
case than the second. But the distribution of returns in Figure 16 is noticeably more
desirable in the optimal case than in the q = 1 case, since the center of the distribution is
higher, and its highest category is higher than the one for the non-optimal case, while the
lowest categories are comparable.
Activity 3 Compare the distribution, mean, and standard deviation in the optimal case
to some suboptimal cases for an investor with high risk aversion of 30.
Extensions of this basic problem are possible. Exercise 4 investigates what happens
when there are two stocks of different means and variances. An important result in the
multiple stock case is the Separation Theorem, which says that risk aversion affects only the
balance between the bond and the group of all stocks; all investors hold the stocks them-
selves in the same relative proportions. One can also consider multiple time periods over
which the portfolio is allowed to move. Mathematical economists have also studied different
objective criteria than the mean-variance criterion we have used. One of the main alterna-
tives is to maximize a utility function UR of the rate of return on the portfolio, where U is
designed to capture both an investor's preference for high return, and aversion to risk.
Option Pricing
Our second problem is the option pricing problem. An asset called a call option of Euro-
pean type can be purchased, which is a contract that gives its owner the right to buy a share
(actually, it is more commonly 100 shares) of another risky asset at a fixed price E per share
at a particular time T in the future. The price E is called the exercise price of the option.
The question is, what is the present value of such a call option contract? (The "European"
designation means that it is not possible to exercise the option until time T, while options of
American type permit the option to be exercised at any time up to T. )
The value of the European call option depends on how the risky asset, whose price at time t we denote by P(t), moves in time. If the final price P(T) exceeds E, the owner of the option can exercise it, buy the share at price E, then immediately resell it at the higher price P(T) for a profit of P(T) - E. But if P(T) ≤ E then it does not pay to exercise the option to buy, so the option becomes valueless.
Suppose that the value of money discounts by a percentage of r per time period. This means that a dollar now is worth 1 + r times the worth of a dollar next time period; equivalently, a dollar in the next time period is worth only (1 + r)^(-1) times a dollar in present value terms. Similarly, a dollar at time T is only worth (1 + r)^(-T) times a dollar now. Therefore, the present value of the call option is the expectation

(1/(1 + r)^T) E[max(P(T) - E, 0)]    (7)
Activity 4 A European put option is a contract enabling the owner to sell a share of a
risky asset for a price E at time T. Write an expression similar to (7) for the value of a
put option.
To proceed, we need to make some assumptions about the risky asset price process
Pt. Nobel winners Fischer Black and Myron Scholes in their seminal 1973 paper chose a
continuous time model in which logPt has a normal distribution with known parameters,
implying that Pt has a distribution called the log-normal distribution. (See also Section 6.4
on the Geometric Brownian motion process.) The Black-Scholes solution, which has
dominated financial theory since its inception, is expressed in terms of the standard normal
c.d.f. and some parameters which govern the stock price dynamics. But there is quite a lot of
machinery involved in setting up the model process, and finding the expectation in (7) for
this continuous time problem. So we will use the simpler Cox, Ross, and Rubinstein model
(see Lamberton and LaPeyre, p. 12) in discrete time. It turns out that, in the limit as the
number of time periods approaches infinity while the length of a period goes to 0, the Cox,
Ross, and Rubinstein call option value converges to the Black-Scholes call option value.
In the Cox, Ross, and Rubinstein framework, we assume that from one time to the
next the asset price changes by:
P(t + 1) = (1 + b) P(t) with probability p, and P(t + 1) = (1 + a) P(t) with probability 1 - p    (8)

where a < b. So there are two possible growth rates for the price in one time period: b or a, and these occur with probabilities p and 1 - p. By the way, a need not be positive; in fact, it is probably useful to think of the a transition as a "down" movement of the price and the b transition as an "up" price movement. One other assumption is that the ratios Yt = P(t)/P(t - 1) are independent random variables. By (8),

Yt = 1 + b with probability p, and Yt = 1 + a with probability 1 - p,  so that  P(t) = P(0) Y1 Y2 ⋯ Yt    (9)
A tree diagram summarizing the possible paths of the asset for two time periods is in Figure
17.
Figure 6.17 - C-R-R asset price model: a tree of the possible prices x, x(1 + a), x(1 + b), x(1 + a)², x(1 + a)(1 + b), x(1 + b)² over two time periods
If the initial price of the risky asset is x, then in the time periods 1, 2, ..., T the price will undergo some number j of "ups" and consequently T - j "downs" to produce a final price of x (1 + b)^j (1 + a)^(T - j). By our assumptions, the number of "ups" has the binomial distribution with parameters T and p. Therefore, the expectation in (7), i.e., the present value of the call option, is a function of the initial price x, the exercise price E, and the exercise time T by
V(x, T, E) = (1/(1 + r)^T) Σ (j = 0 to T) Binomial(T, j) p^j (1 - p)^(T - j) max(x (1 + b)^j (1 + a)^(T - j) - E, 0)    (10)
It appears that formula (10) solves the option valuation problem, but there is a little more to the story. Economic restrictions against the ability of investors to arbitrage, i.e., to earn unlimited risk-free returns at rates above the standard risk-free rate r, imply that we may assume not only that a < r < b, but also that
E[P(t)/P(t - 1)] = E[Yt] = 1 + r
The argument that justifies this is actually rather subtle, and can be found in most finance
books, including Lamberton and LaPeyre mentioned above. It hinges on the fact that if the
option were valued in any other way, a portfolio of the stock and the option could be
constructed which would hedge away all risk, and yet would return more than the riskless
rate. We will omit the details. By construction of Yt , we can use the anti-arbitrage condition
E[Yt] = 1 + r to conclude that
(1 + b) p + (1 + a)(1 - p) = 1 + r,  that is,  p = (r - a)/(b - a)    (11)
after some easy algebra. So with the "up" and "down" percentages b and a, and the riskless
rate r as given parameters, equations (10) and (11) form a complete description of the fair
option value.
Also, the same reasoning as before applies to the problem of finding the present value of the same option at a time t between 0 and T. All of the T's in formula (10) would be replaced by T - t, which is the remaining number of time steps.
Example 2 Let us define the option price function V as a Mathematica function that we call OptionValue, and examine some of its characteristics. In addition to the current time t, the current price x, T, and E, we will also pass b, a, and r as arguments, and let the function compute p. We use the variable name EP instead of E for the exercise price, since E is a reserved name. Here is one example run. Think carefully about the question in the activity that follows.
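The definition cell is not reproduced in this copy; a sketch that implements formulas (10) and (11), with T replaced by T - t as described above (the exact argument order is an assumption based on the description), is:

OptionValue[t_, x_, T_, EP_, b_, a_, r_] :=
  Module[{p = (r - a)/(b - a)},   (* risk-neutral up probability from (11) *)
   (1/(1 + r)^(T - t))*
    Sum[Binomial[T - t, j]*p^j*(1 - p)^(T - t - j)*
      Max[x*(1 + b)^j*(1 + a)^(T - t - j) - EP, 0], {j, 0, T - t}]]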
6.45092
Activity 5 In the output above the difference between the stock price and the option exercise price is P(0) - E = $22 - $18 = $4. Why is $6.45 a reasonable answer for the value of the option?
Figure 18(a) shows the graph of option price against stock price, for fixed time t = 0, time horizon T = 8, and exercise price E = 18. We take b = .09, a = -.03, and r = .05. We observe that the option is worthless until the stock price is larger than 11 or so, which is to be expected because if the stock price is so much lower than E, and can only rise at best by a little less than 10% for each of 8 time periods, it has little if any chance of exceeding the exercise price by time T. As the initial stock price rises, the price has a much better chance of exceeding 18 at time T = 8, so the amount the buyer expects to receive increases, and the price of the option therefore increases.
(a) T = 8, E = 18, b = .09, a = -.03, r = .05   (b) T = 8, b = .09, a = -.03, r = .05, stock price $22
Figure 6.18 - (a) Option price at time 0 as a function of stock price; (b) Option price at time 0 as a function of exercise price
We see the same phenomenon in a different way in Figure 18(b), in which the option
price is plotted against the exercise price E, for fixed initial stock price $22 (the other
parameters have the same values as before). As the exercise price increases, it becomes
more likely that the stock price will not exceed it, lowering the price of the option. Thus, the
price of the option goes down as the exercise price increases. To complete the study, in
Exercise 6 you are asked to study the effect of lengthening the time horizon on the option
price.
We can use the OptionValue function in a simple way to simulate a stock price and its option price together. The function SimStockAndOption simulates a series of "ups" and "downs" of the stock price, and updates and returns lists of stock prices and corresponding option prices at each time up to T.
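Its definition is not legible in this copy; a sketch of how such a simulator might be written, using the OptionValue function above (whether the original displays the plot from inside the function is an assumption), is:

SimStockAndOption[x_, T_, EP_, b_, a_, r_] :=
  Module[{p = (r - a)/(b - a), price = x, stocklist, optionlist},
   stocklist = {{0, x}};
   optionlist = {{0, OptionValue[0, x, T, EP, b, a, r]}};
   Do[
    price = price*If[RandomReal[] < p, 1 + b, 1 + a];  (* up with prob. p, down otherwise *)
    AppendTo[stocklist, {t, price}];
    AppendTo[optionlist, {t, OptionValue[t, price, T, EP, b, a, r]}],
    {t, 1, T}];
   Print[ListPlot[{stocklist, optionlist}, Joined -> True, PlotStyle -> Black]];
   {stocklist, optionlist}]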
Here is an example simulation, with a starting stock price of $15, exercise time T = 10, exercise price E = $18, multipliers b = .08, a = -.02, and riskless rate r = .04. Feel free to run the command several times with this choice of parameters, and then to change the parameters to find patterns. One thing that you will always see is that the difference between the stock price and the option price is the exercise price at the final time, if the option is valuable enough to be exercised. (Why?)
{{0, 15}, {1, 16.2}, {2, 15.876}, {3, 15.5585},
 {4, 15.2473}, {5, 16.4671}, {6, 16.1378}, {7, 17.4288},
 {8, 17.0802}, {9, 18.4466}, {10, 18.0777}}
{{0, 2.91316}, {1, 3.58582}, {2, 2.79178}, {3, 2.02018},
 {4, 1.30134}, {5, 1.79938}, {6, 1.02902}, {7, 1.51765},
 {8, 0.674307}, {9, 1.13892}, {10, 0.0776817}}
Figure 6.19 - Simulated stock price (upper curve) and option price (lower curve) together with list of {time, stockprice} pairs and list of {time, optionprice} pairs
This chapter has only scratched the surface of the many interesting extensions and
applications of probability. Other important areas, such as reliability theory, stochastic
optimization, information theory, the analysis of algorithms, and inventory theory, await
you. Or, you might decide you want to study Markov chains, Brownian motion and other
random processes, queues, or finance more deeply. Probably the most important next step is
the subject of statistics, which we have previewed a number of times. It is my hope that your
experience with this book has prepared you well for all of these studies, as well as enhancing
your problem-solving abilities, and giving you a new view of a world fraught with uncertain-
ties.
Exercises 6.5
1. If your investment strategy is to pick just one stock from among the following three to invest in, and your risk aversion is a = 5, which stock would you pick?
Stock 1: μ1 = .05, σ1² = .008;  Stock 2: μ2 = .06, σ2² = .01;  Stock 3: μ3 = .09, σ3² = .02
2. Find the optimal portfolio of one stock and one bond for an investor with risk aversion
a = 12, if the non-risky rate of return is 4%, and the stock rate of return is random with mean
5% and standard deviation 4%.
3. (Mathematica) Simulate the total returns for 200 replications of the investment problem
in Exercise 2, for the optimal portfolio, for a portfolio consisting of 100% stock, and for a
portfolio consisting of 50% stock and 50% bonds. Assume an initial wealth of $1000, and
assume that the stock rate of return is normally distributed. Produce and compare his-
tograms and summary statistics of the simulated returns.
4. Suppose that two independent stocks with mean rates of return μ1 and μ2 and standard deviations σ1 and σ2 are available, as well as the non-risky bond with rate of return r. Find
expressions for the optimal proportions of wealth that an investor with risk aversion a is to
hold in each asset. Does the relative balance between the two risky assets depend on the risk
aversion?
5. Suppose that the non-risky rate of return is 5%, and we are interested in a two period
portfolio problem in which we collect our profit at the end of the second time period. If the
stock follows a binomial model as described in the second subsection, with b = .10, a = 0, and p = .4, and the investor's risk aversion is 15, find the optimal portfolio.
6. (Mathematica) In Example 2, produce a graph of the option price at time 0 as a function
of the exercise time T, with the initial stock price = $14, and other parameters as in the
example: E = 18, b = .09, a = -.03, r = .05. Then see how the graph changes as you vary the
initial stock price.
7. A stock follows the binomial model with initial price $30, b = .08, a = 0, and the riskless rate is r = .04. Find (by hand) the probability p, the distribution of the stock price two time
units later, and the value of an option whose exercise time is 2 and whose exercise price is
$32.
8. Express the value of the European call option (10) in terms of the binomial cumulative
distribution function.
9. An American call option is one that can be exercised at any time prior to the termination
time T of the contract. Argue that a lower bound for the value of an American call option
V(S, t, E) is max(S - E, 0). Sketch the graph of this lower bound as a function of S.
(Incidentally, it can be shown that the value of an American call is the same as that of a
European call. So this lower bound holds for the European call option as well.)
10. Consider a single time period option problem with a general binomial model expressed
by (8) holding for the stock upon which the option is based. Again let r be the non-risky
rate. Show that the investment action of holding one option at time 0 can be replicated by
holding a certain number N of shares of the stock, and borrowing an amount B at the non-
risky rate, i.e., whether the stock goes up or down in the single time period, this replicating
portfolio performs exactly as the option would perform. Find expressions for the N and B
that do this replication.
11. In the context of options pricing it was claimed in the section that opportunities for
limitless risk-free profits could ensue if the anti-arbitrage condition a < r < b was not in
force, where r is the interest rate on a risk-free asset and a and b are the parameters in
formula (8) governing the possible states of the risky asset. Explain how this can happen.
12. The perception of a firm's value by its stockholders and potential stockholders is an
influential factor in the price of a share of its stock on the market. There have been several
models put forth by finance specialists for the value of a firm, but a relatively simple one
involves the present value of the expected total profit. Consider a firm whose profits last
year were $40 million. They are expected to grow each year by a random factor 1 + Xi,
where Xi is normally distributed with mean .04 and variance .0004. Assume that these
random variables are independent of each other. Suppose that the value of money discounts
by a percentage of r = .02 per year. Compute the present value of the expected total profit
through year 10.
13. (Mathematica) A man is planning to retire in two years, at which time he will have saved
an amount W in his retirement account. Suppose that he is allowed to lock in the interest
rate R on his retirement account at the time of retirement, but R is unknown to him now. He
estimates that R is normally distributed with mean .035 and variance .000095. He would like
to be able to consume a fixed income of at least $50,000 for at least 20 years. How large
must W be in order for him to achieve his financial goal with probability at least .9?
Appendices
Appendix A
Introduction to Mathematica
This text takes pedagogical advantage of many of the features of Mathematica, the computer
algebra system from Wolfram Research, version 6.0 or higher. If you are a relative new-
comer to Mathematica, this short introductory section focuses on the most important basic
aspects of the software, including programming tools. Appendix B documents commands
specific to probability that are defined in the book itself, in the accompanying package
KnoxProb7`Utilities`, or in the Mathematica kernel or standard packages. This appendix is
intended to be a brief review for those with at least some experience, not a comprehensive
course, but it should suffice to prepare you for the way in which Mathematica is used in
problem solving in this book.
2/16
1/8
t = 9; s = 5;
t s
t*s
2 t + 1
45
45
19
6^2
36
Notice that when you carriage return in an input cell, Mathematica understands that the next
line is a new calculation. But if a line is followed by a semicolon, then output is suppressed.
If you want to reuse a symbol name, such as t and s above for a different purpose, it is
usually enough to just reassign its value, but you should also know about the Clear[symbol]
function, which clears out previous instances of the given symbol.
Mathematica is quite particular in the way that the various types of brackets are to be
used: open parentheses () only surround algebraic expressions for purposes of grouping;
square brackets [] only surround arguments of functions, both in definitions and evaluations
of functions; and curly braces {} only surround lists. Here are a few correct usages of
brackets. Observe that in the last calculation, the two lists are being added componentwise.
Expand[16 (x + 1)]
16 + 16 x
f[x_] := x^2;
f[3]
9
{1, 4, 3} + {3, 2, 1}
{4, 6, 4}
Some of the most useful predefined constants and mathematical functions are Pi, E
(the Euler number), Max, Min, (the maximum and minimum value in a list of objects), Abs
(absolute value), Exp (exponentiation with base equal to the Euler number, also written as
E^), Log (the natural logarithm), Sqrt (square root function), and Mod[n,k] (modular
arithmetic remainder on division of n by k). Some functions and constants are illustrated
next.
Mod[5, 3]
Log[E^2]
Abs[-4]
Max[1, 8, 7, 4]
Exp[Log[2.5]]
2.5
Sin[Pi]
0
Sqrt[16]
There are tools to shorten and beautify expressions. Depending on what Mathematica
palettes you have installed, you may be able to select Greek and mathematical symbols and
expression templates by clicking on a button on the palette. The BasicTypesetting palette is
particularly useful. Certain shortcut key combinations are also very helpful to know about:
Holding down the CTRL key and typing the dash/underscore key produces a subscript template; similarly, holding down CTRL while typing the 6 (^) key gives a superscript, or exponential, form. The CTRL-/ key combination produces a fraction template. Greek letters can be produced from the keyboard with ESC combinations; for instance typing ESC, then p, then ESC again gives π, and the ESC a ESC combination gives α.
f1[x_] := 2 Sqrt[x] + 1;
Notice that the argument of a function in such a definition is surrounded by square brackets,
and is followed by the underscore symbol. The syntax := is used to indicate a definition;
evaluation of the expression to the right of this sign is delayed until the function is called for
a particular input value x. When a definition cell is given to Mathematica, no output is
produced.
Plotting a real function of a real variable is very easy using the Plot command, which
takes as its first argument the name of the function or an expression defining the function,
and as its second argument a list such as {x, a, b} indicating the name of the domain
variable, and the left and right endpoints of the domain. For example:
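(The exact expression plotted in the printed example is not legible in this copy; the call below, which plots the function f1 defined earlier over 0 ≤ x ≤ 4, is a representative stand-in.)

Plot[f1[x], {x, 0, 4}]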
Mathematica also has the capability to graph functions of two variables in three-
dimensional space, and do corresponding contour plots, that is, curves in the plane defined
by constant values of the function. The Plot3D and ContourPlot commands work much like
Plot, with an additional variable, as shown below.
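(The surface used in the printed example is not legible in this copy; the function Sin[x] Cos[y] below is a stand-in chosen only to show the syntax.)

Plot3D[Sin[x] Cos[y], {x, -1, 1}, {y, -1, 1}]
ContourPlot[Sin[x] Cos[y], {x, -1, 1}, {y, -1, 1}]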
Simplify[x^2 + 2 x + 1 - (x + 6) + 7 x^2 + 2]

-3 + x + 8 x^2

Factor[x^3 + 3 x^2 + 3 x + 1]

(1 + x)^3

Expand[(x - 1) (x + 2) (x - 4)]

8 - 6 x - 3 x^2 + x^3
Together[(x + 1)/(x - 2) + (x - 1)/(x + 2)]

(2 (2 + x^2))/((-2 + x) (2 + x))

Apart[(2 x^2 + 4)/((x - 2) (x + 2))]

2 + 3/(-2 + x) - 3/(2 + x)
There are functions that evaluate finite sums and products as well. Sums are com-
puted as Sum[expression,{variable, lowerlimit, upperlimit}] and similarly for products, but
symbolic templates are also available by clicking the appropriate button on the BasicMathIn-
put palette and filling in the blanks. We illustrate both approaches below.
Sum[i^2, {i, 1, 3}]

Σ_{i=1}^{3} i^2

14

14
48
48
Mathematica can solve many equations and systems of equations in closed form
using the Solve command, whose basic syntax is Solve[list of equations, list of variables].
The answer is reported as a list of so-called rules of the form {symbol->value}. A word is in
order about syntax for equations in Mathematica however. Mathematica distinguishes
carefully between three usages of the equal sign that humans usually blur together. Ordinary
"=" is used when assigning a value to a variable immediately. From above, the ":=" notation
is used for definitions that are not immediately evaluated. And finally, in logical equations
we use the notation "==", that is, repeated equal signs. The following examples illustrate
how to solve a simple quadratic equation for constant values, how to solve for one variable
in terms of another in an equation, and how to solve a system of equations, expressed as a
list.
Solve[x^2 - 6 x + 8 == 0, x]

Solve[(2 y + x)/(x + 1) == 6, y]

{{y -> (6 + 5 x)/2}}

{{x -> 5/3, y -> 2/3}}
Mathematica can solve many polynomial equations, but for other kinds of equations
numerical techniques are necessary. The system provides a number of numerical operations.
Some that you will find useful in this book are: N, NSolve, FindRoot, and NSum. The N[]
function can apply to any numerical-valued expression to give an approximate decimal
value, and it can be given an optional second argument to request desired digits of precision.
It can be quite helpful in cases where Mathematica insists on giving closed form answers
when a numerical estimation is far more helpful. Here are a few examples.
N[Cos[16]]
N[Cos[16], 10]
N[Pi, 20]
-0.957659
-0.9576594803
3.1415926535897932385
16857/23594
N[16857/23594]
16857/23594
0.714461
The NSolve command is a version of Solve that uses numerical techniques to look for
solutions, including complex values, of polynomial equations. FindRoot is more general; its
syntax is FindRoot[function, {variable, starting value}] and it uses iterative methods starting
with the given starting value to locate zeros of the function close to that value. FindRoot
works for a wide variety of non-linear equations. Examples are below.
NSolve[x^6 + 3 x^4 + 6 x - 12 == 0, x]
x -> -1.42196
x -> 0.619061
Finally, the NSum command is much like Sum, but it reports a decimal approxima-
tion to the result as below.
535.2
Calculus
Mathematica commands for calculus are abundant, but only a few are most relevant for this
book. In continuous probability we find ourselves doing a great deal of integrating, so the
most important of all are the Integrate[function, {x,a,b}] closed form definite integral and
the similar NIntegrate command for numerical integration. (Integrate has a version for
indefinite integration without limits of integration a and b, but we don't have as much cause
to use it here.) The BasicMathInput palette has an integral template, which we also demon-
strate in the examples below.
(3 Log[4])/4

(3 Log[4])/4
g[s_] := (6 s Sin[s])/E^s;
NIntegrate[g[s], {s, 0, 1}]
0.878724
21/2

21/2
Other useful commands for calculus problems are D[function, variable] for the
derivative of a function with respect to a variable; Limit[function, variable->value] for the
limit of a function at a value; and the commands Sum and NSum from above as applied to
infinite series. The derivative command D also has variant forms: D[f, {x, n}] for the nth
derivative of the function f relative to the variable x, and D[f, x1, x2,...] for the mixed partial
of the several-variable function f with respect to the listed variables. Here are some examples
of the use of D, Limit, and the sum commands.
Clear[f, t]
f[t_] := Cos[t] Log[t];
D[f[t], t]
D[f[t], {t, 2}]

Cos[t]/t - Log[t] Sin[t]

-(Cos[t]/t^2) - (2 Sin[t])/t - Cos[t] Log[t]
D[x^2 y^2, x, x, y]  (*this is the third order partial*)

4 y
Limit[(x^2 - 1)/(x - 1), x -> 1]
1.64493
Pi^2/6
1.64493
Notice from the last example that for infinite series for which Mathematica is able to
compute a closed form, the N[] function can still be applied to get a numerical approxima-
tion.
Lists
Lists in Mathematica are sequences of symbols, separated by commas and demarcated by
braces {}. They can contain any type of object, and list members may themselves be lists. In
probability we frequently store samples of data in lists, or simulate data into lists, which we
then process in some way. Hence a facility with list manipulation in Mathematica is quite
important for this book.
Some very simple operations on lists are: Length[list], which returns the number of
elements in the list, Last[list] and First[list], which return the last and first element of the
list, and Sort[list] which sorts the list into increasing (numerical or lexicographical) order.
You should be able to easily verify the correctness of the output in all of the inputs below.
0.2

{a, b, c}

You can pick out individual elements of a list by referring to listname[[i]], which returns the
ith list member (note that list subscripts start at 1). For example,

list1[[3]]

list2[[2]]

6.8
Lists can be defined as above by just enumerating their elements, but also patterned
lists can be obtained by using the Table command. The standard form is Table[function of i,
{i, imin, imax}], which lets the variable i successively take on values from imin to imax in
increments of 1, and builds a list whose ith member is the function applied to i. We illustrate,
and show some variants on the table iteration list below.
{1, 4, 9, 16}

{0, 0, 0, 0, 0, 0}
The list called list3 is an ordinary list of squares of integers from 1 through 4. In the
definition of list4 we have added an extra value .25 to the table iteration list, which controls
the size of the step taken from one value of the variable k to the next. So k will range
through the values 0, .25, .50, .75, and 1.0 in this example. In list5 we abandon the list
iteration variable altogether and ask for a list of zeros of length 6. The variable list6 is
created by Table using two iteration variables i and j, producing a list of lists. First, i takes
on the value 1, and Mathematica computes the products of 1 with j 2, 3, and 4 and
assembles those into a sublist to create the first entry of list6. Then i becomes 2, and the next
sublist entry of list6 is the list of products of i 2 with j 2, 3, and 4. Similarly, the last
sublist entry of list6 is the product of i 3 with the values of j.
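A sketch of Table commands matching this description follows; the inputs are my reconstruction from the prose above, and the entry formula 2 k in list4 is only a stand-in, since that detail is not stated.

list3 = Table[i^2, {i, 1, 4}]              (* {1, 4, 9, 16} *)
list4 = Table[2 k, {k, 0, 1, .25}]         (* k steps by .25: {0, 0.5, 1., 1.5, 2.} *)
list5 = Table[0, {6}]                      (* {0, 0, 0, 0, 0, 0} *)
list6 = Table[i j, {i, 1, 3}, {j, 2, 4}]   (* {{2, 3, 4}, {4, 6, 8}, {6, 9, 12}} *)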
The last example leads us to nested lists and matrices in Mathematica. List members
can themselves be lists, and this gives us a data structure to use to represent matrices. Each
sublist is a row in the matrix. To display the matrix in the usual rectangular format, Mathe-
matica provides the TableForm[matrix] and MatrixForm[matrix] commands. These are very
similar, the main difference being in whether the output has parentheses surrounding the
matrix entries. The arguments are lists, usually with nested sublists of equal length. For
example,
TableForm[list6]

2   3   4
4   6   8
6   9   12

1   1   0   0
0   0   1   1
1   0   1   0

matrix7[[1, 2]]
matrix7[[3, 4]]
There are a number of useful operations on lists and matrices that you should know
about. The arithmetical operations +, -, *, and / can be applied to two lists of the same
length, which applies the operation componentwise, as shown below.

{1, 0, 2} * {3, 4, 5}
{2, 4, 6} / {2, 2, 2}

{3, 0, 10}

{1, 2, 3}
The dot product of vectors or matrix product of two matrices for which the product is
defined can be done with the period playing the role of the dot in the usual mathematical
notation, as the next examples indicate.
{1, 0, 2}.{3, 4, 5}

13

matrix8 = {{1, 2}, {3, 4}};
matrix9 = {{0, 1}, {2, 0}};
MatrixForm[matrix8]
MatrixForm[matrix9]
MatrixForm[matrix8.matrix9]

1   2
3   4

0   1
2   0

4   1
8   3

1   2
3   4

1   3
2   4
Very often in this book we build lists by simulating values. Each new value must be
appended to the end of the existing list. For this, Mathematica provides the AppendTo[list-
name, newvalue] command. This command modifies the given list by placing the new value
at the end.
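A two-line illustration (the particular list and value here are arbitrary):

thelist = {1, 2};
AppendTo[thelist, 7];
thelist     (* now {1, 2, 7} *)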
Finally, for plotting numerical valued lists, we have the ListPlot[listname] command.
It can be used in two ways: if its input is a one-dimensional list of numbers, say
{a, b, c, ...}, it assumes that these are y-values to be paired up with x-values of 1, 2, 3, ...
and plotted as points in the x-y plane. If the list that is input is already a list of pairs {x, y},
then ListPlot simply plots these points in the plane. Two useful options to know about for
ListPlot are Joined->True, which connects the points with line segments, and PlotStyle-
>PointSize[value], which changes the size of the points. Here are some examples of ListPlot.
[Three ListPlot examples: a list of y-values plotted against positions 1, 2, 3, ...; a list of (x, y) pairs plotted as points; and the same pairs plotted with Joined -> True.]
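A sketch of ListPlot calls in this spirit, with made-up data standing in for the book's:

data = {{2, 1.6}, {3, 2.4}, {4, 2.9}, {5, 2.2}};
ListPlot[data, PlotStyle -> PointSize[0.025]]
ListPlot[data, Joined -> True]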
Notice from the last example that when Joined is set to True, the individual points are not drawn; only the connecting line segments appear.
Programming Tools
One of the pedagogical strategies that is highlighted in this book is to teach the concept of
random variation by having the reader program simulations of various random phenomena.
Programming is also an important problem-solving device to gain intuition about and to
approximate solutions to problems that are not friendly to purely analytical approaches. So it
is necessary for the reader to acquire some skill in writing short programs in Mathematica.
When writing programs of any kind, the student should first clearly understand the problem
and its assumptions, and then specify exactly what is to be input to the program and what is
to be output. Good programming includes documentation of what the lines of code are
intended to do, in any but the most short and simple programs. Mathematica provides the
bracketing pair: (* *) for documentation; anything enclosed by this pair of symbols is
considered a comment and ignored when the program is run.
A program consists essentially of header information including identification of
input, a list of variables used internally by the program and not defined externally to it, and a
sequence of statements producing some output. (This description really applies to a program-
ming style called imperative programming that is used in this book. It must be acknowl-
edged that many, if not most, Mathematica programmers prefer a functional programming
style, in which problems are solved by sequences of function compositions, rather than
statements executed one after another. I am of the opinion that for simulation programming,
the imperative style is more straightforward.) In Mathematica, there is a function called
Module that allows the programmer to group statements together and introduce local
variables. When actually carrying out computations, the program needs the ability to react
one way under one configuration of data and another way under a different configuration.
So, Mathematica has boolean operators on logical statements, and conditional functions like
If and Which to make the computation depend on the current state of the data. Also, most
programming problems require repetitive operations to be done. Some problems require a
fixed number of operations, such as replicating a phenomenon a fixed number of times,
while others need to continue executing until some stopping condition goes into effect. For
these repetitive code fragments, or loops, Mathematica includes the Do and While com-
mands. We talk about modules, conditional functions, and loops in turn below.
Modules
A module in Mathematica is of the form Module[{local variable list}, statements]. The first
argument, i.e., a list of local variables, allows the user to give names to temporary variables
used in the computation that is to be done by the module, which are undefined outside of the
module. The second argument is any sequence of valid Mathematica commands, separated
by semicolons. The output of the module is whatever is computed in its last line. A typical
template for using a module to solve a problem is to define a function
MyFunction[arg1_, arg2_, ...] := Module[{locals}, statements]
When the function is called, arguments are given to the module, whose values can be used as
the program executes.
For example, here is a program to reverse the order of the first two elements of a list.
We assume that the input list has at least two elements in it, otherwise the references to the
1st and 2nd elements will not be valid. In this module, a copy of the input list is made in the
local variable listcopy. We then replace the first and second elements of the copy of the list
by the second and first elements of the original list, and return the copy.
ReverseTwo[thelist_] :=
  Module[{listcopy},
    listcopy = thelist;  (* Mathematica doesn't allow changes to input variables,
                            so work on a copy of the input list *)
    listcopy[[1]] = thelist[[2]];  (* swap into the first element *)
    listcopy[[2]] = thelist[[1]];  (* swap into the second element *)
    listcopy]

ReverseTwo[{1, 2, 3}]

{2, 1, 3}
The function call above tests out the command on a small list, and the output is appropriate.
Conditional Functions
For example, the function below uses If together with the boolean test MemberQ to decide whether an element belongs to a list.

MemberCheck[mylist_, element_] :=
  If[MemberQ[mylist, element],
    Print["Yes"], Print["No"]]

Below, MemberCheck replies that 4 is not an element of the list {1, 2, 3}, and that the letter "c" is
an element of the list {"a", "b", "c", "d"}.

MemberCheck[{1, 2, 3}, 4]
MemberCheck[{"a", "b", "c", "d"}, "c"]

No

Yes
The boolean operators && (and), || (or), and Not combine the results of relational tests such as <, >, and ==, each of which evaluates to True or False. For example,

1 < 2 && 2 < 3

True

1 < 2 || 3 < 2

True

x2 = 9;
Not[x2 > 10]

True
(* a piecewise-defined function *)
f[x_] := Which[x < 0, 0, 0 <= x < 1, 2, x >= 1, 4];
{f[-1], f[0], f[2]}
Plot[f[x], {x, -1, 2}, PlotStyle -> Black]

{0, 2, 4}
Loops
The Do command runs a block of statements a fixed number of times. For example, the following code builds a list of 20 random numbers between 0 and 1 by appending one new value on each pass through the loop.

n = 20;
thelist = {};
Do[AppendTo[thelist, RandomReal[]], {n}];
thelist
By contrast, the next code fragment continues to simulate random real numbers on the interval [1, 2]
until their total is 20 or more (which can take no more than 20 passes through the While
loop). It then returns the total of the numbers.
sum = 0;
While[sum < 20,
  sum = sum + RandomReal[{1, 2}]];
sum

20.4392
Graphics
[Two sine curves, g1 and g2, displayed side by side.]
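A sketch of how two such graphs might be produced and named, with Sin[x] and Sin[2 x] serving as stand-in functions:

g1 = Plot[Sin[x], {x, 0, 2 Pi}];
g2 = Plot[Sin[2 x], {x, 0, 2 Pi}];
GraphicsRow[{g1, g2}]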
Superposition of graphs is often possible from within the graphical command itself;
for example, if we wanted to superpose instead of juxtapose the two function graphs above
we could simply Plot the list of the two functions:
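With the same stand-in functions as above, the superposed version is a single command:

Plot[{Sin[x], Sin[2 x]}, {x, 0, 2 Pi}]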
Alternatively, when the graphics have already been produced and named, or when the
graphics must be generated from two different commands such as Plot and ListPlot, the
Show[{list of graphs}] command can be used to superpose the graphs. Here we use it on the
two sine curves:
Show[{g1, g2}]
In a moment we will use Show to combine several kinds of primitive graphical objects.
Animation is a very powerful pedagogical device that can allow the user a deep
understanding of how graphs change as their input parameters change. Mathematica has two
ways of doing animations, using Animate and Manipulate. The former simply runs an
animation, while the latter allows the user to control the animation. The basic syntax for both
is illustrated by the two input cells below: the first argument to Animate or Manipulate is the
graphics command, within which is a variable that makes the graph change (here it is the
constant coefficient c of the linear term in the cubic function). The second argument is a
range of values {variable, lowvalue, highvalue} that you want the graphics control variable
to take. If this list contains a fourth member, an increment, then animation frames will be
generated by adding this increment to the most recent value of the graphics control variable.
Clear[x, c];
Animate[Plot[x^3 + c x + 1, {x, -2, 2},
   PlotRange -> {-4, 4}, PlotStyle -> Black],
  {c, -2, 2}, ControlPlacement -> Top]
Clear[x, c];
Manipulate[Plot[x^3 + c x + 1, {x, -2, 2},
   PlotRange -> {-4, 4}, PlotStyle -> Black],
  {c, -2, 2}, ControlPlacement -> Top]
Notice two things about the commands: in order that each frame in the animation is plotted
on a consistent scale, we have set the PlotRange option. Also, the control panels are a bit
different. In the Animate panel, there is a start and stop button (the right arrow), buttons to
speed up or slow down the animation, and a toggle button to run the animation forward,
backward, or back and forth. In the Manipulate panel, the user is given a drag bar to position
the animation to whatever frame is desired, and there are step forward and step backward
buttons. So the ability of the user to exert more control makes Manipulate animations more
interactive than animations produced with Animate.
Although it is not absolutely necessary for this book, you may want to use graphics
primitives to create your own diagrams. These graphics primitives, such as Point, Line,
Rectangle, Circle, and Arrow are used as arguments of the Graphics command in order to
produce the diagram. Point[{x, y}] is a point with coordinates x, y. Point can also be
applied to a list of such coordinates to produce several points at once. Line[{list of points}]
represents a connected line graph in which each point in the list is connected to its successor
in the list. Rectangle[lower left point, upper right point] is a filled rectangle with the given
two corner points. Circle[{x, y}, radius] produces a circle with center x, y and the given
radius. The graphics primitive Arrow[{startpoint,endpoint}] stands for an arrow beginning
and ending at the two given points.
Below we use Show to combine a unit circle with center at the origin with points at
the origin and at (√2/2, √2/2) on the circle, and with an arrow connecting these two
points. Notice that all of the graphics primitive objects are in a list inside Graphics, and the
first element of that list specifies that points are to be given a larger size for better visibility.
The Axes->True option for Show is used so that the coordinate axes are displayed.
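A sketch of how such a figure might be assembled (the PointSize value here is only a guess at the larger point size mentioned above):

Show[Graphics[{PointSize[0.02],
   Circle[{0, 0}, 1],
   Point[{0, 0}], Point[{Sqrt[2]/2, Sqrt[2]/2}],
   Arrow[{{0, 0}, {Sqrt[2]/2, Sqrt[2]/2}}]}],
 Axes -> True]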
External Packages
At times you have need of commands that are not loaded automatically by Mathematica
when its kernel starts. Mathematica has a few external packages of commands, including
MultivariateStatistics` and Combinatorica`, that are important for the study of probability
and statistics, and also this book comes with its own package, KnoxProb7`Utilities`.
Packages can be loaded in with the Needs command. The syntax is simply Needs["package-
name"], but you must remember that the package names follow a syntax in which directories
are terminated by the backquote character. The commands below illustrate how two of the
aforementioned packages are loaded. No output is generated by the Needs command unless
an error is encountered in loading the package, but once the package is loaded, all of its
commands and objects are available for use.
Needs"MultivariateStatistics`"
Needs"KnoxProb7`Utilities`"
If you attempt to use a symbol from a package before loading the package, and then
try to load, you may find that your spurious use of the undefined symbol masks the true
definition of the symbol in the package, and you get unexpected output or error messages. If
this happens, you can correct matters by quitting the kernel, restarting it, and then making
sure that you load the package before using the symbol.
Appendix B
Mathematica Commands
DotPlot[list] KnoxProb7`Utilities`
(* draws a dot plot of the given list of data. It takes some of the options of ListPlot,
although in the interest of getting a good dotplot it suppresses AspectRatio, AxesOrigin,
Axes, PlotRange, and AxesLabel. It has four options of its own: VariableName, set to be a
string with which to label the horizontal axis, NumCategories, initialized to 30, which gives
the number of categories for stacking dots, DotSize, initially .02, and DotColor, initially
RGBColor[0,0,0], that is black. *)
DrawIntegerSample[a,b, n] Section 1.1, KnoxProb7`Utilities`
(* selects a sample of size n from the set of integers a, ... , b with optional Boolean
arguments Ordered->True, and Replacement->False determining the four possible sampling
scenarios. *)
ExpectedHighTank[n, Θ] Section 2.1
(* computes the expected value of the highest tank number in a random sample of
size n from a uniformly distributed set of tank numbers in {1,2,...Θ} *)
ExponentialDistribution[Λ] kernel
(* object representing the exp(Λ) distribution *)
FRatioDistribution[m, n] kernel
(* object representing the F(m, n) distribution *)
Gamma[r] kernel
(* returns the value of the gamma function at r *)
GammaDistribution[Α, Β] kernel
(* object representing the gamma distribution with the given parameters *)
GeometricDistribution[p] kernel
(* an object representing the geometric distribution with success parameter p *)
Histogram[datalist, numberofrectangles] kernel
or Histogram[datalist, {rectanglewidth}]
(* plots a histogram of frequencies of a list of data, with a desired number of
rectangles, or with rectangles of a desired width. An optional third argument has either of the
values "Probability" or "ProbabilityDensity" depending on whether you want bar heights to
be relative frequencies or relative frequencies divided by interval length. If the second
argument is replaced by {a, b, dx}, then a histogram extending from a to b with rectangle
widths dx is returned. *)
HypergeometricDistribution[n, M, N] kernel
(* an object representing the hypergeometric distribution, where a sample of size n is
taken from a population of size N, in which M individuals are of a certain type *)
KPermutations[list, k] KnoxProb7`Utilities`
(* returns all permutations of k objects from the given list *)
KSubsets[list, k] Combinatorica`
(* lists all subsets of size k from the given list *)
MatrixPower[T, n] kernel
(* returns the nth power of the matrix T *)
Mean[datalist] kernel
(* returns the sample mean of the list of data *)
Mean[dist] kernel
(* returns the mean of the given distribution *)
Multinomial[x1, x2, ... , xk] kernel
(* returns the multinomial coefficient *)
MultinormalDistribution[meanvector, covariancematrix] MultivariateStatistics`
(* an object representing the bivariate normal distribution with the given two-
component vector of means, and the given covariance matrix. Generalizes to many dimen-
sions *)
MyRandomArray[initseed, n] Section 1.3
(* returns a list of uniform[0,1] random numbers of length n using the given initial
seed value *)
NegativeBinomialDistribution[r, p] kernel
(* an object representing the negative binomial distribution with r successes required
and success parameter p *)
NormalDistribution[Μ, Σ] kernel
(* an object representing the normal distribution with mean Μ and variance Σ2 *)
OptValue[t, x, T, EP, b, a, r] Section 6.5
(* returns the present value at time t of a European call option on a stock whose
current price is x. The exercise time is T, exercise price is EP, b and a are the up and down
proportions for the binomial stock price model, and r is the riskless rate of interest *)
PDF[dist, x] kernel
(* returns the value of the probability mass function or density function of the given
distribution at state x. Also defined in the MultivariateStatistics` package in the form
PDF[multivariatedist, {x, y}], which gives the joint density function value at the point (x, y) *)
PlotContsProb[density, domain, between] KnoxProb7`Utilities`
(* plots the area under the given function on the given domain between the points in
the list between, which is assumed to consist of two points in increasing order. Options are
the options that make sense for Show, and ShadingStyle->RGBColor[1,0,0] which can be
used to give a style to the shaded area region. *)
Variance[dist] kernel
(* returns the variance of the given distribution *)
WeibullDistribution[Α, Β] kernel
(* object representing the Weibull distribution with the given parameters *)
XYPairFrequencies[numpairs] Section 2.5
(* in Example 3, returns a frequency table of (X, Y) pairs *)
Appendix C
Short Answers to Selected Exercises
Section 1.2
2. The probability of a hit of any kind is 59/216 ≈ .273. This batter should have home run
symbols in a set of squares whose probability totals to 11/216; for instance in squares {red=1,
whitesum=6}, {red=3, whitesum=8}, and {red=5, whitesum=2}.
3. (a) Ω = {(r1, r2, r3)} where each ri ∈ {a, b, c, d}. Examples: (a, c, d), (c, b, b). (c) P[at least 2
right] = 10/64.
4. (c) P[Bubba or Erma is in the sample] = 9/10.
6. P[10 is not in the sample] = 72/90.
7. (a) P[less than 20] ≈ .335; P[between 20 and 30] ≈ .152; P[between 30 and 40] ≈ .121;
P[between 40 and 100] ≈ .313; P[over 100] ≈ 10.5/133.9 ≈ .078. P[at least 30] ≈ .512.
(b) The probability that at least one of the two earned more than $100,000 is roughly .14976.
8. (b) P[3rd quadrant] = 1/4; (c) P[not in circle radius 2] = 1 − π/4; (d) P[either 3rd
quadrant or not in circle radius 2] = 1 − 3π/16.
9. P[A ∩ Bᶜ] = .26.
13. P[Ω1 ] = .2, P[Ω2 ] = .4, P[Ω3 ] = .4.
15. 15/24.
16. 1/13.
17. 13/30
18. .05.
19. 4.
20. 880.
21. 1/2.
Section 1.3
1. P[(c, d)] = (d − c)/(b − a) = P[[c, d]], and P[{d}] = 0.
2. Notice that because the first few seed values are small in comparison to the modulus, the
first few random numbers are small.
9. The area under y = 1/x is infinite and also there is no bounded rectangle in which to
enclose the region. For y = 1/x² the second problem remains but at least the area is finite.
10. One sees a rather symmetric hill shaped frequency distribution.
12. One usually gets a mean lifetime of about 13, and an asymmetrical histogram with most
of its weight on the left and a peak around 12 or 13.
13. Starting at state 1: p²/(1 − p + p²). Starting at state 2: p/(1 − p + p²).
Section 1.4
1. (a) There are 5! lineups. (b) The probability is 1/5. (c) The probability is 4/5.
2. 544/625.
3. 3125.
5.   n    probability
     15   0.252901
     16   0.283604
     17   0.315008
     18   0.346911
     19   0.379119
     20   0.411438
     21   0.443688
     22   0.475695
     23   0.507297
     24   0.538344
     25   0.5687
     26   0.598241
     27   0.626859
     28   0.654461
     29   0.680969
     30   0.706316
     31   0.730455
     32   0.753348
     33   0.774972
     34   0.795317
     35   0.814383
8. 2ⁿ.
9. C(n, 3) = n(n − 1)(n − 2)/6.
11. The probability of a full house is about .00144. The probability of a flush is about
.00197. The full house is more valuable.
12. 19,199,220,203,520,000.
14. (a) 243/490. (b) 10 bad packages suffices. (c) n = 19 is the smallest such n.
15. .515.
16. The probability that subject 1 is in group 1 is n₁/n.
17. 10080.
19. P[3 or fewer runs] = 1/429.
20. P[individual 1 in the first substrata of the first stratum] = 16/180.
Section 1.5
1. (a) 126/218. (b) 25/114. (c) 295/380.
3. .11425.
4. .34.
5. (a) P[all Republicans] = 4/35. (b) P[at least 2 Rep | at least 1 Rep] = 132/204.
6. 4/7.
9. 5/12.
10. 9/64.
11. The proportion should approach 1/4.
13. P[lands in 1] = 1/8; P[lands in 2] = 3/8; P[lands in 3] = 3/8; P[lands in 4] = 1/8.
15. P[at least C | score between 10 and 20] = 68/74.
17. .1728.
18. .20.
19. .0140845.
20. .4.
Section 1.6
1. P[no more than 1 right] .376.
2. The two events are independent.
4. A and B cannot be independent.
8. It is impossible to make A and B independent.
9. (b) P[HHH] = .49³; P[HTH] = .49²(.51) = P[THH] = P[HHT]; P[TTH] = .51²(.49) =
P[THT] = P[HTT]; P[TTT] = .51³.
Section 2.1
F₃(x) = 0 if x < 1; .16 if 1 ≤ x < 2; .47 if 2 ≤ x < 3; .65 if 3 ≤ x < 4; .75 if 4 ≤ x < 5; 1 if x ≥ 5.
x      2     3     4     5     6     7     8     9     10    11    12
f(x)  1/36  2/36  3/36  4/36  5/36  6/36  5/36  4/36  3/36  2/36  1/36

x      1     2     3     4     5     6
f(x)  1/36  3/36  5/36  7/36  9/36  11/36
5. If m = 1, 2, ..., 20, F(m) = 1 − (1 − m/20)⁵ and f(m) = (1 − (m − 1)/20)⁵ − (1 − m/20)⁵.
7. For m in {5, 6, ..., 20}, P[X = m] = C(m − 1, 4)/C(20, 5).
Section 2.2
3. E[(X − p)³] = 2p³ − 3p² + p.
6. .382281.
7. E[X²] = np(1 − p) + n²p².
8. The expected number of cabs is 36, and the probability that at least 30 cabs arrive is
.889029.
9. P[at least 300 advances] = .999999. The mean and variance are 350 and 105, respec-
tively.
12. The expected number of hits is 150. The standard deviation of batting average is .0205.
14. The probability that the promoters must issue at least one refund is .0774339. The
expected total refund is $10.21.
16. 1/9.
18. .096.
19. .0488.
20. .469154.
Section 2.3
1. The necessary probability p is approximately .0001.
2. F(x) = P[X ≤ x] = 1 − (1 − p)^(x+1).
3. P[X ≥ m + n | X ≥ n] = (1 − p)^m.
4. .23999.
6. (a) .537441; (b) .518249; (c) .38064; (d) mean = 30, standard deviation = √420 ≈ 20.5.
8. p is about .285.
11. The expected time is 16 2/3, the variance is 38.9, and the probability of between 15 and 30 jumps is around .554.
14. The expected number of games played is 70, the variance is 46.66... and the standard
deviation is 6.83.
15. 1/25.
16. $2694.40.
Section 2.4
1. (a) .0888353; (b) .440907; (c) .686714.
2.
Section 3.2
1. 1/2; 1/2.
2. .952637.
3. c = k; F(x) = 1 − e^(−k x), x ≥ 0.
4. For x ∈ [a, b], F(x) = (x − a)/(b − a); P[X ≤ (a + b)/2] = 1/2.
6. PT 10 2 e.
7. P[1.5 ≤ X ≤ 3.5] = 3/4.
9. x = β((α − 1)/α)^(1/α).
10. Y is uniformly distributed on [−1, 1].
11. .752796; $30111.80.
12. PV 40 = .622146; v is about 54.6.
13. 1/48.
14. PV U 1 4 9 16.
16. PY X = .432332.
18. PV 3 4 U 1 2 = 1/2.
20. PY 5 4 X 3 2 .716737; PX 3 2 Y 6 5 .546565.
21. (a) 2/27; (b) 8/27.
24. 5/8.
25. .416667.
26. .578125.
Section 3.3
20. 1.70667.
21. 5.72549.
22. 1/12.
Section 4.1
5. 1/(σ√(2π)).
7. (a) .841345; (b) .97725; (c) .99865.
8. About .026.
9. About .0001.
11. (a) .182; (b) .8680; (c) .0069; (d) .1251.
12. .1111.
14. (a) About .32; (b) About .95.
15. −1.28155, −0.841621, −0.524401, −0.253347, 0., 0.253347, 0.524401, 0.841621,
1.28155.
17. $711.61.
18. (a) .308538; (b) .598706.
Section 4.2
2. The probability is roughly .218.
3. (a) .351828; (b) .0102; (c) 17.3616.
4. (a) 96.0; (b) 99.7; (c) 100.3.
6. There does appear to be a good linear relationship between the logged variables. For the
logged SO2 variable the mean estimate is 3.19721 and the standard deviation is 1.49761.
For the logged mortality variable the mean estimate is 6.84411 and the standard deviation is
0.0662898. The correlation estimate is 0.414563. The conditional variance and standard
deviation come out to be .00363912 and .0603251, respectively.
8. (a) .551128; (b) .53446.
9. .307806.
11. (a) .692077; (b) .956638.
13. If you divide f x, y by px for each fixed x the resulting function will integrate to 1.
14. Ρ.
16. (1/(2π√(1 − ρ²))) exp[−(z² − 2ρ z w + w²)/(2(1 − ρ²))].
Section 4.3
3
1. (a) The density of Y is gY y 2
y12 , y 0, 1. (b) The density of Y is
gY y 9 y , y 0, 1.
8
1
2. gU u u12 eu2 , u 0.
2Π
17. 10560.
Section 4.4
2. (a) .755859; (b) .755859; (c) .527344.
3 3
4. f y 2
y2 4
y3 , y 1, 1.
6. .167772.
7. .03125.
8. .653671.
9. 1 − e^(−3/2).
10. 1 − e^(−13/6).
7
12. hx x 1.
6
13. 2.22656.
14. $2,025.
Section 4.5
1. (a) .65697; (b) .135353; (c) .125731.
4. The probability P[T > .5] is .342547.
5. The probability that the project will not be finished is .199689.
7. E[T] = 10, and P[T > 10] = e^(−1).
8. (a) 2.13773; (b) .919174.
9. F(x) = 1 − e^(−λx) − λx e^(−λx).
10. .714943; .691408.
11. Β is around .748.
13. The jump times are 1.1, 3.4, 4.9, and 9.1; the states are 0, 1, 2, and 3; estimate of Λ:
.43956.
14. .336664.
16. 173.287.
17. .4.
Section 4.6
1. (a) .34378; (b) .132062; (c) 6.7372; (d) a 3.24697, b 20.4832.
2. The m.g.f. of a χ²(r) random variable is M(t) = (1 − 2t)^(−r/2).
3. Let a and b be points for the χ²(n − 1) distribution for which P[X < a] = P[X > b] = .025.
The confidence interval is (n − 1)S²/b ≤ σ² ≤ (n − 1)S²/a.
4. [191.271,396.663].
5. It is nearly certain that the standard deviation exceeds 2.1.
6. .228418.
7. If Μ 6 is true, it is only about .01 likely to observe such a small sample mean, so we
have strong evidence that the true Μ is smaller than 6.
9. (a) .0234488; (b) .0616232; (c) t 1.31946.
10. The data are not sufficiently unusual to cast doubt on the hypothesis Μ = 98.7.
11. Write t.05 for the 95th percentile of the t(n − 1) distribution, i.e., the point such that
P[T ≤ t.05] = .95. The confidence interval is X̄ ± t.05 · S/√n.
15. Some graphical experimentation produces r 14.
16. (a) .49425; (b) .467569; (c) a .379201, b 2.53424.
17. We do not have strong evidence against equality of variances.
20. The 90% confidence interval is [.437126, 1.7022].
Section 4.7
1. (1/((2π)^(3/2) √3.84)) exp{−(1/2)[(x1 − 2.5)² + 1.04167(x2 − 2)² − .104167(x2 − 2)(x3 − 1.8) + .260417(x3 − 1.8)²]}.
5. The marginals are N(2, 3), N(1, 1), and N(0, 4).
6. χ²(1).
9. The χ²(r1 + r2) distribution.
Section 5.1
1. We find that n 25 is the smallest number of rolls such that the desired probability
exceeds 1/2.
5. (a) In all three cases, the Chebyshev upper bound is 1 4. When X 3, 4, the actual
probability is about .044; (b) When X uniform0, 1 the actual probability is 0; (c) When
X N2, 4 the actual probability is about .0455.
7. A sample of size 2560 suffices.
Section 5.2
2. f(x) = (k/√(2π)) e^(−k²x²/2).
6. Approximately .846.
7. The probability that the sample mean could have come out as large as 550 or larger is
about .006. Since this probability is so small, we have significant evidence against Μ = 540.
8. .818595.
9. .769861.
Section 6.1
1. The distribution of X3 is (5/96, 5/64, 67/192, 25/48). The limiting distribution is
(3/71, 6/71, 24/71, 38/71).
2. The distribution of X2 given X0 4 is {0, 0, .16, .48, .36}. The limiting distribution is
.1, .25, .25, .25, .15.
5. The chain is not regular.
7. The limiting distribution is (1/3, 1/3, 1/3). The smallest power is n = 8.
8. The limiting distribution is ((1 − q)/(2 − p − q), (1 − p)/(2 − p − q)). Among many solutions is p = 1/2, q = 3/4.
9. The conditional distribution of X4 given X0 = 3 is (53/96, 43/192, 43/192). The
limiting distribution is (5/9, 2/9, 2/9).
10. The solutions are f 2 .588235, f 3 .411765.
12. 0, p²/(1 − p + p²), p/(1 − p + p²), and 1. The point of most rapid increase is about .653.
14. .6561.
15. g(n) = C(n − 1, 3)(.05)⁴(.95)^(n−4), n = 4, 5, 6, ...
Section 6.2
1. (a) .0464436; (b) .0656425.
2. .241165.
3. .0116277.
4. To the hundredths place Λ = 1.67.
7. 5/6.
10. N(t + l) − N(l) has the Poisson distribution with parameter (5/4)t.
11. (125/8)t.
Section 6.3
1. ρ = 2/3.
2. The probability of at least 2 in the system is 4/9. The probability that a customer goes
directly into service is 1/3.
5. E[W] = E[Wq] + 1/μ.
6. The c.d.f. of Wq is Fq(s) = 1 − ρ e^(−μ(1 − ρ)s).
7. The long-run probability that a customer will take at least 2 minutes is around .716.
8. If C is the weighting factor on service rate, and D is the weighting factor on waiting time,
then the optimal μ = λ + √(D/C).
11. f₁(t) = λ/(λ + μ) − (λ/(λ + μ)) e^(−(λ + μ)t).
12. The distribution of the time until the next customer is served is exp(nμ). The differential
equation is f_k'(t) = −(λ + μk) f_k(t) + λ f_{k−1}(t) + μ(k + 1) f_{k+1}(t), k ≥ 1.
Section 6.4
1. (a) .226919; (b) .295677; (c) .429893.
4. C(t, s) = σ² min(t, s).
6. About 5.1 years.
7. 2(1 − ∫_{−∞}^{a} (1/√(2πσ²t)) e^(−x²/(2σ²t)) dx).
9. G(y) = 2 ∫_{−∞}^{y} (1/√(2πt)) e^(−x²/(2t)) dx − 1, y ≥ 0; g(y) = (2/√(2πt)) e^(−y²/(2t)), y ≥ 0; E[Yt] = √(2t/π).
10. .344509.
13. .011981.
Section 6.5
1. You would be indifferent between the first two stocks.
2. The optimal proportion q in the stock is around 26%.
4. The optimal solutions are q1 = (μ1 − r)/(2aσ1²) and q2 = (μ2 − r)/(2aσ2²).
5. The optimal proportion in the stock is about 13%; so we see in this case that because the
non-risky asset performs so much better, it is wise to borrow on the stock and invest all
proceeds in that asset.
7. p = .5. In two time units, the stock will either go up to 34.99 with probability 1/4, or go
to 32.40 with probability 1/2, or it will stay at 30 with the remaining probability 1/4. The
option value is about 87.6 cents.
10. The portfolio weights are
N = (max[(1 + b)S − E, 0] − max[(1 + a)S − E, 0]) / (S(b − a)),
B = ((1 + b) max[(1 + a)S − E, 0] − (1 + a) max[(1 + b)S − E, 0]) / ((b − a)(1 + r)).
12. $445,777,000.
13. The amount to be saved is about $798,500.
Index
(Mathematica commands in bold)
Partitioning 43 M M 1 353
Pascal, Blaise 314 M M 1 1 364
PDF 89, 173, 425 M M 364
Percentile 220 service rate 353
of normal distribution 220 simulating 359 63
Permutation 12, 34 traffic intensity 356
PlotContsProb 167, 425 waiting time distribution
PlotStepFunction 76, 425 358 – 359
PoissonDistribution 116, 426
Poisson distribution 83, 115 16 Random 2, 23
distribution of sum 258 RandomInteger 84, 426
mean 119 RandomKPermutation 34, 426
m.g.f. 258 RandomKSubset 38, 426
variance 119 RandomReal 24, 426
Poisson process 120 21, 344 Random sample 187
arrival times 345 Random variable 2, 4, 73
covariance function 349 continuous 171 –72
exponential distribution 344 discrete 73
gamma distribution 345 distribution 73, 172
queues 353 independence 127, 133, 135,
rate 121, 344 186, 201
simulation 351 Random walk 27, 31, 59, 114, 340
Poisson, Simeon Denis 315 Range 36, 260
Portfolio 380 81 Rate parameter 121, 344
optimal 382 Regression 237
rate of return 381 Runs 47
risk aversion 381
separation theorem 385 Sample correlation coefficient 233
Probability 2, 15 Sample mean 84, 138
conditional 49 distribution 139, 256, 322
empirical 11 mean 138, 145
measure 15, 166 variance 138
properties 16 18 Sample median 260
Probability density function 172 Sample range 260
conditional 184 Sample space 2, 14
joint 178 continuous 159
marginal 181 discrete 14
Probability distribution 2, 73 74, 172 reduced 49
ProbabilityHistogram 81, 426 Sample variance 84, 213
Probability mass function 73 74 relation to chi–square
conditional 126 distribution 284
joint 125 Sampling 33
marginal 126 in a batch 38
ProportionMInSample 40, 426 in order with replacement 42
Pseudo–random numbers 25 in order without
P4orfewer 346, 426 replacement 34
stratified 47
Quadratic forms 306 Scatter plot 226
normal vectors 308 SeedRandom 27, 426
Quantile 251, 426 Separation theorem 385
Queues 352 Σ–algebra 160
arrival rate 353 ShowLineGraph 378, 426
long–run system size 353 56 SimArrivals 360, 426
t – distribution 289 – 90
degrees of freedom 290
graphs 290
relation to sample mean 291
Traffic intensity 356
Transformations 78, 246
Transition diagram 330
Transition matrix 331