Probability and Computing: Randomization and Probabilistic Techniques in Algorithms and Data Analysis, Second Edition
Information on this title: www.cambridge.org/9781107154889
10.1017/9781316651124
© Michael Mitzenmacher and Eli Upfal 2017
This publication is in copyright. Subject to statutory exception
and to the provisions of relevant collective licensing agreements,
no reproduction of any part may take place without the written
permission of Cambridge University Press.
First published 2017
Printed in the United States of America by Sheridan Books, Inc.
A catalogue record for this publication is available from the British Library.
Library of Congress Cataloging in Publication Data
Names: Mitzenmacher, Michael, 1969– author. | Upfal, Eli, 1954– author.
Title: Probability and computing / Michael Mitzenmacher, Eli Upfal.
Description: Second edition. | Cambridge, United Kingdom ;
New York, NY, USA : Cambridge University Press, [2017] |
Includes bibliographical references and index.
Identifiers: LCCN 2016041654 | ISBN 9781107154889
Subjects: LCSH: Algorithms. | Probabilities. | Stochastic analysis.
Classification: LCC QA274.M574 2017 | DDC 518/.1 – dc23
LC record available at https://lccn.loc.gov/2016041654
ISBN 978-1-107-15488-9 Hardback
Additional resources for this publication at www.cambridge.org/Mitzenmacher.
Cambridge University Press has no responsibility for the persistence or accuracy
of URLs for external or third-party Internet Web sites referred to in this publication
and does not guarantee that any content on such Web sites is, or will remain,
accurate or appropriate.
13 Martingales
13.1 Martingales
13.2 Stopping Times
13.2.1 Example: A Ballot Theorem
13.3 Wald's Equation
13.4 Tail Inequalities for Martingales
13.5 Applications of the Azuma–Hoeffding Inequality
13.5.1 General Formalization
13.5.2 Application: Pattern Matching
13.5.3 Application: Balls and Bins
13.5.4 Application: Chromatic Number
13.6 Exercises
Preface to the Second Edition
In the ten years since the publication of the first edition of this book, probabilistic
methods have become even more central to computer science, rising with the growing
importance of massive data analysis, machine learning, and data mining. Many of the
successful applications of these areas rely on algorithms and heuristics that build on
sophisticated probabilistic and statistical insights. Judicious use of these tools requires
a thorough understanding of the underlying mathematical concepts. Most of the new
material in this second edition focuses on these concepts.
The ability in recent years to create, collect, and store massive data sets, such as
the World Wide Web, social networks, and genome data, has led to new challenges in
modeling and analyzing such structures. A good foundation for models and analysis
comes from understanding some standard distributions. Our new chapter on the nor-
mal distribution (also known as the Gaussian distribution) covers the most common
statistical distribution, as usual with an emphasis on how it is used in settings in com-
puter science, such as for tail bounds. However, an interesting phenomenon is that in
many modern data sets, including social networks and the World Wide Web, we do not
see normal distributions, but instead we see distributions with very different proper-
ties, most notably unusually heavy tails. For example, some pages in the World Wide
Web have an unusually large number of pages that link to them, orders of magnitude
larger than the average. The new chapter on power laws and related distributions covers
specific distributions that are important for modeling and understanding these kinds of
modern data sets.
Machine learning is one of the great successes of computer science in recent years,
providing efficient tools for modeling, understanding, and making predictions based on
large data sets. A question that is often overlooked in practical applications of machine
learning is the accuracy of the predictions, and in particular the relation between accu-
racy and the sample size. A rigorous introduction to approaches to these important
questions is presented in a new chapter on sample complexity, VC dimension, and
Rademacher averages.
We have also used the new edition to enhance some of our previous material. For
example, we present some of the recent advances on algorithmic variations of the pow-
erful Lovász local lemma, and we have a new section covering the wonderfully named
and increasingly useful hashing approach known as cuckoo hashing. Finally, in addi-
tion to all of this new material, the new edition includes updates and corrections, and
many new exercises.
We thank the many readers who sent us corrections over the years – unfortunately,
too many to list here!
Preface to the First Edition
Why Randomness?
Why should computer scientists study and use randomness? Computers appear to
behave far too unpredictably as it is! Adding randomness would seemingly be a dis-
advantage, adding further complications to the already challenging task of efficiently
utilizing computers.
Science has learned in the last century to accept randomness as an essential com-
ponent in modeling and analyzing nature. In physics, for example, Newton’s laws led
people to believe that the universe was a deterministic place; given a big enough calcu-
lator and the appropriate initial conditions, one could determine the location of planets
years from now. The development of quantum theory suggests a rather different view;
the universe still behaves according to laws, but the backbone of these laws is proba-
bilistic. “God does not play dice with the universe” was Einstein’s anecdotal objection
to modern quantum mechanics. Nevertheless, the prevailing theory today for subparti-
cle physics is based on random behavior and statistical laws, and randomness plays a
significant role in almost every other field of science ranging from genetics and evolution in biology to modeling price fluctuations in a free-market economy.
Computer science is no exception. From the highly theoretical notion of probabilis-
tic theorem proving to the very practical design of PC Ethernet cards, randomness
and probabilistic methods play a key role in modern computer science. The last two
decades have witnessed a tremendous growth in the use of probability theory in comput-
ing. Increasingly more advanced and sophisticated probabilistic techniques have been
developed for use within broader and more challenging computer science applications.
In this book, we study the fundamental ways in which randomness comes to bear on
computer science: randomized algorithms and the probabilistic analysis of algorithms.
Randomized algorithms: Randomized algorithms are algorithms that make random
choices during their execution. In practice, a randomized program would use values
generated by a random number generator to decide the next step at several branches
of its execution. For example, the protocol implemented in an Ethernet card uses ran-
dom numbers to decide when it next tries to access the shared Ethernet communication
medium. The randomness is useful for breaking symmetry, preventing different cards
from repeatedly accessing the medium at the same time. Other commonly used applica-
tions of randomized algorithms include Monte Carlo simulations and primality testing
in cryptography. In these and many other important applications, randomized algo-
rithms are significantly more efficient than the best known deterministic solutions.
Furthermore, in most cases the randomized algorithms are also simpler and easier to
program.
These gains come at a price; the answer may have some probability of being incor-
rect, or the eficiency is guaranteed only with some probability. Although it may seem
unusual to design an algorithm that may be incorrect, if the probability of error is sufficiently small then the improvement in speed or memory requirements may well be
worthwhile.
Probabilistic analysis of algorithms: Complexity theory tries to classify computa-
tion problems according to their computational complexity, in particular distinguishing
between easy and hard problems. For example, complexity theory shows that the Trav-
eling Salesman problem is NP-hard. It is therefore very unlikely that we will ever know
an algorithm that can solve any instance of the Traveling Salesman problem in time that
is subexponential in the number of cities. An embarrassing phenomenon for the clas-
sical worst-case complexity theory is that the problems it classiies as hard to compute
are often easy to solve in practice. Probabilistic analysis gives a theoretical explanation
for this phenomenon. Although these problems may be hard to solve on some set of
pathological inputs, on most inputs (in particular, those that occur in real-life applica-
tions) the problem is actually easy to solve. More precisely, if we think of the input as
being randomly selected according to some probability distribution on the collection of
all possible inputs, we are very likely to obtain a problem instance that is easy to solve,
and instances that are hard to solve appear with relatively small probability. Probabilis-
tic analysis of algorithms is the method of studying how algorithms perform when the
input is taken from a well-defined probabilistic space. As we will see, even NP-hard problems might have algorithms that are extremely efficient on almost all inputs.
The Book
with continuous probability and the Poisson process (Chapter 8). The material from
Chapter 4 on Chernoff bounds, however, is needed for most of the remaining material.
Most of the exercises in the book are theoretical, but we have included some pro-
gramming exercises – including two more extensive exploratory assignments that
require some programming. We have found that occasional programming exercises are
often helpful in reinforcing the book’s ideas and in adding some variety to the course.
We have decided to restrict the material in this book to methods and techniques based
on rigorous mathematical analysis; with few exceptions, all claims in this book are fol-
lowed by full proofs. Obviously, many extremely useful probabilistic methods do not
fall within this strict category. For example, in the important area of Monte Carlo meth-
ods, most practical solutions are heuristics that have been demonstrated to be effective
and efficient by experimental evaluation rather than by rigorous mathematical analysis. We have taken the view that, in order to best apply and understand the strengths and weaknesses of heuristic methods, a firm grasp of underlying probability theory and
rigorous techniques – as we present in this book – is necessary. We hope that students
will appreciate this point of view by the end of the course.
Acknowledgments
Our first thanks go to the many probabilists and computer scientists who developed
the beautiful material covered in this book. We chose not to overload the textbook
with numerous references to the original papers. Instead, we provide a reference list
that includes a number of excellent books giving background material as well as more
advanced discussion of the topics covered here.
The book owes a great deal to the comments and feedback of students and teaching
assistants who took the courses CS 155 at Brown and CS 223 at Harvard. In particular
we wish to thank Aris Anagnostopoulos, Eden Hochbaum, Rob Hunter, and Adam
Kirsch, all of whom read and commented on early drafts of the book.
Special thanks to Dick Karp, who used a draft of the book in teaching CS 174 at
Berkeley during fall 2003. His early comments and corrections were most valuable in
improving the manuscript. Peter Bartlett taught CS 174 at Berkeley in spring 2004, also
providing many corrections and useful comments.
We thank our colleagues who carefully read parts of the manuscript, pointed out
many errors, and suggested important improvements in content and presentation: Artur
Czumaj, Alan Frieze, Claire Kenyon, Joe Marks, Salil Vadhan, Eric Vigoda, and the
anonymous reviewers who read the manuscript for the publisher.
We also thank Rajeev Motwani and Prabhakar Raghavan for allowing us to use some
of the exercises in their excellent book Randomized Algorithms.
We are grateful to Lauren Cowles of Cambridge University Press for her editorial
help and advice in preparing and organizing the manuscript.
Writing of this book was supported in part by NSF ITR Grant no. CCR-0121154.
chapter one
Events and Probability
This chapter introduces the notion of randomized algorithms and reviews some basic
concepts of probability theory in the context of analyzing the performance of simple
randomized algorithms for verifying algebraic identities and finding a minimum cut-set
in a graph.
1.1. Application: Verifying Polynomial Identities

Computers can sometimes make mistakes, due for example to incorrect programming or hardware failure. It would be useful to have simple ways to double-check the results of computations. For some problems, we can use randomness to efficiently verify the correctness of an output.
Suppose we have a program that multiplies together monomials. Consider the prob-
lem of verifying the following identity, which might be output by our program:
(x + 1)(x − 2)(x + 3)(x − 4)(x + 5)(x − 6) ≟ x^6 − 7x^3 + 25.
There is an easy way to verify whether the identity is correct: multiply together the
terms on the left-hand side and see if the resulting polynomial matches the right-hand
side. In this example, when we multiply all the constant terms on the left, the result
does not match the constant term on the right, so the identity cannot be valid. More
generally, given two polynomials F (x) and G(x), we can verify the identity
F(x) ≟ G(x)
by converting the two polynomials to their canonical forms ∑_{i=0}^{d} c_i x^i; two polynomials are equivalent if and only if all the coefficients in their canonical forms are equal. From this point on let us assume that, as in our example, F(x) is given as a product F(x) = ∏_{i=1}^{d} (x − a_i) and G(x) is given in its canonical form. Transforming F(x) to its canonical form by consecutively multiplying the ith monomial with the product of
1.2. Axioms of Probability

We turn now to a formal mathematical setting for analyzing the randomized algorithm.
Any probabilistic statement must refer to the underlying probability space.
Definition 1.1: A probability space has three components:
1. a sample space Ω, which is the set of all possible outcomes of the random process modeled by the probability space;
2. a family of sets F representing the allowable events, where each set in F is a subset¹ of the sample space Ω; and
3. a probability function Pr : F → R satisfying Definition 1.2.

An element of Ω is called a simple or elementary event.
In the randomized algorithm for verifying polynomial identities, the sample space
is the set of integers {1, . . . , 100d}. Each choice of an integer r in this range is a simple
event.
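To make this concrete, here is a minimal sketch in Python of the verification procedure described above (the function names and the use of exact integer arithmetic are our choices, not the book's):

```python
import random

def verify_identity(roots, coeffs, trials=1):
    """Randomized test of F(x) = prod_i (x - a_i) against G(x) = sum_i c_i x^i.

    Returns False as soon as a witness r with F(r) != G(r) is found; returns
    True if every trial agrees, in which case the identity may still be wrong
    with probability at most (1/100)**trials (sampling with replacement).
    """
    d = len(roots)  # the degree of F
    for _ in range(trials):
        r = random.randint(1, 100 * d)  # uniform choice from {1, ..., 100d}
        f_r = 1
        for a in roots:
            f_r *= (r - a)              # evaluate the product form of F at r
        g_r = sum(c * r**i for i, c in enumerate(coeffs))  # canonical form of G
        if f_r != g_r:
            return False                # r witnesses that F and G differ
    return True                         # probably an identity

# The example from the text: (x+1)(x-2)(x+3)(x-4)(x+5)(x-6) vs. x^6 - 7x^3 + 25.
roots = [-1, 2, -3, 4, -5, 6]
coeffs = [25, 0, 0, -7, 0, 0, 1]        # coefficient of x^i at index i
print(verify_identity(roots, coeffs, trials=10))  # almost surely False
```

Each trial costs only d multiplications to evaluate F at r, rather than the Θ(d²) work of expanding F into canonical form.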
Definition 1.2: A probability function is any function Pr : F → R that satisfies the following conditions:
1. for any event E, 0 ≤ Pr(E) ≤ 1;
2. Pr(Ω) = 1; and
3. for any finite or countably infinite sequence of pairwise mutually disjoint events E1, E2, E3, . . . ,

Pr(⋃_{i≥1} E_i) = ∑_{i≥1} Pr(E_i).
In most of this book we will use discrete probability spaces. In a discrete probability space the sample space Ω is finite or countably infinite, and the family F of allowable events consists of all subsets of Ω. In a discrete probability space, the probability function is uniquely defined by the probabilities of the simple events.
Again, in the randomized algorithm for verifying polynomial identities, each choice
of an integer r is a simple event. Since the algorithm chooses the integer uniformly at
random, all simple events have equal probability. The sample space has 100d simple
events, and the sum of the probabilities of all simple events must be 1. Therefore each simple event has probability 1/(100d).
Because events are sets, we use standard set theory notation to express combinations
of events. We write E1 ∩ E2 for the occurrence of both E1 and E2 and write E1 ∪ E2 for
the occurrence of either E1 or E2 (or both). For example, suppose we roll two dice. If
E1 is the event that the first die is a 1 and E2 is the event that the second die is a 1, then
E1 ∩ E2 denotes the event that both dice are 1 while E1 ∪ E2 denotes the event that at
least one of the two dice lands on 1. Similarly, we write E1 − E2 for the occurrence
¹ In a discrete probability space F = 2^Ω. Otherwise, and introductory readers may skip this point, since the events need to be measurable, F must include the empty set and be closed under complement and under union and intersection of countably many sets (a σ-algebra).
of an event that is in E1 but not in E2. With the same dice example, E1 − E2 consists of the event where the first die is a 1 and the second die is not. We use the notation Ē as shorthand for Ω − E; for example, if E is the event that we obtain an even number when rolling a die, then Ē is the event that we obtain an odd number.
Definition 1.2 yields the following obvious lemma.

Lemma 1.1: For any two events E1 and E2,

Pr(E1 ∪ E2) = Pr(E1) + Pr(E2) − Pr(E1 ∩ E2).

Lemma 1.2: For any finite or countably infinite sequence of events E1, E2, . . . ,

Pr(⋃_{i≥1} E_i) ≤ ∑_{i≥1} Pr(E_i).

Notice that Lemma 1.2 differs from the third part of Definition 1.2 in that Definition 1.2 is an equality and requires the events to be pairwise mutually disjoint.

Lemma 1.1 can be generalized to the following equality, often referred to as the inclusion–exclusion principle.

Lemma 1.3: Let E1, . . . , En be any n events. Then

Pr(⋃_{i=1}^{n} E_i) = ∑_{i=1}^{n} Pr(E_i) − ∑_{i<j} Pr(E_i ∩ E_j) + ∑_{i<j<k} Pr(E_i ∩ E_j ∩ E_k) − ⋯ + (−1)^{ℓ+1} ∑_{i_1<i_2<⋯<i_ℓ} Pr(⋂_{r=1}^{ℓ} E_{i_r}) + ⋯ .
If k = 2, it seems that the probability that the first iteration finds a root is at most 1/100 and the probability that the second iteration finds a root is at most 1/100, so the probability that both iterations find a root is at most (1/100)². Generalizing, for any k, the probability of choosing roots for k iterations would be at most (1/100)^k.
To formalize this, we introduce the notion of independence.
Definition 1.3: Two events E and F are independent if and only if
Pr(E ∩ F ) = Pr(E ) · Pr(F ).
More generally, events E1, E2, . . . , Ek are mutually independent if and only if, for any subset I ⊆ [1, k],

Pr(⋂_{i∈I} E_i) = ∏_{i∈I} Pr(E_i).
If our algorithm samples with replacement then in each iteration the algorithm chooses
a random number uniformly at random from the set {1, . . . , 100d}, and thus the choice
in one iteration is independent of the choices in previous iterations. For the case where
the polynomials are not equivalent, let Ei be the event that, on the ith run of the algo-
rithm, we choose a root ri such that F (ri ) − G(ri ) = 0. The probability that the algo-
rithm returns the wrong answer is given by
Pr(E1 ∩ E2 ∩ · · · ∩ Ek ).
Since Pr(Ei ) is at most d/100d and since the events E1 , E2 , . . . , Ek are independent,
the probability that the algorithm gives the wrong answer after k iterations is
Pr(E1 ∩ E2 ∩ ⋯ ∩ Ek) = ∏_{i=1}^{k} Pr(E_i) ≤ ∏_{i=1}^{k} d/(100d) = (1/100)^k.
The probability of making an error is therefore at most exponentially small in the num-
ber of trials.
Now let us consider the case where sampling is done without replacement. In this
case the probability of choosing a given number is conditioned on the events of the
previous iterations.
Definition 1.4: The conditional probability that event E occurs given that event F occurs is

Pr(E | F) = Pr(E ∩ F) / Pr(F).

The conditional probability is well-defined only if Pr(F) > 0.
Intuitively, we are looking for the probability of E ∩ F within the set of events defined by F. Because F defines our restricted sample space, we normalize the probabilities by dividing by Pr(F), so that the sum of the probabilities of all events is 1. When Pr(F) > 0, the definition can also be written in the useful form

Pr(E | F) Pr(F) = Pr(E ∩ F).
Because (d − (j − 1))/(100d − (j − 1)) < d/(100d) when j > 1, our bounds on the probability of making an error are actually slightly better without replacement. You may also notice that, if we take d + 1 samples without replacement and the two polynomials are not equivalent, then we are guaranteed to find an r such that F(r) − G(r) ≠ 0. Thus, in d + 1 iterations we are guaranteed to output the correct answer. However, computing the value of the polynomial at d + 1 points takes Θ(d²) time using the standard approach, which is no faster than finding the canonical form deterministically.
Since sampling without replacement appears to give better bounds on the probability of error, why would we ever want to consider sampling with replacement? In some cases, sampling with replacement is significantly easier to analyze, so it may be worth
1.3. Application: Verifying Matrix Multiplication

We now consider another example where randomness can be used to verify an equality
more quickly than the known deterministic algorithms. Suppose we are given three
n × n matrices A, B, and C. For convenience, assume we are working over the integers
modulo 2. We want to verify whether
AB = C.
One way to accomplish this is to multiply A and B and compare the result to C. The sim-
ple matrix multiplication algorithm takes Θ(n³) operations. There exist more sophisticated algorithms that are known to take roughly Θ(n^2.37) operations.
Once again, we use a randomized algorithm that allows for faster verification – at the expense of possibly returning a wrong answer with small probability. The algorithm is similar in spirit to our randomized algorithm for checking polynomial identities. The algorithm chooses a random vector r̄ = (r1, r2, . . . , rn) ∈ {0, 1}^n. It then computes ABr̄ by first computing Br̄ and then A(Br̄), and it also computes Cr̄. If A(Br̄) ≠ Cr̄, then AB ≠ C. Otherwise, it returns that AB = C.
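A short sketch of this verifier (our illustration using NumPy, working over the integers modulo 2 as in the text; the function names and test matrices are ours):

```python
import numpy as np

def verify_product(A, B, C, trials=1):
    """Randomized check of AB = C over the integers mod 2.

    Each trial uses only matrix-vector products, so it costs O(n^2); when
    AB != C it wrongly reports equality with probability at most 1/2 per
    trial, hence at most 2**-trials overall.
    """
    n = A.shape[0]
    for _ in range(trials):
        r = np.random.randint(0, 2, size=n)    # uniform vector in {0,1}^n
        left = A.dot(B.dot(r) % 2) % 2         # A(Br): two O(n^2) products
        right = C.dot(r) % 2                   # Cr
        if not np.array_equal(left, right):
            return False                       # witness found: AB != C
    return True                                # all trials agree: probably AB = C

# Example: perturb one entry of a true product and test both cases.
rng = np.random.default_rng(0)
A = rng.integers(0, 2, (50, 50)); B = rng.integers(0, 2, (50, 50))
C = (A @ B) % 2
C_bad = C.copy(); C_bad[0, 0] ^= 1
print(verify_product(A, B, C, trials=20))      # True
print(verify_product(A, B, C_bad, trials=20))  # almost surely False
```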
The algorithm requires three matrix-vector multiplications, which can be done in time Θ(n²) in the obvious way. The probability that the algorithm returns that AB = C when they are actually not equal is bounded by the following theorem.
Theorem 1.4: If AB ≠ C and if r̄ is chosen uniformly at random from {0, 1}^n, then

Pr(ABr̄ = Cr̄) ≤ 1/2.
Proof: Before beginning, we point out that the sample space for the vector r̄ is the set
{0, 1}n and that the event under consideration is ABr̄ = Cr̄. We also make note of the
following simple but useful lemma.
Lemma 1.5: Choosing r̄ = (r1 , r2 , . . . , rn ) ∈ {0, 1}n uniformly at random is equiva-
lent to choosing each ri independently and uniformly from {0, 1}.
Proof: If each ri is chosen independently and uniformly at random, then each of the 2^n possible vectors r̄ is chosen with probability 2^−n, giving the lemma.
Let D = AB − C ≠ 0. Then ABr̄ = Cr̄ implies that Dr̄ = 0. Since D ≠ 0 it must have some nonzero entry; without loss of generality, let that entry be d11.
For Dr̄ = 0, it must be the case that

∑_{j=1}^{n} d_{1j} r_j = 0
or, equivalently,

r1 = −(∑_{j=2}^{n} d_{1j} r_j) / d_{11}.    (1.1)
Now we introduce a helpful idea. Instead of reasoning about the vector r̄, suppose
that we choose the rk independently and uniformly at random from {0, 1} in order,
from rn down to r1 . Lemma 1.5 says that choosing the rk in this way is equivalent to
choosing a vector r̄ uniformly at random. Now consider the situation just before r1 is
chosen. At this point, the right-hand side of Eqn. (1.1) is determined, and there is at
most one choice for r1 that will make that equality hold. Since there are two choices
for r1 , the equality holds with probability at most 1/2, and hence the probability that
ABr̄ = Cr̄ is at most 1/2. By considering all variables besides r1 as having been set, we
have reduced the sample space to the set of two values {0, 1} for r1 and have changed
the event being considered to whether Eqn. (1.1) holds.
This idea is called the principle of deferred decisions. When there are several random
variables, such as the ri of the vector r̄, it often helps to think of some of them as being
set at one point in the algorithm with the rest of them being left random – or deferred –
until some further point in the analysis. Formally, this corresponds to conditioning on
the revealed values; when some of the random variables are revealed, we must condition
on the revealed values for the rest of the analysis. We will see further examples of the
principle of deferred decisions later in the book.
To formalize this argument, we first introduce a simple fact, known as the law of total probability.

Theorem 1.6 [Law of Total Probability]: Let E1, E2, . . . , En be mutually disjoint events in the sample space Ω with ⋃_{i=1}^{n} E_i = Ω. Then

Pr(B) = ∑_{i=1}^{n} Pr(B ∩ E_i) = ∑_{i=1}^{n} Pr(B | E_i) Pr(E_i).
Proof: Since the events E_i (i = 1, . . . , n) are disjoint and cover the entire sample space Ω, it follows that

Pr(B) = ∑_{i=1}^{n} Pr(B ∩ E_i).
Further,

∑_{i=1}^{n} Pr(B ∩ E_i) = ∑_{i=1}^{n} Pr(B | E_i) Pr(E_i).
Now, using this law and summing over all collections of values (x2, x3, x4, . . . , xn) ∈ {0, 1}^{n−1} yields

Pr(ABr̄ = Cr̄)
  = ∑_{(x2,...,xn)∈{0,1}^{n−1}} Pr((ABr̄ = Cr̄) ∩ ((r2, . . . , rn) = (x2, . . . , xn)))
  ≤ ∑_{(x2,...,xn)∈{0,1}^{n−1}} Pr((r1 = −(∑_{j=2}^{n} d_{1j} r_j)/d_{11}) ∩ ((r2, . . . , rn) = (x2, . . . , xn)))
  = ∑_{(x2,...,xn)∈{0,1}^{n−1}} Pr(r1 = −(∑_{j=2}^{n} d_{1j} r_j)/d_{11}) · Pr((r2, . . . , rn) = (x2, . . . , xn))
  ≤ ∑_{(x2,...,xn)∈{0,1}^{n−1}} (1/2) · Pr((r2, . . . , rn) = (x2, . . . , xn))
  = 1/2.
Here we have used the independence of r1 and (r2 , . . . , rn ) in the fourth line.
To improve on the error probability of Theorem 1.4, we can again use the fact that the algorithm has a one-sided error and run the algorithm multiple times. If we ever find an r̄ such that ABr̄ ≠ Cr̄, then the algorithm will correctly return that AB ≠ C. If we always find ABr̄ = Cr̄, then the algorithm returns that AB = C and there is some probability of a mistake. Choosing r̄ with replacement from {0, 1}^n for each trial, we obtain that, after k trials, the probability of error is at most 2^−k. Repeated trials increase the running time to Θ(kn²).
Suppose we attempt this verification 100 times. The running time of the randomized checking algorithm is still Θ(n²), which is faster than the known deterministic algorithms for matrix multiplication for sufficiently large n. The probability that an incorrect algorithm passes the verification test 100 times is at most 2^−100, an astronomically small number. In practice, the computer is much more likely to crash during the execution of the algorithm than to return a wrong answer.
An interesting related problem is to evaluate the gradual change in our confidence in the correctness of the matrix multiplication as we repeat the randomized test. Toward that end we introduce Bayes' law.
Theorem 1.7 [Bayes' Law]: Assume that E1, E2, . . . , En are mutually disjoint events in the sample space Ω such that ⋃_{i=1}^{n} E_i = Ω. Then

Pr(E_j | B) = Pr(B | E_j) Pr(E_j) / ∑_{i=1}^{n} Pr(B | E_i) Pr(E_i).
As a simple application of Bayes' law, consider the following problem. We are given three coins and are told that two of the coins are fair and the third coin is biased, landing heads with probability 2/3. We are not told which of the three coins is biased. We permute the coins randomly, and then flip each of the coins. The first and second coins come up heads, and the third comes up tails. What is the probability that the first coin is the biased one?
The coins are in a random order and so, before our observing the outcomes of the coin flips, each of the three coins is equally likely to be the biased one. Let Ei be the event that the ith coin flipped is the biased one, and let B be the event that the three coin flips came up heads, heads, and tails.
Before we flip the coins we have Pr(Ei) = 1/3 for all i. We can also compute the probability of the event B conditioned on Ei:

Pr(B | E1) = Pr(B | E2) = (2/3) · (1/2) · (1/2) = 1/6,

and

Pr(B | E3) = (1/2) · (1/2) · (1/3) = 1/12.
Applying Bayes' law, we have

Pr(E1 | B) = Pr(B | E1) Pr(E1) / ∑_{i=1}^{3} Pr(B | Ei) Pr(Ei) = 2/5.
Thus, the outcome of the three coin flips increases the likelihood that the first coin is the biased one from 1/3 to 2/5.
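As a quick sanity check, a small simulation of this experiment (our own illustration) shows the conditional frequency approaching 2/5:

```python
import random

def trial():
    """One experiment: permute [biased, fair, fair] and flip all three.
    Returns (event B occurred, first coin flipped is the biased one)."""
    coins = [2/3, 1/2, 1/2]                      # heads probabilities; index 0 is biased
    order = random.sample(range(3), 3)           # random permutation of the coins
    flips = [random.random() < coins[i] for i in order]
    b = flips[0] and flips[1] and not flips[2]   # heads, heads, tails
    return b, order[0] == 0

hits = total = 0
for _ in range(200_000):
    b, first_biased = trial()
    if b:
        total += 1
        hits += first_biased
print(hits / total)   # close to 0.4
```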
Returning now to our randomized matrix multiplication test, we want to evaluate the increase in confidence in the matrix identity obtained through repeated tests. In the Bayesian approach one starts with a prior model, giving some initial value to the model parameters. This model is then modified, by incorporating new observations, to obtain a posterior model that captures the new information.
In the matrix multiplication case, if we have no information about the process that
generated the identity then a reasonable prior assumption is that the identity is correct
with probability 1/2. If we run the randomized test once and it returns that the matrix identity is correct, how does this change our confidence in the identity?
Let E be the event that the identity is correct, and let B be the event that the test
returns that the identity is correct. We start with Pr(E ) = Pr(Ē ) = 1/2, and since the
test has a one-sided error bounded by 1/2, we have Pr(B | E ) = 1 and Pr(B | Ē ) ≤ 1/2.
Applying Bayes' law yields

Pr(E | B) = Pr(B | E) Pr(E) / (Pr(B | E) Pr(E) + Pr(B | Ē) Pr(Ē)) ≥ (1/2) / (1/2 + (1/2) · (1/2)) = 2/3.
Assume now that we run the randomized test again and it again returns that the identity is correct. After the first test, I may naturally have revised my prior model, so that I believe Pr(E) ≥ 2/3 and Pr(Ē) ≤ 1/3. Now let B be the event that the new test returns that the identity is correct; since the tests are independent, as before we have Pr(B | E) = 1 and Pr(B | Ē) ≤ 1/2. Applying Bayes' law then yields
Pr(E | B) ≥ (2/3) / (2/3 + (1/3) · (1/2)) = 4/5.
In general: if our prior model (before running the test) is that Pr(E) ≥ 2^i/(2^i + 1) and if the test returns that the identity is correct (event B), then

Pr(E | B) ≥ [2^i/(2^i + 1)] / [2^i/(2^i + 1) + (1/2) · 1/(2^i + 1)] = 2^{i+1}/(2^{i+1} + 1) = 1 − 1/(2^{i+1} + 1).
Thus, if all 100 calls to the matrix identity test return that the identity is correct, our confidence in the correctness of this identity is at least 1 − 1/(2^101 + 1).
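The same calculation is easy to check numerically. The sketch below (ours) iterates the Bayesian update with the worst-case error probability 1/2 and verifies the closed form Pr(E | B) ≥ 2^{i+1}/(2^{i+1} + 1):

```python
from fractions import Fraction

def posterior(prior, err=Fraction(1, 2)):
    # One Bayesian update after a passing test: Pr(B | E) = 1, Pr(B | not E) <= err.
    return prior / (prior + (1 - prior) * err)

p = Fraction(1, 2)                  # prior Pr(E), i.e. i = 0
for i in range(50):
    p = posterior(p)
    # matches the closed form derived in the text
    assert p == Fraction(2 ** (i + 1), 2 ** (i + 1) + 1)
print(float(1 - p))                 # the residual doubt shrinks like 2**-(i+1)
```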
classified c_j:

p_{y,j} = |{i : x_i = y, c(D_i) = c_j}| / |{i : x_i = y}|.
Assuming that a new object D∗ with a features vector x∗ has the same distribution as the training set, then p_{x∗,j} is an empirical estimate for the conditional probability

Pr(c(D∗) = c_j | x∗ = (x∗_1, . . . , x∗_m)).
Indeed, we could compute these values ahead of time in a large lookup table and simply return the vector (z1, z2, . . . , zt) = (p_{x∗,1}, p_{x∗,2}, . . . , p_{x∗,t}) after computing the features vector x∗ from the object.
The difficulty in this approach is that we need to obtain accurate estimates of a large collection of conditional probabilities, corresponding to all possible combinations of values of the m features. Even if each feature has just two values we would need to estimate 2^m conditional probabilities per class, which would generally require Ω(|C| 2^m) samples.
The training process is faster and requires significantly fewer examples if we assume a “naïve” model in which the m features are independent. In that case we have, for each class c_j,

Pr(c(D∗) = c_j | x∗) = Pr(x∗ | c(D∗) = c_j) · Pr(c(D∗) = c_j) / Pr(x∗)    (1.2)
                     = (∏_{k=1}^{m} Pr(x∗_k = x_k | c(D∗) = c_j)) · Pr(c(D∗) = c_j) / Pr(x∗).    (1.3)
Here x∗_k is the kth component of the features vector x∗ of object D∗. Notice that the denominator is independent of c_j, and can be treated as just a normalizing constant factor.
With a constant number of possible values per feature, we only need to learn estimates for O(m|C|) probabilities. In what follows, we use P̂r to denote empirical probabilities, which are the relative frequency of events in our training set of examples. This notation emphasizes that we are taking estimates of these probabilities as determined from the training set. (In practice, one often makes slight modifications, such as adding 1/2 to the numerator in each of the fractions to guarantee that no empirical probability equals 0.)
The training process is simple:
• For each classification class c_j, keep track of the fraction of objects classified as c_j to compute

  P̂r(c(D∗) = c_j) = |{i : c(D_i) = c_j}| / |D|,

  where |D| is the number of objects in the training set.
• For each feature X_k and feature value x_k, keep track of the fraction of objects with that feature value that are classified as c_j, to compute

  P̂r(x∗_k = x_k | c(D∗) = c_j) = |{i : x^i_k = x_k, c(D_i) = c_j}| / |{i : c(D_i) = c_j}|.
Once we train the classifier, the classification of a new object D∗ with features vector x∗ = (x∗_1, . . . , x∗_m) is computed by calculating

∏_{k=1}^{m} P̂r(x∗_k = x_k | c(D∗) = c_j) · P̂r(c(D∗) = c_j)

for each c_j and taking the classification with the highest value.
In practice, the products may lead to underflow values; an easy solution to that problem is to instead compute the logarithm of the above expression. Estimates of the entire probability vector can be found by normalizing appropriately. (Alternatively, instead of normalizing, one could provide probability estimates by also computing estimates for Pr(x∗ = x) from the sample data. Under our independence assumption Pr(x∗ = (x∗_1, . . . , x∗_m)) = ∏_{k=1}^{m} Pr(x∗_k = x_k), and one could estimate the denominator of Equation 1.2 with the product of the corresponding estimates.)
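A compact sketch of this training and classification procedure (our illustration; the add-1/2 smoothing and log-space scoring follow the practical notes above, and all function and variable names are ours):

```python
import math
from collections import Counter, defaultdict

def train(examples):
    """examples: list of (feature_tuple, class_label) pairs."""
    class_counts = Counter(label for _, label in examples)
    feat_counts = defaultdict(Counter)      # label -> Counter over (k, value)
    for x, label in examples:
        for k, v in enumerate(x):
            feat_counts[label][(k, v)] += 1
    return class_counts, feat_counts, len(examples)

def classify(x, model):
    """Return the class maximizing sum_k log Pr^(x_k | c) + log Pr^(c)."""
    class_counts, feat_counts, n = model
    best, best_score = None, float("-inf")
    for c, nc in class_counts.items():
        score = math.log(nc / n)            # empirical log Pr^(c)
        for k, v in enumerate(x):
            # add 1/2 to the numerator so no empirical probability is 0
            score += math.log((feat_counts[c][(k, v)] + 0.5) / nc)
        if score > best_score:
            best, best_score = c, score
    return best

# Tiny example: two binary features, two classes.
data = [((1, 0), "a"), ((1, 1), "a"), ((0, 0), "b"), ((0, 1), "b"), ((1, 0), "a")]
model = train(data)
print(classify((1, 0), model), classify((0, 1), model))   # -> a b
```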
The naïve Bayesian classifier is efficient and simple to implement due to the “naïve” assumption of independence. This assumption may lead to misleading outcomes when the classification depends on combinations of features. As a simple example consider
1.5. Application: A Randomized Min-Cut Algorithm

A cut-set in a graph is a set of edges whose removal breaks the graph into two or more connected components. Given a graph G = (V, E) with n vertices, the minimum cut – or min-cut – problem is to find a minimum cardinality cut-set in G. Minimum
cut problems arise in many contexts, including the study of network reliability. In the
case where nodes correspond to machines in the network and edges correspond to con-
nections between machines, the min-cut is the smallest number of edges that can fail
before some pair of machines cannot communicate. Minimum cuts also arise in clus-
tering problems. For example, if nodes represent Web pages (or any documents in a
hypertext-based system) and two nodes have an edge between them if the correspond-
ing nodes have a hyperlink between them, then small cuts divide the graph into clus-
ters of documents with few links between clusters. Documents in different clusters are
likely to be unrelated.
We shall proceed by making use of the definitions and techniques presented so far in order to analyze a simple randomized algorithm for the min-cut problem. The main operation in the algorithm is edge contraction. In contracting an edge (u, v) we merge
the two vertices u and v into one vertex, eliminate all edges connecting u and v, and
retain all other edges in the graph. The new graph may have parallel edges but no
self-loops. Examples appear in Figure 1.1, where in each step the dark edge is being
contracted.
The algorithm consists of n − 2 iterations. In each iteration, the algorithm picks an
edge from the existing edges in the graph and contracts that edge. There are many pos-
sible ways one could choose the edge at each step. Our randomized algorithm chooses
the edge uniformly at random from the remaining edges.
Each iteration reduces the number of vertices in the graph by one. After n − 2 iter-
ations, the graph consists of two vertices. The algorithm outputs the set of edges con-
necting the two remaining vertices.
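Here is a minimal sketch of the contraction algorithm (our own illustration; the edge-list multigraph representation, the helper names, and the small test graph are ours):

```python
import random

def contract_min_cut(n, edges):
    """One run of the randomized contraction algorithm.

    n: number of vertices labeled 0..n-1; edges: list of (u, v) pairs.
    Returns the edges crossing the final two super-vertices: a cut-set,
    which is a minimum cut-set with probability at least 2/(n(n-1))."""
    parent = list(range(n))                   # super-vertex of each vertex

    def find(x):                              # follow merges to the root
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    live = list(edges)                        # parallel edges kept, no self-loops
    for _ in range(n - 2):                    # n - 2 contractions
        u, v = random.choice(live)            # uniform over remaining edges
        parent[find(u)] = find(v)             # contract: merge two super-vertices
        live = [e for e in live if find(e[0]) != find(e[1])]  # drop new self-loops
    return live                               # edges between the last two vertices

# Square with one diagonal: the minimum cut has size 2.
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
print(min((contract_min_cut(4, edges) for _ in range(30)), key=len))
```

Taking the best of several independent runs, as in the last line, is exactly the amplification analyzed at the end of this section.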
It is easy to verify that any cut-set of a graph in an intermediate iteration of the
algorithm is also a cut-set of the original graph. On the other hand, not every cut-set of
the original graph is a cut-set of a graph in an intermediate iteration, since some edges
of the cut-set may have been contracted in previous iterations. As a result, the output of
the algorithm is always a cut-set of the original graph but not necessarily the minimum
cardinality cut-set (see Figure 1.1).
We now establish a lower bound on the probability that the algorithm returns a cor-
rect output.
[Figure 1.1: An example of two executions of min-cut in a graph with minimum cut-set of size 2; (a) a successful run of min-cut, (b) an unsuccessful run of min-cut.]
Theorem 1.8: The algorithm outputs a min-cut set with probability at least
2/(n(n − 1)).
Proof: Let k be the size of the min-cut set of G. The graph may have several cut-sets of minimum size. We compute the probability of finding one specific such set C.
Since C is a cut-set in the graph, removal of the set C partitions the set of vertices into
two sets, S and V − S, such that there are no edges connecting vertices in S to vertices in
V − S. Assume that, throughout an execution of the algorithm, we contract only edges
that connect two vertices in S or two vertices in V − S, but not edges in C. In that case,
all the edges eliminated throughout the execution will be edges connecting vertices in
S or vertices in V − S, and after n − 2 iterations the algorithm returns a graph with two
vertices connected by the edges in C. We may therefore conclude that, if the algorithm
never chooses an edge of C in its n − 2 iterations, then the algorithm returns C as the
minimum cut-set.
This argument gives some intuition for why we choose the edge at each iteration
uniformly at random from the remaining existing edges. If the size of the cut C is
small and if the algorithm chooses the edge uniformly at each step, then the probability
that the algorithm chooses an edge of C is small – at least when the number of edges
remaining is large compared to C.
Let E_i be the event that the edge contracted in iteration i is not in C, and let F_i = ⋂_{j=1}^{i} E_j be the event that no edge of C was contracted in the first i iterations. We need to compute Pr(F_{n−2}).
We start by computing Pr(E_1) = Pr(F_1). Since the minimum cut-set has k edges, all vertices in the graph must have degree k or larger. If each vertex is adjacent to at least k edges, then the graph must have at least nk/2 edges. The first contracted edge is chosen uniformly at random from the set of all edges. Since there are at least nk/2 edges in the graph and since C has k edges, the probability that we do not choose an edge of C in the first iteration is given by

Pr(E_1) = Pr(F_1) ≥ 1 − 2k/(nk) = 1 − 2/n.
Let us suppose that the first contraction did not eliminate an edge of C. In other words, we condition on the event F_1. Then, after the first iteration, we are left with an (n − 1)-node graph with minimum cut-set of size k. Again, the degree of each vertex in the graph must be at least k, and the graph must have at least k(n − 1)/2 edges.
Thus,

Pr(E_2 | F_1) ≥ 1 − k/(k(n − 1)/2) = 1 − 2/(n − 1).

Similarly,

Pr(E_i | F_{i−1}) ≥ 1 − k/(k(n − i + 1)/2) = 1 − 2/(n − i + 1).
To compute Pr(F_{n−2}), we use

Pr(F_{n−2}) = Pr(E_{n−2} ∩ F_{n−3}) = Pr(E_{n−2} | F_{n−3}) · Pr(F_{n−3})
            = Pr(E_{n−2} | F_{n−3}) · Pr(E_{n−3} | F_{n−4}) ⋯ Pr(E_2 | F_1) · Pr(F_1)
            ≥ ∏_{i=1}^{n−2} (1 − 2/(n − i + 1)) = ∏_{i=1}^{n−2} (n − i − 1)/(n − i + 1)
            = (n−2)/n · (n−3)/(n−1) · (n−4)/(n−2) ⋯ (4/6) · (3/5) · (2/4) · (1/3)
            = 2/(n(n − 1)).
Since the algorithm has a one-sided error, we can reduce the error probability by repeat-
ing the algorithm. Assume that we run the randomized min-cut algorithm n(n − 1) ln n
times and output the minimum size cut-set found in all the iterations. The probability
that the output is not a min-cut set is bounded by

(1 − 2/(n(n − 1)))^{n(n−1) ln n} ≤ e^{−2 ln n} = 1/n².

In the first inequality we have used the fact that 1 − x ≤ e^{−x}.
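Numerically, the bound is easy to check (a tiny illustration of ours):

```python
import math

for n in [10, 100, 1000]:
    runs = math.ceil(n * (n - 1) * math.log(n))
    fail = (1 - 2 / (n * (n - 1))) ** runs   # probability no run finds the min cut
    print(n, runs, fail, 1 / n**2)           # fail is at most 1/n^2
```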
1.6. Exercises
Exercise 1.1: We flip a fair coin ten times. Find the probability of the following events.
(a) The number of heads and the number of tails are equal.
(b) There are more heads than tails.
(c) The ith flip and the (11 − i)th flip are the same for i = 1, . . . , 5.
(d) We flip at least four consecutive heads.
Exercise 1.2: We roll two standard six-sided dice. Find the probability of the following
events, assuming that the outcomes of the rolls are independent.
(a) The two dice show the same number.
(b) The number that appears on the first die is larger than the number on the second.
Exercise 1.4: We are playing a tournament in which we stop as soon as one of us wins
n games. We are evenly matched, so each of us wins any game with probability 1/2,
independently of other games. What is the probability that the loser has won k games
when the match is over?
Exercise 1.5: After lunch one day, Alice suggests to Bob the following method to
determine who pays. Alice pulls three six-sided dice from her pocket. These dice are
not the standard dice, but have the following numbers on their faces:
• die A – 1, 1, 6, 6, 8, 8;
• die B – 2, 2, 4, 4, 9, 9;
• die C – 3, 3, 5, 5, 7, 7.
The dice are fair, so each side comes up with equal probability. Alice explains that she
and Bob will each pick up one of the dice. They will each roll their die, and the one
who rolls the lowest number loses and will buy lunch. So as to take no advantage, Alice
offers Bob the first choice of the dice.
(a) Suppose that Bob chooses die A and Alice chooses die B. Write out all of the
possible events and their probabilities, and show that the probability that Alice
wins is greater than 1/2.
(b) Suppose that Bob chooses die B and Alice chooses die C. Write out all of the
possible events and their probabilities, and show that the probability that Alice
wins is greater than 1/2.
(c) Since die A and die B lead to situations in Alice’s favor, it would seem that Bob
should choose die C. Suppose that Bob does choose die C and Alice chooses die
A. Write out all of the possible events and their probabilities, and show that the
probability that Alice wins is still greater than 1/2.
Exercise 1.6: Consider the following balls-and-bin game. We start with one black ball
and one white ball in a bin. We repeatedly do the following: choose one ball from the
bin uniformly at random, and then put the ball back in the bin with another ball of the
same color. We repeat until there are n balls in the bin. Show that the number of white
balls is equally likely to be any number between 1 and n − 1.
Exercise 1.8: I choose a number uniformly at random from the range [1, 1,000,000].
Using the inclusion–exclusion principle, determine the probability that the number cho-
sen is divisible by one or more of 4, 6, and 9.
Exercise 1.9: Suppose that a fair coin is flipped n times. For k > 0, find an upper bound on the probability that there is a sequence of log₂ n + k consecutive heads.
Exercise 1.10: I have a fair coin and a two-headed coin. I choose one of the two coins randomly with equal probability and flip it. Given that the flip was heads, what is the probability that I flipped the two-headed coin?
Exercise 1.11: I am trying to send you a single bit, either a 0 or a 1. When I transmit
the bit, it goes through a series of n relays before it arrives to you. Each relay flips the bit independently with probability p.
(a) Argue that the probability you receive the correct bit is

∑_{k=0}^{⌊n/2⌋} (n choose 2k) p^{2k} (1 − p)^{n−2k}.
(b) We consider an alternative way to calculate this probability. Let us say the relay has bias q if the probability it flips the bit is (1 − q)/2. The bias q is therefore a real number in the range [−1, 1]. Prove that sending a bit through two relays with bias q1 and q2 is equivalent to sending a bit through a single relay with bias q1q2.
(c) Prove that the probability you receive the correct bit when it passes through n relays as described before (a) is

(1 + (1 − 2p)^n) / 2.
Exercise 1.12: The following problem is known as the Monty Hall problem, after
the host of the game show “Let’s Make a Deal”. There are three curtains. Behind one
curtain is a new car, and behind the other two are goats. The game is played as follows.
The contestant chooses the curtain that she thinks the car is behind. Monty then opens
one of the other curtains to show a goat. (Monty may have more than one goat to choose
from; in this case, assume he chooses which goat to show uniformly at random.) The
contestant can then stay with the curtain she originally chose or switch to the other
unopened curtain. After that, the location of the car is revealed, and the contestant wins
the car or the remaining goat. Should the contestant switch curtains or not, or does it
make no difference?
Exercise 1.13: A medical company touts its new test for a certain genetic disorder.
The false negative rate is small: if you have the disorder, the probability that the test
returns a positive result is 0.999. The false positive rate is also small: if you do not
have the disorder, the probability that the test returns a positive result is only 0.005.
Assume that 2% of the population has the disorder. If a person chosen uniformly from
the population is tested and the result comes back positive, what is the probability that
the person has the disorder?
Exercise 1.15: Suppose that we roll ten standard six-sided dice. What is the probabil-
ity that their sum will be divisible by 6, assuming that the rolls are independent? (Hint:
Use the principle of deferred decisions, and consider the situation after rolling all but
one of the dice.)
Exercise 1.16: Consider the following game, played with three standard six-sided
dice. If the player ends with all three dice showing the same number, she wins. The
player starts by rolling all three dice. After this first roll, the player can select any one, two, or all of the three dice and re-roll them. After this second roll, the player can again select any of the three dice and re-roll them one final time. For questions (a)–(d),
assume that the player uses the following optimal strategy: if all three dice match, the
player stops and wins; if two dice match, the player re-rolls the die that does not match;
and if no dice match, the player re-rolls them all.
(a) Find the probability that all three dice show the same number on the first roll.
(b) Find the probability that exactly two of the three dice show the same number on the first roll.
(c) Find the probability that the player wins, conditioned on exactly two of the three dice showing the same number on the first roll.
(d) By considering all possible sequences of rolls, find the probability that the player wins the game.
Exercise 1.17: In our matrix multiplication algorithm, we worked over the integers
modulo 2. Explain how the analysis would change if we worked over the integers mod-
ulo k for k > 2.
Exercise 1.19: Give examples of events where Pr(A | B) < Pr(A), Pr(A | B) = Pr(A),
and Pr(A | B) > Pr(A).
Exercise 1.21: Give an example of three random events X, Y, Z for which any pair are
independent but all three are not mutually independent.
Exercise 1.22: (a) Consider the set {1, . . . , n}. We generate a subset X of this set as follows: a fair coin is flipped independently for each element of the set; if the coin lands heads then the element is added to X, and otherwise it is not. Argue that the resulting set X is equally likely to be any one of the 2^n possible subsets.
(b) Suppose that two sets X and Y are chosen independently and uniformly at random from all the 2^n subsets of {1, . . . , n}. Determine Pr(X ⊆ Y) and Pr(X ∪ Y = {1, . . . , n}). (Hint: Use part (a) of this problem.)
Exercise 1.23: There may be several different min-cut sets in a graph. Using the
analysis of the randomized min-cut algorithm, argue that there can be at most
n(n − 1)/2 distinct min-cut sets.
Exercise 1.25: To improve the probability of success of the randomized min-cut algorithm, it can be run multiple times.
(a) Consider running the algorithm twice. Determine the number of edge contractions and bound the probability of finding a min-cut.
(b) Consider the following variation. Starting with a graph with n vertices, first contract the graph down to k vertices using the randomized min-cut algorithm. Make ℓ copies of the graph with k vertices, and now run the randomized algorithm on this reduced graph ℓ times, independently. Determine the number of edge contractions and bound the probability of finding a minimum cut.
(c) Find optimal (or at least near-optimal) values of k and ℓ for the variation in (b) that maximize the probability of finding a minimum cut while using the same number of edge contractions as running the original algorithm twice.
Exercise 1.26: Tic-tac-toe always ends up in a tie if players play optimally. Instead, we may consider random variations of tic-tac-toe.
(a) First variation: Each of the nine squares is labeled either X or O according to an independent and uniform coin flip. If only one of the players has one (or more) winning tic-tac-toe combinations, that player wins. Otherwise, the game is a tie. Determine the probability that X wins. (You may want to use a computer program to help run through the configurations.)
(b) Second variation: X and O take turns, with the X player going first. On the X player's turn, an X is placed on a square chosen independently and uniformly at random from the squares that are still vacant; O plays similarly. The first player to have a winning tic-tac-toe combination wins the game, and a tie occurs if neither player achieves a winning combination. Find the probability that each player wins. (Again, you may want to write a program to help you.)
chapter two
Discrete Random Variables
and Expectation
In this chapter, we introduce the concepts of discrete random variables and expectation
and then develop basic techniques for analyzing the expected performance of algo-
rithms. We apply these techniques to computing the expected running time of the well-
known Quicksort algorithm. In analyzing two versions of Quicksort, we demonstrate the distinction between the analysis of randomized algorithms, where the probability space is defined by the random choices made by the algorithm, and the probabilistic analysis of deterministic algorithms, where the probability space is defined by some probability distribution on the inputs.
Along the way we define the Bernoulli, binomial, and geometric random variables, study the expected size of a simple branching process, and analyze the expectation of the coupon collector's problem – a probabilistic paradigm that reappears throughout the book.
2.1. Random Variables and Expectation

When studying a random event, we are often interested in some value associated with
the random event rather than in the event itself. For example, in tossing two dice we
are often interested in the sum of the two dice rather than the separate value of each
die. The sample space in tossing two dice consists of 36 events of equal probability,
given by the ordered pairs of numbers {(1, 1), (1, 2), . . . , (6, 5), (6, 6)}. If the quantity
we are interested in is the sum of the two dice, then we are interested in 11 events (of
unequal probability): the 11 possible outcomes of the sum. Any such function from the
sample space to the real numbers is called a random variable.
Definition 2.1: A random variable X on a sample space Ω is a real-valued (measurable) function on Ω; that is, X : Ω → R. A discrete random variable is a random variable that takes on only a finite or countably infinite number of values.
Since random variables are functions, they are usually denoted by a capital letter such
as X or Y, while real numbers are usually denoted by lowercase letters.
For a discrete random variable X and a real value a, the event “X = a” includes all the basic events of the sample space in which the random variable X assumes the value a. That is, “X = a” represents the set {s ∈ Ω | X(s) = a}. We denote the probability of that event by

Pr(X = a) = ∑_{s∈Ω : X(s)=a} Pr(s).
If X is the random variable representing the sum of the two dice, then the event X = 4
corresponds to the set of basic events {(1, 3), (2, 2), (3, 1)}. Hence

Pr(X = 4) = 3/36 = 1/12.
The definition of independence that we developed for events extends to random variables.
Definition 2.2: Two random variables X and Y are independent if and only if

Pr((X = x) ∩ (Y = y)) = Pr(X = x) · Pr(Y = y)

for all values x and y. Similarly, random variables X1, X2, . . . , Xk are mutually independent if and only if, for any subset I ⊆ [1, k] and any values xi, i ∈ I,

Pr(⋂_{i∈I} (X_i = x_i)) = ∏_{i∈I} Pr(X_i = x_i).
A basic characteristic of a random variable is its expectation, which is also often called
the mean. The expectation of a random variable is a weighted average of the values
it assumes, where each value is weighted by the probability that the variable assumes
that value.
Definition 2.3: The expectation of a discrete random variable X, denoted by E[X], is given by

E[X] = ∑_i i Pr(X = i),

where the summation is over all values in the range of X. The expectation is finite if ∑_i |i| Pr(X = i) converges; otherwise, the expectation is unbounded.
For example, the expectation of the random variable X representing the sum of two dice is

E[X] = (1/36) · 2 + (2/36) · 3 + (3/36) · 4 + ⋯ + (1/36) · 12 = 7.

You may try using symmetry to give a simpler argument for why E[X] = 7.
As an example of where the expectation of a discrete random variable is unbounded, consider a random variable X that takes on the value 2^i with probability 1/2^i for i = 1, 2, . . . . The expected value of X is

E[X] = ∑_{i=1}^{∞} (1/2^i) · 2^i = ∑_{i=1}^{∞} 1 = ∞.
Here we use the somewhat informal notation E[X] = ∞ to express that E[X] is
unbounded.
Theorem 2.1 [Linearity of Expectations]: For any finite collection of discrete random variables X1, X2, . . . , Xn with finite expectations,

E[∑_{i=1}^{n} X_i] = ∑_{i=1}^{n} E[X_i].
Proof: We prove the statement for two random variables X and Y; the general case follows by induction. The summations that follow are understood to be over the ranges of the corresponding random variables:

E[X + Y] = ∑_i ∑_j (i + j) Pr((X = i) ∩ (Y = j))
         = ∑_i ∑_j i Pr((X = i) ∩ (Y = j)) + ∑_i ∑_j j Pr((X = i) ∩ (Y = j))
         = ∑_i i ∑_j Pr((X = i) ∩ (Y = j)) + ∑_j j ∑_i Pr((X = i) ∩ (Y = j))
         = ∑_i i Pr(X = i) + ∑_j j Pr(Y = j)
         = E[X] + E[Y].

The first equality follows from Definition 1.2. In the penultimate equation we have used Theorem 1.6, the Law of Total Probability.
We now use this property to compute the expected sum of two standard dice. Let X = X1 + X2, where Xi represents the outcome of die i for i = 1, 2. Then

E[Xi] = (1/6) ∑_{j=1}^{6} j = 7/2.
even though X1 and X1² are clearly dependent. As an exercise, you may verify this identity by considering the six possible outcomes for X1.
Linearity of expectations also holds for countably infinite summations in certain cases. Specifically, it can be shown that

E[∑_{i=1}^{∞} X_i] = ∑_{i=1}^{∞} E[X_i]

whenever ∑_{i=1}^{∞} E[|X_i|] converges. The issue of dealing with the linearity of expectations with countably infinite summations is further considered in Exercise 2.29.
This chapter contains several examples in which the linearity of expectations significantly simplifies the computation of expectations. One result related to the linearity of expectations is the following simple lemma.
Lemma 2.2: For any constant c and discrete random variable X,
E[cX] = cE[X].
Proof: The lemma is obvious for c = 0. For c ≠ 0,

E[cX] = ∑_j j Pr(cX = j)
      = c ∑_j (j/c) Pr(X = j/c)
      = c ∑_k k Pr(X = k)
      = cE[X].
To obtain the penultimate line, we used the linearity of expectations. To obtain the last line we used Lemma 2.2 to simplify E[X E[X]] = E[X] · E[X].

The fact that E[X²] ≥ (E[X])² is an example of a more general theorem known as Jensen's inequality. Jensen's inequality shows that, for any convex function f, we have

E[f(X)] ≥ f(E[X]).
Visually, a convex function f has the property that, if you connect two points on the
graph of the function by a straight line, this line lies on or above the graph of the
function. The following fact, which we state without proof, is often a useful alternative to Definition 2.4.

Lemma 2.3: If f is a twice differentiable function, then f is convex if and only if f″(x) ≥ 0.

Theorem 2.4 [Jensen's Inequality]: If f is a convex function, then

E[f(X)] ≥ f(E[X]).
Proof: We prove the theorem assuming that f has a Taylor expansion. Let μ = E[X]. By Taylor's theorem there is a value c such that

f(x) = f(μ) + f′(μ)(x − μ) + f″(c)(x − μ)²/2
     ≥ f(μ) + f′(μ)(x − μ),

since f″(c) ≥ 0 by convexity. Taking expectations of both sides and applying linearity of expectations and Lemma 2.2 yields the result:

E[f(X)] ≥ E[f(μ) + f′(μ)(X − μ)] = f(μ) + f′(μ)(E[X] − μ) = f(μ) = f(E[X]).
An alternative proof of Jensen's inequality, which holds for any random variable X that takes on only finitely many values, is presented in Exercise 2.10.
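For instance, with the convex function f(x) = x² and X the outcome of a fair die, one can check E[f(X)] ≥ f(E[X]) directly (a small illustration of ours):

```python
from fractions import Fraction

outcomes = [Fraction(k) for k in range(1, 7)]     # a fair six-sided die
mean = sum(outcomes) / 6                          # E[X] = 7/2
mean_sq = sum(x * x for x in outcomes) / 6        # E[X^2] = 91/6
print(mean_sq, mean ** 2, mean_sq >= mean ** 2)   # 91/6 >= 49/4: Jensen holds
```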
2.2. The Bernoulli and Binomial Random Variables

Suppose that we run an experiment that succeeds with probability p and fails with probability 1 − p.
Let Y be a random variable that takes the value 1 if the experiment succeeds and 0 otherwise. The variable Y is called a Bernoulli or an indicator random variable. Note that, for a Bernoulli random variable,

E[Y] = p · 1 + (1 − p) · 0 = p = Pr(Y = 1).

For example, if we flip a fair coin and consider the outcome “heads” a success, then
the expected value of the corresponding indicator random variable is 1/2.
Consider now a sequence of n independent coin flips. What is the distribution of the number of heads in the entire sequence? More generally, consider a sequence of n independent experiments, each of which succeeds with probability p. If we let X represent the number of successes in the n experiments, then X has a binomial distribution.
Definition 2.5: A binomial random variable X with parameters n and p, denoted by B(n, p), is defined by the following probability distribution on j = 0, 1, 2, ..., n:

\Pr(X = j) = \binom{n}{j} p^j (1 − p)^{n−j}.

That is, the binomial random variable X equals j when there are exactly j successes and n − j failures in n independent experiments, each of which is successful with probability p.
As an exercise, you should show that Definition 2.5 ensures that \sum_{j=0}^{n} \Pr(X = j) = 1. This is necessary for the binomial random variable to have a valid probability function, according to Definition 1.2.
The binomial random variable arises in many contexts, especially in sampling. As a
practical example, suppose that we want to gather data about the packets going through
a router by postprocessing them. We might want to know the approximate fraction of
packets from a certain source or of a certain data type. We do not have the memory
available to store all of the packets, so we choose to store a random subset – or sample
– of the packets for later analysis. If each packet is stored with probability p and if n
packets go through the router each day, then the number of sampled packets each day
is a binomial random variable X with parameters n and p. If we want to know how
much memory is necessary for such a sample, a natural starting point is to determine
the expectation of the random variable X.
Sampling in this manner arises in other contexts as well. For example, by sampling the program counter while a program runs, one can determine what parts of a program are taking the most time. This knowledge can be used to aid dynamic program optimization techniques such as binary rewriting, where the executable binary form of a program is modified while the program executes. Since rewriting the executable as the program runs is expensive, sampling helps the optimizer to determine when it will be worthwhile.
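A minimal simulation sketch (ours; the parameter values are arbitrary) of the router example: each of n packets is stored independently with probability p, so the number stored each day is a binomial random variable whose empirical mean should be near np.

import random

n, p, days = 1000, 0.01, 5000   # hypothetical traffic and sampling parameters
counts = [sum(random.random() < p for _ in range(n)) for _ in range(days)]
print(sum(counts) / days)        # close to n * p = 10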
2.3. Conditional Expectation
Definition 2.6:

E[Y | Z = z] = \sum_{y} y \Pr(Y = y | Z = z),

where the summation is over all y in the range of Y.
For example, suppose that we independently roll two standard six-sided dice. Let X_1 be the number that shows on the first die, X_2 the number on the second die, and X the sum of the numbers on the two dice. Then

E[X | X_1 = 2] = \sum_{x} x \Pr(X = x | X_1 = 2) = \sum_{x=3}^{8} x \cdot \frac{1}{6} = \frac{11}{2}.
Lemma 2.5: For any random variables X and Y,

E[X] = \sum_{y} \Pr(Y = y) E[X | Y = y],

where the sum is over all values in the range of Y and all of the expectations exist.
Proof:

\sum_{y} \Pr(Y = y) E[X | Y = y] = \sum_{y} \Pr(Y = y) \sum_{x} x \Pr(X = x | Y = y)
= \sum_{x} \sum_{y} x \Pr(X = x | Y = y) \Pr(Y = y)
= \sum_{x} \sum_{y} x \Pr((X = x) \cap (Y = y))
= \sum_{x} x \Pr(X = x) = E[X].
Lemma 2.6: For any finite collection of discrete random variables X_1, X_2, ..., X_n with finite expectations and for any random variable Y,

E\left[\sum_{i=1}^{n} X_i \,\Big|\, Y = y\right] = \sum_{i=1}^{n} E[X_i | Y = y].
Perhaps somewhat confusingly, the conditional expectation is also used to refer to the
following random variable.
Definition 2.7: The expression E[Y | Z] is a random variable f(Z) that takes on the value E[Y | Z = z] when Z = z.
We emphasize that E[Y | Z] is not a real value; it is actually a function of the random
variable Z. Hence E[Y | Z] is itself a function from the sample space to the real numbers
and can therefore be thought of as a random variable.
In the previous example of rolling two dice,

E[X | X_1] = \sum_{x} x \Pr(X = x | X_1) = \sum_{x=X_1+1}^{X_1+6} x \cdot \frac{1}{6} = X_1 + \frac{7}{2}.
2.4. The Geometric Distribution

Suppose that we flip a coin until it lands on heads. What is the distribution of the number of flips? This is an example of a geometric distribution, which arises in the following situation: we perform a sequence of independent trials until the first success, where each trial succeeds with probability p.
Definition 2.8: A geometric random variable X with parameter p is given by the following probability distribution on n = 1, 2, ...:

\Pr(X = n) = (1 − p)^{n−1} p.
That is, for the geometric random variable X to equal n, there must be n − 1 failures, followed by a success.
As an exercise, you should show that the geometric random variable satisfies

\sum_{n ≥ 1} \Pr(X = n) = 1.

Again, this is necessary for the geometric random variable to have a valid probability function, according to Definition 1.2.
In the context of our example from Section 2.2 of sampling packets on a router, if
packets are sampled with probability p, then the number of packets transmitted after the
last sampled packet until and including the next sampled packet is given by a geometric
random variable with parameter p.
Geometric random variables are said to be memoryless because the probability that you will reach your first success n trials from now is independent of the number of failures you have experienced. Informally, one can ignore past failures because they do not change the distribution of the number of future trials until first success. Formally, we have the following statement.
Lemma 2.8: For a geometric random variable X with parameter p and for n > 0,

\Pr(X = n + k | X > k) = \Pr(X = n).

(The fourth equality in the proof of this lemma uses the fact that, for 0 < x < 1, \sum_{i=k}^{\infty} x^i = x^k/(1 − x).)
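The memoryless property is easy to check numerically from the probability function itself; the short computation below (ours, not from the text) compares Pr(X = n + k | X > k) with Pr(X = n) for arbitrary sample values of p, k, and n.

# Pr(X > k) = (1 - p)^k, since X > k means the first k trials all fail.
p, k, n = 0.3, 5, 4
pr_cond = ((1 - p) ** (n + k - 1) * p) / ((1 - p) ** k)   # Pr(X = n+k | X > k)
pr_plain = (1 - p) ** (n - 1) * p                          # Pr(X = n)
print(pr_cond, pr_plain)   # the two values agree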
Lemma 2.9: Let X be a discrete random variable that takes on only nonnegative integer values. Then

E[X] = \sum_{i=1}^{\infty} \Pr(X ≥ i).
Proof:

\sum_{i=1}^{\infty} \Pr(X ≥ i) = \sum_{i=1}^{\infty} \sum_{j=i}^{\infty} \Pr(X = j)
= \sum_{j=1}^{\infty} \sum_{i=1}^{j} \Pr(X = j)
= \sum_{j=1}^{\infty} j \Pr(X = j)
= E[X].

The interchange of (possibly) infinite summations is justified, since the terms being summed are all nonnegative.
For a geometric random variable X with parameter p,

\Pr(X ≥ i) = \sum_{n=i}^{\infty} (1 − p)^{n−1} p = (1 − p)^{i−1}.

Hence

E[X] = \sum_{i=1}^{\infty} \Pr(X ≥ i) = \sum_{i=1}^{\infty} (1 − p)^{i−1} = \frac{1}{1 − (1 − p)} = \frac{1}{p}.

Thus, for a fair coin where p = 1/2, on average it takes two flips to see the first heads.
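A simulation sketch (ours) confirming this: repeatedly flipping a simulated fair coin until heads appears gives an average flip count near 1/p = 2.

import random

def flips_until_heads(p):
    count = 1
    while random.random() >= p:   # each flip is heads with probability p
        count += 1
    return count

trials = 100_000
print(sum(flips_until_heads(0.5) for _ in range(trials)) / trials)   # near 2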
There is another approach to finding the expectation of a geometric random variable X with parameter p – one that uses conditional expectations and the memoryless property of geometric random variables. Recall that X corresponds to the number of flips until the first heads given that each flip is heads with probability p. Let Y = 0 if the first
flip is tails and Y = 1 if the first flip is heads. By the identity from Lemma 2.5,

E[X] = \Pr(Y = 0) E[X | Y = 0] + \Pr(Y = 1) E[X | Y = 1] = (1 − p)(1 + E[X]) + p · 1,

where the memoryless property gives E[X | Y = 0] = 1 + E[X] and clearly E[X | Y = 1] = 1. Solving yields E[X] = 1/p, as before.

2.4.1. Example: Coupon Collector's Problem

Suppose that each box of cereal contains one of n different coupons, with the coupon in each box chosen independently and uniformly at random, and that we buy boxes until we have obtained every coupon. Let X be the number of boxes bought, and let X_i be the number of boxes bought while we had exactly i − 1 distinct coupons. Each X_i is a geometric random variable with parameter

p_i = 1 − \frac{i − 1}{n},

so that

E[X_i] = \frac{1}{p_i} = \frac{n}{n − i + 1},

and by the linearity of expectations, E[X] = \sum_{i=1}^{n} E[X_i] = \sum_{i=1}^{n} \frac{n}{n − i + 1} = n \sum_{j=1}^{n} \frac{1}{j} = nH(n).
Lemma 2.10: The harmonic number H(n) = \sum_{i=1}^{n} 1/i satisfies H(n) = \ln n + \Theta(1).
Proof: Since f(x) = 1/x is monotonically decreasing, we have

\sum_{k=1}^{n} \frac{1}{k} ≥ \int_{x=1}^{n} \frac{1}{x}\,dx = \ln n

and

\sum_{k=2}^{n} \frac{1}{k} ≤ \int_{x=1}^{n} \frac{1}{x}\,dx = \ln n.

This is clarified in Figure 2.1, where the area below the curve f(x) = 1/x corresponds to the integral and the areas of the shaded regions correspond to the summations \sum_{k=1}^{n} 1/k and \sum_{k=2}^{n} 1/k.
Hence \ln n ≤ H(n) ≤ \ln n + 1, proving the claim.

[Figure 2.1: Approximating the area above and below f(x) = 1/x.]
As a simple application of the coupon collector's problem, suppose that packets are sent in a stream from a source host to a destination host along a fixed path of routers. The host at the destination would like to know which routers the stream of packets has passed through, in case it finds later that some router damaged packets that it processed. If there is enough room in the packet header, each router can append its identification number to the header, giving the path. Unfortunately, there may not be that much room available in the packet header.
Suppose instead that each packet header has space for exactly one router identification number, and this space is used to store the identification of a router chosen uniformly at random from all of the routers on the path. This can actually be accomplished easily; we consider how in Exercise 2.18. Then, from the point of view of the destination host, determining all the routers on the path is like a coupon collector's problem. If there are n routers along the path, then the expected number of packets in the stream that must arrive before the destination host knows all of the routers on the path is nH(n) = n \ln n + \Theta(n).
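The following simulation sketch (ours, with arbitrary parameters) mimics the router scenario: it counts packets until all n uniformly chosen router identifiers have been seen and compares the average with nH(n).

import random

def packets_until_all_seen(n):
    seen, packets = set(), 0
    while len(seen) < n:
        seen.add(random.randrange(n))   # header carries a uniformly random router id
        packets += 1
    return packets

n, trials = 50, 2000
avg = sum(packets_until_all_seen(n) for _ in range(trials)) / trials
harmonic = sum(1 / i for i in range(1, n + 1))
print(avg, n * harmonic)   # the two values should be close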
2.5. Application: The Expected Run-Time of Quicksort

Quicksort is a simple – and, in practice, very efficient – sorting algorithm. The input is a list of n numbers x_1, x_2, ..., x_n. For convenience, we will assume that the numbers are distinct. A call to the Quicksort function begins by choosing a pivot element from the set. Let us assume the pivot is x. The algorithm proceeds by comparing every other element to x, dividing the list of elements into two sublists: those that are less than x and those that are greater than x. Notice that if the comparisons are performed in the natural order, from left to right, then the order of the elements in each sublist is the same as in the initial list. Quicksort then recursively sorts these sublists.
In the worst case, Quicksort requires Ω(n^2) comparison operations. For example, suppose our input has the form x_1 = n, x_2 = n − 1, ..., x_{n−1} = 2, x_n = 1. Suppose also that we adopt the rule that the pivot should be the first element of the list. The first pivot chosen is then n, so Quicksort performs n − 1 comparisons. The division has yielded one sublist of size 0 (which requires no additional work) and another of size n − 1, with the order n − 1, n − 2, ..., 2, 1. The next pivot chosen is n − 1, so Quicksort performs n − 2 comparisons and is left with one group of size n − 2 in the order n − 2, n − 3, ..., 2, 1. Continuing in this fashion, Quicksort performs

(n − 1) + (n − 2) + ··· + 2 + 1 = \frac{n(n − 1)}{2}

comparisons.
This is not the only bad case that leads to Ω(n^2) comparisons; similarly poor performance occurs if the pivot element is chosen from among the smallest few or the largest few elements each time.
Quicksort Algorithm:
Input: A list S = {x_1, ..., x_n} of n distinct elements.
Output: The elements of S in sorted order.
1. If S has one or zero elements, return S. Otherwise continue.
2. Choose an element of S as a pivot; call it x.
3. Compare every other element of S to x, dividing the remaining elements into two sublists: S_1, the elements of S less than x, and S_2, the elements of S greater than x.
4. Use Quicksort to sort S_1 and S_2 recursively.
5. Return the sorted list S_1, followed by x, followed by the sorted list S_2.
We clearly made a bad choice of pivots for the given input. A reasonable choice of pivots would require many fewer comparisons. For example, if our pivot always splits the list into two sublists of size at most ⌈n/2⌉, then the number of comparisons C(n) would obey the following recurrence relation:

C(n) ≤ 2C(⌈n/2⌉) + \Theta(n).

The solution to this equation yields C(n) = O(n \log n), which is the best possible result for comparison-based sorting. In fact, any sequence of pivot elements that always split the input list into two sublists each of size at least cn for some constant c would yield an O(n \log n) running time.
This discussion provides some intuition for how we would like pivots to be chosen. In each iteration of the algorithm there is a good set of pivot elements that split the input list into two almost equal sublists; it suffices if the sizes of the two sublists are within a constant factor of each other. There is also a bad set of pivot elements that do not split up the list significantly. If good pivots are chosen sufficiently often, Quicksort will terminate quickly. How can we guarantee that the algorithm chooses good pivot elements sufficiently often? We can resolve this problem in one of two ways.
First, we can change the algorithm to choose the pivots randomly. This makes Quicksort a randomized algorithm; the randomization makes it extremely unlikely that we repeatedly choose the wrong pivots. We demonstrate shortly that the expected number of comparisons made by a simple randomized Quicksort is 2n \ln n + O(n), matching (up to constant factors) the Ω(n \log n) bound for comparison-based sorting. Here, the expectation is over the random choice of pivots.
A second possibility is that we can keep our deterministic algorithm, using the first list element as a pivot, but consider a probabilistic model of the inputs. A permutation of a set of n distinct items is just one of the n! orderings of these items. Instead of looking for the worst possible input, we assume that the input items are given to us in a random order. This may be a reasonable assumption for some applications; alternatively, this could be accomplished by ordering the input list according to a randomly
chosen permutation before running the deterministic Quicksort algorithm. In this case,
we have a deterministic algorithm but a probabilistic analysis based on a model of the
inputs. We again show in this setting that the expected number of comparisons made
is 2n ln n + O(n). Here, the expectation is over the random choice of inputs.
The same techniques are generally used both in analyses of randomized algorithms
and in probabilistic analyses of deterministic algorithms. Indeed, in this application the
analysis of the randomized Quicksort and the probabilistic analysis of the deterministic
Quicksort under random inputs are essentially the same.
Let us first analyze Random Quicksort, the randomized algorithm version of Quicksort.
Theorem 2.11: Suppose that, whenever a pivot is chosen for Random Quicksort, it
is chosen independently and uniformly at random from all possibilities. Then, for any
input, the expected number of comparisons made by Random Quicksort is 2n ln n +
O(n).
Proof: Let y_1, y_2, ..., y_n be the same values as the input values x_1, x_2, ..., x_n but sorted in increasing order. For i < j, let X_{ij} be a random variable that takes on the value 1 if y_i and y_j are compared at any time over the course of the algorithm, and 0 otherwise. Then the total number of comparisons X satisfies

X = \sum_{i=1}^{n−1} \sum_{j=i+1}^{n} X_{ij},
and

E[X] = E\left[\sum_{i=1}^{n−1} \sum_{j=i+1}^{n} X_{ij}\right] = \sum_{i=1}^{n−1} \sum_{j=i+1}^{n} E[X_{ij}].

Since X_{ij} is an indicator random variable, E[X_{ij}] is the probability that y_i and y_j are compared; they are compared if and only if either y_i or y_j is the first pivot chosen from the set Y^{ij} = {y_i, y_{i+1}, ..., y_j}, and since the pivot is chosen uniformly at random, this probability is 2/(j − i + 1). Substituting k = j − i + 1 then yields

E[X] = \sum_{i=1}^{n−1} \sum_{j=i+1}^{n} \frac{2}{j − i + 1}
= \sum_{i=1}^{n−1} \sum_{k=2}^{n−i+1} \frac{2}{k}
= \sum_{k=2}^{n} \sum_{i=1}^{n+1−k} \frac{2}{k}
= \sum_{k=2}^{n} (n + 1 − k)\frac{2}{k}
= \left(\sum_{k=2}^{n} \frac{2(n + 1)}{k}\right) − 2(n − 1)
= (2n + 2)\sum_{k=1}^{n} \frac{1}{k} − 4n.
Notice that we used a rearrangement of the double summation to obtain a clean form for the expectation.
Recalling that the summation H(n) = \sum_{k=1}^{n} 1/k satisfies H(n) = \ln n + \Theta(1), we have E[X] = 2n \ln n + \Theta(n).
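An empirical check (our sketch; the pivot bookkeeping is one natural way to count comparisons) of the 2n ln n + O(n) bound for Random Quicksort:

import math
import random

def random_quicksort(lst, counter):
    if len(lst) <= 1:
        return lst
    pivot = random.choice(lst)          # pivot chosen uniformly at random
    counter[0] += len(lst) - 1          # every other element is compared to the pivot
    less = [x for x in lst if x < pivot]
    greater = [x for x in lst if x > pivot]
    return random_quicksort(less, counter) + [pivot] + random_quicksort(greater, counter)

n = 1000
data = random.sample(range(10 * n), n)  # n distinct numbers
counter = [0]
assert random_quicksort(data, counter) == sorted(data)
print(counter[0], 2 * n * math.log(n))  # comparison count vs. 2n ln n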
Next we consider the deterministic version of Quicksort, on random input. We assume
that the order of the elements in each recursively constructed sublist is the same as in
the initial list.
Theorem 2.12: Suppose that, whenever a pivot is chosen for Quicksort, the first element of the sublist is chosen. If the input is chosen uniformly at random from all possible permutations of the values, then the expected number of comparisons made by Deterministic Quicksort is 2n \ln n + O(n).
Proof: The proof is essentially the same as for Random Quicksort. Again, y_i and y_j are compared if and only if either y_i or y_j is the first pivot selected by Quicksort from the set Y^{ij}. Since the order of elements in each sublist is the same as in the original list, the first pivot selected from the set Y^{ij} is just the first element from Y^{ij} in the input list, and since all possible permutations of the input values are equally likely, every element in Y^{ij} is equally likely to be first. From this, we can again use linearity of expectations in the same way as in the analysis of Random Quicksort to obtain the same expression for E[X].
2.6. Exercises
Exercise 2.1: Suppose we roll a fair k-sided die with the numbers 1 through k on the
die’s faces. If X is the number that appears, what is E[X]?
Exercise 2.2: A monkey types on a 26-letter keyboard that has lowercase letters only.
Each letter is chosen independently and uniformly at random from the alphabet. If the
monkey types 1,000,000 letters, what is the expected number of times the sequence
“proof” appears?
Exercise 2.3: Give examples of functions f and random variables X where E[ f (X )] <
f (E[X]), E[ f (X )] = f (E[X]), and E[ f (X )] > f (E[X]).
Exercise 2.4: Prove that E[X^k] ≥ (E[X])^k for any even integer k ≥ 1.
Exercise 2.5: If X is a B(n, 1/2) random variable with n ≥ 1, show that the probability
that X is even is 1/2.
Exercise 2.6: Suppose that we independently roll two standard six-sided dice. Let X_1 be the number that shows on the first die, X_2 the number on the second die, and X the sum of the numbers on the two dice.
Exercise 2.7: Let X and Y be independent geometric random variables, where X has parameter p and Y has parameter q.
You may find it helpful to keep in mind the memoryless property of geometric random variables.
Exercise 2.8: (a) Alice and Bob decide to have children until either they have their first girl or they have k ≥ 1 children. Assume that each child is a boy or girl independently with probability 1/2 and that there are no multiple births. What is the expected number of female children that they have? What is the expected number of male children that they have?
(b) Suppose Alice and Bob simply decide to keep having children until they have their first girl. Assuming that this is possible, what is the expected number of boys that they have?
Exercise 2.9: (a) Suppose that we roll twice a fair k-sided die with the numbers 1
through k on the die’s faces, obtaining values X1 and X2 . What is E[max(X1 , X2 )]?
What is E[min(X1 , X2 )]?
Exercise 2.10: (a) Show by induction that if f : R → R is convex then, for any x_1, x_2, ..., x_n and nonnegative λ_1, λ_2, ..., λ_n with \sum_{i=1}^{n} λ_i = 1,

f\left(\sum_{i=1}^{n} λ_i x_i\right) ≤ \sum_{i=1}^{n} λ_i f(x_i). \quad (2.2)
Exercise 2.12: We draw cards uniformly at random with replacement from a deck of
n cards. What is the expected number of cards we must draw until we have seen all n
cards in the deck? If we draw 2n cards, what is the expected number of cards in the
deck that are not chosen at all? Chosen exactly once?
Exercise 2.13: (a) Consider the following variation of the coupon collector’s problem.
Each box of cereal contains one of 2n different coupons. The coupons are organized
into n pairs, so that coupons 1 and 2 are a pair, coupons 3 and 4 are a pair, and so on.
Once you obtain one coupon from every pair, you can obtain a prize. Assuming that
the coupon in each box is chosen independently and uniformly at random from the 2n
possibilities, what is the expected number of boxes you must buy before you can claim
the prize?
(b) Generalize the result of the problem in part (a) for the case where there are kn
different coupons, organized into n disjoint sets of k coupons, so that you need one
coupon from every set.
Exercise 2.14: The geometric distribution arises as the distribution of the number of times we flip a coin until it comes up heads. Consider now the distribution of the number of flips X until the kth head appears, where each coin flip comes up heads independently with probability p. Prove that this distribution is given by

\Pr(X = n) = \binom{n − 1}{k − 1} p^k (1 − p)^{n−k}

for n ≥ k. (This is known as the negative binomial distribution.)
Exercise 2.15: For a coin that comes up heads independently with probability p on each flip, what is the expected number of flips until the kth head?
(a) Let n be a power of 2. Show that the expected number of streaks of length \log_2 n + 1 is 1 − o(1).
(b) Show that, for sufficiently large n, the probability that there is no streak of length at least ⌊\log_2 n − 2 \log_2 \log_2 n⌋ is less than 1/n. (Hint: Break the sequence of flips up into disjoint blocks of ⌊\log_2 n − 2 \log_2 \log_2 n⌋ consecutive flips, and use that the event that one block is a streak is independent of the event that any other block is a streak.)
Exercise 2.17: Recall the recursive spawning process described in Section 2.3. Sup-
pose that each call to process S recursively spawns new copies of the process S, where
the number of new copies is 2 with probability p and 0 with probability 1 − p. If Yi
denotes the number of copies of S in the ith generation, determine E[Yi ]. For what
values of p is the expected total number of copies bounded?
Exercise 2.18: The following approach is often called reservoir sampling. Suppose we have a sequence of items passing by one at a time. We want to maintain a sample of one item with the property that it is uniformly distributed over all the items that we have seen at each step. Moreover, we want to accomplish this without knowing the total number of items in advance or storing all of the items that we see.
Consider the following algorithm, which stores just one item in memory at all times. When the first item appears, it is stored in the memory. When the kth item appears, it replaces the item in memory with probability 1/k. Explain why this algorithm solves the problem.
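A direct implementation sketch (ours) of the scheme just described, together with a small frequency check that the stored item is indeed uniform over the items seen:

import random
from collections import Counter

def reservoir_sample(stream):
    sample = None
    for k, item in enumerate(stream, start=1):
        if random.random() < 1.0 / k:   # k-th item replaces memory with probability 1/k
            sample = item
    return sample

counts = Counter(reservoir_sample(range(10)) for _ in range(100_000))
print(sorted(counts.items()))           # each of the 10 items appears ~10% of the time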
Exercise 2.19: Suppose that we modify the reservoir sampling algorithm of Exer-
cise 2.18 so that, when the kth item appears, it replaces the item in memory with prob-
ability 1/2. Describe the distribution of the item in memory.
Exercise 2.23: Linear insertion sort can sort an array of numbers in place. The first and second numbers are compared; if they are out of order, they are swapped so that they are in sorted order. The third number is then placed in the appropriate place in the sorted order. It is first compared with the second; if it is not in the proper order, it is swapped and compared with the first. Iteratively, the kth number is handled by swapping it downward until the first k numbers are in sorted order. Determine the expected number of swaps that need to be made with a linear insertion sort when the input is a random permutation of n distinct numbers.
Exercise 2.24: We roll a standard fair die over and over. What is the expected number of rolls until the first pair of consecutive sixes appears? (Hint: The answer is not 36.)
Exercise 2.25: A blood test is being performed on n individuals. Each person can
be tested separately, but this is expensive. Pooling can decrease the cost. The blood
samples of k people can be pooled and analyzed together. If the test is negative, this
one test suffices for the group of k individuals. If the test is positive, then each of the k
persons must be tested separately and thus k + 1 total tests are required for the k people.
Suppose that we create n/k disjoint groups of k people (where k divides n) and use the
pooling method. Assume that each person has a positive result on the test independently
with probability p.
(a) What is the probability that the test for a pooled sample of k people will be positive?
(b) What is the expected number of tests necessary?
(c) Describe how to find the best value of k.
(d) Give an inequality that shows for what values of p pooling is better than just testing
every individual.
the cycles could be self-loops. What is the expected number of cycles in a random
permutation of n numbers?
Exercise 2.28: Consider a simplified version of roulette in which you wager x dollars on either red or black. The wheel is spun, and you receive your original wager plus another x dollars if the ball lands on your color; if the ball doesn't land on your color, you lose your wager. Each color occurs independently with probability 1/2. (This is a simplification because real roulette wheels have one or two spaces that are neither red nor black, so the probability of guessing the correct color is actually less than 1/2.)
The following gambling strategy is a popular one. On the first spin, bet 1 dollar. If you lose, bet 2 dollars on the next spin. In general, if you have lost on the first k − 1 spins, bet 2^{k−1} dollars on the kth spin. Argue that by following this strategy you will eventually win a dollar. Now let X be the random variable that measures your maximum loss before winning (i.e., the amount of money you have lost before the play on which you win). Show that E[X] is unbounded. What does it imply about the practicality of this strategy?
Exercise 2.30: In the roulette problem of Exercise 2.28, we found that with probability
1 you eventually win a dollar. Let X j be the amount you win on the jth bet. (This
might be 0 if you have already won a previous bet.) Determine E[X j ] and show that,
by applying the linearity of expectations, you ind your expected winnings are 0. Does
the linearity of expectations hold in this case? (Compare with Exercise 2.29.)
Exercise 2.31: A variation on the roulette problem of Exercise 2.28 is the following. We repeatedly flip a fair coin. You pay j dollars to play the game. If the first head comes up on the kth flip, you win 2^k/k dollars. What are your expected winnings? How much would you be willing to pay to play the game?
Exercise 2.32: You need a new staff assistant, and you have n people to interview. You
want to hire the best candidate for the position. When you interview a candidate, you
can give them a score, with the highest score being the best and no ties being possible.
You interview the candidates one by one. Because of your company’s hiring practices,
after you interview the kth candidate, you either offer the candidate the job before the
next interview or you forever lose the chance to hire that candidate. We suppose the
candidates are interviewed in a random order, chosen uniformly at random from all n!
possible orderings.
We consider the following strategy. First, interview m candidates but reject them all; these candidates give you an idea of how strong the field is. After the mth candidate, hire the first candidate you interview who is better than all of the previous candidates you have interviewed.
(a) Let E be the event that we hire the best assistant, and let E_i be the event that the ith candidate is the best and we hire him. Determine \Pr(E_i), and show that

\Pr(E) = \frac{m}{n} \sum_{j=m+1}^{n} \frac{1}{j − 1}.

(b) Bound \sum_{j=m+1}^{n} \frac{1}{j − 1} to obtain

\frac{m}{n}(\ln n − \ln m) ≤ \Pr(E) ≤ \frac{m}{n}(\ln(n − 1) − \ln(m − 1)).

(c) Show that m(\ln n − \ln m)/n is maximized when m = n/e, and explain why this means \Pr(E) ≥ 1/e for this choice of m.
chapter three
Moments and Deviations
In this and the next chapter we examine techniques for bounding the tail distribution,
the probability that a random variable assumes values that are far from its expectation.
In the context of analysis of algorithms, these bounds are the major tool for estimating
the failure probability of algorithms and for establishing high probability bounds on
their run-time. In this chapter we study Markov’s and Chebyshev’s inequalities and
demonstrate their application in an analysis of a randomized median algorithm. The
next chapter is devoted to the Chernoff bound and its applications.
Markov’s inequality, formulated in the next theorem, is often too weak to yield useful
results, but it is a fundamental tool in developing more sophisticated bounds.
Theorem 3.1 [Markov’s Inequality]: Let X be a random variable that assumes only
nonnegative values. Then, for all a > 0,
E[X]
Pr(X ≥ a) ≤ .
a
Proof: For a > 0, let
1 if X ≥ a,
I=
0 otherwise,
and note that, since X ≥ 0,
X
I≤ . (3.1)
a
Because I is a 0–1 random variable, E[I] = Pr(I = 1) = Pr(X ≥ a).
Taking expectations in (3.1) thus yields
X E[X]
Pr(X ≥ a) = E[I] ≤ E = .
a a
For example, suppose we use Markov's inequality to bound the probability of obtaining more than 3n/4 heads in a sequence of n fair coin flips. Let

X_i = \begin{cases} 1 & \text{if the ith coin flip is heads,} \\ 0 & \text{otherwise,} \end{cases}

and let X = \sum_{i=1}^{n} X_i denote the number of heads in the n coin flips. Since E[X_i] = \Pr(X_i = 1) = 1/2, it follows that E[X] = \sum_{i=1}^{n} E[X_i] = n/2. Applying Markov's inequality, we obtain

\Pr(X ≥ 3n/4) ≤ \frac{E[X]}{3n/4} = \frac{n/2}{3n/4} = \frac{2}{3}.
Markov’s inequality gives the best tail bound possible when all we know is the expec-
tation of the random variable and that the variable is nonnegative (see Exercise 3.16). It
can be improved upon if more information about the distribution of the random variable
is available.
3.2. Variance and Moments of a Random Variable

Additional information about a random variable is often expressed in terms of its moments. The expectation is also called the first moment of a random variable. More generally, we define the moments of a random variable as follows.
Definition 3.1: The kth moment of a random variable X is E[X^k].
A significantly stronger tail bound is obtained when the second moment (E[X^2]) is also available. Given the first and second moments, one can compute the variance and standard deviation of the random variable. Intuitively, the variance and standard deviation offer a measure of how far the random variable is likely to be from its expectation.
Definition 3.2: The variance of a random variable X is defined as

Var[X] = E[(X − E[X])^2] = E[X^2] − (E[X])^2.

The standard deviation of a random variable X is

σ[X] = \sqrt{Var[X]}.
The two forms of the variance in the definition are equivalent, as is easily seen by using the linearity of expectations. Keeping in mind that E[X] is a constant, we have

E[(X − E[X])^2] = E[X^2 − 2XE[X] + (E[X])^2]
= E[X^2] − 2E[XE[X]] + (E[X])^2
= E[X^2] − 2E[X]E[X] + (E[X])^2
= E[X^2] − (E[X])^2.
If a random variable X is constant – so that it always assumes the same value – then its variance and standard deviation are both zero. More generally, if a random variable X takes on the value kE[X] with probability 1/k and the value 0 with probability 1 − 1/k, then its variance is (k − 1)(E[X])^2 and its standard deviation is \sqrt{k − 1}\,E[X]. These cases help demonstrate the intuition that the variance (and standard deviation) of a random variable are small when the random variable assumes values close to its expectation and are large when it assumes values far from its expectation.
We have previously seen that the expectation of the sum of two random variables is equal to the sum of their individual expectations. It is natural to ask whether the same is true for the variance. We find that the variance of the sum of two random variables has an extra term, called the covariance.
Definition 3.3: The covariance of two random variables X and Y is
Cov(X, Y ) = E[(X − E[X])(Y − E[Y ])].
Theorem 3.2: For any two random variables X and Y,
Var[X + Y ] = Var[X] + Var[Y ] + 2 Cov(X, Y ).
Proof:
Var[X + Y ] = E[(X + Y − E[X + Y ])2 ]
= E[(X + Y − E[X] − E[Y ])2 ]
= E[(X − E[X])2 + (Y − E[Y ])2 + 2(X − E[X])(Y − E[Y ])]
= E[(X − E[X])2 ] + E[(Y − E[Y ])2 ] + 2E[(X − E[X])(Y − E[Y ])]
= Var[X] + Var[Y ] + 2 Cov(X, Y ).
The extension of this theorem to a sum of any finite number of random variables is proven in Exercise 3.14.
The variance of the sum of two (or any finite number of) random variables does equal the sum of the variances when the random variables are independent. Equivalently, if X and Y are independent random variables, then their covariance is equal to zero. To prove this result, we first need a result about the expectation of the product of independent random variables.
Theorem 3.3: If X and Y are two independent random variables, then
E[X · Y ] = E[X] · E[Y ].
Proof: In the summations that follow, let i take on all values in the range of X, and let j take on all values in the range of Y:

E[X · Y] = \sum_{i} \sum_{j} (i · j) \Pr((X = i) \cap (Y = j))
= \sum_{i} \sum_{j} (i · j) \Pr(X = i) \Pr(Y = j)
= \left(\sum_{i} i \Pr(X = i)\right) \left(\sum_{j} j \Pr(Y = j)\right)
= E[X] · E[Y],

where the independence of X and Y is used in the second line.
Unlike the linearity of expectations, which holds for the sum of random variables whether they are independent or not, the result that the expectation of the product of two (or more) random variables is equal to the product of their expectations does not necessarily hold if the random variables are dependent. To see this, let Y and Z each correspond to fair coin flips, with Y and Z taking on the value 0 if the flip is heads and 1 if the flip is tails. Then E[Y] = E[Z] = 1/2. If the two flips are independent, then Y · Z is 1 with probability 1/4 and 0 otherwise, so indeed E[Y · Z] = E[Y] · E[Z]. Suppose instead that the coin flips are dependent in the following way: the coins are tied together, so Y and Z either both come up heads or both come up tails together. Each coin considered individually is still a fair coin flip, but now Y · Z is 1 with probability 1/2 and so E[Y · Z] ≠ E[Y] · E[Z].
Corollary 3.4: If X and Y are independent random variables, then

Cov(X, Y) = 0

and

Var[X + Y] = Var[X] + Var[Y].

Proof:

Cov(X, Y) = E[(X − E[X])(Y − E[Y])] = E[X − E[X]] · E[Y − E[Y]] = 0.

In the second equation we have used the fact that, since X and Y are independent, so are X − E[X] and Y − E[Y], and hence Theorem 3.3 applies. For the last equation we use the fact that, for any random variable Z, E[Z − E[Z]] = E[Z] − E[Z] = 0. The identity Var[X + Y] = Var[X] + Var[Y] then follows from Theorem 3.2.
By induction we can extend the result of Corollary 3.4 to show that the variance of the sum of any finite number of mutually independent random variables equals the sum of their variances.
Theorem 3.5: Let X_1, X_2, ..., X_n be mutually independent random variables. Then

Var\left[\sum_{i=1}^{n} X_i\right] = \sum_{i=1}^{n} Var[X_i].
3.3. Chebyshev's Inequality
Using the expectation and the variance of the random variable, one can derive a significantly stronger tail bound known as Chebyshev's inequality.
Theorem 3.6 [Chebyshev's Inequality]: For any a > 0,

\Pr(|X − E[X]| ≥ a) ≤ \frac{Var[X]}{a^2}.
In fact, we can do slightly better. Chebyshev's inequality yields that 4/n is actually a bound on the probability that X is either smaller than n/4 or larger than 3n/4, so by symmetry the probability that X is greater than 3n/4 is actually 2/n. Chebyshev's inequality gives a significantly better bound than Markov's inequality for large n.
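A small numerical comparison (ours) of the exact tail Pr(X ≥ 3n/4) for X the number of heads in n fair flips against the Markov bound (2/3) and the 2/n bound discussed above:

import math

def exact_tail(n):
    # Pr(X >= 3n/4) for X ~ Binomial(n, 1/2), n divisible by 4
    return sum(math.comb(n, k) for k in range(3 * n // 4, n + 1)) / 2 ** n

for n in (16, 64, 256):
    print(n, exact_tail(n), 2 / 3, 2 / n)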
Let us also apply these inequalities to the coupon collector's problem. Since the expected number of boxes X needed to obtain all n coupons satisfies E[X] = nH_n, Markov's inequality yields

\Pr(X ≥ 2nH_n) ≤ \frac{1}{2}.
To use Chebyshev’s inequality, we need to ind the variance of X. Recall again from
Section 2.4.1 that X = ni=1 Xi , where the Xi are geometric random variables with
parameter (n − i + 1)/n. In this case, the Xi are independent because the time to col-
lect the ith coupon does not depend on how long it took to collect the previous i − 1
coupons. Hence
n
n
Var[X] = Var Xi = Var[Xi ],
i=1 i=1
Finally, since the variance of a geometric random variable with parameter p_i is at most 1/p_i^2, we reach

Var[X] ≤ \sum_{i=1}^{n} \left(\frac{n}{n − i + 1}\right)^2 = n^2 \sum_{j=1}^{n} \frac{1}{j^2} ≤ \frac{\pi^2 n^2}{6},

and Chebyshev's inequality then gives

\Pr(|X − nH_n| ≥ nH_n) ≤ \frac{\pi^2 n^2/6}{(nH_n)^2} = O\left(\frac{1}{\ln^2 n}\right).

For a geometric random variable Y, E[Y^2] can also be derived using conditional expectations. We use that Y corresponds to the number of flips until the first heads, where each flip is heads with probability p. Let X = 0 if the first flip is tails and X = 1 if the first flip is heads. By Lemma 2.5,

E[Y^2] = \Pr(X = 0) E[Y^2 | X = 0] + \Pr(X = 1) E[Y^2 | X = 1] = (1 − p) E[(Y + 1)^2] + p,

where the memoryless property gives E[Y^2 | X = 0] = E[(Y + 1)^2]. Expanding and using E[Y] = 1/p then yields E[Y^2] = (2 − p)/p^2.
3.4. Median and Mean
Let X be a random variable. The median of X is defined to be any value m such that
\Pr(X ≤ m) ≥ 1/2 and \Pr(X ≥ m) ≥ 1/2.
For example, for a discrete random variable that is uniformly distributed over an odd number of distinct, sorted values x_1, x_2, ..., x_{2k+1}, the median is the middle value x_{k+1}. For a discrete random variable that is uniformly distributed over an even number of values x_1, x_2, ..., x_{2k}, any value in the range (x_k, x_{k+1}) would be a median.
The expectation E[X] and the median are usually different numbers. For distribu-
tions with a unique median that are symmetric around either the mean or median, the
median is equal to the mean. For some distributions, the median can be easier to work
with than the mean, and in some settings it is a more natural quantity to work with.
The following theorem gives an alternate characterization of the mean and median:
Theorem 3.9: For any random variable X with finite expectation E[X] and finite median m,
1. the expectation E[X] is the value of c that minimizes the expression E[(X − c)^2], and
2. the median m is a value of c that minimizes the expression E[|X − c|].
Proof: For the first result, note that

E[(X − c)^2] = E[X^2] − 2cE[X] + c^2,

and taking the derivative with respect to c shows that c = E[X] yields the minimum.
For the second result, we want to show that for any value c that is not a median and for any median m, we have E[|X − c|] > E[|X − m|], or equivalently that E[|X − c| − |X − m|] > 0. In that case the value of c that minimizes E[|X − c|] will be a median. (In fact, as a by-product, we show that for any two medians m and m′, E[|X − m|] = E[|X − m′|].)
Let us take the case where c > m for a median m, and c is not a median, so \Pr(X ≥ c) < 1/2. A similar argument holds for any value of c such that \Pr(X ≤ c) < 1/2.
For x ≥ c, |x − c| − |x − m| = m − c. For m < x < c, |x − c| − |x − m| = c + m − 2x > m − c. Finally, for x ≤ m, |x − c| − |x − m| = c − m. Combining the three cases, we have

E[|X − c| − |X − m|]
= \Pr(X ≥ c)(m − c) + \sum_{x:\,m<x<c} \Pr(X = x)(c + m − 2x) + \Pr(X ≤ m)(c − m).

If \Pr(m < X < c) = 0, then

E[|X − c| − |X − m|] = \Pr(X ≥ c)(m − c) + \Pr(X ≤ m)(c − m)
> \frac{1}{2}(m − c) + \frac{1}{2}(c − m) = 0,

where the inequality comes from \Pr(X ≥ c) < 1/2 and m < c. (Note here that if c were another median, so \Pr(X ≥ c) = 1/2, we would obtain E[|X − c| − |X − m|] = 0, as stated earlier.)
If \Pr(m < X < c) > 0, then

E[|X − c| − |X − m|]
= \Pr(X ≥ c)(m − c) + \sum_{x:\,m<x<c} \Pr(X = x)(c + m − 2x) + \Pr(X ≤ m)(c − m)
> \Pr(X > m)(m − c) + \Pr(X ≤ m)(c − m)
≥ \frac{1}{2}(m − c) + \frac{1}{2}(c − m)
= 0,

where here the first inequality comes from c + m − 2x > m − c for any value of x with non-zero probability in the range m < x < c. (This case cannot hold if c and m are both medians, as in this case we cannot have \Pr(X ≥ m) = 1/2 and \Pr(X ≥ c) = 1/2.)
Interestingly, for well-behaved random variables, the median and the mean cannot deviate from each other too much.
Theorem 3.10: If X is a random variable with finite standard deviation σ, expectation μ, and median m, then

|μ − m| ≤ σ.

Proof: The proof follows from the following sequence:

|μ − m| = |E[X] − m| = |E[X − m]| ≤ E[|X − m|] ≤ E[|X − μ|] ≤ \sqrt{E[(X − μ)^2]} = σ.

Here the first inequality follows from Jensen's inequality, the second inequality follows from the result that the median minimizes E[|X − c|], and the third inequality is again Jensen's inequality.
In Exercise 3.19, we suggest another way of proving this result.
3.5. Application: A Randomized Algorithm for Computing the Median

Given a set S of n elements drawn from a totally ordered universe, the median of S is an element m of S such that at least ⌊n/2⌋ elements in S are less than or equal to m and at least ⌊n/2⌋ + 1 elements in S are greater than or equal to m. If the elements in S are distinct, then m is the (⌈n/2⌉)th element in the sorted order of S. Note that the median of a set is similar to but slightly different from the median of a random variable defined in Section 3.4.
The median can be easily found deterministically in O(n \log n) steps by sorting, and there is a relatively complex deterministic algorithm that computes the median in O(n) time. Here we analyze a randomized linear time algorithm that is significantly simpler than the deterministic one and yields a smaller constant factor in the linear running time. To simplify the presentation, we assume that n is odd and that the elements in the input set S are distinct. The algorithm and analysis can be easily modified to include the case of a multi-set S (see Exercise 3.24) and a set with an even number of elements.
The algorithm samples, with replacement, a multi-set R of n^{3/4} elements of S and uses the sample to identify two elements d and u that, with high probability, bracket the median m. With this choice, the set C includes all the elements of S that are between the 2\sqrt{n} sample points surrounding the median of R. The analysis will clarify that the choice of the size of R and the choices for d and u are tailored to guarantee both that (a) the set C is large enough to include m with high probability and (b) the set C is sufficiently small so that it can be sorted in sublinear time with high probability.
A formal description of the procedure is presented as Algorithm 3.1. In what follows, for convenience we treat \sqrt{n} and n^{3/4} as integers.
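Since the formal algorithm box does not reproduce well here, the following Python sketch shows one reading of Algorithm 3.1 consistent with the surrounding description; the exact boundary conventions (how ranks round) are our own choices.

import math
import random

def randomized_median(S):
    n = len(S)                                       # n odd, elements distinct
    r = int(round(n ** 0.75))                        # sample size n^(3/4)
    R = sorted(random.choice(S) for _ in range(r))   # sample with replacement
    off = int(math.sqrt(n))
    d = R[max(0, r // 2 - off)]            # roughly the (n^(3/4)/2 - sqrt(n))-th smallest of R
    u = R[min(r - 1, r // 2 + off)]        # roughly the (n^(3/4)/2 + sqrt(n))-th smallest of R
    C = [x for x in S if d <= x <= u]
    ld = sum(1 for x in S if x < d)
    lu = sum(1 for x in S if x > u)
    if ld > n / 2 or lu > n / 2:           # step 6: the median was not bracketed
        return None                        # FAIL
    if len(C) > 4 * n ** 0.75:             # step 7: C too large to sort in sublinear time
        return None                        # FAIL
    C.sort()
    return C[n // 2 - ld]                  # the (floor(n/2) - ld + 1)-th element of C

S = random.sample(range(10 ** 6), 10001)
result = None
while result is None:                      # rerun on FAIL (Las Vegas style; see below)
    result = randomized_median(S)
print(result, sorted(S)[len(S) // 2])      # matches the true median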
The interesting part of the analysis that remains after Theorem 3.11 is bounding the probability that the algorithm outputs FAIL. We bound this probability by identifying three “bad” events such that, if none of these bad events occurs, the algorithm does not fail. In a series of lemmas, we then bound the probability of each of these events and show that the sum of these probabilities is only O(n^{−1/4}).
Consider the following three events:

E_1: Y_1 = |{r ∈ R | r ≤ m}| < \frac{1}{2}n^{3/4} − \sqrt{n};
E_2: Y_2 = |{r ∈ R | r ≥ m}| < \frac{1}{2}n^{3/4} − \sqrt{n};
E_3: |C| > 4n^{3/4}.
Lemma 3.12: The randomized median algorithm fails if and only if at least one of E_1, E_2, or E_3 occurs.
Proof: Failure in step 7 of the algorithm is equivalent to the event E_3. Failure in step 6 of the algorithm occurs if and only if ℓ_d > n/2 or ℓ_u > n/2. But for ℓ_d > n/2, the (\frac{1}{2}n^{3/4} − \sqrt{n})th smallest element of R must be larger than m; this is equivalent to the event E_1. Similarly, ℓ_u > n/2 is equivalent to the event E_2.
Lemma 3.13:

\Pr(E_1) ≤ \frac{1}{4}n^{−1/4}.

Proof: Define a random variable X_i by

X_i = \begin{cases} 1 & \text{if the ith sample is less than or equal to the median,} \\ 0 & \text{otherwise.} \end{cases}
The X_i are independent, since the sampling is done with replacement. Because there are (n − 1)/2 + 1 elements in S that are less than or equal to the median, the probability that a randomly chosen element of S is less than or equal to the median can be written as

\Pr(X_i = 1) = \frac{(n − 1)/2 + 1}{n} = \frac{1}{2} + \frac{1}{2n}.

The event E_1 is equivalent to

Y_1 = \sum_{i=1}^{n^{3/4}} X_i < \frac{1}{2}n^{3/4} − \sqrt{n}.
Since Y_1 is the sum of Bernoulli trials, it is a binomial random variable with parameters n^{3/4} and 1/2 + 1/(2n). Hence, using the result of Section 3.2.1 yields

Var[Y_1] = n^{3/4}\left(\frac{1}{2} + \frac{1}{2n}\right)\left(\frac{1}{2} − \frac{1}{2n}\right) = \frac{1}{4}n^{3/4} − \frac{1}{4n^{5/4}} < \frac{1}{4}n^{3/4}.

Since E[Y_1] = \frac{1}{2}n^{3/4} + \frac{1}{2}n^{−1/4}, the event E_1 implies |Y_1 − E[Y_1]| > \sqrt{n}, and applying Chebyshev's inequality yields

\Pr(E_1) ≤ \Pr(|Y_1 − E[Y_1]| > \sqrt{n}) ≤ \frac{Var[Y_1]}{n} < \frac{1}{4}n^{−1/4}.
and

\Pr(E_3) ≤ \Pr(E_{3,1}) + \Pr(E_{3,2}) ≤ \frac{1}{2}n^{−1/4}.
Combining the bounds just derived, we conclude that the probability that the algorithm outputs FAIL is bounded by

\Pr(E_1) + \Pr(E_2) + \Pr(E_3) ≤ n^{−1/4}.
This yields the following theorem.
Theorem 3.15: The probability that the randomized median algorithm fails is bounded by n^{−1/4}.
By repeating Algorithm 3.1 until it succeeds in finding the median, we can obtain an iterative algorithm that never fails but has a random running time. The samples taken in successive runs of the algorithm are independent, so the success of each run is independent of other runs, and hence the number of runs until success is achieved is a geometric random variable. As an exercise, you may wish to show that this variation of the algorithm (that runs until it finds a solution) still has linear expected running time.
Randomized algorithms that may fail or return an incorrect answer are called Monte
Carlo algorithms. The running time of a Monte Carlo algorithm often does not depend
on the random choices made. For example, we showed in Theorem 3.11 that the ran-
domized median algorithm always terminates in linear time, regardless of its random
choices.
A randomized algorithm that always returns the right answer is called a Las Vegas
algorithm. We have seen that the Monte Carlo randomized algorithm for the median can
be turned into a Las Vegas algorithm by running it repeatedly until it succeeds. Again,
turning it into a Las Vegas algorithm means the running time is variable, although the
expected running time is still linear.
3.6. Exercises
Exercise 3.1: Let X be a number chosen uniformly at random from [1, n]. Find Var[X].
Exercise 3.2: Let X be a number chosen uniformly at random from [−k, k]. Find
Var[X].
Exercise 3.3: Suppose that we roll a standard fair die 100 times. Let X be the sum
of the numbers that appear over the 100 rolls. Use Chebyshev’s inequality to bound
Pr(|X − 350| ≥ 50).
Exercise 3.4: Prove that, for any real number c and any discrete random variable X,
Var[cX] = c2 Var[X].
Exercise 3.5: Given any two random variables X and Y, by the linearity of expecta-
tions we have E[X − Y ] = E[X] − E[Y ]. Prove that, when X and Y are independent,
Var[X − Y ] = Var[X] + Var[Y ].
Exercise 3.6: For a coin that comes up heads independently with probability p on each flip, what is the variance in the number of flips until the kth head appears?
Exercise 3.7: A simple model of the stock market suggests that, each day, a stock with price q will increase by a factor r > 1 to qr with probability p and will fall to q/r with probability 1 − p. Assuming we start with a stock with price 1, find a formula for the expected value and the variance of the price of the stock after d days.
Exercise 3.8: Suppose that we have an algorithm that takes as input a string of n
bits. We are told that the expected running time is O(n2 ) if the input bits are chosen
independently and uniformly at random. What can Markov’s inequality tell us about
the worst-case running time of this algorithm on inputs of size n?
Exercise 3.9: (a) Let X be the sum of Bernoulli random variables, X = \sum_{i=1}^{n} X_i. The X_i do not need to be independent. Show that

E[X^2] = \sum_{i=1}^{n} \Pr(X_i = 1) E[X | X_i = 1]. \quad (3.5)
Exercise 3.10: For a geometric random variable X, find E[X^3] and E[X^4]. (Hint: Use Lemma 2.5.)
Exercise 3.11: Recall the Bubblesort algorithm of Exercise 2.22. Determine the vari-
ance of the number of inversions that need to be corrected by Bubblesort.
Exercise 3.12: Find an example of a random variable with finite expectation and unbounded variance. Give a clear argument showing that your choice has these properties.
Exercise 3.13: Find an example of a random variable with finite jth moments for 1 ≤ j ≤ k but an unbounded (k + 1)th moment. Give a clear argument showing that your choice has these properties.
Exercise 3.14: Prove that, for any finite collection of random variables X_1, X_2, ..., X_n,

Var\left[\sum_{i=1}^{n} X_i\right] = \sum_{i=1}^{n} Var[X_i] + 2\sum_{i=1}^{n}\sum_{j>i} Cov(X_i, X_j).

Exercise 3.15: Let the random variable X be representable as a sum of random variables X = \sum_{i=1}^{n} X_i. Show that, if E[X_i X_j] = E[X_i]E[X_j] for every pair of i and j with 1 ≤ i < j ≤ n, then Var[X] = \sum_{i=1}^{n} Var[X_i].
Exercise 3.16: This problem shows that Markov's inequality is as tight as it could possibly be. Given a positive integer k, describe a random variable X that assumes only nonnegative values such that

\Pr(X ≥ kE[X]) = \frac{1}{k}.
Exercise 3.17: Can you give an example (similar to that for Markov’s inequality in
Exercise 3.16) that shows that Chebyshev’s inequality is tight? If not, explain why not.
Exercise 3.18: Show that, for a random variable X with standard deviation σ[X] and any positive real number t:

(a) \Pr(X − E[X] ≥ tσ[X]) ≤ \frac{1}{1 + t^2};
(b) \Pr(|X − E[X]| ≥ tσ[X]) ≤ \frac{2}{1 + t^2}.
Exercise 3.19: Using Exercise 3.18, show that |μ − m| ≤ σ for a random variable with finite standard deviation σ, expectation μ, and median m.
Exercise 3.21: (a) Chebyshev’s inequality uses the variance of a random variable to
bound its deviation from its expectation. We can also use higher moments. Suppose
that we have a random variable X and an even integer k for which E[(X − E[X])k ] is
inite. Show that
1
Pr |X − E[X]| > t k E[(X − E[X])k ] ≤ k .
t
(b) Why is it dificult to derive a similar inequality when k is odd?
Exercise 3.22: A fixed point of a permutation π : [1, n] → [1, n] is a value for which π(x) = x. Find the variance in the number of fixed points of a permutation chosen uniformly at random from all permutations. (Hint: Let X_i be 1 if π(i) = i, so that \sum_{i=1}^{n} X_i is the number of fixed points. You cannot use linearity to find Var[\sum_{i=1}^{n} X_i], but you can calculate it directly.)
Exercise 3.23: Suppose that we flip a fair coin n times to obtain n random bits. Consider all m = \binom{n}{2} pairs of these bits in some order. Let Y_i be the exclusive-or of the ith pair of bits, and let Y = \sum_{i=1}^{m} Y_i be the number of Y_i that equal 1.
(a) Show that each Y_i is 0 with probability 1/2 and 1 with probability 1/2.
(b) Show that the Y_i are not mutually independent.
(c) Show that the Y_i satisfy the property that E[Y_i Y_j] = E[Y_i]E[Y_j].
(d) Using Exercise 3.15, find Var[Y].
(e) Using Chebyshev's inequality, prove a bound on \Pr(|Y − E[Y]| ≥ n).
Exercise 3.24: Generalize the median-finding algorithm for the case where the input S is a multi-set. Bound the error probability and the running time of the resulting algorithm.
Exercise 3.25: Generalize the median-finding algorithm to find the kth largest item in a set of n items for any given value of k. Prove that your resulting algorithm is correct, and bound its running time.
Exercise 3.26: The weak law of large numbers states that, if X_1, X_2, X_3, ... are independent and identically distributed random variables with mean μ and standard deviation σ, then for any constant ε > 0 we have

\lim_{n→\infty} \Pr\left(\left|\frac{X_1 + X_2 + ··· + X_n}{n} − μ\right| > ε\right) = 0.

Use Chebyshev's inequality to prove the weak law of large numbers.
chapter four
Chernoff and Hoeffding Bounds
This chapter introduces large deviation bounds commonly called Chernoff and Hoeffding bounds. These bounds are extremely powerful, giving exponentially decreasing bounds on the tail distribution. These bounds are derived by applying Markov's inequality to the moment generating function of a random variable. We start this chapter by defining and discussing the properties of the moment generating function. We then derive Chernoff bounds for the binomial distribution and other related distributions, using a set balancing problem as an example, and the Hoeffding bound for sums of bounded random variables. To demonstrate the power of Chernoff bounds, we apply them to the analysis of randomized packet routing schemes on the hypercube and butterfly networks.
4.1. Moment Generating Functions

Before developing Chernoff bounds, we discuss the special role of the moment generating function E[e^{tX}].
Definition 4.1: The moment generating function of a random variable X is

M_X(t) = E[e^{tX}].

We are mainly interested in the existence and properties of this function in the neighborhood of zero.
The function MX (t ) captures all of the moments of X.
Theorem 4.1: Let X be a random variable with moment generating function M_X(t). Under the assumption that exchanging the expectation and differentiation operands is legitimate, for all n ≥ 1 we then have

E[X^n] = M_X^{(n)}(0),

where M_X^{(n)}(0) is the nth derivative of M_X(t) evaluated at t = 0.
Proof: Assuming that we can exchange the expectation and differentiation operands, then

M_X^{(n)}(t) = E[X^n e^{tX}].

Computed at t = 0, this expression yields

M_X^{(n)}(0) = E[X^n].

The assumption that expectation and differentiation operands can be exchanged holds whenever the moment generating function exists in a neighborhood of zero, which will be the case for all distributions considered in this book.
As a specific example, consider a geometric random variable X with parameter p, as in Definition 2.8. Then, for t < −\ln(1 − p),

M_X(t) = E[e^{tX}] = \sum_{k=1}^{\infty} (1 − p)^{k−1} p\, e^{tk}
= \frac{p}{1 − p} \sum_{k=1}^{\infty} ((1 − p)e^t)^k
= \frac{p}{1 − p}\left((1 − (1 − p)e^t)^{−1} − 1\right).

It follows that

M_X^{(1)}(t) = p(1 − (1 − p)e^t)^{−2} e^t and
M_X^{(2)}(t) = 2p(1 − p)(1 − (1 − p)e^t)^{−3} e^{2t} + p(1 − (1 − p)e^t)^{−2} e^t.

Evaluating these derivatives at t = 0 and using Theorem 4.1 gives E[X] = 1/p and E[X^2] = (2 − p)/p^2, matching our previous calculations from Section 2.4 and Section 3.3.1.
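These derivative computations can be double-checked symbolically (our sketch, using the sympy library):

import sympy

t, p = sympy.symbols('t p', positive=True)
M = p / (1 - p) * (1 / (1 - (1 - p) * sympy.exp(t)) - 1)   # geometric mgf
print(sympy.simplify(sympy.diff(M, t).subs(t, 0)))          # 1/p
print(sympy.simplify(sympy.diff(M, t, 2).subs(t, 0)))       # (2 - p)/p**2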
Another useful property is that the moment generating function of a random variable (or, equivalently, all of the moments of the variable) uniquely defines its distribution. However, the proof of the following theorem is beyond the scope of this book.
Theorem 4.2: Let X and Y be two random variables. If
MX (t ) = MY (t )
for all t ∈ (−δ, δ) for some δ > 0, then X and Y have the same distribution.
One application of Theorem 4.2 is in determining the distribution of a sum of indepen-
dent random variables.
Theorem 4.3: If X and Y are independent random variables, then
MX+Y (t ) = MX (t )MY (t ).
Proof:
MX+Y (t ) = E[et(X+Y ) ] = E[etX etY ] = E[etX ]E[etY ] = MX (t )MY (t ).
Here we have used that X and Y are independent – and hence etX and etY are indepen-
dent – to conclude that E[etX etY ] = E[etX ]E[etY ].
4.2. Deriving and Applying Chernoff Bounds

The Chernoff bound for a random variable X is obtained by applying Markov's inequality to e^{tX} for some well-chosen value t. From Markov's inequality, we can derive the following useful inequality: for any t > 0,

\Pr(X ≥ a) = \Pr(e^{tX} ≥ e^{ta}) ≤ \frac{E[e^{tX}]}{e^{ta}}.

In particular,

\Pr(X ≥ a) ≤ \min_{t>0} \frac{E[e^{tX}]}{e^{ta}}.

Similarly, for any t < 0,

\Pr(X ≤ a) = \Pr(e^{tX} ≥ e^{ta}) ≤ \frac{E[e^{tX}]}{e^{ta}}.

Hence

\Pr(X ≤ a) ≤ \min_{t<0} \frac{E[e^{tX}]}{e^{ta}}.

Bounds for specific distributions are obtained by choosing appropriate values for t. While the value of t that minimizes E[e^{tX}]/e^{ta} gives the best possible bounds, often one chooses a value of t that gives a convenient form. Bounds derived from this approach are generally referred to collectively as Chernoff bounds. When we speak of a Chernoff bound for a random variable, it could actually be one of many bounds derived in this fashion.
4.2.1. Chernoff Bounds for the Sum of Poisson Trials

We develop here Chernoff bounds for what are often called Poisson trials: independent 0–1 random variables whose success probabilities p_i need not be equal (Bernoulli trials are the special case where all the p_i are the same). Our Chernoff bound will hold for the binomial distribution and also for the more general setting of the sum of Poisson trials.
Let X_1, ..., X_n be a sequence of independent Poisson trials with \Pr(X_i = 1) = p_i. Let X = \sum_{i=1}^{n} X_i, and let

μ = E[X] = E\left[\sum_{i=1}^{n} X_i\right] = \sum_{i=1}^{n} E[X_i] = \sum_{i=1}^{n} p_i.
For a given δ > 0, we are interested in bounds on Pr(X ≥ (1 + δ)μ) and Pr(X ≤
(1 − δ)μ) – that is, the probability that X deviates from its expectation μ by δμ or more.
To develop a Chernoff bound we need to compute the moment generating function of
X. We start with the moment generating function of each X_i:

M_{X_i}(t) = E[e^{tX_i}] = p_i e^t + (1 − p_i) = 1 + p_i(e^t − 1) ≤ e^{p_i(e^t − 1)},

where in the last inequality we have used the fact that, for any y, 1 + y ≤ e^y. Applying Theorem 4.3, we take the product of the n generating functions to obtain

M_X(t) = \prod_{i=1}^{n} M_{X_i}(t) ≤ \prod_{i=1}^{n} e^{p_i(e^t − 1)} = \exp\left(\sum_{i=1}^{n} p_i(e^t − 1)\right) = e^{(e^t − 1)μ}.
Now that we have determined a bound on the moment generating function, we are
ready to develop concrete versions of the Chernoff bound for a sum of Poisson trials.
We start with bounds on the deviation above the mean.
Theorem 4.4: Let X_1, ..., X_n be independent Poisson trials such that \Pr(X_i = 1) = p_i. Let X = \sum_{i=1}^{n} X_i and μ = E[X]. Then the following Chernoff bounds hold:
1. for any δ > 0,

\Pr(X ≥ (1 + δ)μ) ≤ \left(\frac{e^δ}{(1 + δ)^{(1+δ)}}\right)^μ; \quad (4.1)

2. for 0 < δ ≤ 1,

\Pr(X ≥ (1 + δ)μ) ≤ e^{−μδ^2/3}; \quad (4.2)

3. for R ≥ 6μ,

\Pr(X ≥ R) ≤ 2^{−R}. \quad (4.3)
The first bound of the theorem is the strongest, and it is from this bound that we derive the other two bounds, which have the advantage of being easier to state and compute with in many situations.
Proof: Applying Markov’s inequality, for any t > 0 we have
Pr(X ≥ (1 + δ)μ) = Pr(etX ≥ et(1+δ)μ )
E[etX ]
≤ t(1+δ)μ
e
t
e(e −1)μ
≤ t(1+δ)μ .
e
For any δ > 0, we can set t = ln(1 + δ) > 0 to get Eqn. (4.1):
μ
eδ
Pr(X ≥ (1 + δ)μ) ≤ .
(1 + δ)(1+δ)
To obtain Eqn. (4.2) we need to show that, for 0 < δ ≤ 1,
eδ 2
≤ e−δ /3 .
(1 + δ)(1+δ)
Taking the logarithm of both sides, we obtain the equivalent condition
δ2
f (δ) = δ − (1 + δ) ln(1 + δ) + ≤ 0.
3
Computing the derivatives of f (δ), we have:
1+δ 2
f ′ (δ) = 1 − − ln(1 + δ) + δ
1+δ 3
2
= − ln(1 + δ) + δ;
3
′′ 1 2
f (δ) = − + .
1+δ 3
We see that f ′′ (δ) < 0 for 0 ≤ δ < 1/2 and that f ′′ (δ) > 0 for δ > 1/2. Hence f ′ (δ)
irst decreases and then increases over the interval [0, 1]. Since f ′ (0) = 0 and f ′ (1) <
0, we can conclude that f ′ (δ) ≤ 0 in the interval [0, 1]. Since f (0) = 0, it follows that
f (δ) ≤ 0 in that interval, proving Eqn. (4.2).
To prove Eqn. (4.3), let R = (1 + δ)μ. Then, for R ≥ 6μ, δ = R/μ − 1 ≥ 5. Hence,
using Eqn. (4.1),
μ
eδ
Pr(X ≥ (1 + δ)μ) ≤
(1 + δ)(1+δ)
(1+δ)μ
e
≤
1+δ
e R
≤
6
−R
≤2 .
In practice we often do not have the exact value of E[X]. Instead we can use μ ≥ E[X]
in Theorem 4.4 and μ ≤ E[X] in Theorem 4.5 (see Exercise 4.7).
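A numeric comparison (ours) of the bounds of Theorem 4.4 against the exact binomial tail shows how much is given up for convenience:

import math

n, mu, delta = 100, 50, 0.5            # X ~ Binomial(100, 1/2), so mu = 50
a = math.ceil((1 + delta) * mu)
exact = sum(math.comb(n, k) for k in range(a, n + 1)) / 2 ** n
bound1 = (math.exp(delta) / (1 + delta) ** (1 + delta)) ** mu   # Eqn. (4.1)
bound2 = math.exp(-mu * delta ** 2 / 3)                          # Eqn. (4.2)
print(exact, bound1, bound2)            # exact << (4.1) < (4.2)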
Notice that, instead of predicting a single value for the parameter, we give an interval that is likely to contain the parameter. If p can take on any real value, it may not make sense to try to pin down its exact value from a finite sample, but it does make sense to estimate it within some small range.
Naturally we want both the interval size 2δ and the error probability γ to be as small as possible. We derive a trade-off between these two parameters and the number of samples n. In particular, given that among n samples (chosen uniformly at random from the entire population) we find the mutation in exactly X = p̃n samples, we need to find values of δ and γ for which

\Pr(p ∈ [p̃ − δ, p̃ + δ]) ≥ 1 − γ.
We can apply the Chernoff bounds in Eqns. (4.2) and (4.5) to compute

\Pr(p ∉ [p̃ − δ, p̃ + δ]) = \Pr(X < np(1 − δ/p)) + \Pr(X > np(1 + δ/p)) \quad (4.7)
< e^{−np(δ/p)^2/2} + e^{−np(δ/p)^2/3} \quad (4.8)
= e^{−nδ^2/2p} + e^{−nδ^2/3p}. \quad (4.9)

The bound given in Eqn. (4.9) is not useful because the value of p is unknown. A simple solution is to use the fact that p ≤ 1, yielding

\Pr(p ∉ [p̃ − δ, p̃ + δ]) < e^{−nδ^2/2} + e^{−nδ^2/3}.

Setting γ = e^{−nδ^2/2} + e^{−nδ^2/3}, we obtain a trade-off between δ, n, and the error probability γ.
We can apply other Chernoff bounds, such as those in Exercises 4.13 and 4.16, to
obtain better bounds. We return to the subject of parameter estimation when we discuss
the Monte Carlo method in Chapter 11.
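For example, one can solve the trade-off numerically; the sketch below (ours) doubles n until the error bound drops below a target γ.

import math

def samples_needed(delta, gamma):
    n = 1
    while math.exp(-n * delta ** 2 / 2) + math.exp(-n * delta ** 2 / 3) > gamma:
        n *= 2                       # doubling search: within a factor of 2 of optimal
    return n

print(samples_needed(delta=0.01, gamma=0.05))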
4.3. Better Bounds for Some Special Cases

We can obtain stronger bounds using a simpler proof technique for some special cases of symmetric random variables.
We consider first the sum of independent random variables when each variable assumes the value 1 or −1 with equal probability.
Theorem 4.7: Let X_1, ..., X_n be independent random variables with

\Pr(X_i = 1) = \Pr(X_i = −1) = \frac{1}{2}.

Let X = \sum_{i=1}^{n} X_i. For any a > 0,

\Pr(X ≥ a) ≤ e^{−a^2/2n}.
Proof: For any t > 0,

E[e^{tX_i}] = \frac{1}{2}e^t + \frac{1}{2}e^{−t}.

To estimate E[e^{tX_i}], we observe that

e^t = 1 + t + \frac{t^2}{2!} + ··· + \frac{t^i}{i!} + ···

and

e^{−t} = 1 − t + \frac{t^2}{2!} + ··· + (−1)^i \frac{t^i}{i!} + ···,

using the Taylor series expansion for e^t. Thus,

E[e^{tX_i}] = \frac{1}{2}e^t + \frac{1}{2}e^{−t} = \sum_{i ≥ 0} \frac{t^{2i}}{(2i)!} ≤ \sum_{i ≥ 0} \frac{(t^2/2)^i}{i!} = e^{t^2/2}.

Using this estimate yields

E[e^{tX}] = \prod_{i=1}^{n} E[e^{tX_i}] ≤ e^{t^2 n/2}

and

\Pr(X ≥ a) = \Pr(e^{tX} ≥ e^{ta}) ≤ \frac{E[e^{tX}]}{e^{ta}} ≤ e^{t^2 n/2 − ta}.

Setting t = a/n, we obtain

\Pr(X ≥ a) ≤ e^{−a^2/2n}.
Corollary 4.8: Let X_1, ..., X_n be independent random variables with \Pr(X_i = 1) = \Pr(X_i = −1) = 1/2, and let X = \sum_{i=1}^{n} X_i. Then, for any a > 0,

\Pr(|X| ≥ a) ≤ 2e^{−a^2/2n}.
proving the first part of the corollary. The second part follows from setting a = δμ = δn/2. Again applying Theorem 4.7, we have

\Pr(Y ≥ (1 + δ)μ) = \Pr(X ≥ 2δμ) ≤ e^{−2δ^2 μ^2/n} = e^{−δ^2 μ}. \quad (4.10)

Note that the constant in the exponent of the bound of Eqn. (4.10) is 1 instead of the 1/3 in the bound of Eqn. (4.2).
Similarly, we have the following result.
Similarly, we have the following result.
4.4. Application: Set Balancing

Given an n × m matrix A with entries in {0, 1}, and writing c̄ = Ab̄, suppose that we are looking for a vector b̄ with entries in {−1, 1} that minimizes

‖Ab̄‖_∞ = \max_{i} |c_i|.

This problem arises in designing statistical experiments. Each column of the matrix A represents a subject in the experiment and each row represents a feature. The vector b̄ partitions the subjects into two disjoint groups, so that each feature is roughly as balanced as possible between the two groups. One of the groups serves as a control group for an experiment that is run on the other group.
Our randomized algorithm for computing a vector b̄ is extremely simple. We randomly choose the entries of b̄, with Pr(bi = 1) = Pr(bi = −1) = 1/2. The choices
for different entries are independent. Surprisingly, although this algorithm ignores the
entries of the matrix A, the following theorem shows that ‖Ab̄‖∞ is likely to be only
O(√(m ln n)). This bound is fairly tight. In Exercise 4.15 you are asked to show that,
when m = n, there exists a matrix A for which ‖Ab̄‖∞ is Ω(√n) for any choice of b̄.
Theorem 4.11: For a random vector b̄ with entries chosen independently and with
equal probability from the set {−1, 1},
Pr(‖Ab̄‖∞ ≥ √(4m ln n)) ≤ 2/n.
Proof: Consider the ith row āi = a_{i,1}, . . . , a_{i,m}, and let k be the number of 1s in that
row. If k ≤ √(4m ln n), then clearly |āi · b̄| = |Zi| ≤ √(4m ln n). On the other hand, if k >
√(4m ln n) then we note that the k nonzero terms in the sum

Zi = Σ_{j=1}^{m} a_{i,j} b_j

are independent random variables, each with probability 1/2 of being either +1 or −1.
Now using the Chernoff bound of Corollary 4.8 and the fact that m ≥ k,

Pr(|Zi| > √(4m ln n)) ≤ 2e^{−4m ln n/2k} ≤ 2/n².
By the union bound, the probability that the bound fails for any row is at most
2/n.
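The following small simulation (a sketch, not part of the text; Python used for illustration) draws a random b̄ as in the algorithm and compares ‖Ab̄‖∞ against the √(4m ln n) threshold of Theorem 4.11:

    import math
    import random

    def random_sign_vector(m):
        # entries independently +1 or -1 with probability 1/2 each
        return [random.choice((-1, 1)) for _ in range(m)]

    def infinity_norm_Ab(A, b):
        # ||A b||_inf = max_i |sum_j a_{ij} b_j|
        return max(abs(sum(aij * bj for aij, bj in zip(row, b))) for row in A)

    n = m = 200
    A = [[random.randint(0, 1) for _ in range(m)] for _ in range(n)]  # arbitrary 0-1 matrix
    b = random_sign_vector(m)
    threshold = math.sqrt(4 * m * math.log(n))
    # the norm exceeds the threshold with probability at most 2/n
    print(infinity_norm_Ab(A, b), "vs", threshold)

Note that the algorithm never looks at A; the comparison holds for any 0–1 matrix supplied here.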
4.5 The Hoeffding Bound
Hoeffding’s bound extends the Chernoff bound technique to general random variables
with a bounded range.
Theorem 4.12 [Hoeffding Bound]: Let X1 , . . . , Xn be independent random variables
such that for all 1 ≤ i ≤ n, E[Xi ] = μ and Pr(a ≤ Xi ≤ b) = 1. Then
Pr(|(1/n) Σ_{i=1}^{n} Xi − μ| ≥ ǫ) ≤ 2e^{−2nǫ²/(b−a)²}.
Proof: The proof relies on the following bound for the moment generating function,
which we prove first.
Lemma 4.13 [Hoeffding’s Lemma]: Let X be a random variable such that
Pr(X ∈ [a, b]) = 1 and E[X] = 0. Then for every λ > 0,
E[e^{λX}] ≤ e^{λ²(b−a)²/8}.
Proof: Before beginning, note that since E[X] = 0, if a = 0 then b = 0 and the state-
ment is trivial. Hence we may assume a < 0 and b > 0.
Since f (x) = eλx is a convex function, for any α ∈ (0, 1),
f (αa + (1 − α)b) ≤ αeλa + (1 − α)eλb .
For x ∈ [a, b], let α = (b − x)/(b − a); then x = αa + (1 − α)b and we have

e^{λx} ≤ ((b − x)/(b − a))e^{λa} + ((x − a)/(b − a))e^{λb}.
We consider e^{λX} and take expectations. Using the fact that E[X] = 0, we have

E[e^{λX}] ≤ E[((b − X)/(b − a))e^{λa}] + E[((X − a)/(b − a))e^{λb}]
= (b/(b − a))e^{λa} − (E[X]/(b − a))e^{λa} − (a/(b − a))e^{λb} + (E[X]/(b − a))e^{λb}
= (b/(b − a))e^{λa} − (a/(b − a))e^{λb}.
We now require some manipulation of this final expression. Let φ(t) = −θt +
ln(1 − θ + θe^t), for θ = −a/(b − a) > 0. A direct computation shows that the last
expression above equals e^{φ(λ(b−a))}. Moreover, φ(0) = φ′(0) = 0 and φ″(t) ≤ 1/4
for all t, so by Taylor's theorem

φ(λ(b − a)) ≤ λ²(b − a)²/8.

It follows that

E[e^{λX}] ≤ e^{φ(λ(b−a))} ≤ e^{λ²(b−a)²/8}.
where for the key second-to-last inequality we have used Hoeffding's Lemma with the
fact that Zi/n is bounded between (a − μ)/n and (b − μ)/n. Setting λ = 4nǫ/(b − a)² gives

Pr((1/n) Σ_{i=1}^{n} Xi − μ ≥ ǫ) = Pr(Z ≥ ǫ) ≤ e^{−2nǫ²/(b−a)²}.

Applying the same argument for Pr(Z ≤ −ǫ) with λ = −4nǫ/(b − a)² gives

Pr((1/n) Σ_{i=1}^{n} Xi − μ ≤ −ǫ) = Pr(Z ≤ −ǫ) ≤ e^{−2nǫ²/(b−a)²}.
The proof of the following more general version of the bound is left as an exercise
(Exercise 4.20).
Note that Theorem 4.12 bounds the deviation of the average of the n random vari-
ables while Theorem 4.14 bounds the deviation of the sum of the variables.
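Before the examples that follow, a quick numerical sanity check of Theorem 4.12 (an illustrative sketch in Python; the parameters are ours), using variables uniform on [0, 1] so that b − a = 1 and μ = 1/2:

    import math
    import random

    def empirical_tail(n, eps, trials=20000):
        # fraction of trials with |sample mean - 1/2| >= eps, X_i uniform on [0, 1]
        count = 0
        for _ in range(trials):
            mean = sum(random.random() for _ in range(n)) / n
            if abs(mean - 0.5) >= eps:
                count += 1
        return count / trials

    n, eps = 100, 0.1
    bound = 2 * math.exp(-2 * n * eps**2)   # Theorem 4.12 with b - a = 1
    print(empirical_tail(n, eps), "<=", bound)

The empirical tail probability should come out far below the bound, which is not tight for this distribution.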
Examples:

1. Consider n independent random variables X1, . . . , Xn such that Xi is uniformly distributed in {0, . . . , ℓ}. For all i, μ = E[Xi] = ℓ/2, and

Pr(|(1/n) Σ_{i=1}^{n} Xi − ℓ/2| ≥ ǫ) ≤ 2e^{−2nǫ²/ℓ²}.

In particular, taking ǫ = δμ = δℓ/2,

Pr(|(1/n) Σ_{i=1}^{n} Xi − μ| ≥ δμ) ≤ 2e^{−nδ²/2}.

4.6∗ Application: Packet Routing in Sparse Networks
Each node can be connected directly to only a few neighbors, and most packets must
traverse intermediate nodes en route to their final destination. Since an edge may be
on the path of more than one packet and since each edge can process only one packet
per step, parallel packet routing on sparse networks may lead to congestion and bottlenecks. The practical problem of designing an efficient communication scheme for parallel computers leads to an interesting combinatorial and algorithmic problem: designing a family of sparse networks connecting any number of processors, together with
a routing algorithm that routes an arbitrary permutation request in a small number of
parallel steps.
We discuss here a simple and elegant randomized routing technique and then use
Chernoff bounds to analyze its performance on the hypercube network and the butterfly
network. We first analyze the case of routing a permutation on a hypercube, a network
with N processors and O(N log N) edges. We then present a tighter argument for the
butterfly network, which has N nodes and only O(N) edges.
See Figure 4.1. Note that the total number of directed edges in the n-cube is nN, since
each node is adjacent to n outgoing and n ingoing edges. Also, the diameter of the
network is n; that is, there is a directed path of length up to n connecting any two
nodes in the network, and there are pairs of nodes that are not connected by any shorter
path.
The topology of the hypercube allows for a simple bit-fixing routing mechanism, as
shown in Algorithm 4.1. When determining which edge to cross next, the algorithm
simply considers each bit in order and crosses the edge if necessary.
Although it seems quite natural, using only the bit-fixing routes can lead to high
levels of congestion and poor performance, as shown in Exercise 4.22. There are certain
permutations on which the bit-fixing routes behave poorly. It turns out, as we will show,
that these routes perform well if each packet is being sent from a source to a destination
chosen uniformly at random. This motivates the following approach: first route each
packet to a randomly chosen intermediate point, and then route it from this intermediate
point to its final destination.
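A minimal sketch of the bit-fixing route and the two-phase idea (illustrative Python; representing nodes as n-bit integers is our convention here, and this mirrors, but is not, the book's Algorithms 4.1 and 4.2):

    import random

    def bit_fixing_route(src, dst, n):
        """Nodes are n-bit integers; fix bits one at a time, highest-order first."""
        route, cur = [src], src
        for i in reversed(range(n)):          # bit positions n-1, ..., 0
            mask = 1 << i
            if (cur ^ dst) & mask:            # bits disagree: cross the corresponding edge
                cur ^= mask
                route.append(cur)
        return route

    def two_phase_route(src, dst, n):
        # Phase I: route to a uniformly random intermediate node; Phase II: onward
        mid = random.randrange(1 << n)
        return bit_fixing_route(src, mid, n), bit_fixing_route(mid, dst, n)

    n = 4
    print(two_phase_route(0b0000, 0b1011, n))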
It may seem unusual to first route packets to a random intermediate point. In some
sense, this is similar in spirit to our analysis of Quicksort in Section 2.5. We found there
that for a list already sorted in reverse order, Quicksort would take Ω(n²) comparisons,
whereas the expected number of comparisons for a randomly chosen permutation is
only O(n log n). Randomizing the data can lead to a better running time for Quicksort.
Figure 4.1: Hypercube networks; nodes are labeled by binary strings (panels show n = 1, 2, 3, and (d) n = 4).
Here, too, randomizing the routes that packets take – by routing them through a ran-
dom intermediate point – avoids bad initial permutations and leads to good expected
performance.
The two-phase routing algorithm (Algorithm 4.2) is executed in parallel by all the
packets. The random choices are made independently for each packet. Our analysis
holds for any queueing policy that obeys the following natural requirement: if a queue
is not empty at the beginning of a time step, some packet is sent along the edge associ-
ated with that queue during that time step. We prove that this routing strategy achieves
asymptotically optimal parallel time.
Proof: We first analyze the run-time of Phase I. To simplify the analysis we assume that
no packet starts the execution of Phase II before all packets have finished the execution
of Phase I. We show later that this assumption can be removed.
We emphasize a fact that we use implicitly throughout. If a packet is routed to a
randomly chosen node x̄ in the network, we can think of x̄ = (x1 , . . . , xn ) as being
generated by setting each xi independently to be 0 with probability 1/2 and 1 with
probability 1/2.
For a given packet M, let T1(M) be the number of steps for M to finish Phase I. For
a given edge e, let X1(e) denote the total number of packets that traverse edge e during
Phase I.
In each step of executing Phase I, packet M is either traversing an edge or waiting in a
queue while some other packet traverses an edge on M’s route. This simple observation
relates the routing time of M to the total number of packet transitions through edges on
the path of M, as follows.
Let us call any path P = (e1, e2, . . . , em) of m ≤ n edges that follows the bit-fixing
algorithm a possible packet path. We denote the corresponding nodes by v0, v1, . . . , vm,
with ei = (v_{i−1}, vi). Following the definition of T1(M), for any possible packet path P
we let

T1(P) = Σ_{i=1}^{m} X1(ei).
By Lemma 4.16, the probability that Phase I takes more than T steps is bounded by
the probability that, for some possible packet path P, T1(P) ≥ T. Note that there are
at most 2^n · 2^n = 2^{2n} possible packet paths, since there are 2^n possible origins and 2^n
possible destinations.
1 This approach overestimates the time to finish a phase. In fact, there is a deterministic argument showing that,
in this setting, the delay of a packet on a path is bounded by the number of different packets that traverse edges
of the path, and hence there is no need to bound the total number of traversals of these packets on the path.
However, in the spirit of this book we prefer to present the probabilistic argument.
Pr(T1(P) ≥ 30n) ≤ Pr(H ≥ 6n) + Pr(T1(P) ≥ 30n | H < 6n)
≤ 2^{−6n} + Pr(T1(P) ≥ 30n | H < 6n).

Hence if we show

Pr(T1(P) ≥ 30n | H ≤ 6n) ≤ 2^{−3n−1},

we then have Pr(T1(P) ≥ 30n) ≤ 2^{−6n} + 2^{−3n−1} ≤ 2^{−3n}. To bound this conditional
probability, think of each step at which an active packet sits at a node of P as a coin flip
that comes up heads, with probability at least 1/2, if the packet then leaves the path, and
tails if it traverses the next edge of P. The event T1(P) ≥ 30n requires at least 30n tails,
while the at most 6n active packets can contribute at most 6n heads; so among the first
36n flips at most 6n can be heads. Finally, fair coins can only make so few heads more
likely than coins biased toward heads would cause the active packets to traverse
P more than 30n times, as can be shown easily by induction (on the number of biased
coins).
Letting Z be the number of heads in 36n fair coin flips, we now apply the Chernoff
bound of Eqn. (4.5) to prove:

Pr(T1(P) ≥ 30n | H ≤ 6n) ≤ Pr(Z ≤ 6n) ≤ e^{−18n(2/3)²/2} = e^{−4n} ≤ 2^{−3n−1}.
It follows that

Pr(T1(P) ≥ 30n) ≤ Pr(H ≥ 6n) + Pr(T1(P) ≥ 30n | H ≤ 6n) ≤ 2^{−3n},

as we wanted to show. Because there are at most 2^{2n} possible packet paths in the hypercube, the probability that there is any possible packet path for which T1(P) ≥ 30n is
bounded by

2^{2n} · 2^{−3n} = 2^{−n} = O(N^{−1}).
This completes the analysis of Phase I. Consider now the execution of Phase II,
assuming that all packets completed their Phase I route. In this case, Phase II can be
viewed as running Phase I backwards: instead of packets starting at a given origin
and going to a random destination, they start at a random origin and end at a given
destination. Hence no packet spends more than 30n steps in Phase II with probability
1 − O(N −1 ).
In fact, we can remove the assumption that packets begin Phase II only after Phase
I has completed. The foregoing argument allows us to conclude that the total number
of packet traversals across the edges of any packet path during Phase I and Phase II
together is bounded by 60n with probability 1 − O(N^{−1}). Since a packet can be delayed
only by another packet traversing that edge, we find that every packet completes both
Phase I and Phase II after 60n steps with probability 1 − O(N^{−1}) regardless of how the
phases interact, concluding the proof of Theorem 4.15.
Note that the run-time of the routing algorithm is optimal up to a constant factor, since
the diameter of the hypercube is n. However, the network is not fully utilized because
2nN directed edges are used to route just N packets. At any given time, at most 1/(2n) of
the edges are actually being used. This issue is addressed in the next section.
Figure 4.2: The butterfly network (levels l0–l3; rows 000–111). In the wrapped butterfly, levels 0 and 3 are collapsed into one level.
Each node is labeled by a pair (x, r), where the n-bit binary string x is the row number and 0 ≤ r ≤ n − 1 is the column number of the node. Node (x, r) is
connected to node (y, s) if and only if s = r + 1 mod n and either:
1. x = y (the direct edge), or
2. x and y differ in exactly the sth bit (the flip edge).
See Figure 4.2. To see the relation between the wrapped butterfly and the hypercube,
observe that by collapsing the n nodes in each row of the wrapped butterfly into one
“super node” we obtain an n-cube network. Using this correspondence, one can easily
verify that there is a unique directed path of length n connecting node (x, r) to any
other node (w, r) in the same column. This path is obtained by bit fixing: first fixing
bits r + 1 to n, then bits 1 to r. See Algorithm 4.3. Our randomized permutation routing
algorithm on the butterfly consists of three phases, as shown in Algorithm 4.4.
Unlike our analysis of the hypercube, our analysis here cannot simply bound the
number of active packets that possibly traverse edges of a path. Given the path of a
packet, the expected number of other packets that share edges with this path when
routing a random permutation on the butterfly network is Θ(n²) and not O(n) as in the
n-cube. To obtain an O(n) routing time, we need a more refined analysis technique that
takes into account the order in which packets traverse edges.
Because of this, we need to consider the priority policy that the queues use when
there are several packets waiting to use the edge. A variety of priority policies would
work here; we assume the following rules.
Theorem 4.17: Given an arbitrary permutation routing problem on the wrapped butterfly with N = n2^n nodes, with probability 1 − O(N^{−1}) the three-phase routing scheme
of Algorithm 4.4 routes all packets to their destinations in O(n) = O(log N) parallel
steps.
Proof: The priority rule in the edge queues guarantees that packets in a phase cannot
delay packets in earlier phases. Because of this, in our forthcoming analysis we can
consider the time for each phase to complete separately and then add these times to
bound the total time for the three-phase routing scheme to complete.
We begin by considering the second phase. We first argue that with high probability
each row transmits at most 4n packets in the second phase. To see this, let Xw be the
number of packets whose intermediate row choice is w in the three-phase routing algorithm. Then Xw is the sum of 0–1 independent random variables, one for each packet,
and E[Xw] = n. Hence, we can directly apply the Chernoff bound of Eqn. (4.1) to find

Pr(Xw ≥ 4n) ≤ (e³/4⁴)^n ≤ 3^{−2n}.

There are 2^n possible rows w. By the union bound, the probability that any row has
more than 4n packets is only 2^n · 3^{−2n} = O(N^{−1}).
We now argue that, if each row has at most 4n packets for the second phase, then the
second phase takes at most 5n steps to complete. Combined with our previous observa-
tions, this means the second phase takes at most 5n steps with probability 1 − O(N −1 ).
To see this, note that in the second phase the routing has a special structure: each packet
moves from edge to edge along its row. Because of the priority rule, each packet can
be delayed only by packets already in a queue when it arrives. Therefore, to place an
upper bound on the number of packets that delay a packet p, we can bound the total
number of packets found in each queue when p arrives at the queue. But in Phase II, the
number of other packets that an arriving packet finds in a queue cannot increase in size
over time, since at each step a queue sends a packet and receives at most one packet.
(It is worth considering the special case when a queue becomes empty at some point
in Phase II; this queue can receive another packet at some later step, but the number of
packets an arriving packet will find in the queue after that point is always zero.) Since
there are at most 4n packets total in the row to begin with, p finds at most 4n packets
that delay it as it moves from queue to queue. Since each packet moves at most n times
in the second phase, the total time for the phase is 5n steps.
We now consider the other phases. The first and third phases are again the same by
symmetry, so we consider just the first phase. Our analysis will use a delay sequence
argument.
that time it has already finished transmitting all packets with priority numbers up to i.
Thus,

T_{i+1} ≤ T_i + t_{i+1}.

Since T_1 = t_1, we have

T_n ≤ T_{n−1} + t_n ≤ T_{n−2} + t_{n−1} + t_n ≤ · · · ≤ Σ_{i=1}^{n} t_i,
4.7 Exercises
Exercise 4.1: Alice and Bob play checkers often. Alice is a better player, so the proba-
bility that she wins any given game is 0.6, independent of all other games. They decide
to play a tournament of n games. Bound the probability that Alice loses the tournament
using a Chernoff bound.
Exercise 4.2: We have a standard six-sided die. Let X be the number of times that a 6
occurs over n throws of the die. Let p be the probability of the event X ≥ n/4. Compare
the best upper bounds on p that you can obtain using Markov’s inequality, Chebyshev’s
inequality, and Chernoff bounds.
Exercise 4.3: (a) Determine the moment generating function for the binomial random
variable B(n, p).
(b) Let X be a B(n, p) random variable and Y a B(m, p) random variable, where X
and Y are independent. Use part (a) to determine the moment generating function of
X + Y.
(c) What can we conclude from the form of the moment generating function of
X + Y?
Exercise 4.4: Determine the probability of obtaining 55 or more heads when flipping
a fair coin 100 times by an explicit calculation, and compare this with the Chernoff
bound. Do the same for 550 or more heads in 1000 flips.
Exercise 4.5: We plan to conduct an opinion poll to find out the percentage of people
in a community who want its president impeached. Assume that every person answers
either yes or no. If the actual fraction of people who want the president impeached is
p, we want to find an estimate X of p such that
Pr(|X − p| ≤ ε p) > 1 − δ
for a given ε and δ, with 0 < ε, δ < 1.
We query N people chosen independently and uniformly at random from the com-
munity and output the fraction of them who want the president impeached. How large
should N be for our result to be a suitable estimator of p? Use Chernoff bounds, and
express N in terms of p, ε, and δ. Calculate the value of N from your bound if ε = 0.1
and δ = 0.05 and if you know that p is between 0.2 and 0.8.
Exercise 4.6: (a) In an election with two candidates using paper ballots, each vote is
independently misrecorded with probability p = 0.02. Use a Chernoff bound to give
an upper bound on the probability that more than 4% of the votes are misrecorded in
an election of 1,000,000 ballots.
(b) Assume that a misrecorded ballot always counts as a vote for the other candidate.
Suppose that candidate A received 510,000 votes and that candidate B received 490,000
votes. Use Chernoff bounds to upper bound the probability that candidate B wins the
election owing to misrecorded ballots. Specifically, let X be the number of votes for
candidate A that are misrecorded and let Y be the number of votes for candidate B that
are misrecorded. Bound Pr((X > k) ∪ (Y < ℓ)) for suitable choices of k and ℓ.
Exercise 4.7: Throughout the chapter we implicitly assumed the following extension
of the Chernoff bound. Prove that it is true.
Let X = Σ_{i=1}^{n} Xi, where the Xi are independent 0–1 random variables. Let μ =
E[X]. Choose any μL and μH such that μL ≤ μ ≤ μH. Then, for any δ > 0,

Pr(X ≥ (1 + δ)μH) ≤ (e^δ/(1 + δ)^{(1+δ)})^{μH}.

Similarly, for any 0 < δ < 1,

Pr(X ≤ (1 − δ)μL) ≤ (e^{−δ}/(1 − δ)^{(1−δ)})^{μL}.
Exercise 4.8: We show how to construct a random permutation π on [1, n], given a
black box that outputs numbers independently and uniformly at random from [1, k]
where k ≥ n. If we compute a function f : [1, n] → [1, k] with f(i) ≠ f(j) for i ≠ j,
this yields a permutation; simply output the numbers [1, n] according to the order of
the f(i) values. To construct such a function f, do the following for j = 1, . . . , n: choose
f(j) by repeatedly obtaining numbers from the black box and setting f(j) to the first
number found such that f(j) ≠ f(i) for i < j.
Prove that this approach gives a permutation chosen uniformly at random from all
permutations. Find the expected number of calls to the black box that are needed when
k = n and k = 2n. For the case k = 2n, argue that the probability that each call to the
black box assigns a value of f(j) to some j is at least 1/2. Based on this, use a Chernoff
bound to bound the probability that the number of calls to the black box is at least 4n.
(a) Show using Chebyshev’s inequality that O(r²/ε²δ) samples are sufficient to solve
the problem.
(b) Suppose that we need only a weak estimate that is within εE[X] of E[X] with
probability at least 3/4. Argue that O(r2 /ε2 ) samples are enough for this weak
estimate.
(c) Show that, by taking the median of O(log(1/δ)) weak estimates, we can obtain an
estimate within εE[X] of E[X] with probability at least 1 − δ. Conclude that we
need only O((r2 log(1/δ))/ε2 ) samples.
Exercise 4.10: A casino is testing a new class of simple slot machines. Each game, the
player puts in $1, and the slot machine is supposed to return either $3 to the player with
probability 4/25, $100 with probability 1/200, or nothing with all remaining probabil-
ity. Each game is supposed to be independent of other games.
The casino has been surprised to find in testing that the machines have lost $10,000
over the irst million games. Derive a Chernoff bound for the probability of this event.
You may want to use a calculator or program to help you choose appropriate values as
you derive your bound.
Exercise 4.13: Let X1, . . . , Xn be independent Poisson trials such that Pr(Xi = 1) = p.
Let X = Σ_{i=1}^{n} Xi, so that E[X] = pn. Let

F(x, p) = x ln(x/p) + (1 − x) ln((1 − x)/(1 − p)).

(a) Show that, for 1 ≥ x > p,

Pr(X ≥ xn) ≤ e^{−nF(x,p)}.

(b) Show that, when 0 < x, p < 1, we have F(x, p) − 2(x − p)² ≥ 0. (Hint: Take the
second derivative of F(x, p) − 2(x − p)² with respect to x.)
(c) Using parts (a) and (b), argue that

Pr(X ≥ (p + ε)n) ≤ e^{−2nε²}.

(d) Use symmetry to argue that

Pr(X ≤ (p − ε)n) ≤ e^{−2nε²},

and conclude that

Pr(|X − pn| ≥ εn) ≤ 2e^{−2nε²}.
Exercise 4.14: Modify the proof of Theorem 4.4 to show the following bound for
a weighted sum of Poisson trials. Let X1, . . . , Xn be independent Poisson trials such
that Pr(Xi = 1) = pi, and let a1, . . . , an be real numbers in [0, 1]. Let X = Σ_{i=1}^{n} ai Xi and
μ = E[X]. Then the following Chernoff bound holds: for any δ > 0,

Pr(X ≥ (1 + δ)μ) ≤ (e^δ/(1 + δ)^{(1+δ)})^μ.

Prove a similar bound for the probability that X ≤ (1 − δ)μ for any 0 < δ < 1.
Exercise 4.17: Suppose that we have n jobs to distribute among m processors. For
simplicity, we assume that m divides n. A job takes 1 step with probability p and k > 1
steps with probability 1 − p. Use Chernoff bounds to determine upper and lower
bounds (that hold with high probability) on when all jobs will be completed if we
randomly assign exactly n/m jobs to each processor.
Exercise 4.19: Recall that a function f is said to be convex if, for any x1 , x2 and for
0 ≤ λ ≤ 1,
f (λx1 + (1 − λ)x2 ) ≤ λ f (x1 ) + (1 − λ) f (x2 ).
(a) Let Z be a random variable that takes on a (finite) set of values in the interval [0, 1],
and let p = E[Z]. Deine the Bernoulli random variable X by Pr(X = 1) = p and
Pr(X = 0) = 1 − p. Show that E[ f (Z)] ≤ E[ f (X )] for any convex function f.
(b) Use the fact that f (x) = etx is convex for any t ≥ 0 to obtain a Chernoff bound for
the sum of n independent random variables with distribution Z as in part (a), based
on a Chernoff bound for independent Poisson trials.
Exercise 4.21: We prove that the Randomized Quicksort algorithm sorts a set of
n numbers in time O(n log n) with high probability. Consider the following view of
Randomized Quicksort. Every point in the algorithm where it decides on a pivot ele-
ment is called a node. Suppose the size of the set to be sorted at a particular node is s.
The node is called good if the pivot element divides the set into two parts, each of size
not exceeding 2s/3. Otherwise the node is called bad. The nodes can be thought of as
forming a tree in which the root node has the whole set to be sorted and its children
have the two sets formed after the first pivot step and so on.
(a) Show that the number of good nodes in any path from the root to a leaf in this tree
is not greater than c log2 n, where c is some positive constant.
(b) Show that, with high probability (greater than 1 − 1/n2 ), the number of nodes in
a given root to leaf path of the tree is not greater than c′ log2 n, where c′ is another
constant.
(c) Show that, with high probability (greater than 1 − 1/n), the number of nodes in
the longest root to leaf path is not greater than c′ log2 n. (Hint: How many nodes
are there in the tree?)
(d) Use your answers to show that the running time of Quicksort is O(n log n) with
probability at least 1 − 1/n.
Exercise 4.22: Consider the bit-fixing routing algorithm for routing a permutation on
the n-cube. Suppose that n is even. Write each source node s as the concatenation of
two binary strings as and bs, each of length n/2. Let the destination of s’s packet be
the concatenation of bs and as. Show that this permutation causes the bit-fixing routing
algorithm to take Ω(√N) steps.
Exercise 4.23: Consider the following modification to the bit-fixing routing algorithm
for routing a permutation on the n-cube. Suppose that, instead of fixing the bits in order
from 1 to n, each packet chooses a random order (independent of other packets’ choices)
and fixes the bits in that order. Show that there is a permutation for which this algorithm
requires 2^{Ω(n)} steps with high probability.
Exercise 4.24: Assume that we use the randomized routing algorithm for the n-cube
network (Algorithm 4.2) to route a total of up to p·2^n packets, where each node is the
source of no more than p packets and each node is the destination of no more than p
packets.
Exercise 4.25: Show that the expected number of packets that traverse any edge on
the path of a given packet when routing a random permutation on the wrapped butterfly
network of N = n2^n nodes is Θ(n²).
Exercise 4.26: In this exercise, we design a randomized algorithm for the following
packet routing problem. We are given a network that is an undirected connected graph
G, where nodes represent processors and the edges between the nodes represent wires.
We are also given a set of N packets to route. For each packet we are given a source
node, a destination node, and the exact route (path in the graph) that the packet should
take from the source to its destination. (We may assume that there are no loops in the
path.) In each time step, at most one packet can traverse an edge. A packet can wait at
any node during any time step, and we assume unbounded queue sizes at each node.
A schedule for a set of packets specifies the timing for the movement of packets
along their respective routes. That is, it specifies which packet should move and which
should wait at each time step. Our goal is to produce a schedule for the packets that
tries to minimize the total time and the maximum queue size needed to route all the
packets to their destinations.
(a) The dilation d is the maximum distance traveled by any packet. The congestion c is
the maximum number of packets that must traverse a single edge during the entire
course of the routing. Argue that the time required for any schedule should be at
least Ω(c + d).
(b) Consider the following unconstrained schedule, where many packets may traverse
an edge during a single time step. Assign each packet an integral delay chosen randomly, independently, and uniformly from the interval [1, ⌈αc/ log(Nd)⌉], where α
is a constant. A packet that is assigned a delay of x waits in its source node for x time
steps; then it moves on to its final destination through its specified route without
ever stopping. Give an upper bound on the probability that more than O(log(Nd))
packets use a particular edge e at a particular time step t.
(c) Again using the unconstrained schedule of part (b), show that the probability that
more than O(log(Nd)) packets pass through any edge at any time step is at most
1/(Nd) for a sufficiently large α.
(d) Use the unconstrained schedule to devise a simple randomized algorithm that, with
high probability, produces a schedule of length O(c + d log(Nd)) using queues of
size O(log(Nd)) and following the constraint that at most one packet crosses an
edge per time step.
Chapter Five
Balls, Bins, and Random Graphs
In this chapter, we focus on one of the most basic of random processes: m balls are
thrown randomly into n bins, each ball landing in a bin chosen independently and uni-
formly at random. We use the techniques we have developed previously to analyze this
process and develop a new approach based on what is known as the Poisson approx-
imation. We demonstrate several applications of this model, including a more sophis-
ticated analysis of the coupon collector’s problem and an analysis of the Bloom filter
data structure. After introducing a closely related model of random graphs, we show an
efficient algorithm for finding a Hamiltonian cycle on a random graph with sufficiently
many edges. Even though finding a Hamiltonian cycle is NP-hard in general, our result
shows that, for a randomly chosen graph, the problem is solvable in polynomial time
with high probability.
Sitting in lecture, you notice that there are 30 people in the room. Is it more likely that
some two people in the room share the same birthday or that no two people in the room
share the same birthday?
We can model this problem by assuming that the birthday of each person is a ran-
dom day from a 365-day year, chosen independently and uniformly at random for each
person. This is obviously a simplification; for example, we assume that a person’s birth-
day is equally likely to be any day of the year, we avoid the issue of leap years, and we
ignore the possibility of twins! As a model, however, it has the virtue of being easy to
understand and analyze.
One way to calculate this probability is to directly count the configurations where
two people do not share a birthday. It is easier to think about the configurations where
people do not share a birthday than about configurations where some two people do.
Thirty days must be chosen from the 365; there are \binom{365}{30} ways to do this. These 30
days can be assigned to the people in any of the 30! possible orders. Hence there are
\binom{365}{30} · 30! configurations where no two people share the same birthday, out of the 365^{30}
ways the 30 birthdays could occur.
= e^{−m(m−1)/2n} ≈ e^{−m²/2n}.

Hence the value for m at which the probability that m people all have different birthdays
is 1/2 is approximately given by the equation

m²/2n = ln 2,

or m = √(2n ln 2). For the case n = 365, this approximation gives m = 22.49 to two
decimal places, matching the exact calculation quite well.
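The exact computation is easy to reproduce (an illustrative Python sketch):

    def all_distinct_prob(m, n=365):
        # exact probability that m people have m different birthdays
        prob = 1.0
        for j in range(m):
            prob *= (n - j) / n
        return prob

    m = 1
    while all_distinct_prob(m) > 0.5:
        m += 1
    print(m, all_distinct_prob(m))  # m = 23; the approximation sqrt(2n ln 2) gives 22.49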
Quite tight and formal bounds can be established using bounds in place of the
approximations just derived, an option that is considered in Exercise 5.3. The following simple arguments, however, give loose bounds and good intuition. Let us consider
each person one at a time, and let Ek be the event that the kth person’s birthday does
not match any of the birthdays of the first k − 1 people. Then the probability that the
first k people fail to have distinct birthdays is

Pr(Ē1 ∪ Ē2 ∪ · · · ∪ Ēk) ≤ Σ_{i=1}^{k} Pr(Ēi) ≤ Σ_{i=1}^{k} (i − 1)/n = k(k − 1)/2n.

If k ≤ √n this probability is less than 1/2, so with √n people the probability is at
least 1/2 that all birthdays will be distinct.
Now assume that the first √n people all have distinct birthdays. Each person after
that has probability at least √n/n = 1/√n of having the same birthday as one of these
first √n people. Hence the probability that the next √n people all have different
birthdays than the first √n people is at most

(1 − 1/√n)^{⌈√n⌉} < 1/e < 1/2.

Hence, once there are 2√n people, the probability is at most 1/e that all birthdays
will be distinct.
Consider the case where the number of balls equals the number of bins, so the average load is 1. Of course the
maximum possible load is n, but it is very unlikely that all n balls land in the same bin.
We seek an upper bound that holds with probability tending to 1 as n grows large. We
can show that the maximum load is more than 3 ln n/ ln ln n with probability at most
1/n for sufficiently large n via a direct calculation and a union bound. This is a very
loose bound; although the maximum load is in fact Θ(ln n/ ln ln n) with probability
close to 1 (as we show later), the constant factor 3 we use here is chosen to simplify
the argument and could be reduced with more care.

Lemma 5.1: When n balls are thrown independently and uniformly at random into n
bins, the probability that the maximum load is more than 3 ln n/ ln ln n is at most 1/n
for n sufficiently large.
Proof: The probability that bin 1 receives at least M balls is at most \binom{n}{M}(1/n)^M.
This follows from a union bound; there are \binom{n}{M} distinct sets of M balls, and for any set
of M balls the probability that all land in bin 1 is (1/n)^M. We now use the inequalities

\binom{n}{M}(1/n)^M ≤ 1/M! ≤ (e/M)^M.

Here the second inequality is a consequence of the following general bound on factorials: since

k^k/k! < Σ_{i=0}^{∞} k^i/i! = e^k,

we have

k! > (k/e)^k.
Applying a union bound again allows us to find that, for M ≥ 3 ln n/ ln ln n, the probability that any bin receives at least M balls is bounded above by

n(e/M)^M ≤ n(e ln ln n/3 ln n)^{3 ln n/ln ln n}
≤ n(ln ln n/ln n)^{3 ln n/ln ln n}
= e^{ln n}(e^{ln ln ln n − ln ln n})^{3 ln n/ln ln n}
= e^{−2 ln n + 3(ln n)(ln ln ln n)/ln ln n}
≤ 1/n

for n sufficiently large.
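A quick simulation (an illustrative Python sketch) shows how loose the bound of Lemma 5.1 is in practice:

    import math
    import random
    from collections import Counter

    def max_load(n):
        # throw n balls into n bins uniformly at random; return the largest bin count
        counts = Counter(random.randrange(n) for _ in range(n))
        return max(counts.values())

    n = 10**5
    print(max_load(n), "vs bound", 3 * math.log(n) / math.log(math.log(n)))

The observed maximum load is typically well below the 3 ln n/ ln ln n threshold, consistent with the slack in the constant 3.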
where the first equality follows from the linearity of expectations and the second follows from symmetry, as E[Xj²] is the same for all buckets.
Since X1 is a binomial random variable B(n, 1/n), using the results of Section 3.2.1
yields

E[X1²] = n(n − 1)/n² + 1 = 2 − 1/n < 2.

Hence the total expected time spent in the second stage is at most 2cn, so Bucket sort
runs in expected linear time.
We now consider the probability that a given bin is empty in the balls and bins model
with m balls and n bins, as well as the expected number of empty bins. For the first bin
to be empty, it must be missed by all m balls. Since each ball hits the first bin with
probability 1/n, the probability the first bin remains empty is

(1 − 1/n)^m ≈ e^{−m/n};

of course, by symmetry this probability is the same for all bins. If Xi is a random variable
that is 1 when the ith bin is empty and 0 otherwise, then E[Xi] = (1 − 1/n)^m. Let X be
a random variable that represents the number of empty bins. Then, by the linearity of
expectations,

E[X] = E[Σ_{i=1}^{n} Xi] = Σ_{i=1}^{n} E[Xi] = n(1 − 1/n)^m ≈ ne^{−m/n}.
Thus, the expected fraction of empty bins is approximately e^{−m/n}. This approximation
is very good even for moderately sized values of m and n, and we use it frequently
throughout this chapter.
We can generalize the preceding argument to find the expected fraction of bins with
r balls for any constant r. The probability that a given bin has r balls is

\binom{m}{r}(1/n)^r(1 − 1/n)^{m−r} = (1/r!) · (m(m − 1) · · · (m − r + 1)/n^r) · (1 − 1/n)^{m−r}.

When m and n are large compared to r, the second factor on the right-hand side is
approximately (m/n)^r, and the third factor is approximately e^{−m/n}. Hence the probability pr that a given bin has r balls is approximately

pr ≈ e^{−m/n}(m/n)^r/r!,   (5.2)

and the expected number of bins with exactly r balls is approximately n·pr. We formalize
this relationship in Section 5.3.1.
The previous calculation naturally leads us to consider the following distribution.
Deinition 5.1: A discrete Poisson random variable X with parameter μ is given by
the following probability distribution on j = 0, 1, 2, . . . :
e−μ μ j
Pr(X = j) = .
j!
(Note that Poisson random variables differ from Poisson trials, discussed in Sec-
tion 4.2.1.)
Let us verify that the definition gives a proper distribution in that the probabilities
sum to 1:

Σ_{j=0}^{∞} Pr(X = j) = Σ_{j=0}^{∞} e^{−μ}μ^j/j! = e^{−μ} Σ_{j=0}^{∞} μ^j/j! = 1,

where we have used the Taylor expansion e^x = Σ_{j=0}^{∞} x^j/j!.
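The connection suggested by Eqn. (5.2) can also be checked numerically (an illustrative Python sketch): the binomial distribution B(m, 1/n) of a single bin's load is close to the Poisson distribution with mean m/n.

    import math

    def poisson_pmf(mu, j):
        return math.exp(-mu) * mu**j / math.factorial(j)

    def binomial_pmf(m, p, j):
        return math.comb(m, j) * p**j * (1 - p)**(m - j)

    m, n = 1000, 500                 # load of one bin when m balls go into n bins
    for j in range(5):
        print(j, binomial_pmf(m, 1 / n, j), poisson_pmf(m / n, j))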
e^x(1 − x²) ≤ 1 + x ≤ e^x,   (5.3)

which follows from the Taylor series expansion of e^x. (This is left as Exercise 5.7.)
Then
Pr(Xn = k) ≤ (n^k/k!) p^k (1 − p)^n/(1 − p)^k ≤ ((np)^k/k!) · e^{−pn}/(1 − pk).

The second line follows from the first by Eqn. (5.3) and the fact that (1 − p)^k ≥ 1 − pk
for k ≥ 0. Also,

Pr(Xn = k) ≥ ((n − k + 1)^k/k!) p^k (1 − p)^n
≥ (((n − k + 1)p)^k/k!) e^{−pn}(1 − p²)^n
≥ (e^{−pn}((n − k + 1)p)^k/k!)(1 − p²n),

where in the second inequality we applied Eqn. (5.3) with x = −p.
Combining, we have

(e^{−pn}(np)^k/k!) · 1/(1 − pk) ≥ Pr(Xn = k) ≥ (e^{−pn}((n − k + 1)p)^k/k!)(1 − p²n).

In the limit, as n approaches infinity, p approaches zero because the limiting value of
pn is the constant λ. Hence 1/(1 − pk) approaches 1, 1 − p²n approaches 1, and the
difference between (n − k + 1)p and np approaches 0. It follows that

lim_{n→∞} (e^{−pn}(np)^k/k!) · 1/(1 − pk) = e^{−λ}λ^k/k!

and

lim_{n→∞} (e^{−pn}((n − k + 1)p)^k/k!)(1 − p²n) = e^{−λ}λ^k/k!.

Since lim_{n→∞} Pr(Xn = k) lies between these two values, the theorem follows.
The probability that Yi^{(m)} = ki is e^{−m/n}(m/n)^{ki}/ki!, since the Yi^{(m)} are independent Poisson random variables with mean m/n. Also, by Lemma 5.2, the sum of the Yi^{(m)} is itself
a Poisson random variable with mean m. Hence, writing k = k1 + k2 + · · · + kn,

Pr(Y1^{(m)} = k1 ∩ Y2^{(m)} = k2 ∩ · · · ∩ Yn^{(m)} = kn | Σ_{i=1}^{n} Yi^{(m)} = k)
= (Π_{i=1}^{n} e^{−m/n}(m/n)^{ki}/ki!)/(e^{−m}m^k/k!)
= k!/((k1!)(k2!) · · · (kn!)n^k),

proving the theorem.
With this relationship between the two distributions, we can prove strong results about
any function on the loads of the bins.
Theorem 5.7: Let f(x1, . . . , xn) be a nonnegative function. Then

E[f(X1^{(m)}, . . . , Xn^{(m)})] ≤ e√m · E[f(Y1^{(m)}, . . . , Yn^{(m)})].   (5.4)
Proof: We have that

E[f(Y1^{(m)}, . . . , Yn^{(m)})] = Σ_{k=0}^{∞} E[f(Y1^{(m)}, . . . , Yn^{(m)}) | Σ_{i=1}^{n} Yi^{(m)} = k] Pr(Σ_{i=1}^{n} Yi^{(m)} = k)
≥ E[f(Y1^{(m)}, . . . , Yn^{(m)}) | Σ_{i=1}^{n} Yi^{(m)} = m] Pr(Σ_{i=1}^{n} Yi^{(m)} = m)
= E[f(X1^{(m)}, . . . , Xn^{(m)})] Pr(Σ_{i=1}^{n} Yi^{(m)} = m),

where the last equality follows from the fact that the joint distribution of the Yi^{(m)} given
Σ_{i=1}^{n} Yi^{(m)} = m is exactly that of the Xi^{(m)}, as shown in Theorem 5.6. Since Σ_{i=1}^{n} Yi^{(m)}
is Poisson distributed with mean m, we now have

E[f(Y1^{(m)}, . . . , Yn^{(m)})] ≥ E[f(X1^{(m)}, . . . , Xn^{(m)})] · m^m e^{−m}/m!.

We use the following loose bound on m!, which we prove as Lemma 5.8:

m! < e√m (m/e)^m.

This yields

E[f(Y1^{(m)}, . . . , Yn^{(m)})] ≥ E[f(X1^{(m)}, . . . , Xn^{(m)})] · 1/(e√m),

and the theorem is proven.
We prove the upper bound we used for factorials, which closely matches the loose lower
bound we used in Lemma 5.1.
Lemma 5.8:

n! ≤ e√n (n/e)^n.   (5.5)
Proof: We use the fact that

ln(n!) = Σ_{i=1}^{n} ln i.

Since ln x is concave (its second derivative is −1/x², which is always negative), the
trapezoidal approximation Σ_{i=1}^{n} ln i − (ln n)/2 underestimates the corresponding integral. Therefore,

∫_1^n ln x dx ≥ Σ_{i=1}^{n} ln i − (ln n)/2,

or, equivalently,

n ln n − n + 1 ≥ ln(n!) − (ln n)/2.

The result now follows simply by exponentiating.
Theorem 5.7 holds for any nonnegative function on the number of balls in the bins. In
particular, if the function is the indicator function that is 1 if some event occurs and 0
otherwise, then the theorem gives bounds on the probability of events. Let us call the
scenario in which the number of balls in the bins are taken to be independent Poisson
random variables with mean m/n the Poisson case, and the scenario where m balls are
thrown into n bins independently and uniformly at random the exact case.
Corollary 5.9: Any event that takes place with probability p in the Poisson case takes
place with probability at most pe√m in the exact case.
Proof: Let f be the indicator function of the event. In this case, E[ f ] is just the
probability that the event occurs, and the result follows immediately from Theorem
5.7.
This is a quite powerful result. It says that any event that happens with small proba-
bility in the Poisson case also happens with small probability in the exact case, where
balls are thrown into bins. Since in the analysis of algorithms we often want to show
that certain events happen with small probability, this result says that we can utilize an
109
balls, bins, and random graphs
analysis of the Poisson approximation to obtain a bound for the exact case. The Pois-
son approximation is easier to analyze because the numbers of balls in each bin are
independent random variables.1
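For example (an illustrative sketch with made-up numbers): an event with probability 10⁻⁸ in the Poisson case translates into an exact-case bound via Corollary 5.9 as follows.

    import math

    def exact_case_bound(p_poisson, m):
        # Corollary 5.9: exact-case probability is at most p * e * sqrt(m)
        return p_poisson * math.e * math.sqrt(m)

    print(exact_case_bound(1e-8, 10**6))  # about 2.7e-5 for m = 10^6 balls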
We can actually do even a little bit better in many natural cases. Part of the proof of
the following theorem is outlined in Exercises 5.14 and 5.15.
Theorem 5.10: Let f(x1, . . . , xn) be a nonnegative function such that
E[f(X1^{(m)}, . . . , Xn^{(m)})] is either monotonically increasing or monotonically decreasing
in m. Then

E[f(X1^{(m)}, . . . , Xn^{(m)})] ≤ 2E[f(Y1^{(m)}, . . . , Yn^{(m)})].   (5.6)
The following corollary is immediate.
Corollary 5.11: Let E be an event whose probability is either monotonically increas-
ing or monotonically decreasing in the number of balls. If E has probability p in the
Poisson case, then E has probability at most 2p in the exact case.
To demonstrate the utility of this corollary, we again consider the maximum load prob-
lem for the case m = n. We have shown via a union bound argument that the maximum
load is at most 3 ln n/ ln ln n with high probability. Using the Poisson approximation,
we prove the following almost-matching lower bound on the maximum load.
Lemma 5.12: When n balls are thrown independently and uniformly at random into
n bins, the maximum load is at least ln n/ ln ln n with probability at least 1 − 1/n for n
sufficiently large.
Proof: In the Poisson case, the probability that bin 1 has load at least M = ln n/ ln ln n
is at least 1/(eM!), which is the probability it has load exactly M. In the Poisson case,
all bins are independent, so the probability that no bin has load at least M is at
most

(1 − 1/(eM!))^n ≤ e^{−n/(eM!)}.

We now need to choose M so that e^{−n/(eM!)} ≤ n^{−2}, for then (by Theorem 5.7) we will
have that the probability that the maximum load is not at least M in the exact case is at
most e√n/n² < 1/n. This will give the lemma. Because the maximum load is clearly
monotonically increasing in the number of balls, we could also apply the slightly better
Theorem 5.10, but this would not affect the argument substantially.
It therefore suffices to show that M! ≤ n/(2e ln n), or equivalently that ln M! ≤ ln n −
ln ln n − ln(2e). From our bound of Eqn. (5.5), it follows that

M! ≤ e√M (M/e)^M ≤ M(M/e)^M
1 There are other ways to handle the dependencies in the balls-and-bins model. In Chapter 13 we describe a more
general way to deal with dependencies (using martingales) that applies here. Also, there is a theory of negative
dependence that applies to balls-and-bins problems that also allows these dependencies to be dealt with nicely.
when n (and hence M = ln n/ ln ln n) are suitably large. Hence, for n suitably large,

ln M! ≤ M ln M − M + ln M
= (ln n/ ln ln n)(ln ln n − ln ln ln n) − ln n/ ln ln n + (ln ln n − ln ln ln n)
≤ ln n − ln n/ ln ln n
≤ ln n − ln ln n − ln(2e),

where in the last two inequalities we have used the fact that ln ln n = o(ln n/ ln ln n).
Since all bins are independent under the Poisson approximation, the probability that no
bin is empty is
(1 − e^{−c}/n)^n ≈ e^{−e^{−c}}.
The last approximation is appropriate in the limit as n grows large, so we apply it here.
To show the Poisson approximation is accurate, we undertake the following steps.
Consider the experiment where each bin has a Poisson number of balls, each with mean
ln n + c. Let E be the event that no bin is empty, and let X be the number of balls thrown.
We have seen that

lim_{n→∞} Pr(E) = e^{−e^{−c}}.
That is, the difference between our experiment coming up with exactly m balls or just
almost m balls makes an asymptotically negligible difference in the probability that
every bin has a ball. With these two facts, Eqn. (5.7) becomes
Pr(E) = Pr(E | |X − m| ≤ √(2m ln m)) · Pr(|X − m| ≤ √(2m ln m))
+ Pr(E | |X − m| > √(2m ln m)) · Pr(|X − m| > √(2m ln m))
= Pr(E | |X − m| ≤ √(2m ln m)) · (1 − o(1)) + o(1)
= Pr(E | X = m)(1 − o(1)) + o(1),

and hence

lim_{n→∞} Pr(E) = lim_{n→∞} Pr(E | X = m).
But from Theorem 5.6, the quantity on the right is equal to the probability that every
bin has at least one ball when m balls are thrown randomly, since conditioning on m
total balls with the Poisson approximation is equivalent to throwing m balls randomly
into the n bins. As a result, the theorem follows once we have shown these two facts.
To show that Pr(|X − m| > √(2m ln m)) is o(1), consider that X is a Poisson random variable with mean m, since it is a sum of independent Poisson random variables.
We use the Chernoff bound for the Poisson distribution (Theorem 5.4) to bound this
Another possibility is to place the words into bins and then search the appropriate bin
for the word. The words in a bin would be represented by a linked list. The placement
of words into bins is accomplished by using a hash function. A hash function f from a
universe U into a range [0, n − 1] can be thought of as a way of placing items from the
universe into n bins. Here the universe U would consist of possible password strings.
The collection of bins is called a hash table. This approach to hashing is called chain
hashing, since items that fall in the same bin are chained together in a linked list.
Using a hash table turns the dictionary problem into a balls-and-bins problem. If our
dictionary of unacceptable passwords consists of m words and the range of the hash
function is [0, n − 1], then we can model the distribution of words in bins with the
same distribution as m balls placed randomly in n bins. We are making a rather strong
assumption by presuming that our hash function maps words into bins in a fashion
that appears random, so that the location of each word is independent and identically
distributed. There is a great deal of theory behind designing hash functions that appear
random, and we will not delve into that theory here. We simply model the problem by
assuming that hash functions are random. In other words, we assume that (a) for each
x ∈ U, the probability that f(x) = j is 1/n (for 0 ≤ j ≤ n − 1) and that (b) the values
of f(x) for each x are independent of each other. Notice that this does not mean that
every evaluation of f(x) yields a different random answer! The value of f(x) is fixed
for all time; it is just equally likely to take on any value in the range.
Let us consider the search time when there are n bins and m words. To search for an
item, we first hash it to find the bin that it lies in and then search sequentially through
the linked list for it. If we search for a word that is not in our dictionary, the expected
number of words in the bin the word hashes to is m/n. If we search for a word that is in
our dictionary, the expected number of other words in that word’s bin is (m − 1)/n, so
the expected number of words in the bin is 1 + (m − 1)/n. If we choose n = m bins for
our hash table, then the expected number of words we must search through in a bin is
constant. If the hashing takes constant time, then the total expected time for the search
is constant.
The maximum time to search for a word, however, is proportional to the maximum
number of words in a bin. We have shown that when n = m this maximum load is
Θ(ln n/ ln ln n) with probability close to 1, and hence with high probability this is the
maximum search time in such a hash table. While this is still faster than the required
time for standard binary search, it is much slower than the average, which can be a
drawback for many applications.
Another drawback of chain hashing can be wasted space. If we use n bins for n items,
several of the bins will be empty, potentially leading to wasted space. The space wasted
can be traded off against the search time by making the average number of words per
bin larger than 1.
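A minimal chain-hashing dictionary along these lines (an illustrative Python sketch; Python's built-in hash stands in for the idealized random hash function assumed above):

    class ChainHashTable:
        def __init__(self, n):
            self.bins = [[] for _ in range(n)]   # one chain (list) per bin

        def _bin(self, word):
            return hash(word) % len(self.bins)   # stand-in for a random hash function

        def insert(self, word):
            self.bins[self._bin(word)].append(word)

        def contains(self, word):
            # sequential search within the single bin the word hashes to
            return word in self.bins[self._bin(word)]

    table = ChainHashTable(1 << 16)
    table.insert("password123")
    print(table.contains("password123"), table.contains("correct horse"))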
Suppose we use a hash function to map each word into a 32-bit string. This
string will serve as a short fingerprint for the word; just as a fingerprint is a succinct way
of identifying people, the fingerprint string is a succinct way of identifying a word. We
keep the fingerprints in a sorted list. To check if a proposed password is unacceptable,
we calculate its fingerprint and look for it on the list, say by a binary search.2 If the
fingerprint is on the list, we declare the password unacceptable.
In this case, our password checker may not give the correct answer! It is possible for
a user to input an acceptable password, only to have it rejected because its fingerprint
matches the fingerprint of an unacceptable password. Hence there is some chance that
hashing will yield a false positive: it may falsely declare a match when there is not an
actual match. The problem is that – unlike fingerprints for human beings – our fingerprints do not uniquely identify the associated word. This is the only type of mistake this
algorithm can make; it does not allow a password that is in the dictionary of unsuitable
passwords. In the password application, allowing false positives means our algorithm
is overly conservative, which is probably acceptable. Letting easily cracked passwords
through, however, would probably not be acceptable.
To place the problem in a more general context, we describe it as an approximate
set membership problem. Suppose we have a set S = {s1 , s2 , . . . , sm } of m elements
from a large universe U. We would like to represent the elements in such a way that
we can quickly answer queries of the form “is x an element of S?” We would also like
the representation to take as little space as possible. In order to save space, we would
be willing to allow occasional mistakes in the form of false positives. Here the unal-
lowable passwords correspond to our set S.
How large should the range of the hash function used to create the fingerprints be?
Specifically, if we are working with bits, how many bits should we use to create a
fingerprint? Obviously, we want to choose the number of bits that gives an acceptable
probability for a false positive match. The probability that an acceptable password has a
fingerprint that is different from any specific unallowable password in S is (1 − 1/2^b).
It follows that if the set S has size m and if we use b bits for the fingerprint, then
the probability of a false positive for an acceptable password is 1 − (1 − 1/2^b)^m ≥
1 − e^{−m/2^b}. If we want this probability of a false positive to be less than a constant c,
we need

e^{−m/2^b} ≥ 1 − c,

which implies that

b ≥ log₂(m/ln(1/(1 − c))).
That is, we need b = Ω(log₂ m) bits. On the other hand, if we use b = 2 log₂ m bits,
then the probability of a false positive falls to

1 − (1 − 1/m²)^m < 1/m.
2 In this case the fingerprints will be uniformly distributed over all 32-bit strings. There are faster algorithms
for searching over sets of numbers with this distribution, just as Bucket sort allows faster sorting than standard
comparison-based sorting when the elements to be sorted are from a uniform distribution, but we will not concern
ourselves with this point here.
In our example, if our dictionary has 2^16 = 65,536 words, then using 32 bits when
hashing yields a false positive probability of just less than 1/65,536.
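Both calculations can be reproduced directly (an illustrative Python sketch):

    def bits_needed(m, c):
        # smallest b with 1 - (1 - 2**-b)**m <= c, per the derivation above
        b = 1
        while 1 - (1 - 2**-b)**m > c:
            b += 1
        return b

    m = 2**16
    print(bits_needed(m, 0.02))      # roughly log2(m / ln(1/0.98)), so 22 bits
    print(1 - (1 - 2**-32)**m)       # with 32 bits: just under 1/65,536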
We let f = (1 − e^{−km/n})^k = (1 − p)^k. From now on, for convenience we use the
asymptotic approximations p and f to represent (respectively) the probability that a
bit in the Bloom filter is 0 and the probability of a false positive.
Suppose that we are given m and n and wish to optimize the number of hash functions k in order to minimize the false positive probability f. There are two competing
forces: using more hash functions gives us more chances to find a 0-bit for an element
that is not a member of S, but using fewer hash functions increases the fraction of 0-bits
in the array. The optimal number of hash functions that minimizes f as a function of k
is easily found by taking the derivative. Let g = k ln(1 − e^{−km/n}), so that f = e^g and minimizing the false positive probability f is equivalent to minimizing g with respect to k. We
find

dg/dk = ln(1 − e^{−km/n}) + (km/n) · e^{−km/n}/(1 − e^{−km/n}).

It is easy to check that the derivative is zero when k = (ln 2) · (n/m) and that this
point is a global minimum. In this case the false positive probability f is (1/2)^k ≈
(0.6185)^{n/m}. The false positive probability falls exponentially in n/m, the number of
bits used per item. In practice, of course, k must be an integer, so the best possible
choice of k may lead to a slightly higher false positive rate.
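A toy Bloom filter along these lines (an illustrative Python sketch; deriving the k hash values from two halves of an MD5 digest is an implementation convenience of ours, not the book's construction):

    import hashlib
    import math

    class BloomFilter:
        def __init__(self, n_bits, k):
            self.bits = bytearray((n_bits + 7) // 8)
            self.n, self.k = n_bits, k

        def _positions(self, item):
            h = hashlib.md5(item.encode()).digest()
            h1 = int.from_bytes(h[:8], "big")
            h2 = int.from_bytes(h[8:], "big")
            # double hashing stands in for k independent random hash functions
            return [(h1 + i * h2) % self.n for i in range(self.k)]

        def add(self, item):
            for pos in self._positions(item):
                self.bits[pos // 8] |= 1 << (pos % 8)

        def might_contain(self, item):
            return all(self.bits[pos // 8] >> (pos % 8) & 1
                       for pos in self._positions(item))

    m, c = 10000, 8                  # m items, n = cm bits
    k = round(math.log(2) * c)       # optimal k = (ln 2)(n/m), here 5 or 6
    bf = BloomFilter(c * m, k)
    bf.add("password123")
    print(bf.might_contain("password123"))   # True
    print(bf.might_contain("open sesame"))   # usually False; about 2% false positives

With n = 8m bits and k = 6, the predicted false positive probability (1 − e^{−km/n})^k is just over 0.02, matching the discussion below.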
A Bloom filter is like a hash table, but instead of storing set items we simply use one
bit to keep track of whether or not an item hashed to that location. If k = 1, we have
just one hash function and the Bloom filter is equivalent to a hashing-based fingerprint
system, where the list of the fingerprints is stored in a 0–1 bit array. Thus Bloom filters
can be seen as a generalization of the idea of hashing-based fingerprints. As we saw
when using fingerprints, to get even a small constant probability of a false positive
required Ω(log m) fingerprint bits per item. In many practical applications, Ω(log m)
bits per item can be too many. Bloom filters allow a constant probability of a false
positive while keeping n/m, the number of bits of storage required per item, constant.
For many applications, the small space requirements make a constant probability of
error acceptable. For example, in the password application, we may be willing to accept
false positive rates of 1% or 2%.
Bloom filters are highly effective even if n = cm for a small constant c, such as
c = 8. In this case, when k = 5 or k = 6 the false positive probability is just over 0.02.
This contrasts with the approach of hashing each element into Θ(log m) bits. Bloom
filters require significantly fewer bits while still achieving a very good false positive
probability.
It is also interesting to frame the optimization another way. Consider f, the probability of a false positive, as a function of p. We find

f = (1 − p)^k = (1 − p)^{(−ln p)(n/m)} = (e^{−ln(p) ln(1−p)})^{n/m}.   (5.8)
From the symmetry of this expression, it is easy to check that p = 1/2 minimizes the
false positive probability f. Hence the optimal results are achieved when each bit of the
Bloom ilter is 0 with probability 1/2. An optimized Bloom ilter looks like a random
bit string.
To conclude, we reconsider our assumption that the fraction of entries that are still 0
after all of the elements of S are hashed into the Bloom filter is p. Each bit in the array
can be thought of as a bin, and hashing an item is like throwing a ball. The fraction of
entries that are still 0 after all of the elements of S are hashed is therefore equivalent to
the fraction of empty bins after mk balls are thrown into n bins. Let X be the number of
such bins when mk balls are thrown. The expected fraction of such bins is

p′ = (1 − 1/n)^{km}.

The events of different bins being empty are not independent, but we can apply
Corollary 5.9, along with the Chernoff bound of Eqn. (4.6), to obtain

Pr(|X − np′| ≥ εn) ≤ 2e√n e^{−nε²/3p′}.

Actually, Corollary 5.11 applies as well, since the number of 0-entries – which corresponds to the number of empty bins – is monotonically decreasing in the number of
balls thrown. The bound tells us that the fraction of empty bins is close to p′ (when
n is reasonably large) and that p′ is very close to p. Our assumption that the fraction
of 0-entries in the Bloom filter is p is therefore quite accurate for predicting actual
performance.
If each user has an identifying name or number, hashing provides one possible solution. Hash each user’s identifier into b bits, and then take the permutation given by
the sorted order of the resulting numbers. That is, the user whose identifier gives the
smallest number when hashed comes first, and so on. For this approach to work, we do
not want two users to hash to the same value, since then we must decide again how to
order these users.
If b is sufficiently large, then with high probability the users will all obtain distinct
hash values. One can analyze the probability that two hash values collide by using the
analysis from Section 5.1 for the birthday paradox; hash values correspond to birthdays.
We here use a simpler analysis based just on using a union bound. There are \binom{n}{2} pairs
of users. The probability that any specific pair has the same hash value is 1/2^b. Hence
the probability that any pair has the same hash value is at most

\binom{n}{2} · 1/2^b = n(n − 1)/2^{b+1}.
In the Gn,p model, a given graph G with m edges arises with probability

p^m (1 − p)^{\binom{n}{2} − m}.

One way to generate a random graph in Gn,p is to consider each of the \binom{n}{2} possible edges
in some order and then independently add each edge to the graph with probability p.
The expected number of edges in the graph is therefore $\binom{n}{2}p$, and each vertex has expected degree $(n-1)p$. When $N = \binom{n}{2}p$, the number of edges in a random graph in $G_{n,p}$ is concentrated around N, and conditioned on a graph from $G_{n,p}$
having N edges, that graph is uniform over all the graphs from Gn,N . The relationship
is similar to the relationship between throwing m balls into n bins and having each bin
have a Poisson distributed number of balls with mean m/n.
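A minimal sketch of this edge-by-edge sampling procedure in Python (ours, for illustration only):

    import random

    def sample_gnp(n, p):
        # Consider each of the C(n,2) possible edges and include it
        # independently with probability p.
        edges = []
        for u in range(n):
            for v in range(u + 1, n):
                if random.random() < p:
                    edges.append((u, v))
        return edges

    g = sample_gnp(100, 0.05)
    print(len(g))  # concentrated around C(100,2) * 0.05 = 247.5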
Here, for example, is one way of formalizing the relationship between the $G_{n,p}$ and
Gn,N models. A graph property is a property that holds for a graph regardless of how
the vertices are labeled, so it holds for all possible isomorphisms of the graph. We say
that a graph property is monotone increasing if whenever the property holds for G =
(V, E ) it holds also for any graph G′ = (V, E ′ ) with E ⊆ E ′ ; monotone decreasing graph
properties are defined similarly. For example, the property that a graph is connected
is a monotone increasing graph property, as is the property that a graph contains a
connected component of at least k vertices for any particular value of k. The property
that a graph is a tree, however, is not a monotone graph property, although the property
that the graph contains no cycles is a monotone decreasing graph property. We have
the following lemma:
Lemma 5.14: For a given monotone increasing graph property, let P(n, N) be the probability that the property holds for a graph in $G_{n,N}$ and P(n, p) the probability that it holds for a graph in $G_{n,p}$. Let $p^+ = (1+\epsilon)N/\binom{n}{2}$ and $p^- = (1-\epsilon)N/\binom{n}{2}$ for a constant $1 > \epsilon > 0$. Then

$P(n, p^-) - e^{-O(N)} \le P(n, N) \le P(n, p^+) + e^{-O(N)}.$
Proof: Let X be a random variable giving the number of edges that occur when a graph
is chosen from Gn,p− . Conditioned on X = k, a random graph from Gn,p− is equivalent
to a graph from Gn,k , since the k edges chosen are equally likely to be any subset of k
edges. Hence
$P(n, p^-) = \sum_{k=0}^{\binom{n}{2}} P(n, k) \Pr(X = k).$
In particular,
$P(n, p^-) = \sum_{k \le N} P(n, k) \Pr(X = k) + \sum_{k > N} P(n, k) \Pr(X = k).$
Also, for a monotone increasing graph property, P(n, k) ≤ P(n, N) for k ≤ N. Hence
P(n, p− ) ≤ Pr(X ≤ N)P(n, N) + Pr(X > N) ≤ P(n, N) + Pr(X > N).
However, $\Pr(X > N)$ can be bounded by a standard Chernoff bound; X is the sum of $\binom{n}{2}$ independent Bernoulli random variables, and hence by Theorem 4.4

$\Pr(X > N) = \Pr\left(X > \frac{1}{1-\epsilon}\,E[X]\right) \le \Pr(X > (1+\epsilon)E[X]) \le e^{-(1-\epsilon)\epsilon^2 N/3}.$
1−ǫ
Here we have used that $\frac{1}{1-\epsilon} > 1 + \epsilon$ for $0 < \epsilon < 1$.
Similarly,
$P(n, p^+) = \sum_{k < N} P(n, k) \Pr(X = k) + \sum_{k \ge N} P(n, k) \Pr(X = k),$
so
P(n, p+ ) ≥ Pr(X ≥ N)P(n, N) ≥ P(n, N) − Pr(X < N).
By Theorem 4.5,

$\Pr(X < N) = \Pr\left(X < \frac{1}{1+\epsilon}\,E[X]\right) \le \Pr\left(X < \left(1 - \frac{\epsilon}{2}\right)E[X]\right) \le e^{-(1+\epsilon)\epsilon^2 N/8},$

where here we have used that $\frac{1}{1+\epsilon} < 1 - \epsilon/2$ for $0 < \epsilon < 1$.
Figure 5.2: The rotation of the path v1 , v2 , v3 , v4 , v5 , v6 with the edge (v6 , v3 ) yields a new path
v1 , v2 , v3 , v6 , v5 , v4 .
is an NP-hard problem. However, our analysis of this algorithm shows that finding a Hamiltonian cycle is not hard for suitably randomly selected graphs, even though it may be hard to solve in general.
Our algorithm will make use of a simple operation called a rotation. Let G be an
undirected graph. Suppose that

$P = v_1, v_2, \ldots, v_k$

is a simple path in G and that $(v_k, v_i)$ is an edge of G. Then

$P' = v_1, v_2, \ldots, v_i, v_k, v_{k-1}, \ldots, v_{i+2}, v_{i+1}$

is also a simple path, which we refer to as the rotation of P with the rotation edge $(v_k, v_i)$; see Figure 5.2.
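In code, a rotation is simply a suffix reversal. A Python sketch (zero-indexed; ours, not the book's):

    def rotate(path, i):
        # Rotation of the simple path v_1,...,v_k with rotation edge (v_k, v_i):
        # keep v_1,...,v_i and reverse the remaining suffix.
        return path[: i + 1] + path[i + 1 :][::-1]

    # The example of Figure 5.2: rotation edge (v6, v3).
    print(rotate(["v1", "v2", "v3", "v4", "v5", "v6"], 2))
    # ['v1', 'v2', 'v3', 'v6', 'v5', 'v4']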
We first consider a simple, natural algorithm that proves challenging to analyze. We
assume that our input is presented as a list of adjacent edges for each vertex in the graph,
with the edges of each list being given in a random order according to independent and
uniform random permutations. Initially, the algorithm chooses an arbitrary vertex to
start the path; this is the initial head of the path. The head is always one of the endpoints
of the path. From this point on, the algorithm either “grows” the path deterministically
from the head, or rotates the path – as long as there is an adjacent edge remaining on
the head’s list. See Algorithm 5.1.
The difficulty in analyzing this algorithm is that, once the algorithm views some edges in the edge lists, the distribution of the remaining edges is conditioned on the edges the algorithm has already seen. We circumvent this difficulty by considering a modified algorithm that, though less efficient, avoids this conditioning issue and so is easier to analyze for the random graphs we consider. See Algorithm 5.2. Each vertex v keeps two lists. The list used-edges(v) contains edges adjacent to v that have
been used in the course of the algorithm while v was the head; initially this list is
empty. The list unused-edges(v) contains other edges adjacent to v that have not been
used.
We initially analyze the algorithm assuming a specific model for the initial unused-edges lists. We subsequently relate this model to the $G_{n,p}$ model for random graphs.
Assume that each of the n − 1 possible edges connected to a vertex v is initially on the
unused-edges list for vertex v independently with some probability q. We also assume
these edges are in a random order. One way to think of this is that, before beginning
the algorithm, we create the unused-edges list for each vertex v by inserting each pos-
sible edge (v, u) with probability q; we think of the corresponding graph G as being
the graph including all edges that were inserted on some unused-edges list. Notice that
this means an edge (v, u) could initially be on the unused-edges list for v but not for
u. Also, when an edge (v, u) is first used in the algorithm, if v is the head then it is
removed just from the unused-edges list of v; if the edge is on the unused-edges list for
u, it remains on this list.
By choosing the rotation edge from either the used-edges list or the unused-edges
list with appropriate probabilities and then reversing the path with some small proba-
bility in each step, we modify the rotation process so that the next head of the list is
chosen uniformly at random from among all vertices of the graph. Once we establish
this property, the progress of the algorithm can be analyzed through a straightforward
application of our analysis of the coupon collector’s problem.
The modified algorithm appears wasteful; reversing the path or rotating with one of
the used edges cannot increase the path length. Also, we may not be taking advantage
of all the possible edges of G at each step. The advantage of the modiied algorithm is
that it proves easier to analyze, owing to the following lemma.
Lemma 5.16: Suppose the modified Hamiltonian cycle algorithm is run on a graph chosen using the described model. Let $V_t$ be the head vertex after the tth step. Then, for any vertex u, as long as at the tth step there is at least one unused edge available at the head vertex,

$\Pr(V_{t+1} = u) = \frac{1}{n}.$

That is, the head vertex can be thought of as being chosen uniformly at random from all vertices at each step, regardless of the history of the process.
If u = vi+1 is a vertex on the path but (vk , vi ) is not in used-edges(vk ), then the
probability that Vt+1 = u is the probability that the edge (vk , vi ) is chosen from unused-
edges(vk) as the next rotation edge, which is

$\left(1 - \frac{1}{n} - \frac{|\text{used-edges}(v_k)|}{n}\right) \frac{1}{n - |\text{used-edges}(v_k)| - 1} = \frac{1}{n}. \quad (5.9)$
Finally, if u is not on the path, then the probability that $V_{t+1} = u$ is the probability that the edge $(v_k, u)$ is chosen from unused-edges(vk). But this has the same probability as in Eqn. (5.9).
For Algorithm 5.2, the problem of finding a Hamiltonian path looks exactly like the coupon collector's problem; the probability of finding a new vertex to add to the path
when there are k vertices left to be added is k/n. Once all the vertices are on the
path, the probability that a cycle is closed in each rotation is 1/n. Hence, if no list
of unused-edges is exhausted then we can expect a Hamiltonian path to be formed in
about O(n ln n) rotations, with about another O(n ln n) rotations to close the path to
form a Hamiltonian cycle. More concretely, we can prove the following theorem.
Theorem 5.17: Suppose the input to the modified Hamiltonian cycle algorithm initially has unused-edge lists where each edge (v, u) with u ≠ v is placed on v's list independently with probability $q \ge 20 \ln n/n$. Then the algorithm successfully finds a Hamiltonian cycle in O(n ln n) iterations of the repeat loop (step 2) with probability $1 - O(n^{-1})$.
Note that we did not assume that the input random graph has a Hamiltonian cycle. A
corollary of the theorem is that, with high probability, a random graph chosen in this
way has a Hamiltonian cycle.
Proof of Theorem 5.17: Consider the following two events.
E1 : The algorithm ran for 3n ln n steps with no unused-edges list becoming empty, but
it failed to construct a Hamiltonian cycle.
E2: At least one unused-edges list became empty during the first 3n ln n iterations of
the loop.
For the algorithm to fail, either event E1 or E2 must occur. We first bound the probability of E1. Lemma 5.16 implies that, as long as there is no empty unused-edges list in the first 3n ln n iterations of step 2 of Algorithm 5.2, in each iteration the next head of the path is uniform among the n vertices of the graph. To bound E1, we therefore consider the probability that more than 3n ln n iterations are required to find a Hamiltonian cycle when the head is chosen uniformly at random each iteration.
The probability that the algorithm takes more than 2n ln n iterations to find a Hamiltonian path is exactly the probability that a coupon collector's problem on n types requires more than 2n ln n coupons. The probability that any specific coupon type has not been found among 2n ln n random coupons is

$\left(1 - \frac{1}{n}\right)^{2n\ln n} \le e^{-2\ln n} = \frac{1}{n^2}.$
By the union bound, the probability that any coupon type is not found is at most
1/n.
In order to complete a Hamiltonian path to a cycle the path must close, which it does at each step with probability 1/n. Hence the probability that the path does not become a cycle within the next n ln n iterations is

$\left(1 - \frac{1}{n}\right)^{n\ln n} \le e^{-\ln n} = \frac{1}{n},$

so $\Pr(E_1) \le \frac{2}{n}$. A separate argument, bounding the probability that any vertex's unused-edges list empties during the first 3n ln n iterations, gives

$\Pr(E_2) \le \frac{2}{n}.$
In total, the probability that the algorithm fails to find a Hamiltonian cycle in 3n ln n iterations is bounded by

$\Pr(E_1) + \Pr(E_2) \le \frac{4}{n}.$
We did not make an effort to optimize the constants in the proof. There is, how-
ever, a clear trade-off; with more edges, one could achieve a lower probability of
failure.
We are left with showing how our algorithm can be applied to graphs in $G_{n,p}$. We show that, as long as p is known, we can partition the edges of the graph into edge lists that satisfy the requirements of Theorem 5.17.

Corollary 5.18: If $p \ge 40 \ln n/n$, then by appropriately initializing the unused-edges lists, the modified Hamiltonian cycle algorithm finds a Hamiltonian cycle on a graph chosen randomly from $G_{n,p}$ with probability $1 - O(n^{-1})$.

Proof: We partition the edges of our input graph from $G_{n,p}$ as follows. Let $q \in [0, 1]$ be
such that p = 2q − q2 . Consider any edge (u, v) in the input graph. We execute exactly
one of the following three possibilities: with probability q(1 − q)/(2q − q2 ) we place
the edge on u’s unused-edges list but not on v’s; with probability q(1 − q)/(2q − q2 ) we
initially place the edge on v’s unused-edges list but not on u’s; and with the remaining
probability q2 /(2q − q2 ) the edge is placed on both unused-edges lists.
Now, for any possible edge (u, v), the probability that it is initially placed in the unused-edges list for v is

$p\left(\frac{q(1-q)}{2q-q^2} + \frac{q^2}{2q-q^2}\right) = q.$
Moreover, the probability that an edge (u, v) is initially placed on the unused-edges list for both u and v is $pq^2/(2q - q^2) = q^2$, so these two placements are independent events. Since each edge (u, v) is treated independently, this partitioning fulfills the requirements of Theorem 5.17 provided the resulting q is at least 20 ln n/n. When $p \ge (40 \ln n)/n$ we have $q \ge p/2 \ge (20 \ln n)/n$, and the result follows.
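A sketch of this partitioning step in Python (our illustration; the names are ours). Each input edge is placed on one or both unused-edges lists with the probabilities from the proof:

    import math
    import random

    def partition_edges(edges, p):
        q = 1.0 - math.sqrt(1.0 - p)           # solves p = 2q - q^2
        one_side = q * (1 - q) / (2 * q - q * q)
        unused = {}
        for (u, v) in edges:
            r = random.random()
            if r < one_side:                    # on u's list only
                unused.setdefault(u, []).append((u, v))
            elif r < 2 * one_side:              # on v's list only
                unused.setdefault(v, []).append((v, u))
            else:                               # probability q^2/(2q - q^2): both lists
                unused.setdefault(u, []).append((u, v))
                unused.setdefault(v, []).append((v, u))
        return unused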
In Exercise 5.27, we consider how to use Algorithm 5.2 even in the case where p is not
known in advance, so that the edge lists must be initialized without knowledge of p.
5.7. Exercises
Exercise 5.2: Suppose that Social Security numbers were issued uniformly at random,
with replacement. That is, your Social Security number would consist of just nine ran-
domly generated digits, and no check would be made to ensure that the same number
was not issued twice. Sometimes, the last four digits of a Social Security number are
used as a password. How many people would you need to have in a room before it was
more likely than not that two had the same last four digits? How many numbers could
be issued before it would be more likely than not that there is a duplicate number? How
would you answer these two questions if Social Security numbers had 13 digits? Try
to give exact numerical answers.
Exercise 5.3: Suppose that balls are thrown randomly into n bins. Show, for some constant $c_1$, that if there are $c_1\sqrt{n}$ balls then the probability that no two land in the same bin is at most 1/e. Similarly, show for some constant $c_2$ (and sufficiently large n) that, if there are $c_2\sqrt{n}$ balls, then the probability that no two land in the same bin is at least 1/2. Make these constants as close to optimal as possible. Hint: You may want to use the facts that

$e^{-x} \ge 1 - x$

and

$e^{-x-x^2} \le 1 - x \quad \text{for } x \le \frac{1}{2}.$
Exercise 5.4: In a lecture hall containing 100 people, you consider whether or not
there are three people in the room who share the same birthday. Explain how to calculate
this probability exactly, using the same assumptions as in our previous analysis.
Exercise 5.5: Use the moment generating function of the Poisson distribution to com-
pute the second moment and the variance of the distribution.
Exercise 5.6: Let X be a Poisson random variable with mean μ, representing the num-
ber of errors on a page of this book. Each error is independently a grammatical error
with probability p and a spelling error with probability 1 − p. If Y and Z are random
variables representing the number of grammatical and spelling errors (respectively) on
a page of this book, prove that Y and Z are Poisson random variables with means μp
and μ(1 − p), respectively. Also, prove that Y and Z are independent.
Exercise 5.8: Suppose that n balls are thrown independently and uniformly at random
into n bins.
(a) Find the conditional probability that bin 1 has one ball given that exactly one ball fell into the first three bins.
(b) Find the conditional expectation of the number of balls in bin 1 under the condition
that bin 2 received no balls.
(c) Write an expression for the probability that bin 1 receives more balls than bin 2.
Exercise 5.9: Our analysis of Bucket sort in Section 5.2.2 assumed that n elements were chosen independently and uniformly at random from the range $[0, 2^k)$. Suppose instead that n elements are chosen independently from the range $[0, 2^k)$ according to a distribution with the property that any number $x \in [0, 2^k)$ is chosen with probability at most $a/2^k$ for some fixed constant a > 0. Show that, under these conditions, Bucket sort still requires linear expected time.
Exercise 5.10: Consider the probability that every bin receives exactly one ball when
n balls are thrown randomly into n bins.
(a) Give an upper bound on this probability using the Poisson approximation.
(b) Determine the exact probability of this event.
(c) Show that these two probabilities differ by a multiplicative factor that equals the
probability that a Poisson random variable with parameter n takes on the value n.
Explain why this is implied by Theorem 5.6.
Exercise 5.11: Consider throwing m balls into n bins, and for convenience let the
bins be numbered from 0 to n − 1. We say there is a k-gap starting at bin i if bins
i, i + 1, . . . , i + k − 1 are all empty.
Exercise 5.12: The following problem models a simple distributed system wherein
agents contend for resources but “back off” in the face of contention. Balls represent
agents, and bins represent resources.
The system evolves over rounds. Every round, balls are thrown independently and
uniformly at random into n bins. Any ball that lands in a bin by itself is served and
removed from consideration. The remaining balls are thrown again in the next round.
We begin with n balls in the first round, and we finish when every ball is served.
(a) If there are b balls at the start of a round, what is the expected number of balls at
the start of the next round?
(b) Suppose that every round the number of balls served was exactly the expected
number of balls to be served. Show that all the balls would be served in O(log log n)
rounds. (Hint: If $x_j$ is the expected number of balls left after j rounds, show and use that $x_{j+1} \le x_j^2/n$.)
Exercise 5.13: Suppose that we vary the balls-and-bins process as follows. For convenience let the bins be numbered from 0 to n − 1. There are $\log_2 n$ players. Each player randomly chooses a starting location ℓ uniformly from [0, n − 1] and then places one ball in each of the bins numbered $\ell \bmod n, \ell + 1 \bmod n, \ldots, \ell + n/\log_2 n - 1 \bmod n$. Argue that the maximum load in this case is only $O(\log\log n/\log\log\log n)$ with probability that approaches 1 as $n \to \infty$.
Exercise 5.15: (a) In Theorem 5.7 we showed that, for any nonnegative function f,

$E\left[f\left(Y_1^{(m)}, \ldots, Y_n^{(m)}\right)\right] \ge E\left[f\left(X_1^{(m)}, \ldots, X_n^{(m)}\right)\right] \Pr\left(\sum_{i=1}^n Y_i^{(m)} = m\right).$
Exercise 5.16: We consider another way to obtain Chernoff-like bounds in the setting of balls and bins without using Theorem 5.7. Consider n balls thrown randomly into n bins. Let $X_i = 1$ if the ith bin is empty and 0 otherwise. Let $X = \sum_{i=1}^n X_i$. Let $Y_i$, $i = 1, \ldots, n$, be independent Bernoulli random variables that are 1 with probability $p = (1 - 1/n)^n$. Let $Y = \sum_{i=1}^n Y_i$.
(a) Show that $E[X_1 X_2 \cdots X_k] \le E[Y_1 Y_2 \cdots Y_k]$ for any $k \ge 1$.
(b) Show that $E[e^{tX}] \le E[e^{tY}]$ for all $t \ge 0$. (Hint: Use the expansion for $e^x$ and compare $E[X^k]$ to $E[Y^k]$.)
(c) Derive a Chernoff bound for $\Pr(X \ge (1 + \delta)E[X])$.
Exercise 5.17: Let G be a random graph generated using the Gn,p model.
(a) A clique of k vertices in a graph is a subset of k vertices such that all $\binom{k}{2}$ edges between these vertices lie in the graph. For what value of p, as a function of n, is the expected number of cliques of five vertices in G equal to 1?
(b) A K3,3 graph is a complete bipartite graph with three vertices on each side. In other
words, it is a graph with six vertices and nine edges; the six distinct vertices are
arranged in two groups of three, and the nine edges connect each of the nine pairs
of vertices with one vertex in each group. For what value of p, as a function of n,
is the expected number of K3,3 subgraphs of G equal to 1?
(c) For what value of p, as a function of n, is the expected number of Hamiltonian
cycles in the graph equal to 1?
Exercise 5.18: Theorem 5.7 shows that any event that occurs with small probability
in the balls-and-bins setting where the number of balls in each bin is an independent
Poisson random variable also occurs with small probability in the standard balls-and-
bins model. Prove a similar statement for random graphs: every event that happens with small probability in the $G_{n,p}$ model also happens with small probability in the $G_{n,N}$ model for $N = \binom{n}{2}p$.
Exercise 5.21: (a) Let f (n) be the expected number of random edges that must be
added before an empty undirected graph with n vertices becomes connected. (Con-
nectedness is defined in Exercise 5.19.) That is, suppose that we start with a graph on
n vertices with zero edges and then repeatedly add an edge, chosen uniformly at ran-
dom from all edges not currently in the graph, until the graph becomes connected. If
Xn represents the number of edges added, then f (n) = E[Xn ].
Write a program to estimate f (n) for a given value of n. Your program should track
the connected components of the graph as you add edges until the graph becomes con-
nected. You will probably want to use a disjoint set data structure, a topic covered in
standard undergraduate algorithms texts. You should try n = 100, 200, 300, 400, 500,
600, 700, 800, 900, and 1000. Repeat each experiment 100 times, and for each value of
n compute the average number of edges needed. Based on your experiments, suggest a
function h(n) that you think is a good estimate for f (n).
(b) Modify your program for the problem in part (a) so that it also keeps track of
isolated vertices. Let g(n) be the expected number of edges added before there are no
more isolated vertices. What seems to be the relationship between f (n) and g(n)?
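A possible starting point for part (a) is sketched below (ours, with a simple disjoint-set structure; the helper names are hypothetical):

    import random

    def edges_until_connected(n):
        parent = list(range(n))
        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]   # path halving
                x = parent[x]
            return x
        components, added, present = n, 0, set()
        while components > 1:
            u, v = random.sample(range(n), 2)
            e = (min(u, v), max(u, v))
            if e in present:
                continue                        # only add edges not already in the graph
            present.add(e)
            added += 1
            ru, rv = find(u), find(v)
            if ru != rv:
                parent[ru] = rv
                components -= 1
        return added

    n = 500
    print(sum(edges_until_connected(n) for _ in range(100)) / 100)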
Exercise 5.22: In hashing with open addressing, the hash table is implemented as an array and there are no linked lists or chaining. Each entry in the array either contains one hashed item or is empty. The hash function defines, for each key k, a probe sequence h(k, 0), h(k, 1), . . . of table locations. To insert the key k, we first examine the sequence of table locations in the order defined by the key's probe sequence until we find an empty location; then we insert the item at that position. When searching for an item in the hash table, we examine the sequence of table locations in the order defined by the key's probe sequence until either the item is found or we have found an empty location
in the sequence. If an empty location is found, this means the item is not present in the
table.
An open-address hash table with 2n entries is used to store n items. Assume that the
table location h(k, j) is uniform over the 2n possible table locations and that all h(k, j)
are independent.
(a) Show that, under these conditions, the probability of an insertion requiring more than k probes is at most $2^{-k}$.
(b) Show that, for $i = 1, 2, \ldots, n$, the probability that the ith insertion requires more than $2 \log n$ probes is at most $1/n^2$.
Let the random variable $X_i$ denote the number of probes required by the ith insertion. You have shown in part (b) that $\Pr(X_i > 2 \log n) \le 1/n^2$. Let the random variable $X = \max_{1 \le i \le n} X_i$ denote the maximum number of probes required by any of the n insertions.
(c) Show that Pr(X > 2 log n) ≤ 1/n.
(d) Show that the expected length of the longest probe sequence is E[X] = O(log n).
Exercise 5.23: Bloom filters can be used to estimate set differences. Suppose you have a set X and I have a set Y, both with n elements. For example, the sets might represent our 100 favorite songs. We both create Bloom filters of our sets, using the same number of bits m and the same k hash functions. Determine the expected number of bits where our Bloom filters differ as a function of m, n, k, and |X ∩ Y|. Explain how this could be used as a tool to find people with the same taste in music more easily than comparing lists of songs directly.
Exercise 5.24: Suppose that we wanted to extend Bloom filters to allow deletions as well as insertions of items into the underlying set. We could modify the Bloom filter to be an array of counters instead of an array of bits. Each time an item is inserted into a Bloom filter, the counters given by the hashes of the item are increased by one. To delete an item, one can simply decrement the counters. To keep space small, the counters should be a fixed length, such as 4 bits.

Explain how errors can arise when using fixed-length counters. Assuming a setting where one has at most n elements in the set at any time, m counters, k hash functions, and counters with b bits, explain how to bound the probability that an error occurs over the course of t insertions or deletions.
Exercise 5.25: Suppose that you built a Bloom filter for a dictionary of words with $m = 2^b$ bits. A co-worker building an application wants to use your Bloom filter but has only $2^{b-1}$ bits available. Explain how your colleague can use your Bloom filter to avoid rebuilding a new Bloom filter using the original dictionary of words.
Exercise 5.26: For the leader election problem alluded to in Section 5.5.4, we have n users, each with an identifier. The hash function takes as input the identifier and outputs a b-bit hash value, and we assume that these values are independent and uniformly distributed. Each user hashes its identifier, and the leader is the user with the smallest hash value. Give lower and upper bounds on the number of bits b necessary to ensure that a unique leader is successfully chosen with probability p. Make your bounds as tight as possible.
Exercise 5.27: Consider Algorithm 5.2, the modified algorithm for finding Hamiltonian cycles. We have shown that the algorithm can be applied to find a Hamiltonian cycle with high probability in a graph chosen randomly from $G_{n,p}$, when p is known and sufficiently large, by initially placing edges in the edge lists appropriately. Argue that the algorithm can similarly be applied to find a Hamiltonian cycle with high probability on a graph chosen randomly from $G_{n,N}$ when $N = c_1 n \ln n$ for a suitably large constant $c_1$. Argue also that the modified algorithm can be applied even when p is not known in advance as long as p is at least $c_2 \ln n/n$ for a suitably large constant $c_2$.
5.8. An Exploratory Assignment

Part of the research process in random processes is first to understand what is going on
at a high level and then to use this understanding in order to develop formal mathemat-
ical proofs. In this assignment, you will be given several variations on a basic random
process. To gain insight, you should perform experiments based on writing code to
simulate the processes. (The code should be very short, a few pages at most.) After
the experiments, you should use the results of the simulations to guide you to make
conjectures and prove statements about the processes. You can apply what you have
learned up to this point, including probabilistic bounds and analysis of balls-and-bins
problems.
Consider a complete binary tree with $N = 2^n - 1$ nodes. Here n is the depth of the tree. Initially, all nodes are unmarked. Over time, via processes that we shall describe, nodes become marked.
All of the processes share the same basic form. We can think of the nodes as having
unique identifying numbers in the range of [1, N]. Each unit of time, I send you the
identifier of a node. When you receive a sent node, you mark it. Also, you invoke the
following marking rule, which takes effect before I send out the next node.
• If a node and its sibling are marked, its parent is marked.
• If a node and its parent are marked, its sibling is marked.
The marking rule is applied recursively as much as possible before the next node is sent. For example, in Figure 5.3, the marked nodes are filled in. The arrival of the node labeled by an X will allow you to mark the remainder of the nodes, as you apply the marking rule first up and then down the tree. Keep in mind that you always apply the marking rule as much as possible.
Now let us consider the different ways in which I might be sending you the nodes.
Process 1: Each unit of time, I send the identifier of a node chosen independently and
uniformly at random from all of the N nodes. Note that I might send you a node that
is already marked, and in fact I may send a useless node that I have already sent.
Process 2: Each unit of time I send the identifier of a node chosen uniformly at random
from those nodes that I have not yet sent. Again, a node that has already been marked
might arrive, but each node will be sent at most once.
Process 3: Each unit of time I send the identifier of a node chosen uniformly at random
from those nodes that you have not yet marked.
We want to determine how many time steps are needed before all the nodes are
marked for each of these processes. Begin by writing programs to simulate the sending
processes and the marking rule. Run each process ten times for each value of n in the
range [10, 20]. Present the data from your experiments in a clear, easy-to-read fashion
and explain your data suitably. A tip: You may find it useful to have your program print
out the last node that was sent before the tree became completely marked.
1. For the first process, prove that the expected number of nodes sent is $\Theta(N \log N)$. How well does this match your simulations?
2. For the second process, you should find that almost all N nodes must be sent before the tree is marked. Show that, with constant probability, at least $N - 2\sqrt{N}$ nodes must be sent.
3. The behavior of the third process might seem a bit unusual. Explain it with a proof.
After answering these questions, you may wish to consider other facts you could prove
about these processes.
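As a starting point, here is a deliberately simple (and unoptimized) Python sketch of Process 1 together with the marking rule; it is our illustration, and for n near 20 you would want an incremental version of apply_rule:

    import random

    def apply_rule(marked, N):
        # Nodes are numbered 1..N in heap order: the children of i are 2i and
        # 2i+1, so the parent of v is v // 2 and the sibling of v is v ^ 1.
        changed = True
        while changed:
            changed = False
            for v in range(2, N + 1):
                sib, par = v ^ 1, v // 2
                if v in marked and sib in marked and par not in marked:
                    marked.add(par)
                    changed = True
                if v in marked and par in marked and sib not in marked:
                    marked.add(sib)
                    changed = True

    def process_one(n):
        N = 2 ** n - 1
        marked, sends = set(), 0
        while len(marked) < N:
            marked.add(random.randrange(1, N + 1))   # with replacement
            apply_rule(marked, N)
            sends += 1
        return sends

    print(process_one(10))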
Chapter Six
The Probabilistic Method
The probabilistic method is a way of proving the existence of objects. The under-
lying principle is simple: to prove the existence of an object with certain properties,
we demonstrate a sample space of objects in which the probability is positive that a
randomly selected object has the required properties. If the probability of selecting an
object with the required properties is positive, then the sample space must contain such
an object, and therefore such an object exists. For example, if there is a positive proba-
bility of winning a million-dollar prize in a rafle, then there must be at least one rafle
ticket that wins that prize.
Although the basic principle of the probabilistic method is simple, its application to
specific problems often involves sophisticated combinatorial arguments. In this chapter
we study a number of techniques for constructing proofs based on the probabilistic
method, starting with simple counting and averaging arguments and then introducing
two more advanced tools, the Lovász local lemma and the second moment method.
In the context of algorithms we are generally interested in explicit constructions
of objects, not merely in proofs of existence. In many cases the proofs of existence
obtained by the probabilistic method can be converted into efficient randomized construction algorithms. In some cases, these proofs can be converted into efficient deter-
ministic construction algorithms; this process is called derandomization, since it con-
verts a probabilistic argument into a deterministic one. We give examples of both
randomized and deterministic construction algorithms arising from the probabilistic
method.
A clique of k vertices in $K_n$ is a complete subgraph $K_k$.

Theorem 6.1: If $\binom{n}{k} 2^{-\binom{k}{2}+1} < 1$ then it is possible to color the edges of $K_n$ with two colors so that it has no monochromatic $K_k$ subgraph.
where the last inequality follows from the assumptions of the theorem. Hence

$\Pr\left(\bigcap_{i=1}^{\binom{n}{k}} \overline{A_i}\right) = 1 - \Pr\left(\bigcup_{i=1}^{\binom{n}{k}} A_i\right) > 0.$
As an example, consider whether the edges of $K_{1000}$ can be 2-colored in such a way that there is no monochromatic $K_{20}$. Our calculations are simplified if we note that, for $n \le 2^{k/2}$ and $k \ge 3$,

$\binom{n}{k} 2^{-\binom{k}{2}+1} \le \frac{n^k}{k!}\, 2^{-(k(k-1)/2)+1} \le \frac{2^{k/2+1}}{k!} < 1.$
Observing that for our example $n = 1000 \le 2^{10} = 2^{k/2}$, we see that by Theorem 6.1 there exists a 2-coloring of the edges of $K_{1000}$ with no monochromatic $K_{20}$.
Can we use this proof to design an efficient algorithm to construct such a coloring? Let us consider a general approach that gives a randomized construction algorithm. First, we require that we can efficiently sample a coloring from the sample space. In this case sampling is easy, because we can simply color each edge independently with a randomly chosen color. In general, however, there might not be an efficient sampling algorithm.
If we have an efficient sampling algorithm, the next question is: How many samples must we generate before obtaining a sample that satisfies our requirements? If the probability of obtaining a sample with the desired properties is p and if we sample independently at each trial, then the number of samples needed before finding a sample with the required properties is a geometric random variable with expectation 1/p. Hence we need that 1/p be polynomial in the problem size in order to have an algorithm that finds a suitable sample in polynomial expected time.
If p = 1 − o(1), then sampling once gives a Monte Carlo construction algorithm that is incorrect with probability o(1). In our specific example of finding a coloring on a graph of 1000 vertices with no monochromatic $K_{20}$, we know that the probability that a random coloring has a monochromatic $K_{20}$ is at most

$\frac{2^{20/2+1}}{20!} < 8.5 \cdot 10^{-16}.$

Hence we have a Monte Carlo algorithm with a small probability of failure.
If we want a Las Vegas algorithm – that is, one that always gives a correct construction – then we need a third ingredient. We require a polynomial time procedure for verifying that a sample object satisfies the requirements; then we can test samples until we find one that does so. An upper bound on the expected time for this construction can be found by multiplying together the expected number of samples 1/p by the sum of an upper bound on the time to generate each sample and an upper bound on the time to check each sample.¹ For the coloring problem, there is a polynomial time verification algorithm when k is a constant: simply check all $\binom{n}{k}$ cliques and make sure they are not monochromatic. It does not seem that this approach can be extended to yield polynomial time algorithms when k grows with n.
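For concreteness, here is a sketch of the sampling and verification steps in Python (ours; feasible only for small n and constant k):

    import random
    from itertools import combinations

    def random_two_coloring(n):
        # Color each edge of K_n independently with one of two colors.
        return {(u, v): random.randrange(2) for u, v in combinations(range(n), 2)}

    def has_monochromatic_clique(coloring, n, k):
        # Check all C(n,k) cliques; polynomial time only for constant k.
        for clique in combinations(range(n), k):
            colors = {coloring[(u, v)] for u, v in combinations(clique, 2)}
            if len(colors) == 1:
                return True
        return False

    coloring = random_two_coloring(30)
    print(has_monochromatic_clique(coloring, 30, 4))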
6.2. The Expectation Argument

As we have seen, in order to prove that an object with certain properties exists, we
can design a probability space from which an element chosen at random yields an
object with the desired properties with positive probability. A similar and sometimes
easier approach for proving that such an object exists is to use an averaging argument.
The intuition behind this approach is that, in a discrete probability space, a random
variable must with positive probability assume at least one value that is no greater
than its expectation and at least one value that is not smaller than its expectation.
1 Sometimes the time to generate or check a sample may itself be a random variable. In this case, Wald’s equation
(discussed in Chapter 13) may apply.
For example, if the expected value of a raffle ticket is $3, then there must be at least
one ticket that ends up being worth no more than $3 and at least one that ends up being
worth no less than $3.
More formally, we have the following lemma.
Lemma 6.2: Suppose we have a probability space S and a random variable X defined on S such that E[X] = μ. Then Pr(X ≥ μ) > 0 and Pr(X ≤ μ) > 0.
Proof: We have

$\mu = E[X] = \sum_x x \Pr(X = x),$

where the summation ranges over all values in the range of X. If $\Pr(X \ge \mu) = 0$, then

$\mu = \sum_x x \Pr(X = x) = \sum_{x<\mu} x \Pr(X = x) < \sum_{x<\mu} \mu \Pr(X = x) = \mu,$

a contradiction. A similar argument applies if $\Pr(X \le \mu) = 0$.
Thus, there must be at least one instance in the sample space of S for which the value
of X is at least μ and at least one instance for which the value of X is no greater than μ.
Let C(A, B) be a random variable denoting the value of the cut corresponding to the sets A and B. Then

$E[C(A, B)] = E\left[\sum_{i=1}^m X_i\right] = \sum_{i=1}^m E[X_i] = m \cdot \frac{1}{2} = \frac{m}{2}.$

Since the expectation of the random variable C(A, B) is m/2, there exists a partition A and B with at least m/2 edges connecting the set A to the set B.
We can transform this argument into an efficient algorithm for finding a cut with value at least m/2. We first show how to obtain a Las Vegas algorithm. In Section 6.3, we show how to construct a deterministic polynomial time algorithm.
It is easy to randomly choose a partition as described in the proof. The expectation
argument does not give a lower bound on the probability that a random partition has a
cut of value at least m/2. To derive such a bound, let

$p = \Pr\left(C(A, B) \ge \frac{m}{2}\right),$
and observe that C(A, B) ≤ m. Then
$\frac{m}{2} = E[C(A, B)] = \sum_{i < m/2} i \Pr(C(A, B) = i) + \sum_{i \ge m/2} i \Pr(C(A, B) = i) \le (1 - p)\left(\frac{m}{2} - 1\right) + pm,$

which implies that

$p \ge \frac{1}{m/2 + 1}.$
The expected number of samples before finding a cut with value at least m/2 is therefore just m/2 + 1. Testing to see if the value of the cut determined by the sample is at least m/2 can be done in polynomial time simply by counting the edges crossing the cut. We therefore have a Las Vegas algorithm for finding the cut.
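The resulting Las Vegas algorithm is only a few lines of Python (a sketch of ours; edges are pairs of vertex indices):

    import random

    def las_vegas_cut(n, edges):
        # Resample random partitions until the cut value is at least m/2;
        # the expected number of samples is at most m/2 + 1.
        while True:
            side = [random.randrange(2) for _ in range(n)]
            value = sum(1 for u, v in edges if side[u] != side[v])
            if value >= len(edges) / 2:
                return side, value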
Theorem 6.3: Given a set of m clauses, let $k_i$ be the number of literals in the ith clause for $i = 1, \ldots, m$, and let $k = \min_i k_i$. Then there is a truth assignment that satisfies at least $\sum_{i=1}^m (1 - 2^{-k_i}) \ge m(1 - 2^{-k})$ clauses.

Proof: Assign values independently and uniformly at random to the variables. The probability that the ith clause with $k_i$ literals is satisfied is at least $(1 - 2^{-k_i})$. The expected number of satisfied clauses is therefore at least

$\sum_{i=1}^m (1 - 2^{-k_i}) \ge m(1 - 2^{-k}),$

and there must be an assignment that satisfies at least that many clauses.
The foregoing argument can also be easily transformed into an efficient randomized algorithm; the case where all $k_i = k$ is left as Exercise 6.1.
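A sketch of the randomized algorithm in Python (ours; here a literal is a pair (variable, is_positive)):

    import random

    def random_assignment(num_vars, clauses):
        # One uniform random assignment; in expectation it satisfies
        # at least sum_i (1 - 2^{-k_i}) clauses.
        assignment = [random.randrange(2) for _ in range(num_vars)]
        satisfied = sum(
            1 for clause in clauses
            if any(assignment[v] == int(pos) for v, pos in clause)
        )
        return assignment, satisfied

    # Example: (x0 OR NOT x1) AND (x1 OR x2); k = 2, so expect >= 2(1 - 1/4) = 1.5.
    clauses = [[(0, True), (1, False)], [(1, True), (2, True)]]
    print(random_assignment(3, clauses))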
6.3. Derandomization Using Conditional Expectations

The probabilistic method can yield insight into how to construct deterministic algorithms. As an example, we apply the method of conditional expectations in order to derandomize the algorithm of Section 6.2.1 for finding a large cut.
Recall that we find a partition of the n vertices V of a graph into sets A and B by placing each vertex independently and uniformly at random in one of the two sets. This gives a cut with expected value E[C(A, B)] ≥ m/2. Now imagine placing the vertices deterministically, one at a time, in an arbitrary order $v_1, v_2, \ldots, v_n$. Let $x_i$ be the set where $v_i$ is placed (so $x_i$ is either A or B). Suppose that we have placed the first k vertices, and consider the expected value of the cut if the remaining vertices are then placed independently and uniformly into one of the two sets. We write this quantity as $E[C(A, B) \mid x_1, x_2, \ldots, x_k]$; it is the conditional expectation of the value of the cut given the locations $x_1, x_2, \ldots, x_k$ of the first k vertices. We show inductively how to place the next vertex so that

$E[C(A, B) \mid x_1, x_2, \ldots, x_k] \le E[C(A, B) \mid x_1, x_2, \ldots, x_{k+1}].$
It follows that
E[C(A, B)] ≤ E[C(A, B) | x1 , x2 , . . . , xn ].
The right-hand side is the value of the cut determined by our placement algorithm,
since if x1 , x2 , . . . , xn are all determined then we have a cut of the graph. Hence our
algorithm returns a cut whose value is at least E[C(A, B)] ≥ m/2.
The base case in the induction is

$E[C(A, B) \mid x_1] = E[C(A, B)],$

which holds by symmetry because it does not matter where we place the first vertex.
We now prove the inductive step, that

$E[C(A, B) \mid x_1, x_2, \ldots, x_k] \le E[C(A, B) \mid x_1, x_2, \ldots, x_{k+1}]. \quad (6.1)$
Consider placing vk+1 randomly, so that it is placed in A or B with probability 1/2 each,
and let Yk+1 be a random variable representing the set where it is placed. Then
$E[C(A, B) \mid x_1, x_2, \ldots, x_k] = \frac{1}{2} E[C(A, B) \mid x_1, x_2, \ldots, x_k, Y_{k+1} = A] + \frac{1}{2} E[C(A, B) \mid x_1, x_2, \ldots, x_k, Y_{k+1} = B].$
It follows that

$\max\left(E[C(A, B) \mid x_1, \ldots, x_k, Y_{k+1} = A],\, E[C(A, B) \mid x_1, \ldots, x_k, Y_{k+1} = B]\right) \ge E[C(A, B) \mid x_1, x_2, \ldots, x_k].$
Therefore, all we have to do is compute the two quantities $E[C(A, B) \mid x_1, x_2, \ldots, x_k, Y_{k+1} = A]$ and $E[C(A, B) \mid x_1, x_2, \ldots, x_k, Y_{k+1} = B]$ and then place $v_{k+1}$ in the set that yields the larger expectation. Once we do this, we will have a placement satisfying

$E[C(A, B) \mid x_1, x_2, \ldots, x_k] \le E[C(A, B) \mid x_1, x_2, \ldots, x_{k+1}].$
To compute $E[C(A, B) \mid x_1, x_2, \ldots, x_k, Y_{k+1} = A]$, note that the conditioning gives the placement of the first k + 1 vertices. We can therefore compute the number of edges among these vertices that contribute to the value of the cut. For all other edges, the probability that it will later contribute to the cut is 1/2, since this is the probability its two endpoints end up on different sides of the cut. By linearity of expectations, $E[C(A, B) \mid x_1, x_2, \ldots, x_k, Y_{k+1} = A]$ is the number of edges crossing the cut whose endpoints are both among the first k + 1 vertices, plus half of the remaining edges. This is easy to compute in linear time. The same is true for $E[C(A, B) \mid x_1, x_2, \ldots, x_k, Y_{k+1} = B]$.
In fact, from this argument, we see that the larger of the two quantities is determined
just by whether vk+1 has more neighbors in A or in B. All edges that do not have vk+1
as an endpoint contribute the same amount to the two expectations. Our derandomized
algorithm therefore has the following simple form: Take the vertices in some order.
Place the first vertex arbitrarily in A. Place each successive vertex to maximize the
number of edges crossing the cut. Equivalently, place each vertex on the side with
fewer neighbors, breaking ties arbitrarily. This is a simple greedy algorithm, and our
analysis shows that it always guarantees a cut with at least m/2 edges.
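The greedy algorithm translates directly into code; a Python sketch (ours):

    def greedy_cut(n, edges):
        neighbors = [[] for _ in range(n)]
        for u, v in edges:
            neighbors[u].append(v)
            neighbors[v].append(u)
        side = {}
        for v in range(n):
            placed = [side[u] for u in neighbors[v] if u in side]
            # Put v on the side with fewer already-placed neighbors (ties to A).
            side[v] = "B" if placed.count("A") > placed.count("B") else "A"
        cut = sum(1 for u, v in edges if side[u] != side[v])
        return side, cut  # cut >= m/2 is guaranteed by the analysis above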
6.4. Sample and Modify

Thus far we have used the probabilistic method to construct random structures with the desired properties directly. In some cases it is easier to work indirectly, breaking the argument into two stages. In the first stage we construct a random structure that does not have the required properties. In the second stage we then modify the random structure so that it does have the required property. We give two examples of this sample-and-modify technique.
Theorem 6.5: Let G = (V, E) be a connected graph on n vertices with m ≥ n/2 edges. Then G has an independent set with at least $n^2/4m$ vertices.
Proof: Let d = 2m/n ≥ 1 be the average degree of the vertices in G. Consider the
following randomized algorithm.
1. Delete each vertex of G (together with its incident edges) independently with prob-
ability 1 − 1/d.
2. For each remaining edge, remove it and one of its adjacent vertices.
The remaining vertices form an independent set, since all edges have been removed.
This is an example of the sample-and-modify technique. We first sample the vertices,
and then we modify the remaining graph.
Let X be the number of vertices that survive the first step of the algorithm. Since the graph has n vertices and since each vertex survives with probability 1/d, it follows that

$E[X] = \frac{n}{d}.$
Let Y be the number of edges that survive the first step. There are nd/2 edges in the graph, and an edge survives if and only if its two adjacent vertices survive. Thus

$E[Y] = \frac{nd}{2} \left(\frac{1}{d}\right)^2 = \frac{n}{2d}.$
The second step of the algorithm removes all the remaining edges and at most Y
vertices. When the algorithm terminates, it outputs an independent set of size at least
X − Y, and

$E[X - Y] = \frac{n}{d} - \frac{n}{2d} = \frac{n}{2d}.$

The expected size of the independent set generated by the algorithm is at least n/2d, so the graph has an independent set with at least $n/2d = n^2/4m$ vertices.
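The two-step algorithm is easy to express in Python; a sketch of ours follows (assuming m ≥ n/2 so that d ≥ 1):

    import random

    def sample_and_modify_independent_set(n, edges):
        d = 2 * len(edges) / n                  # average degree, at least 1
        # Step 1: keep each vertex independently with probability 1/d.
        kept = {v for v in range(n) if random.random() < 1.0 / d}
        # Step 2: for each surviving edge, remove one of its endpoints.
        for u, v in edges:
            if u in kept and v in kept:
                kept.discard(v)
        return kept                             # expected size at least n/(2d)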
We modify the original randomly chosen graph G by eliminating one edge from each cycle of length up to k − 1. The modified graph therefore has girth at least k. When n is sufficiently large, the expected number of edges in the resulting graph is

$E[X - Y] \ge \frac{1}{2}\left(1 - \frac{1}{n}\right) n^{1/k+1} - k n^{(k-1)/k} \ge \frac{1}{4} n^{1/k+1}.$

Hence there exists a graph with at least $\frac{1}{4} n^{1+1/k}$ edges and girth at least k.
6.5. The Second Moment Method

The second moment method is another useful way to apply the probabilistic method. The standard approach typically makes use of the following inequality, which is easily derived from Chebyshev's inequality.
Theorem 6.7: If X is an integer-valued random variable, then

$\Pr(X = 0) \le \frac{\mathrm{Var}[X]}{(E[X])^2}. \quad (6.2)$
Proof:

$\Pr(X = 0) \le \Pr(|X - E[X]| \ge E[X]) \le \frac{\mathrm{Var}[X]}{(E[X])^2}.$
so that

$E[X] = \binom{n}{4} p^6.$
In this case E[X] = o(1), which means that E[X] < ε for sufficiently large n. Since X is a nonnegative integer-valued random variable, it follows that $\Pr(X \ge 1) \le E[X] < \varepsilon$. Hence, the probability that a random graph chosen from $G_{n,p}$ has a clique of four or more vertices is less than ε.
We now consider the case when p = f(n) and $f(n) = \omega(n^{-2/3})$. In this case, $E[X] \to \infty$ as n grows large. This in itself is not sufficient to conclude that, with high probability, a graph chosen randomly from $G_{n,p}$ has a clique of at least four vertices. We can, however, use Theorem 6.7 to prove that $\Pr(X = 0) = o(1)$ in this case. To do so we must show that $\mathrm{Var}[X] = o((E[X])^2)$. Here we shall bound the variance directly; an alternative approach is given as Exercise 6.12.
We begin with the following useful formula.

Lemma 6.9: Let $Y_i$, $i = 1, \ldots, m$, be 0–1 random variables, and let $Y = \sum_{i=1}^m Y_i$. Then

$\mathrm{Var}[Y] \le E[Y] + \sum_{1 \le i, j \le m;\, i \ne j} \mathrm{Cov}(Y_i, Y_j).$
We wish to compute

$\mathrm{Var}[X] = \mathrm{Var}\left[\sum_{i=1}^{\binom{n}{4}} X_i\right].$
Applying Lemma 6.9, we see that we need to consider the covariance of the Xi . If
|Ci ∩ C j | = 0 then the corresponding cliques are disjoint, and it follows that Xi and X j
are independent. Hence, in this case, E[Xi X j ] − E[Xi ]E[X j ] = 0. The same is true if
|Ci ∩ C j | = 1.
If $|C_i \cap C_j| = 2$, then the corresponding cliques share one edge. For both cliques to be in the graph, the eleven corresponding edges must appear in the graph. Hence, in this case $E[X_i X_j] - E[X_i]E[X_j] \le E[X_i X_j] \le p^{11}$. There are $\binom{n}{6}$ ways to choose the six vertices and $\binom{6}{2,2,2}$ ways to split them into $C_i$ and $C_j$ (because we choose two vertices for $C_i \cap C_j$, two for $C_i$ alone, and two for $C_j$ alone).
If $|C_i \cap C_j| = 3$, then the corresponding cliques share three edges. For both cliques to be in the graph, the nine corresponding edges must appear in the graph. Hence, in this case $E[X_i X_j] - E[X_i]E[X_j] \le E[X_i X_j] \le p^9$. There are $\binom{n}{5}$ ways to choose the five vertices, and $\binom{5}{3,1,1}$ ways to split them into $C_i$ and $C_j$.
Finally, recall again that $E[X] = \binom{n}{4} p^6$ and $p = f(n) = \omega(n^{-2/3})$. Therefore,

$\mathrm{Var}[X] \le \binom{n}{4} p^6 + \binom{n}{6} \binom{6}{2,2,2} p^{11} + \binom{n}{5} \binom{5}{3,1,1} p^9 = o(n^8 p^{12}) = o((E[X])^2),$

since

$(E[X])^2 = \left(\binom{n}{4} p^6\right)^2 = \Theta(n^8 p^{12}).$

Theorem 6.7 now applies, showing that $\Pr(X = 0) = o(1)$ and thus proving the second part of the theorem.
6.6. The Conditional Expectation Inequality

For a sum of Bernoulli random variables, we can derive an alternative to the second moment method that is often easier to apply.
Theorem 6.10: Let $X = \sum_{i=1}^n X_i$, where each $X_i$ is a 0–1 random variable. Then

$\Pr(X > 0) \ge \sum_{i=1}^n \frac{\Pr(X_i = 1)}{E[X \mid X_i = 1]}. \quad (6.3)$
Notice that the Xi need not be independent for Eqn. (6.3) to hold.
Proof: Let Y = 1/X if X > 0, with Y = 0 otherwise. Then

$\Pr(X > 0) = E[XY].$

However,

$E[XY] = E\left[\sum_{i=1}^n X_i Y\right]$
$= \sum_{i=1}^n E[X_i Y]$
$= \sum_{i=1}^n \bigl(E[X_i Y \mid X_i = 1]\Pr(X_i = 1) + E[X_i Y \mid X_i = 0]\Pr(X_i = 0)\bigr)$
$= \sum_{i=1}^n E[Y \mid X_i = 1]\Pr(X_i = 1)$
$= \sum_{i=1}^n E[1/X \mid X_i = 1]\Pr(X_i = 1)$
$\ge \sum_{i=1}^n \frac{\Pr(X_i = 1)}{E[X \mid X_i = 1]}.$
The key step is from the third to the fourth line, where we use conditional expectations
in a fruitful way by taking advantage of the fact that E[XiY | Xi = 0] = 0. The last line
makes use of Jensen’s inequality, with the convex function f (x) = 1/x.
We can use Theorem 6.10 to give an alternate proof of Theorem 6.8. Specifically, if $p = f(n) = \omega(n^{-2/3})$, we use Theorem 6.10 to show that, for any constant ε > 0 and for sufficiently large n, the probability that a random graph chosen from $G_{n,p}$ does not have a clique with four or more vertices is less than ε.
As in the proof of Theorem 6.8, let $X = \sum_{i=1}^{\binom{n}{4}} X_i$, where $X_i$ is 1 if the subset of four vertices $C_i$ is a 4-clique and 0 otherwise. For a specific $X_j$, we have $\Pr(X_j = 1) = p^6$. Using the linearity of expectations, we compute

$E[X \mid X_j = 1] = E\left[\sum_{i=1}^{\binom{n}{4}} X_i \,\Big|\, X_j = 1\right] = \sum_{i=1}^{\binom{n}{4}} E[X_i \mid X_j = 1].$
Splitting the sum according to $|C_i \cap C_j|$ gives

$E[X \mid X_j = 1] = \sum_{i=1}^{\binom{n}{4}} E[X_i \mid X_j = 1] = 1 + \binom{n-4}{4} p^6 + 4\binom{n-4}{3} p^6 + 6\binom{n-4}{2} p^5 + 4\binom{n-4}{1} p^3.$
6.7. The Lovász Local Lemma

One of the most elegant and useful tools in applying the probabilistic method is the Lovász Local Lemma. Let $E_1, \ldots, E_n$ be a set of bad events in some probability space. We want to show that there is an element in the sample space that is not included in any of the bad events.
This would be easy to do if the events were mutually independent. Recall
that events E1 , E2 , . . . , En are mutually independent if and only if, for any subset
$I \subseteq [1, n]$,

$\Pr\left(\bigcap_{i \in I} E_i\right) = \prod_{i \in I} \Pr(E_i).$
Also, if E1 , . . . , En are mutually independent then so are Ē1 , . . . , Ēn . (This was left as
Exercise 1.20.) If Pr(Ei ) < 1 for all i, then
$\Pr\left(\bigcap_{i=1}^n \bar{E}_i\right) = \prod_{i=1}^n \Pr(\bar{E}_i) > 0,$
and there is an element of the sample space that is not included in any bad event.
Mutual independence is too much to ask for in many arguments. The Lovász Local Lemma generalizes the preceding argument to the case where the n events are not mutually independent but the dependency is limited. Specifically, following from the definition of mutual independence, we say that an event $E_{n+1}$ is mutually independent of the events $E_1, \ldots, E_n$ if, for any subset $I \subseteq [1, n]$,

$\Pr\left(E_{n+1} \,\Big|\, \bigcap_{j \in I} E_j\right) = \Pr(E_{n+1}).$
Theorem 6.11 (Lovász Local Lemma): Let $E_1, \ldots, E_n$ be a set of events, and assume that the following hold:

1. for all i, $\Pr(E_i) \le p$;
2. the degree of the dependency graph given by $E_1, \ldots, E_n$ is bounded by d;
3. $4dp \le 1$.

Then

$\Pr\left(\bigcap_{i=1}^n \bar{E}_i\right) > 0.$
Proof: Let $S \subseteq \{1, \ldots, n\}$ with $|S| = s$. We prove by induction on s that, for all k,

$\Pr\left(E_k \,\Big|\, \bigcap_{j \in S} \bar{E}_j\right) \le 2p.$

For this expression to be well-defined when S is not empty, we need $\Pr\left(\bigcap_{j \in S} \bar{E}_j\right) > 0$.
The base case s = 0 follows from the assumption that $\Pr(E_k) \le p$. To perform the inductive step, we first show that $\Pr\left(\bigcap_{j \in S} \bar{E}_j\right) > 0$. This is true when s = 1, because $\Pr(\bar{E}_j) \ge 1 - p > 0$. For s > 1, without loss of generality let $S = \{1, 2, \ldots, s\}$. Then

$\Pr\left(\bigcap_{i=1}^s \bar{E}_i\right) = \prod_{i=1}^s \Pr\left(\bar{E}_i \,\Big|\, \bigcap_{j=1}^{i-1} \bar{E}_j\right) = \prod_{i=1}^s \left(1 - \Pr\left(E_i \,\Big|\, \bigcap_{j=1}^{i-1} \bar{E}_j\right)\right) \ge \prod_{i=1}^s (1 - 2p) > 0.$
Assume now that the induction hypothesis holds for any set of size smaller than s, and let $S_1 = \{j \in S : E_k \text{ depends on } E_j\}$ and $S_2 = S \setminus S_1$. If $|S_2| = s$, then $E_k$ is mutually independent of the events $\bar{E}_j$, $j \in S$, and

$\Pr\left(E_k \,\Big|\, \bigcap_{j \in S} \bar{E}_j\right) = \Pr(E_k) \le p.$
We continue with the case $|S_2| < s$. It will be helpful to introduce the following notation. Let $F_S$ be defined by

$F_S = \bigcap_{j \in S} \bar{E}_j,$

and similarly define $F_{S_1}$ and $F_{S_2}$. Notice that $F_S = F_{S_1} \cap F_{S_2}$.
Applying the definition of conditional probability yields

$\Pr(E_k \mid F_S) = \frac{\Pr(E_k \cap F_S)}{\Pr(F_S)}. \quad (6.4)$
Canceling the common factor, which we have already shown to be nonzero, yields

$\Pr(E_k \mid F_S) = \frac{\Pr(E_k \cap F_{S_1} \mid F_{S_2})}{\Pr(F_{S_1} \mid F_{S_2})}. \quad (6.5)$

Since $E_k$ is mutually independent of the events in $S_2$, the numerator satisfies $\Pr(E_k \cap F_{S_1} \mid F_{S_2}) \le \Pr(E_k \mid F_{S_2}) = \Pr(E_k) \le p$.
Using also the fact that $|S_1| \le d$, we establish a lower bound on the denominator of Eqn. (6.5) as follows:

$\Pr(F_{S_1} \mid F_{S_2}) \ge 1 - \sum_{i \in S_1} \Pr\left(E_i \,\Big|\, \bigcap_{j \in S_2} \bar{E}_j\right) \ge 1 - \sum_{i \in S_1} 2p \ge 1 - 2pd \ge \frac{1}{2}.$
Using the upper bound for the numerator and the lower bound for the denominator, we prove the induction:

$\Pr(E_k \mid F_S) = \frac{\Pr(E_k \cap F_{S_1} \mid F_{S_2})}{\Pr(F_{S_1} \mid F_{S_2})} \le \frac{p}{1/2} = 2p.$
The theorem follows from

$\Pr\left(\bigcap_{i=1}^n \bar{E}_i\right) = \prod_{i=1}^n \Pr\left(\bar{E}_i \,\Big|\, \bigcap_{j=1}^{i-1} \bar{E}_j\right) = \prod_{i=1}^n \left(1 - \Pr\left(E_i \,\Big|\, \bigcap_{j=1}^{i-1} \bar{E}_j\right)\right) \ge \prod_{i=1}^n (1 - 2p) > 0.$
event that the paths chosen by pairs i and j share at least one edge. Since a path in $F_i$ shares edges with no more than k paths in $F_j$,

$p = \Pr(E_{i,j}) \le \frac{k}{m}.$
Let d be the degree of the dependency graph. Since event $E_{i,j}$ is independent of all events $E_{i',j'}$ when $i' \notin \{i, j\}$ and $j' \notin \{i, j\}$, we have $d < 2n$. Since

$4dp < \frac{8nk}{m} \le 1,$

all of the conditions of the Lovász Local Lemma are satisfied, proving

$\Pr\left(\bigcap_{i \ne j} \bar{E}_{i,j}\right) > 0.$
Hence, there is a choice of paths such that the n paths are edge disjoint.
$\Pr\left(\bigcap_i \bar{E}_i\right) > 0;$
The Lovász Local Lemma proves that a random element in an appropriately defined sample space has a nonzero probability of satisfying our requirement. However, this probability might be too small for an algorithm that is based on simple sampling. The number of objects that we need to sample before we find an element that satisfies our requirements might be exponential in the problem size.
In a number of interesting applications, the existential result of the Lovász Local Lemma can be used to derive efficient construction algorithms. Although the details differ in the specific applications, many known algorithms are based on a common two-phase scheme. In the first phase, a subset of the variables of the problem are assigned random values; the remaining variables are deferred to the second stage. The subset of variables that are assigned values in the first stage is chosen so that:

1. using the Local Lemma, one can show that the random partial solution fixed in the first phase can be extended to a full solution of the problem without modifying any of the variables fixed in the first phase; and
2. the dependency graph H between events defined by the variables deferred to the second phase has, with high probability, only small connected components.
When the dependency graph consists of connected components, a solution for the variables of one component can be found independently of the other components. Thus, the first phase of the two-phase algorithm breaks the original problem into smaller subproblems. Each of the smaller subproblems can then be solved independently in the second phase by an exhaustive search.
subformula will have no more than O(k log m) deferred variables. An exhaustive search
of all the possible assignments for all variables in each subformula can then be done in
polynomial time. Hence we focus on the following lemma.
Lemma 6.15: All connected components in H ′ are of size O(log m) with probability
1 − o(1).
Proof: Consider a connected component R of r vertices in H. If R is a connected component in H′, then all its r nodes are surviving clauses. A surviving clause is either a dangerous clause or it shares at least one deferred variable with a dangerous clause (i.e., it has a neighbor in H′ that is a dangerous clause). The probability that a given clause is dangerous is at most $2^{-k/2}$, since exactly k/2 of its variables were given random values in phase I yet none of these values satisfied the clause. The probability that a given clause survives is the probability that either this clause or at least one of its direct neighbors is dangerous, which is bounded by

$(d + 1)2^{-k/2},$

where again d = kT > 1.
If the survival of individual clauses were independent events then we would be in excellent shape. However, from our description here it is evident that such events are not independent. Instead, we identify a subset of the vertices in R such that the survival of the clauses represented by the vertices of this subset are independent events. A 4-tree S of a connected component R in H is defined as follows:
1. S is a rooted tree;
2. any two nodes in S are at distance at least 4 in H;
3. there can be an edge in S only between two nodes with distance exactly 4 between
them in H;
4. any node of R is either in S or is at distance 3 or less from a node in S.
Considering the nodes in a 4-tree proves useful because the event that a node u in
a 4-tree survives and the event that another node v in a 4-tree survives are actually
independent. Any clause that could cause u to survive has distance at least 2 from
any clause that could cause v to survive. Clauses at distance 2 share no variables, and
hence the events that they are dangerous are independent. We can take advantage of
this independence to conclude that, for any 4-tree S, the probability that the nodes in
the 4-tree survive is at most
$\left((d + 1)2^{-k/2}\right)^{|S|}.$
A maximal 4-tree S of a connected component R is the 4-tree with the largest possible
number of vertices. Since the degree of the dependency graph is bounded by d, there
are no more than
$d + d(d-1) + d(d-1)^2 \le d^3 - 1$
nodes at distance 3 or less from any given vertex. We therefore claim that a maximal 4-tree of R must have at least $r/d^3$ vertices. Otherwise, when we consider the vertices
of the maximal 4-tree S and all neighbors within distance 3 or less of these vertices,
we obtain fewer than r vertices. Hence there must be a vertex of distance at least 4
from all vertices in S. If this vertex has distance exactly 4 from some vertex in S, then
it can be added to S and thus S is not maximal, yielding a contradiction. If its dis-
tance is larger than 4 from all vertices in S, consider any path that brings it closer to S;
such a path must eventually pass through a vertex of distance at least 4 from all ver-
tices in S and of distance 4 from some vertex in S, again contradicting the maximality
of S.
To show that with probability 1 − o(1) there is no connected component R of size $r \ge c \log_2 m$ for some constant c in H′, we show that there is no 4-tree of H of size $r/d^3$ that survives with probability 1 − o(1). Since a surviving connected component R would have a maximal 4-tree of size $r/d^3$, the absence of such a 4-tree implies the absence of such a component.
We need to count the number of 4-trees of size $s = r/d^3$ in H. We can choose the root of the 4-tree in m ways. A tree with root v is uniquely defined by an Eulerian tour that starts and ends at v and traverses each edge of the tree twice, once in each direction. Since an edge of S represents a path of length 4 in H, at each vertex in the 4-tree the Eulerian path can continue in as many as $d^4$ different ways, and therefore the number of 4-trees of size $s = r/d^3$ in H is bounded by

$m(d^4)^{2s} = m\, d^{8r/d^3}.$
The probability that the nodes of each such 4-tree survive in H′ is at most

$\left((d + 1)2^{-k/2}\right)^s = \left((d + 1)2^{-k/2}\right)^{r/d^3}.$
Hence the probability that H′ has a connected component of size r is bounded by

$m\, d^{8r/d^3} \left((d + 1)2^{-k/2}\right)^{r/d^3} \le m\, 2^{(rk/d^3)(8\alpha + 2\alpha - 1/2)} = o(1)$

for $r \ge c \log_2 m$, for a suitably large constant c and a sufficiently small constant $\alpha > 0$.
6.9. The Lovász Local Lemma: The General Case

For completeness we include the statement and proof of the general case of the Lovász Local Lemma.
Theorem 6.17: Let $E_1, \ldots, E_n$ be a set of events with dependency graph $G = (\{1, \ldots, n\}, E)$, and suppose there exist $x_1, \ldots, x_n \in [0, 1)$ such that, for all k,

$\Pr(E_k) \le x_k \prod_{(k,j) \in E} (1 - x_j).$

Then

$\Pr\left(\bigcap_{i=1}^n \bar{E}_i\right) \ge \prod_{i=1}^n (1 - x_i).$

Proof: As in the symmetric case, we prove by induction on $s = |S|$ that, for any $S \subseteq \{1, \ldots, n\}$ and any k,

$\Pr\left(E_k \,\Big|\, \bigcap_{j \in S} \bar{E}_j\right) \le x_k.$
As in the case of the symmetric version of the Local Lemma, we must be careful that the conditional probability is well-defined. This follows using the same approach as in the symmetric case, so we focus on the rest of the induction. The base case s = 0 follows from the assumption that

$\Pr(E_k) \le x_k \prod_{(k,j) \in E} (1 - x_j) \le x_k.$
For the inductive step, let $S_1 = \{j \in S : (k, j) \in E\}$ and $S_2 = S \setminus S_1$. If $|S_2| = s$, then $E_k$ is mutually independent of the events $\bar{E}_j$, $j \in S$, and

$\Pr\left(E_k \,\Big|\, \bigcap_{j \in S} \bar{E}_j\right) = \Pr(E_k) \le x_k.$
We continue with the case $|S_2| < s$, where $S_1 = \{j \in S : (k,j) \in E\} = \{j_1, \ldots, j_r\}$ and $S_2 = S \setminus S_1$. We again use the notation
$$F_S = \bigcap_{j \in S} \bar{E}_j.$$
For the denominator, we have
$$\Pr(F_{S_1} \mid F_{S_2}) = \Pr\left(\bigcap_{j\in S_1} \bar{E}_j \,\Big|\, \bigcap_{j\in S_2} \bar{E}_j\right)
= \prod_{i=1}^{r}\left(1 - \Pr\left(E_{j_i} \,\Big|\, \bigcap_{t=1}^{i-1} \bar{E}_{j_t} \cap \bigcap_{j\in S_2} \bar{E}_j\right)\right)
\ge \prod_{i=1}^{r} (1 - x_{j_i}) \ge \prod_{(k,j)\in E} (1 - x_j),$$
where the second inequality uses the induction hypothesis, since each conditioning set has fewer than s events.
The numerator satisfies $\Pr(E_k \cap F_{S_1} \mid F_{S_2}) \le \Pr(E_k \mid F_{S_2}) = \Pr(E_k) \le x_k \prod_{(k,j)\in E}(1 - x_j)$, since $E_k$ is mutually independent of the events indexed by $S_2$. Using the upper bound for the numerator and the lower bound for the denominator,
we can prove the induction hypothesis:
$$\Pr\left(E_k \,\Big|\, \bigcap_{j\in S} \bar{E}_j\right) = \Pr(E_k \mid F_S)
= \frac{\Pr(E_k \cap F_{S_1} \mid F_{S_2})}{\Pr(F_{S_1} \mid F_{S_2})}
\le \frac{x_k \prod_{(k,j)\in E}(1-x_j)}{\prod_{(k,j)\in E}(1-x_j)} = x_k.$$
6.10.∗ The Algorithmic Lovász Local Lemma
Recently, there have been several advances in extending the Lovász Local Lemma. We
briefly summarize the key points here, and start by looking again at the k-SAT problem
to provide an example of these ideas in action.
We have shown previously that if no variable in a k-SAT formula appears in more
than $2^k/(4k)$ clauses, then the formula has a satisfying assignment, and we have shown
that if each variable appears in no more than $2^{\alpha k}$ clauses for some constant α, then a solution
can be found in expected polynomial time. Here we provide an improved result, which
again yields a solution in expected polynomial time.
Theorem 6.18: Suppose that every clause in a k-SAT formula shares one or more
variables with at most $2^{k-3} - 1$ other clauses. Then a solution for the formula exists
and can be found in expected time that is polynomial in the number of clauses m.
Before starting the proof, we informally describe our algorithm. As before, let
$x_1, x_2, \ldots, x_\ell$ be the ℓ variables and $C_1, C_2, \ldots, C_m$ be the m clauses in the formula.
We begin by choosing a truth assignment uniformly at random. We then look
for a clause $C_i$ that is unsatisfied; if no such clause exists, we are done. If such a clause
exists, we look specifically at the variables in the clause, and randomly choose a new
truth assignment for those variables. Doing so will hopefully "fix" the clause $C_i$ so that
it is satisfied, but it may not; even worse, it may end up causing a clause $C_j$ that shares
a variable with $C_i$ to become unsatisfied. We recursively fix these neighboring clauses,
so that when the recursion is finished, we have that $C_i$ is satisfied and we have not
damaged any clause by making it become unsatisfied. We therefore have improved the
situation by satisfying at least one previously unsatisfied clause. We then continue to
the next unsatisfied clause; we have to do this at most m times. (A sketch of this procedure appears below.)
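The pseudocode box for this procedure is not reproduced above, so the following minimal Python sketch (the function names and the clause representation are mine, not the book's) illustrates the main routine and the recursive correction step. A clause is a list of nonzero integers, where a positive literal v requires variable v to be true and a negative literal −v requires it to be false.

```python
import random

def satisfied(clause, assign):
    # A clause is satisfied if at least one of its literals is true.
    return any((lit > 0) == assign[abs(lit)] for lit in clause)

def shares_variable(c1, c2):
    return bool({abs(l) for l in c1} & {abs(l) for l in c2})

def localcorrect(clause, clauses, assign):
    # Resample the variables of the unsatisfied clause uniformly at random.
    for lit in clause:
        assign[abs(lit)] = random.random() < 0.5
    # While any clause sharing a variable with this one (including the
    # clause itself) is unsatisfied, recursively correct it.
    while True:
        bad = next((c for c in clauses
                    if shares_variable(c, clause)
                    and not satisfied(c, assign)), None)
        if bad is None:
            return
        localcorrect(bad, clauses, assign)

def solve(clauses, n):
    # Variables are indexed 1..n; start from a uniformly random assignment.
    assign = {v: random.random() < 0.5 for v in range(1, n + 1)}
    for clause in clauses:          # at most m calls from the main routine
        if not satisfied(clause, assign):
            localcorrect(clause, clauses, assign)
    return assign
```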
The underlying question that we need to answer to show that this algorithm works
is how we know that the recursion we have described stops successfully. Perhaps it
simply goes on forever, or for an exponential amount of time. The proof we provide
shows that this cannot be the case, through a new type of argument. Specifically, we
show that if such bad recursions occur with non-trivial probability, then one could
compress a random string of n independent, unbiased coin flips into many fewer than n bits.
That should seem impossible, and it is. While compression is a theme we cover in much
more detail in Chapter 10, we explain the compression result we need here in the proof
of the theorem. All we need is that a string of r random bits, where each bit is chosen
independently and uniformly at random, cannot be compressed so that the average
length of the representation over all choices of the r random bits is less than r − 2.
To see that this must be true, assume the best possible setting for us, where we don’t
have to worry about the “end” of our compressed sequence, but can use each string of
bits of length less than r to represent one of the 2r possible strings we aim to compress.
That is, we won’t worry that one compressed string might be “0” and another one
might be “00”, in which case it might be hard to distinguish whether “00” was meant
to represent a single compressed string, or two copies of the string represented by “0”.
(Essentially, a compressed string can be terminated for free; this allowance can only
hurt us in our argument.) Still, each string of s < r bits can only represent a single
possible string of length r. Hence we have available one string of length 0 (the empty
string), two strings of length 1, and so on. There are only $2^r - 1$ strings of length less
than r; even if we count only those in computing the average length of the compressed
string, which again can only hurt us, the average length would be at least
$$\sum_{i=1}^{r-1} \frac{i}{2^{r-i}} \ge r - 2.$$
The same compression fact naturally holds true for any collection of 2r equally likely
strings; they do not have to be limited to strings of r random bits.
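This counting bound is easy to confirm numerically; the following short computation (added here purely as an illustration) assigns the $2^r$ source strings to the shortest available binary strings and checks that the average length is at least r − 2.

```python
# There are 2^i candidate compressed strings of each length i < r.  Assign
# the 2^r source strings to the shortest ones; one string is left over and
# we (generously) count its length as 0, which only lowers the average.
for r in range(2, 20):
    avg = sum(i * 2**i for i in range(r)) / 2**r
    assert avg >= r - 2
    print(r, round(avg, 4))  # avg = r - 2 + 2^(1-r), just above r - 2
```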
Given this fact, our proof proceeds as follows.
We note that the algorithm produces a history, which we use in the analysis of the algorithm.
It is important to realize that while a clause can become satisfied and unsatisfied
again multiple times through the recursive process, when we return to the main routine
and complete the call to localcorrect, we have satisfied the clause $C_i$ that localcorrect
was called on from the main routine, and furthermore any clause that was previously satisfied
has stayed satisfied because of the recursion. What we wish to show is that the recursive
process has to stop.
Our analysis makes use of the fact that our algorithm is driven by a random string of bits.
We provide two different ways to describe how our algorithm runs.
First, we can think of our algorithm as being described by the random string of bits it uses.
It takes n bits to initially assign random truth values to the variables. After that,
it takes k bits to resample values for a clause each time localcorrect is called. Let us refer
to each time localcorrect is called as a round. Then one way to describe our algorithm's
actions for j rounds is with the random string of n + jk bits used by the algorithm.
But here is another way of describing how our algorithm works. We keep track of the
"history" of the algorithm as it runs. The history includes a list of the
clauses that localcorrect is called on by the main routine. The history also includes a list
of the recursive calls to localcorrect, in a slightly non-obvious way. First, we note that
the algorithm uses a flag bit 0 and a flag bit 1 to mark the start and end of recursive calls,
so the algorithm tracks the stack of recursive calls in a natural way. Second, instead of
the natural approach of using $\lceil \log_2 m \rceil$ bits to represent the index of the clause in our
recursive calls, the algorithm uses only k − 3 bits. We now explain why only k − 3 bits
are needed. Since there are at most $2^{k-3}$ possible clauses that share a variable with the
current clause (including the current clause itself) that could be the next one called, the
clause can be represented by an index of k − 3 bits. (Imagine having an ordered list of
the up to $2^{k-3}$ clauses that share a variable with each clause; we just need the index into
that list.) Finally, our history will also include the current truth assignment of n bits.
Note that the current truth assignment can be thought of as residing in a separate updatable
storage area of the history; every time the truth assignment is updated, so is this part
of the history.
We now show that when the algorithm has run j rounds, we can recover the random
string of n + jk bits that the algorithm has used from the history we have described.
Start with the current truth assignment, and break the history up using the flags that
mark invocations of localcorrect. We can use the history to determine the sequence of
recursive calls, and what clauses localcorrect was called on. Then, going backwards
through the history, we know at each step which clause was being resampled. For that
clause to have to be resampled, it must have been unsatisfied previously. But there is
only one setting of its variables that makes a clause unsatisfied, and hence we know
what the truth values for those variables were before the clause was resampled. We
can therefore update the current truth assignment so that it represents the truth assignment
before the resampling, and continue backwards through the process. Repeating
this action, we can determine the original truth assignment, and since at each step we
can determine which variable values were changed and what their values were on each
resampling, we recover the whole string of n + jk random bits.
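As a concrete illustration of the key fact used in this reversal (the snippet below uses my own representation, not the book's): a clause is falsified by exactly one assignment of its variables, namely the one making every literal false.

```python
def falsifying_assignment(clause):
    # Each literal must be false: a positive literal forces its variable
    # to False, a negated literal forces its variable to True.
    return {abs(lit): (lit < 0) for lit in clause}

print(falsifying_assignment([1, -2, 3]))  # {1: False, 2: True, 3: False}
```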
Our history takes at most $n + m\lceil \log_2 m \rceil + j(k-1)$ bits; here we use the fact that
each resampling uses at most k − 1 bits, including the two bits that may be necessary as
flags for the start and end of the recursion given by that resampling. For large enough j,
our history yields a compressed form of the random string used to run the algorithm,
since only k − 1 bits are used to represent each resampling in the history instead of the
k bits used by the algorithm.
Now suppose there were no satisfying truth assignment, in which case the algorithm would
run forever. Then after a large enough number of rounds J, the history will be at most
$n + m\lceil \log_2 m \rceil + J(k-1)$ bits, while the random string running the algorithm would
be n + Jk bits. By our result on compressing random strings, we must have
$$n + m\lceil \log_2 m \rceil + J(k-1) \ge n + Jk - 2.$$
Hence
$$J \le m\lceil \log_2 m \rceil + 2.$$
This contradicts the claim that the algorithm can run forever, so there must be a satisfying truth assignment.
Similarly, the number of rounds J is more than $m\lceil \log_2 m \rceil + 2 + i$ with probability
at most $2^{-i}$. To see this, suppose the probability of lasting to this round is greater than
$2^{-i}$. Again consider the algorithm after $J = m\lceil \log_2 m \rceil + 2 + i$ rounds, so the history
will be at most $n + m\lceil \log_2 m \rceil + J(k-1)$ bits. The algorithm can also be described
by the n + Jk random bits that led to the current state. As there are $2^{n+Jk}$ random
bit strings of this length, and the probability of lasting at least this many rounds
is greater than $2^{-i}$ by assumption, there are more than $2^{n+Jk-i}$ random bit strings associated
with reaching this round. By our result on compressing random strings, it requires
more than $n + Jk - i - 2$ bits on average to represent these random bit
strings associated with reaching this round. But the history, as we have already argued,
provides a representation of these random bit strings, in that we can reconstruct the
algorithm's random bit string from the history. The number of bits the history uses is
only
$$n + m\lceil \log_2 m \rceil + J(k-1) = n + Jk - i - 2,$$
a contradiction.
Since the probability of lasting more than $m\lceil \log_2 m \rceil + 2 + i$ rounds is at most $2^{-i}$, we can
bound the expected number of rounds by
$$m\lceil \log_2 m \rceil + 2 + \sum_{i=1}^{\infty} i \cdot 2^{-i},$$
and since $\sum_{i=1}^{\infty} i \cdot 2^{-i} = 2$, the expected number of rounds used by the algorithm is thus at most $m\lceil \log_2 m \rceil + 4$.
The work done in each resampling round can easily be made polynomial in
m, so the total expected time to find an assignment can be made polynomial in m as
well.
While already surprising, the proof above can be improved slightly. A more careful
encoding shows that the expected number of rounds required can be reduced to O(m)
instead of O(m log m). This is covered in Exercise 6.21.
The algorithmic approach we have used for the satisfiability problem in the proof of
Theorem 6.18 can be extended further to obtain an algorithmic version of the Lovász
Local Lemma, which we now describe. Let us suppose that we have a collection of n
events $E_1, E_2, \ldots, E_n$ that depend on a collection of ℓ mutually independent variables
$y_1, y_2, \ldots, y_\ell$. The dependency graph on events has an edge between two events if they
both depend on at least one shared variable $y_i$. The idea is that at each step, if there
is an event that is unsatisfied, we resample only the random variables on which that
event depends. As with the k-Satisfiability Algorithm using the algorithmic Lovász
Local Lemma, this resampling process has to be ordered carefully to ensure progress.
If the dependencies are not too great, then the right resampling algorithm terminates
with a solution.
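A minimal sketch of this generic resampling scheme (often called the Moser–Tardos algorithm) appears below; the interface is hypothetical. Each event is represented by a predicate that reads the current values of the variables it depends on.

```python
import random

def resample_lll(events, sample):
    # events: list of (bad, deps) pairs, where bad(y) is True exactly when
    # the event occurs and deps is the set of indices of the y_i it reads.
    # sample(i) draws a fresh independent value for variable y_i.
    indices = {i for _, deps in events for i in deps}
    y = {i: sample(i) for i in indices}
    while True:
        occurring = [deps for bad, deps in events if bad(y)]
        if not occurring:
            return y                     # no bad event occurs
        for i in random.choice(occurring):
            y[i] = sample(i)             # resample only this event's variables
```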
The symmetric version is easier to state.
Theorem 6.19: Let $E_1, E_2, \ldots, E_n$ be a set of events in an arbitrary probability space
that are determined by mutually independent random variables $y_1, y_2, \ldots, y_\ell$, and let
G = (V, E) be the dependency graph for these events. Suppose the following conditions
hold for values d and p:
1. each event $E_i$ is adjacent to at most d other events in the dependency graph, or
equivalently, there are only d other events that also depend on one or more of the
$y_j$ that $E_i$ depends on;
2. $\Pr(E_i) \le p$;
3. $ep(d+1) \le 1$.
Then there exists an assignment of the $y_i$ so that the event $\bigcap_{i=1}^{n} \bar{E}_i$ holds, and a resampling
algorithm with the property that the expected number of times the algorithm
resamples the event $E_i$ in finding such an assignment is at most 1/d. Hence the expected
total number of resampling steps taken by the algorithm is at most n/d.
However, we also have a corresponding theorem for the asymmetric version.
Theorem 6.20: Let $E_1, E_2, \ldots, E_n$ be a set of events in an arbitrary probability
space that are determined by mutually independent random variables $y_1, y_2, \ldots, y_\ell$,
and let G = (V, E) be the dependency graph for these events. Assume there exist
$x_1, x_2, \ldots, x_n \in [0, 1]$ such that, for all $1 \le i \le n$,
$$\Pr(E_i) \le x_i \prod_{(i,j)\in E} (1 - x_j).$$
Then there exists an assignment of the $y_i$ so that the event $\bigcap_{i=1}^{n} \bar{E}_i$ holds, and a resampling
algorithm with the property that the expected number of times the algorithm
resamples the event $E_i$ in finding such an assignment is at most $x_i/(1 - x_i)$. Hence
the expected total number of resampling steps taken by the algorithm is at most
$\sum_{i=1}^{n} x_i/(1 - x_i)$.
The proofs of these theorems are beyond the scope of the book. Similar to the algorithm
for satisfiability based on resampling given above, the proofs rely on bounding
the expected number of resamplings that occur over the course of the algorithm.
6.11. Exercises
Exercise 6.1: Consider an instance of SAT with m clauses, where every clause has
exactly k literals.
(a) Give a Las Vegas algorithm that finds an assignment satisfying at least $m(1 - 2^{-k})$
clauses, and analyze its expected running time.
(b) Give a derandomization of the randomized algorithm using the method of condi-
tional expectations.
Exercise 6.2:
(a) Prove that, for every integer n, there exists a coloring of the edges of the complete
graph $K_n$ by two colors so that the total number of monochromatic copies of $K_4$ is
at most $\binom{n}{4} 2^{-5}$.
(b) Give a randomized algorithm for finding a coloring with at most $\binom{n}{4} 2^{-5}$ monochromatic copies of $K_4$ that runs in expected time polynomial in n.
Exercise 6.3: Given an n-vertex undirected graph G = (V, E), consider the following
method of generating an independent set. Given a permutation σ of the vertices, define
a subset S(σ) of the vertices as follows: for each vertex i, i ∈ S(σ) if and only if no
neighbor j of i precedes i in the permutation σ.
(a) Show that each S(σ) is an independent set in G.
(b) Suggest a natural randomized algorithm to produce σ for which you can show that
the expected cardinality of S(σ) is
$$\sum_{i=1}^{n} \frac{1}{d_i + 1},$$
where $d_i$ denotes the degree of vertex i.
(c) Prove that G has an independent set of size at least $\sum_{i=1}^{n} 1/(d_i + 1)$.
Exercise 6.4: Consider the following two-player game. The game begins with k tokens
placed at the number 0 on the integer number line spanning [0, n]. Each round, one
player, called the chooser, selects two disjoint and nonempty sets of tokens A and B.
(The sets A and B need not cover all the remaining tokens; they only need to be disjoint.)
The second player, called the remover, takes all the tokens from one of the sets off the
board. The tokens from the other set all move up one space on the number line from
their current position. The chooser wins if any token ever reaches n. The remover wins
if the chooser finishes with one token that has not reached n.
(a) Give a winning strategy for the chooser when $k \ge 2^n$.
(b) Use the probabilistic method to show that there must exist a winning strategy for
the remover when $k < 2^n$.
(c) Explain how to use the method of conditional expectations to derandomize the
winning strategy for the remover when $k < 2^n$.
Exercise 6.5: We have shown using the probabilistic method that, if a graph G has n
nodes and m edges, then there exists a partition of the n nodes into sets A and B such
that at least m/2 edges cross the partition. Improve this result slightly: show that there
exists a partition such that at least mn/(2n − 1) edges cross the partition.
Exercise 6.6: We can generalize the problem of finding a large cut to finding a large
k-cut. A k-cut is a partition of the vertices into k disjoint sets, and the value of a cut is
the weight of all edges crossing from one of the k sets to another. In Section 6.2.1 we
considered 2-cuts when all edges had the same weight 1, showing via the probabilistic
method that any graph G with m edges has a cut with value at least m/2. Generalize
this argument to show that any graph G with m edges has a k-cut with value at least
(k − 1)m/k. Show how to use derandomization (following the argument of Section 6.3)
to give a deterministic algorithm for finding such a cut.
Exercise 6.7: A hypergraph H is a pair of sets (V, E), where V is the set of vertices
and E is the set of hyperedges. Every hyperedge in E is a subset of V. In particular, an
r-uniform hypergraph is one where the size of each edge is r. For example, a 2-uniform
hypergraph is just a standard graph. A dominating set in a hypergraph H is a set of
vertices $S \subset V$ such that $e \cap S \ne \emptyset$ for every edge $e \in E$. That is, S hits every edge of
the hypergraph.
Let H = (V, E) be an r-uniform hypergraph with n vertices and m edges. Show
that there is a dominating set of size at most $np + (1-p)^r m$ for every real number
$0 \le p \le 1$. Also, show that there is a dominating set of size at most $(m + n \ln r)/r$.
Exercise 6.8: Prove that, for every integer n, there exists a way to 2-color the edges
of $K_x$ so that there is no monochromatic clique of size k when
$$x = n - \binom{n}{k} 2^{1 - \binom{k}{2}}.$$
(Hint: Start by 2-coloring the edges of $K_n$, then fix things up.)
Exercise 6.9: A tournament is a graph on n vertices with exactly one directed edge
between each pair of vertices. If vertices represent players, then each edge can be
thought of as the result of a match between the two players: the edge points to the win-
ner. A ranking is an ordering of the n players from best to worst (ties are not allowed).
Given the outcome of a tournament, one might wish to determine a ranking of the play-
ers. A ranking is said to disagree with a directed edge from y to x if y is ahead of x in
the ranking (since x beat y in the tournament).
(a) Prove that, for every tournament, there exists a ranking that disagrees with at most
50% of the edges.
(b) Prove that, for sufficiently large n, there exists a tournament such that every ranking
disagrees with at least 49% of the edges in the tournament.
Exercise 6.11: Consider a graph in $G_{n,p}$, with n vertices and each pair of vertices
independently connected by an edge with probability p. We prove a threshold for the
existence of triangles in the graph.
Let $t_1, \ldots, t_{\binom{n}{3}}$ be an enumeration of all triplets of three vertices in the graph. Let
$X_i = 1$ if the three edges of the triplet $t_i$ appear in the graph, so that $t_i$ forms a triangle
in the graph; otherwise $X_i = 0$. Let $X = \sum_{i=1}^{\binom{n}{3}} X_i$.
(a) Compute E[X].
(b) Use (a) to show that if pn → 0 then Pr(X > 0) → 0.
(c) Show that $\mathrm{Var}[X_i] \le p^3$.
(d) Show that $\mathrm{Cov}(X_i, X_j) = p^5 - p^6$ for $O(n^4)$ pairs $i \ne j$, and otherwise $\mathrm{Cov}(X_i, X_j) = 0$.
(e) Show that $\mathrm{Var}[X] = O(n^3 p^3 + n^4(p^5 - p^6))$.
(f) Conclude that if p is such that pn → ∞ then Pr(X = 0) → 0.
Exercise 6.12: In Section 6.5.1, we bounded the variance of the number of 4-cliques
in a random graph in order to demonstrate the second moment method. Show how to
calculate the variance directly by using the equality from Exercise 3.9: for $X = \sum_{i=1}^{n} X_i$
a sum of Bernoulli random variables,
$$E[X^2] = \sum_{i=1}^{n} \Pr(X_i = 1)\,E[X \mid X_i = 1].$$
Exercise 6.13: Consider the problem of whether graphs in Gn,p have cliques of con-
stant size k. Suggest an appropriate threshold function for this property. Generalize the
argument used for cliques of size 4, using either the second moment method or the
conditional expectation inequality, to prove that your threshold function is correct for
cliques of size 5.
Exercise 6.14: Consider a graph in $G_{n,p}$, with p = c ln n/n. Use the second moment
method or the conditional expectation inequality to prove that if c < 1 then, for any
constant ε > 0 and for n sufficiently large, the graph has isolated vertices with probability
at least 1 − ε.
Exercise 6.15: Consider a graph in $G_{n,p}$, with p = 1/n. Let X be the number of triangles
in the graph, where a triangle is a clique with three edges. Show that
$$\Pr(X \ge 1) \le 1/6$$
and that
$$\lim_{n\to\infty} \Pr(X \ge 1) \ge 1/7.$$
(Hint: Use the conditional expectation inequality.)
Exercise 6.18: Use the general form of the Lovász Local Lemma to prove that the
symmetric version of Theorem 6.11 can be improved by replacing the condition $4dp \le 1$
by the weaker condition $ep(d+1) \le 1$.
You may want to let Au,v,c be the event that u and v are both colored with color c and
then consider the family of such events.
Exercise 6.20: A k-uniform hypergraph is an ordered pair G = (V, E), where edges consist
of sets of k (distinct) vertices, instead of just 2. (So a 2-uniform hypergraph is just
what we normally call a graph.) A hypergraph is k-regular if all vertices have degree
k; that is, each vertex is in k hyperedges.
Show that, for sufficiently large k, the vertices of a k-uniform, k-regular hypergraph
can be 2-colored so that no edge is monochromatic. What is the smallest value of k you
can achieve?
Exercise 6.21: In our description of the k-Satisfiability Algorithm using the algorithmic
Lovász Local Lemma, we used $\lceil \log_2 m \rceil$ bits in the history to represent each
clause called in the main routine. Instead, however, we could simply record in the history
which clauses are initially unsatisfied with an array of m bits. Explain any other
changes you need to make in the algorithm in order to properly record a history that
you can "reverse" to obtain the initial assignment, and explain how this allows one to
modify the proof of Theorem 6.18 so that only O(m) rounds are needed in expectation.
Exercise 6.22: Implement the algorithmic Lovász Local Lemma for the following
scenario. Consider a 9-SAT formula where each variable appears in 8 clauses. Set up
a formula with 112,500 variables and 100,000 clauses in the following manner: set up
8 copies of each of the 112,500 variables (900,000 total variable slots), permute them, and
use the ordering to assign the variables to the 100,000 clauses. (If any clauses share a
variable, which is likely to happen, try to locally correct for this by swapping one copy
to another clause.) Then assign a random "sign" to each variable – with probability
1/2, use x̄ instead of x. This gives a formula that satisfies the conditions of Theorem
6.18.
Your implementation of the algorithmic Lovász Local Lemma does not need to keep
track of the history. However, you should track how many times the local correction
procedure is required before termination. Repeat this experiment with 100 different
formulas derived from the process above, and report on the distribution of the number
of local corrections required. Note that you may want to take some care to make the
local correction step efficient in order to have your program run effectively.
chapter seven
Markov Chains and Random Walks
Markov chains provide a simple but powerful framework for modeling random processes.
We start this chapter with the basic definitions related to Markov chains and
then show how Markov chains can be used to analyze simple randomized algorithms
for the 2-SAT and 3-SAT problems. Next we study the long-term behavior of Markov
chains, explaining the classifications of states and conditions for convergence to a stationary
distribution. We apply these techniques to analyzing simple gambling schemes
and a discrete version of a Markovian queue. Of special interest is the limiting behavior
of random walks on graphs. We prove bounds on the covering time of a graph and
use this bound to develop a simple randomized algorithm for the s–t connectivity problem.
Finally, we apply Markov chain techniques to resolve a subtle probability problem
known as Parrondo's paradox.
7.1. Markov Chains: Definitions and Representations
Definition 7.1: A discrete time stochastic process $X_0, X_1, X_2, \ldots$ is a Markov chain¹ if
$$\Pr(X_t = a_t \mid X_{t-1} = a_{t-1}, X_{t-2} = a_{t-2}, \ldots, X_0 = a_0) = \Pr(X_t = a_t \mid X_{t-1} = a_{t-1}) = P_{a_{t-1}, a_t}.$$
1 Strictly speaking, this is a time-homogeneous Markov chain; this will be the only type we study in this book.
This definition expresses that the state $X_t$ depends on the previous state $X_{t-1}$ but is
independent of the particular history of how the process arrived at state $X_{t-1}$. This is
called the Markov property or memoryless property, and it is what we mean when we
say that a chain is Markovian. It is important to note that the Markov property does not
imply that $X_t$ is independent of the random variables $X_0, X_1, \ldots, X_{t-2}$; it just implies
that any dependency of $X_t$ on the past is captured in the value of $X_{t-1}$.
Without loss of generality, we can assume that the discrete state space of the Markov
chain is {0, 1, 2, ..., n} (or {0, 1, 2, ...} if it is countably infinite). The transition
probability
$$P_{i,j} = \Pr(X_t = j \mid X_{t-1} = i)$$
is the probability that the process moves from i to j in one step. The Markov property
implies that the Markov chain is uniquely defined by the one-step transition matrix:
$$P = \begin{pmatrix}
P_{0,0} & P_{0,1} & \cdots & P_{0,j} & \cdots \\
P_{1,0} & P_{1,1} & \cdots & P_{1,j} & \cdots \\
\vdots & \vdots & \ddots & \vdots & \\
P_{i,0} & P_{i,1} & \cdots & P_{i,j} & \cdots \\
\vdots & \vdots & & \vdots & \ddots
\end{pmatrix}.$$
2 Operations on vectors are generalized to a countable number of elements in the natural way.
Figure 7.1: A Markov chain (left) and the corresponding transition matrix (right).
Let $P^{(m)}$ be the matrix whose entries are the m-step transition probabilities, so that the
entry in the ith row and jth column is $P^m_{i,j}$. Then, applying Eqn. (7.1) yields
$$P^{(m)} = P \cdot P^{(m-1)},$$
and by induction on m,
$$P^{(m)} = P^m.$$
The entry $P^3_{0,3} = 41/192$ gives the correct answer. The matrix is also helpful if we want
to know the probability of ending in state 3 after three steps when we begin in a state
chosen uniformly at random from the four states. This can be computed by calculating
$$(1/4, 1/4, 1/4, 1/4)\,P^3 = (17/192, 47/384, 737/1152, 43/288);$$
here the last entry, 43/288, is the required answer.
7.1.1. Application: A Randomized Algorithm for 2-Satisfiability
2-SAT Algorithm:
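The pseudocode box for the 2-SAT Algorithm is not reproduced above; here is a minimal Python sketch of the procedure as it is analyzed below (the representation is mine): start from a uniformly random assignment and, for up to 2mn² steps, pick an arbitrary unsatisfied clause and flip the value of one of its literals chosen uniformly at random. The parameter m controls the failure probability, as in Theorem 7.2.

```python
import random

def two_sat(clauses, n, m):
    # clauses: list of 2-literal clauses; literal v > 0 means variable v
    # must be true, v < 0 means it must be false.  Variables are 1..n.
    assign = {v: random.random() < 0.5 for v in range(1, n + 1)}
    for _ in range(2 * m * n * n):
        bad = [c for c in clauses
               if not any((lit > 0) == assign[abs(lit)] for lit in c)]
        if not bad:
            return assign                        # satisfying assignment
        lit = random.choice(random.choice(bad))  # random literal of an
        assign[abs(lit)] = not assign[abs(lit)]  # unsatisfied clause: flip
    return None  # report "unsatisfiable" (wrong with prob. at most 2^-m)
```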
Whereas $X_i$ increases with probability at least 1/2, $Y_i$ increases with probability exactly 1/2. It is therefore clear that the expected
time to reach n starting from any point is larger for the Markov chain Y than for the
process X, and we use this fact hereafter. (A stronger formal framework for such ideas
is developed in Chapter 12.)
This Markov chain models a random walk on an undirected graph G. (We elaborate
further on random walks in Section 7.4.) The vertices of G are the integers 0, ..., n
and, for 1 ≤ i ≤ n − 1, node i is connected to node i − 1 and node i + 1. Let $h_j$ be the
expected number of steps to reach n when starting from j. For the 2-SAT algorithm, $h_j$
is an upper bound on the expected number of steps to fully match S when starting from
a truth assignment that matches S in j locations.
Clearly, $h_n = 0$ and $h_0 = h_1 + 1$, since from state 0 we always move to state 1 in one step.
We use linearity of expectations to find an expression for the other values of $h_j$. Let $Z_j$ be a
random variable representing the number of steps to reach n from state j. Now consider
starting from state j, where 1 ≤ j ≤ n − 1. With probability 1/2, the next state is j − 1,
and in this case $Z_j = 1 + Z_{j-1}$. With probability 1/2, the next state is j + 1, and in this
case $Z_j = 1 + Z_{j+1}$. Hence
$$E[Z_j] = \frac{1}{2}E[1 + Z_{j-1}] + \frac{1}{2}E[1 + Z_{j+1}].$$
But $E[Z_j] = h_j$ and so, by applying the linearity of expectations, we obtain
$$h_j = \frac{h_{j-1}+1}{2} + \frac{h_{j+1}+1}{2} = \frac{h_{j-1}}{2} + \frac{h_{j+1}}{2} + 1.$$
We therefore have the following system of equations:
$$h_n = 0;$$
$$h_j = \frac{h_{j-1}}{2} + \frac{h_{j+1}}{2} + 1, \quad 1 \le j \le n-1;$$
$$h_0 = h_1 + 1.$$
We can show inductively that, for 0 ≤ j ≤ n − 1,
$$h_j = h_{j+1} + 2j + 1.$$
It is true when j = 0, since $h_1 = h_0 - 1$. For other values of j, we use the equation
$$h_j = \frac{h_{j-1}}{2} + \frac{h_{j+1}}{2} + 1$$
to obtain
$$h_{j+1} = 2h_j - h_{j-1} - 2 = 2h_j - (h_j + 2(j-1) + 1) - 2 = h_j - 2j - 1,$$
using the induction hypothesis $h_{j-1} = h_j + 2(j-1) + 1$ in the second step. We can conclude that
$$h_0 = h_1 + 1 = h_2 + 1 + 3 = \cdots = \sum_{i=0}^{n-1} (2i + 1) = n^2.$$
An alternative approach for solving the system of equations for the $h_j$ is to guess and
verify the solution $h_j = n^2 - j^2$. The system has n + 1 linearly independent equations
and n + 1 unknowns, and hence there is a unique solution for each value of n. Therefore,
if this solution satisfies the foregoing equations then it must be correct. We have $h_n = 0$.
For 1 ≤ j ≤ n − 1, we check
$$h_j = \frac{n^2 - (j-1)^2}{2} + \frac{n^2 - (j+1)^2}{2} + 1 = n^2 - j^2$$
and
$$h_0 = (n^2 - 1) + 1 = n^2.$$
Thus we have proven the following fact.
Lemma 7.1: Assume that a 2-SAT formula with n variables has a satisfying assignment
and that the 2-SAT algorithm is allowed to run until it finds a satisfying assignment.
Then the expected number of steps until the algorithm finds an assignment is at
most $n^2$.
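As an illustrative sanity check (not part of the text), simulating the pessimistic walk used in the analysis shows that the expected time to reach n from 0 is indeed n²:

```python
import random

def steps_to_reach(n):
    # From 0 the walk always moves to 1; elsewhere it moves up or down
    # with probability 1/2 each.  The analysis gives expectation n^2.
    pos = steps = 0
    while pos < n:
        pos += 1 if pos == 0 or random.random() < 0.5 else -1
        steps += 1
    return steps

n, trials = 20, 2000
print(sum(steps_to_reach(n) for _ in range(trials)) / trials, "vs", n * n)
```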
We now return to the issue of dealing with unsatisfiable formulas by forcing the algorithm
to stop after a fixed number of steps.
Theorem 7.2: The 2-SAT algorithm always returns a correct answer if the formula
is unsatisfiable. If the formula is satisfiable, then with probability at least $1 - 2^{-m}$ the
algorithm returns a satisfying assignment. Otherwise, it incorrectly returns that the
formula is unsatisfiable.
Proof: It is clear that if there is no satisfying assignment then the algorithm correctly
returns that the formula is unsatisfiable. Suppose the formula is satisfiable. Divide the
execution of the algorithm into segments of $2n^2$ steps each. Given that no satisfying
assignment was found in the first i − 1 segments, what is the conditional probability that
the algorithm does not find a satisfying assignment in the ith segment? By Lemma 7.1,
the expected time to find a satisfying assignment, regardless of the starting position,
is bounded by $n^2$. Let Z be the number of steps from the start of segment i until the
algorithm finds a satisfying assignment. Applying Markov's inequality,
$$\Pr(Z > 2n^2) \le \frac{n^2}{2n^2} = \frac{1}{2}.$$
Thus the probability that the algorithm fails to find a satisfying assignment after m
segments is bounded above by $(1/2)^m$.
7.1.2. Application: A Randomized Algorithm for 3-Satisfiability
Since 3-SAT is NP-complete, it would be rather surprising if a randomized algorithm could solve the problem in expected time polynomial
in n.3 We present a randomized 3-SAT algorithm that solves 3-SAT in expected
time that is exponential in n, but one that is much more efficient than the naïve approach of
trying all possible truth assignments for the variables.
3-SAT Algorithm:
3 Technically, this would not settle the P = NP question, since we would be using a randomized algorithm and not
a deterministic algorithm to solve an NP-hard problem. It would, however, have similar far-reaching implications
about the ability to solve all NP-complete problems.
Let us first consider the performance of a variant of the randomized 2-SAT algorithm
when applied to a 3-SAT problem. The basic approach is the same as in the previous
section; see Algorithm 7.2. In the algorithm, m is a parameter that controls the probability
of success of the algorithm. We focus on bounding the expected time to reach
a satisfying assignment (assuming one exists), as the argument of Theorem 7.2 can be
extended once such a bound is found.
As in the analysis of the 2-SAT algorithm, assume that the formula is satisfiable
and let S be a satisfying assignment. Let the assignment after i steps of the process be
$A_i$, and let $X_i$ be the number of variables in the current assignment $A_i$ that match S. It
follows from the same reasoning as for the 2-SAT algorithm that, for 1 ≤ j ≤ n − 1,
$$\Pr(X_{i+1} = j + 1 \mid X_i = j) \ge 1/3;$$
$$\Pr(X_{i+1} = j - 1 \mid X_i = j) \le 2/3.$$
These inequalities hold because at each step we choose an unsatisfied clause, so $A_i$ and
S must disagree on at least one variable in this clause. With probability at least 1/3, we
increase the number of matches between the current truth assignment and S. Again we
can obtain an upper bound on the expected number of steps until Xi = n by analyzing
a Markov chain Y0 , Y1 , . . . such that Y0 = X0 and
Pr(Yi+1 = 1 | Yi = 0) = 1,
Pr(Yi+1 = j + 1 | Yi = j) = 1/3,
Pr(Yi+1 = j − 1 | Yi = j) = 2/3.
In this case, the chain is more likely to go down than up. If we let $h_j$ be the expected
number of steps to reach n when starting from j, then the following equations hold
for $h_j$:
$$h_n = 0;$$
$$h_j = \frac{2h_{j-1}}{3} + \frac{h_{j+1}}{3} + 1, \quad 1 \le j \le n-1;$$
$$h_0 = h_1 + 1.$$
Again, these equations have a unique solution, which is given by
$$h_j = 2^{n+2} - 2^{j+2} - 3(n - j).$$
Alternatively, the solution can be found by using induction to prove the relationship
$$h_j = h_{j+1} + 2^{j+2} - 3.$$
We leave it as an exercise to verify that this solution indeed satisfies the foregoing
equations.
The algorithm just described takes $\Theta(2^n)$ steps on average to find a satisfying assignment.
This result is not very compelling, since there are only $2^n$ truth assignments to
try! With some insight, however, we can significantly improve the process. There are
two key observations.
two key observations.
1. If we choose an initial truth assignment uniformly at random, then the number of
variables that match S has a binomial distribution with expectation n/2. With an
exponentially small but nonnegligible probability, the process starts with an initial
assignment that matches S in signiicantly more than n/2 variables.
2. Once the algorithm starts, it is more likely to move toward 0 than toward n. The
longer we run the process, the more likely it has moved toward 0. Therefore, we are
better off restarting the process with many randomly chosen initial assignments and
running the process each time for a small number of steps, rather than running the
process for many steps on the same initial assignment.
Based on these ideas, we consider the modified procedure of Algorithm 7.3 (a sketch
appears below). The modified algorithm has up to 3n steps to reach a satisfying assignment starting from a random
assignment. If it fails to find a satisfying assignment in 3n steps, it restarts the
search with a new randomly chosen assignment. We now determine how many times
the process needs to restart before it reaches a satisfying assignment.
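Since Algorithm 7.3 is not reproduced above, the following minimal Python sketch (the names are mine) captures the modification: repeatedly draw a fresh random assignment, running at most 3n local moves before restarting.

```python
import random

def three_sat(clauses, n, restarts):
    for _ in range(restarts):
        # Fresh uniformly random assignment on each restart.
        assign = {v: random.random() < 0.5 for v in range(1, n + 1)}
        for _ in range(3 * n):
            bad = [c for c in clauses
                   if not any((lit > 0) == assign[abs(lit)] for lit in c)]
            if not bad:
                return assign             # satisfying assignment found
            lit = random.choice(random.choice(bad))
            assign[abs(lit)] = not assign[abs(lit)]  # flip a random literal
    return None  # give up after the allotted number of restarts
```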
Let q represent the probability that the modified process reaches S (or some other
satisfying assignment) in 3n steps starting with a truth assignment chosen uniformly
at random. Let $q_j$ be a lower bound on the probability that our modified algorithm
reaches S (or some other satisfying assignment) when it starts with a truth assignment
that includes exactly j variables that do not agree with S. Consider a particle moving
on the integer line, with probability 1/3 of moving up by one and probability 2/3 of
moving down by one. Notice that
$$\binom{j+2k}{k}\left(\frac{2}{3}\right)^{k}\left(\frac{1}{3}\right)^{j+k}$$
is the probability of exactly k moves down and k + j moves up in a sequence of j + 2k
moves. It is therefore a lower bound on the probability that the algorithm reaches a
satisfying assignment within j + 2k ≤ 3n steps. In particular, taking k = j shows that, when j > 0,
$$q_j \ge \binom{3j}{j}\left(\frac{2}{3}\right)^{j}\left(\frac{1}{3}\right)^{2j}.$$
To estimate the binomial coefficient, we use Stirling's formula in place of the bound of
Eqn. (5.5) we have previously proven for factorials. Stirling's formula is tighter, which
proves useful for this application. We use the following loose form.
Lemma 7.3 [Stirling's Formula]: For m > 0,
$$m! = \sqrt{2\pi m}\left(\frac{m}{e}\right)^{m}(1 \pm o(1)).$$
In particular, for m > 0,
$$\sqrt{2\pi m}\left(\frac{m}{e}\right)^{m} \le m! \le 2\sqrt{2\pi m}\left(\frac{m}{e}\right)^{m}.$$
Hence, when j > 0,
$$\binom{3j}{j} = \frac{(3j)!}{j!\,(2j)!}
\ge \frac{\sqrt{2\pi(3j)}\left(\frac{3j}{e}\right)^{3j}}{4\sqrt{2\pi j}\left(\frac{j}{e}\right)^{j}\sqrt{2\pi(2j)}\left(\frac{2j}{e}\right)^{2j}}
= \frac{\sqrt{3}}{8\sqrt{\pi j}}\left(\frac{27}{4}\right)^{j}
= \frac{c}{\sqrt{j}}\left(\frac{27}{4}\right)^{j}$$
for a constant $c = \sqrt{3}/(8\sqrt{\pi})$. Thus, when j > 0,
$$q_j \ge \binom{3j}{j}\left(\frac{2}{3}\right)^{j}\left(\frac{1}{3}\right)^{2j}
\ge \frac{c}{\sqrt{j}}\left(\frac{27}{4}\right)^{j}\left(\frac{2}{3}\right)^{j}\left(\frac{1}{3}\right)^{2j}
= \frac{c}{\sqrt{j}} \cdot \frac{1}{2^{j}}.$$
Also, q0 = 1.
Having established a lower bound for $q_j$, we can now derive a lower bound for q, the
probability that the process reaches a satisfying assignment in 3n steps when starting
with a random assignment:
$$q \ge \sum_{j=0}^{n} \Pr(\text{a random assignment has } j \text{ mismatches with } S) \cdot q_j$$
$$\ge \frac{1}{2^n} + \sum_{j=1}^{n} \binom{n}{j}\frac{1}{2^n} \cdot \frac{c}{\sqrt{j}}\,\frac{1}{2^j}$$
$$\ge \frac{c}{\sqrt{n}}\,\frac{1}{2^n}\sum_{j=0}^{n} \binom{n}{j}\left(\frac{1}{2}\right)^{j}(1)^{n-j} \qquad (7.3)$$
$$= \frac{c}{\sqrt{n}}\,\frac{1}{2^n}\left(\frac{3}{2}\right)^{n} = \frac{c}{\sqrt{n}}\left(\frac{3}{4}\right)^{n},$$
where in (7.3) we used $\sum_{j=0}^{n} \binom{n}{j}\left(\frac{1}{2}\right)^{j}(1)^{n-j} = \left(1 + \frac{1}{2}\right)^{n}$.
Assuming that a satisfying assignment exists, the number of random assignments the
process tries before finding a satisfying assignment is a geometric random variable with
parameter q. The expected number of assignments tried is 1/q, and for each assignment
the algorithm uses at most 3n steps. Thus, the expected number of steps until a solution
is found is bounded by $O(n^{3/2}(4/3)^n)$. As in the case of 2-SAT (Theorem 7.2), the
modified 3-SAT algorithm (Algorithm 7.3) yields a Monte Carlo algorithm for the 3-SAT
problem. If the expected number of steps until a satisfying solution is found is
bounded above by a and if m is set to 2ab, then the probability that no assignment is
found when the formula is satisfiable is bounded above by $2^{-b}$.
7.2. Classification of States
A first step in analyzing the long-term behavior of a Markov chain is to classify its
states. In the case of a finite Markov chain, this is equivalent to analyzing the connectivity
structure of the directed graph representing the Markov chain.
Definition 7.2: State j is accessible from state i if, for some integer $n \ge 0$, $P^n_{i,j} > 0$. If
two states i and j are accessible from each other, we say that they communicate and we
write $i \leftrightarrow j$.
In the graph representation of a chain, $i \leftrightarrow j$ if and only if there are directed paths
connecting i to j and j to i.
The communicating relation defines an equivalence relation. That is, the communicating
relation is
1. reflexive;
2. symmetric; and
3. transitive.
Proving this is left as Exercise 7.4. Thus, the communicating relation partitions the
states into disjoint equivalence classes, which we refer to as communicating classes. It
might be possible to move from one class to another, but in that case it is impossible to
return to the first class.
Definition 7.3: A Markov chain is irreducible if all states belong to one communicating
class.
In other words, a Markov chain is irreducible if, for every pair of states, there is a
nonzero probability that the first state can reach the second. We thus have the following
lemma.
Lemma 7.4: A finite Markov chain is irreducible if and only if its graph representation
is a strongly connected graph.
Next we distinguish between transient and recurrent states. Let $r^t_{i,j}$ denote the probability
that, starting at state i, the first transition to state j occurs at time t; that is,
$$r^t_{i,j} = \Pr(X_t = j \text{ and, for } 1 \le s \le t-1,\ X_s \ne j \mid X_0 = i).$$
Definition 7.4: A state is recurrent if $\sum_{t \ge 1} r^t_{i,i} = 1$, and it is transient if $\sum_{t \ge 1} r^t_{i,i} < 1$.
A Markov chain is recurrent if every state in the chain is recurrent.
If state i is recurrent then, once the chain visits that state, it will (with probability 1)
eventually return to that state. Hence the chain will visit state i over and over again,
infinitely often. On the other hand, if state i is transient then, starting at i, the chain
will return to i with some fixed probability $p = \sum_{t \ge 1} r^t_{i,i} < 1$. In this case, the number of
times the chain visits i when starting at i is given by a geometric random variable. If
one state in a communicating class is transient (respectively, recurrent) then all states
in that class are transient (respectively, recurrent); proving this is left as Exercise 7.5.
We denote the expected time to return to state i when starting at state i by
$h_{i,i} = \sum_{t \ge 1} t \cdot r^t_{i,i}$. Similarly, for any pair of states i and j, we denote by $h_{i,j} = \sum_{t \ge 1} t \cdot r^t_{i,j}$
the expected time to first reach j from state i. It may seem that if a chain is
recurrent, so that we visit a state i infinitely often, then $h_{i,i}$ should be finite. This is not
the case, which leads us to the following definition.
Definition 7.5: A recurrent state i is positive recurrent if $h_{i,i} < \infty$. Otherwise, it is null recurrent.
To give an example of a Markov chain that has null recurrent states, consider a chain
whose states are the positive integers. From state i, the probability of going to state
i + 1 is i/(i + 1). With probability 1/(i + 1), the chain returns to state 1. Starting at
state 1, the probability of not having returned to state 1 within the first t steps is thus
$$\prod_{j=1}^{t} \frac{j}{j+1} = \frac{1}{t+1}.$$
Hence the probability of never returning to state 1 from state 1 is 0, and state 1 is
recurrent. It follows that
$$r^t_{1,1} = \frac{1}{t(t+1)}.$$
However, the expected number of steps until the first return to state 1 from state 1 is
$$h_{1,1} = \sum_{t=1}^{\infty} t \cdot r^t_{1,1} = \sum_{t=1}^{\infty} \frac{1}{t+1},$$
which is unbounded.
In the foregoing example the Markov chain had an infinite number of states. This is
necessary: in a finite Markov chain, null recurrent states cannot exist. The proof of the following important lemma
is left as Exercise 7.16.
Lemma 7.5: In a finite Markov chain: (1) at least one state is recurrent; and (2) all
recurrent states are positive recurrent.
Finally, for our later study of limiting distributions of Markov chains we will need to
define what it means for a state to be aperiodic. As an example of periodicity, consider
a random walk whose states are the integers. When at state i, with probability
1/2 the chain moves to i + 1 and with probability 1/2 the chain moves to i − 1. If
the chain starts at state 0, then it can be at an even-numbered state only after an even
number of moves, and it can be at an odd-numbered state only after an odd number of
moves. This is an example of periodic behavior.
Definition 7.6: A state j in a discrete time Markov chain is periodic if there exists an
integer $\Delta > 1$ such that $\Pr(X_{t+s} = j \mid X_t = j) = 0$ unless s is divisible by $\Delta$. A discrete
time Markov chain is periodic if any state in the chain is periodic. A state or chain that
is not periodic is aperiodic.
In our example, every state in the Markov chain is periodic because, for every state j,
$\Pr(X_{t+s} = j \mid X_t = j) = 0$ unless s is divisible by 2.
Definition 7.7: An aperiodic, positive recurrent state is an ergodic state. A Markov
chain is ergodic if all its states are ergodic.
We end this section with an important corollary about the behavior of finite Markov
chains.
Corollary 7.6: Any finite, irreducible, and aperiodic Markov chain is an ergodic
chain.
Proof: A finite chain has at least one recurrent state by Lemma 7.5, and if the chain
is irreducible then all of its states are recurrent. In a finite chain, all recurrent states
are positive recurrent by Lemma 7.5, and thus all the states of the chain are positive
recurrent and aperiodic. The chain is therefore ergodic.
7.2.1. Example: The Gambler's Ruin
Consider a sequence of independent, fair games between two players, where in each round
one dollar passes from the loser to the winner with probability 1/2 each way. Let the state
be player 1's total gain; the game ends when player 1 either wins $\ell_2$ dollars or loses $\ell_1$
dollars, so the states $-\ell_1$ and $\ell_2$ are absorbing. Let q be the probability that the game
ends with player 1 winning $\ell_2$ dollars; then
$$\lim_{t\to\infty} P^t_{0,\ell_2} = q.$$
Since each round of the gambling game is fair, the expected gain of player 1 in each
step is 0. Let $W^t$ be the gain of player 1 after t steps. Then $E[W^t] = 0$ for any t by
induction. Thus,
$$E[W^t] = \sum_{i=-\ell_1}^{\ell_2} i\,P^t_{0,i} = 0$$
and
$$\lim_{t\to\infty} E[W^t] = \ell_2 q - \ell_1(1 - q) = 0.$$
Thus,
$$q = \frac{\ell_1}{\ell_1 + \ell_2}.$$
That is, the probability of winning (or losing) is proportional to the amount of money
a player is willing to lose (or win).
Another approach that yields the same answer is to let $q_j$ represent the probability
that player 1 wins $\ell_2$ dollars before losing $\ell_1$ dollars when having won j dollars,
for $-\ell_1 \le j \le \ell_2$. Clearly, $q_{-\ell_1} = 0$ and $q_{\ell_2} = 1$. For $-\ell_1 < j < \ell_2$, we compute by
considering the outcome of the first game:
$$q_j = \frac{q_{j-1}}{2} + \frac{q_{j+1}}{2}.$$
We have $\ell_1 + \ell_2 - 1$ linearly independent equations and $\ell_1 + \ell_2 - 1$ unknowns, so
there is a unique solution to this set of equations. It is easy to verify that $q_j = (\ell_1 + j)/(\ell_1 + \ell_2)$
satisfies the given equations.
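For example (an illustrative simulation, not part of the text), with ℓ1 = 3 and ℓ2 = 7 the estimated winning probability should be close to 3/10:

```python
import random

def win_probability(l1, l2, trials=100_000):
    # Fraction of fair +/-1 walks started at 0 that reach l2 before -l1.
    wins = 0
    for _ in range(trials):
        w = 0
        while -l1 < w < l2:
            w += 1 if random.random() < 0.5 else -1
        wins += (w == l2)
    return wins / trials

print(win_probability(3, 7), "vs", 3 / (3 + 7))
```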
In Exercise 7.20, we consider the question of what happens if, as is generally the
case in real life, one player is at a disadvantage and so is slightly more likely to lose
than to win any single game.
7.3. Stationary Distributions
Recall that if P is the one-step transition probability matrix of a Markov chain and if
$\bar{p}(t)$ is the probability distribution of the state of the chain at time t, then
$$\bar{p}(t+1) = \bar{p}(t)P.$$
Of particular interest are state probability distributions that do not change after a transition.
Definition 7.8: A stationary distribution (also called an equilibrium distribution) of a
Markov chain is a probability distribution $\bar{\pi}$ such that
$$\bar{\pi} = \bar{\pi} P.$$
If a chain ever reaches a stationary distribution then it maintains that distribution for all
future time, and thus a stationary distribution represents a steady state or an equilibrium
in the chain's behavior. Stationary distributions play a key role in analyzing Markov
chains. The fundamental theorem of Markov chains characterizes chains that converge
to stationary distributions.
We discuss first the case of finite chains and then extend the results to any discrete
space chain. Without loss of generality, assume that the finite set of states of the Markov
chain is {0, 1, ..., n}.
Theorem 7.7: Any finite, irreducible, and ergodic Markov chain has the following
properties:
1. the chain has a unique stationary distribution $\bar{\pi} = (\pi_0, \pi_1, \ldots, \pi_n)$;
2. for all j and i, the limit $\lim_{t\to\infty} P^t_{j,i}$ exists and is independent of j;
3. $\pi_i = \lim_{t\to\infty} P^t_{j,i} = 1/h_{i,i}$.
Under the conditions of this theorem, the stationary distribution $\bar{\pi}$ has two interpretations.
First, $\pi_i$ is the limiting probability that the Markov chain will be in state i
infinitely far out in the future, and this probability is independent of the initial state. In
other words, if we run the chain long enough, the initial state of the chain is almost forgotten
and the probability of being in state i converges to $\pi_i$. Second, $\pi_i$ is the inverse of
$h_{i,i} = \sum_{t=1}^{\infty} t \cdot r^t_{i,i}$, the expected number of steps for a chain starting in state i to return
to i. This stands to reason; if the average time to return to state i from i is $h_{i,i}$, then
we expect to be in state i for a $1/h_{i,i}$ fraction of the time and thus, in the limit, we must have
$\pi_i = 1/h_{i,i}$.
Proof of Theorem 7.7: We prove the theorem using the following result, which we
state without proof.
Lemma 7.8: For any irreducible, ergodic Markov chain and for any state i, the limit
$\lim_{t\to\infty} P^t_{i,i}$ exists and
$$\lim_{t\to\infty} P^t_{i,i} = \frac{1}{h_{i,i}}.$$
This lemma is a corollary of a basic result in renewal theory. We give an informal
justification for Lemma 7.8: the expected time between visits to i is $h_{i,i}$, and therefore
state i is visited a $1/h_{i,i}$ fraction of the time. Thus $\lim_{t\to\infty} P^t_{i,i}$, which represents the probability that a
state chosen far in the future is at state i when the chain starts at state i, must be $1/h_{i,i}$.
Using the fact that $\lim_{t\to\infty} P^t_{i,i}$ exists, we now show that, for any j and i,
$$\lim_{t\to\infty} P^t_{j,i} = \lim_{t\to\infty} P^t_{i,i} = \frac{1}{h_{i,i}};$$
that is, these limits exist and are independent of the starting state j.
Recall that $r^t_{j,i}$ is the probability that, starting at j, the chain first visits i at time t.
Since the chain is irreducible we have that $\sum_{t=1}^{\infty} r^t_{j,i} = 1$, and for any $\varepsilon > 0$ there exists
a finite $t_1 = t_1(\varepsilon)$ such that $\sum_{t=1}^{t_1} r^t_{j,i} \ge 1 - \varepsilon$.
For $j \ne i$, we have
$$P^t_{j,i} = \sum_{k=1}^{t} r^k_{j,i} P^{t-k}_{i,i}.$$
For $t \ge t_1$,
$$\sum_{k=1}^{t_1} r^k_{j,i} P^{t-k}_{i,i} \le \sum_{k=1}^{t} r^k_{j,i} P^{t-k}_{i,i} = P^t_{j,i}.$$
Using the facts that $\lim_{t\to\infty} P^t_{i,i}$ exists and $t_1$ is finite, we have
$$\lim_{t\to\infty} P^t_{j,i} \ge \lim_{t\to\infty} \sum_{k=1}^{t_1} r^k_{j,i} P^{t-k}_{i,i}
= \sum_{k=1}^{t_1} r^k_{j,i} \lim_{t\to\infty} P^t_{i,i}
= \lim_{t\to\infty} P^t_{i,i} \sum_{k=1}^{t_1} r^k_{j,i}
\ge (1 - \varepsilon) \lim_{t\to\infty} P^t_{i,i}.$$
Similarly,
$$P^t_{j,i} = \sum_{k=1}^{t} r^k_{j,i} P^{t-k}_{i,i} \le \sum_{k=1}^{t_1} r^k_{j,i} P^{t-k}_{i,i} + \varepsilon,$$
Letting $t \to \infty$, we have
$$\pi_i = \sum_{k=0}^{n} \pi_k P_{k,i},$$
or, in matrix form,
$$\bar{\pi} P = \bar{\pi}.$$
This is particularly useful if one is given a specific chain. For example, given the transition
matrix
$$P = \begin{bmatrix} 0 & 1/4 & 0 & 3/4 \\ 1/2 & 0 & 1/3 & 1/6 \\ 1/4 & 1/4 & 1/2 & 0 \\ 0 & 1/2 & 1/4 & 1/4 \end{bmatrix},$$
we have five equations for the four unknowns $\pi_0$, $\pi_1$, $\pi_2$, and $\pi_3$, given by $\bar{\pi} P = \bar{\pi}$ and
$\sum_{i=0}^{3} \pi_i = 1$. The equations have a unique solution.
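For instance, the system can be solved numerically; the sketch below (using numpy, added here as an illustration) replaces one of the redundant equations of $\bar{\pi}P = \bar{\pi}$ with the normalization constraint.

```python
import numpy as np

P = np.array([[0,   1/4, 0,   3/4],
              [1/2, 0,   1/3, 1/6],
              [1/4, 1/4, 1/2, 0  ],
              [0,   1/2, 1/4, 1/4]])

# pi P = pi is (P^T - I) pi^T = 0; one of these equations is redundant,
# so replace the last row with the constraint sum(pi) = 1.
A = P.T - np.eye(4)
A[-1, :] = 1.0
b = np.array([0.0, 0.0, 0.0, 1.0])
pi = np.linalg.solve(A, b)
print(pi)          # the unique stationary distribution
print(pi @ P)      # equals pi again, as a check
```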
Another useful technique is to study the cut-sets of the Markov chain. For any state
i of the chain,
$$\sum_{j=0}^{n} \pi_j P_{j,i} = \pi_i = \pi_i \sum_{j=0}^{n} P_{i,j}$$
or
$$\sum_{j \ne i} \pi_j P_{j,i} = \pi_i \sum_{j \ne i} P_{i,j}.$$
That is, in the stationary distribution the probability that a chain leaves a state equals
the probability that it enters the state. This observation can be generalized to sets of
states as follows.
Theorem 7.9: Let S be a set of states of a finite, irreducible, aperiodic Markov chain.
In the stationary distribution, the probability that the chain leaves the set S equals the
probability that it enters S.
In other words, if C is a cut-set in the graph representation of the chain, then in the
stationary distribution the probability of crossing the cut-set in one direction is equal
to the probability of crossing the cut-set in the other direction.
A basic but useful Markov chain that serves as an example of cut-sets is given in
Figure 7.2. The chain has only two states. From state 0, you move to state 1 with prob-
ability p and stay at state 0 with probability 1 − p. Similarly, from state 1 you move
to state 0 with probability q and remain in state 1 with probability 1 − q. This Markov
chain is often used to represent bursty behavior. For example, when bits are corrupted in
transmissions they are often corrupted in large blocks, since the errors are often caused
by an external phenomenon of some duration. In this setting, being in state 0 after t
steps represents that the tth bit was sent successfully, while being in state 1 represents
that the bit was corrupted. Blocks of successfully sent bits and corrupted bits both have
lengths that follow a geometric distribution. When p and q are small, state changes are
rare, and the bursty behavior is modeled.
The transition matrix is
$$P = \begin{bmatrix} 1-p & p \\ q & 1-q \end{bmatrix}.$$
The stationary distribution $\bar{\pi} = (\pi_0, \pi_1)$ satisfies
$$\pi_0(1 - p) + \pi_1 q = \pi_0;$$
$$\pi_0 p + \pi_1(1 - q) = \pi_1;$$
$$\pi_0 + \pi_1 = 1.$$
Solving these equations yields $\pi_0 = q/(p+q)$ and $\pi_1 = p/(p+q)$.
The following theorem often provides a simple way to find the stationary distribution of a chain.
Theorem 7.10: Consider a finite, irreducible, and ergodic Markov chain with transition
matrix P. If there are nonnegative numbers $\bar{\pi} = (\pi_0, \ldots, \pi_n)$ such that $\sum_{i=0}^{n} \pi_i = 1$
and if, for any pair of states i, j,
$$\pi_i P_{i,j} = \pi_j P_{j,i},$$
then $\bar{\pi}$ is the stationary distribution corresponding to P.
Proof: Consider the jth entry of $\bar{\pi} P$. Using the assumption of the theorem, we find
that it equals
$$\sum_{i=0}^{n} \pi_i P_{i,j} = \sum_{i=0}^{n} \pi_j P_{j,i} = \pi_j.$$
Thus $\bar{\pi}$ satisfies $\bar{\pi} = \bar{\pi} P$. Since $\sum_{i=0}^{n} \pi_i = 1$, it follows from Theorem 7.7 that $\bar{\pi}$ must
be the unique stationary distribution of the Markov chain.
If $X_t$ is the number of customers in the queue at time t, then under the foregoing rules
the $X_t$ yield a finite-state Markov chain. Its transition matrix has the following nonzero
entries:
$$P_{i,i+1} = \lambda \quad \text{if } i < n;$$
$$P_{i,i-1} = \mu \quad \text{if } i > 0;$$
$$P_{i,i} = \begin{cases} 1 - \lambda & \text{if } i = 0, \\ 1 - \lambda - \mu & \text{if } 1 \le i \le n-1, \\ 1 - \mu & \text{if } i = n. \end{cases}$$
The Markov chain is irreducible, finite, and aperiodic, so it has a unique stationary
distribution $\bar{\pi}$. We use $\bar{\pi} = \bar{\pi} P$ to write
$$\pi_0 = (1 - \lambda)\pi_0 + \mu \pi_1,$$
$$\pi_i = \lambda \pi_{i-1} + (1 - \lambda - \mu)\pi_i + \mu \pi_{i+1}, \quad 1 \le i \le n-1,$$
$$\pi_n = \lambda \pi_{n-1} + (1 - \mu)\pi_n.$$
For all $0 \le i \le n$,
$$\pi_i = \frac{(\lambda/\mu)^i}{\sum_{i=0}^{n} (\lambda/\mu)^i}. \qquad (7.4)$$
Another way to compute the stationary probability in this case is to use cut-sets. For
any i, the transitions $i \to i+1$ and $i+1 \to i$ constitute a cut-set of the graph representing
the Markov chain. Thus, in the stationary distribution, the probability of moving
from state i to state i + 1 must be equal to the probability of moving from state i + 1
to i, or
$$\lambda \pi_i = \mu \pi_{i+1}.$$
A simple induction now yields
$$\pi_i = \left(\frac{\lambda}{\mu}\right)^{i} \pi_0.$$
In the case where there is no upper limit n on the number of customers in the queue,
the Markov chain is no longer finite; it has a countably infinite state space. Applying
Theorem 7.11, the Markov chain has a stationary distribution if and
only if the following set of linear equations has a solution with all $\pi_i > 0$:
$$\pi_0 = (1 - \lambda)\pi_0 + \mu \pi_1;$$
$$\pi_i = \lambda \pi_{i-1} + (1 - \lambda - \mu)\pi_i + \mu \pi_{i+1}, \quad i \ge 1. \qquad (7.5)$$
It is easy to verify that
$$\pi_i = \frac{(\lambda/\mu)^{i}}{\sum_{i=0}^{\infty} (\lambda/\mu)^{i}} = \left(\frac{\lambda}{\mu}\right)^{i}\left(1 - \frac{\lambda}{\mu}\right)$$
is a solution of the system of equations (7.5). This naturally generalizes the solution
to the case where there is an upper bound n on the number of customers in the
system, given in Eqn. (7.4). All of the $\pi_i$ are greater than 0 if and only if $\lambda < \mu$, which
corresponds to the situation where the rate at which customers arrive is lower than the
rate at which they are served. If $\lambda > \mu$, then the rate at which customers arrive is higher
than the rate at which they depart; hence there is no stationary distribution, and the
queue length will become arbitrarily long. In this case, each state in the Markov chain
is transient. The case $\lambda = \mu$ is more subtle: again, there is no stationary distribution
and the queue length will become arbitrarily long, but now the states are null recurrent.
(See the related Exercise 7.17.)
7.4. Random Walks on Undirected Graphs
A random walk on an undirected graph is a special type of Markov chain that is often
used in analyzing algorithms. Let G = (V, E) be a finite, undirected, and connected
graph.
Definition 7.9: A random walk on G is a Markov chain defined by the sequence of
moves of a particle between vertices of G. In this process, the place of the particle at
a given time step is the state of the system. If the particle is at vertex i and if i has d(i)
outgoing edges, then the probability that the particle follows the edge (i, j) and moves
to a neighbor j is 1/d(i).
We have already seen an example of such a walk when we analyzed the randomized
2-SAT algorithm.
For a random walk on an undirected graph, we have a simple criterion for aperiod-
icity as follows.
Lemma 7.12: A random walk on an undirected graph G is aperiodic if and only if G
is not bipartite.
Proof: A graph is bipartite if and only if it does not have cycles with an odd number
of edges. In an undirected graph, there is always a path of length 2 from a vertex to
itself. If the graph is bipartite then the random walk is periodic with period d = 2.
If the graph is not bipartite then it has an odd cycle, and by traversing that cycle we
have an odd-length path from any vertex to itself. It follows that the Markov chain is
aperiodic.
For the remainder of this section we assume that G is not bipartite. A random walk
on a finite, undirected, connected, and non-bipartite graph G satisfies the conditions of
Theorem 7.7, and hence the random walk converges to a stationary distribution. We
show that this distribution depends only on the degree sequence of the graph.
Theorem 7.13: A random walk on G converges to a stationary distribution $\bar{\pi}$, where
$$\pi_v = \frac{d(v)}{2|E|}.$$
Proof: Since $\sum_{v \in V} d(v) = 2|E|$, it follows that
$$\sum_{v \in V} \pi_v = \sum_{v \in V} \frac{d(v)}{2|E|} = 1,$$
so $\bar{\pi}$ is a proper distribution over $v \in V$. Moreover, $\bar{\pi} = \bar{\pi} P$, since for each v,
$$\sum_{u : (u,v) \in E} \frac{d(u)}{2|E|} \cdot \frac{1}{d(u)} = \frac{d(v)}{2|E|} = \pi_v.$$
Recall that we have used hu,v to denote the expected time to reach state v when
starting at state u. The value hu,v is often referred to as the hitting time from u to v,
or just the hitting time where the meaning is clear. Another value related to the hitting
time is the commute time between u and v, given by hu,v + hv,u . Unlike the hitting time,
the commute time is symmetric; it represents the time to go from u to v and back to u,
and this is the same as the time to go from v to u and back to v. Finally, for random
walks on graphs, we are also interested in a quantity called the cover time.
Definition 7.10: The cover time of a graph G = (V, E) is the maximum over all vertices
$v \in V$ of the expected time to visit all of the nodes in the graph by a random walk
starting from v.
We consider here some basic bounds on the commute time and the cover time for
standard random walks on a finite, undirected, connected graph G = (V, E).
Lemma 7.14: If (u, v) ∈ E, the commute time hu,v + hv,u is at most 2|E|.
Proof: Let D be a set of directed edges such that for every edge (u, v) ∈ E we have
the two directed edges u → v and v → u in D. We can view the random walk on G as
a Markov chain with state space D, where the state of the Markov chain at time t is the
directed edge taken by the random walk in its tth transition. The Markov chain has 2|E|
states and it is easy to verify that it has a uniform stationary distribution. (This is left
as Exercise 7.29.) Since the stationary probability of being in state u → v is 1/2|E|,
once the original random walk traverses the directed edge u → v the expected time
to traverse that directed edge again is 2|E|. Because the random walk is memoryless,
once it reaches vertex v we can “forget” that it reached it through the edge u → v, and
therefore the expected time starting at v to reach u and then traverse the edge u → v
back to v is bounded above by 2|E|. As this is only one of the possible ways to go from
v to u and back to v, we have shown that hv,u + hu,v ≤ 2|E|.
Lemma 7.15: The cover time of G = (V, E) is bounded above by $2|E|(|V| - 1)$.
Proof: Choose a spanning tree T of G; that is, choose any subset of the edges that gives
an acyclic graph connecting all the vertices of G. Starting from any vertex v, there exists
a cyclic (Eulerian) tour on the spanning tree in which every edge is traversed once in
each direction; for example, such a tour can be found by considering the sequence of
vertices passed through by a depth-first search. The maximum expected time to go
through the vertices in the tour, where the maximum is over the choice of starting
vertex, is an upper bound on the cover time. Let $v_0, v_1, \ldots, v_{2|V|-2}$ be the sequence of
vertices in the tour, starting from $v_0 = v$. Then the expected time to go through all the
vertices in sequence order is
$$\sum_{i=0}^{2|V|-3} h_{v_i, v_{i+1}} = \sum_{(x,y)\in T} (h_{x,y} + h_{y,x}) \le 2|E|(|V| - 1).$$
In words, the commute time for every pair of adjacent vertices in the tree is bounded
above by 2|E|, and there are |V| − 1 pairs of adjacent vertices.
The following result, known as Matthews' theorem, relates the cover time
of a graph to the hitting times. Recall that we use H(n) to denote the harmonic number
$\sum_{i=1}^{n} 1/i \approx \ln n$.
Lemma 7.16: The cover time $C_G$ of a graph G with n vertices satisfies
$$C_G \le H(n) \max_{u,v \in V : u \ne v} h_{u,v}.$$
Proof: For convenience let B = maxu,v∈V :u=v hu,v . Consider a random walk starting
from a vertex u. We choose an ordering of the vertices according to a uniform permu-
tation; let Z1 , Z2 , . . . , Zn be the ordering. Let Tj be the first time when all of the first
j vertices in the order, Z1 , Z2 , . . . , Z j , have been visited, and let A j be the last vertex
from the set {Z1 , . . . , Z j } that was visited. Following the spirit of the coupon collector’s
analysis, we consider the successive time intervals Tj − Tj−1 . If the chain’s history is
given by X1 , X2 , . . ., then in particular for j ≥ 2 we consider
Y j = E[Tj − Tj−1 | Z1 , . . . , Z j ; X1 , . . . , XTj−1 ].
The expected time to cover the graph starting from u is
∑_{j=2}^{n} Y_j + E[T_1].
One can similarly obtain lower bounds using the same technique. A natural lower
bound is
C_G ≥ H(n − 1) min_{u,v∈V: u≠v} h_{u,v}.
However, the minimum hitting time can be very small for some graphs, making this
bound less useful. In some cases, the lower bound can be made stronger by considering
a subset of vertices V′ ⊂ V. In this case, the proof can be modified to give

C_G ≥ H(|V′| − 1) min_{u,v∈V′: u≠v} h_{u,v}.
The term from the harmonic series is smaller, but the minimum hitting time used in the
bound may correspondingly be larger.
Here we develop a randomized algorithm that works with only O(log n) bits of mem-
ory. This could be even less than the number of bits required to write the path between
s and t. The algorithm is simple: perform a random walk on G for enough steps so that
a path from s to t is likely to be found. We use the cover time result (Lemma 7.16)
to bound the number of steps that the random walk has to run. For convenience,
assume that the graph G has no bipartite connected components, so that the results of
Theorem 7.13 apply to any connected component of G. (The results can be made to
apply to bipartite graphs with some additional technical work.)
Theorem 7.17: The s–t connectivity algorithm (Algorithm 7.4) returns the correct
answer with probability at least 1/2, and it only errs by returning that there is no path from s
to t when there is such a path.

Proof: If there is no path then the algorithm returns the correct answer. If there is a path,
the algorithm errs if it does not find the path within 2n³ steps of the walk. The expected
time to reach t from s (if there is a path) is bounded from above by the cover time
of their shared component, which by Lemma 7.15 is at most 2nm < n³. By Markov’s
inequality, the probability that a walk takes more than 2n³ steps to reach t from s is at
most 1/2.
The algorithm must keep track of its current position, which takes O(log n) bits, as well
as the number of steps taken in the random walk, which also takes only O(log n) bits
(since we count up only to 2n³). As long as there is some mechanism for choosing a
random neighbor from each vertex, this is all the memory required.
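The algorithm fits in a few lines. The Python sketch below is our own illustrative rendering of the random-walk approach described above (not the book's Algorithm 7.4 verbatim); note that it can err only by reporting that no path exists:

```python
# A minimal sketch of randomized s-t connectivity by random walk.
import random

def st_connected(adj, s, t):
    n = len(adj)
    v = s
    for _ in range(2 * n ** 3):        # the step counter needs O(log n) bits
        if v == t:
            return True
        v = random.choice(adj[v])      # current position: O(log n) bits of state
    return v == t                      # correct w.p. >= 1/2 when a path exists

adj = {0: [1], 1: [0, 2], 2: [1], 3: [4], 4: [3]}
print(st_connected(adj, 0, 2))  # path exists: True with probability >= 1/2
print(st_connected(adj, 0, 3))  # different components: always False
```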
In game B, we also repeatedly flip coins, but the coin that is flipped depends on how
you have been doing so far in the game. Let w be the number of your wins so far and
ℓ the number of your losses. Each round we bet one dollar, so w − ℓ represents your
winnings; if it is negative, you have lost money. Game B uses two biased coins, coin b
and coin c. If your winnings in dollars are a multiple of 3, then you flip coin b, which
comes up heads with probability pb and tails with probability 1 − pb . Otherwise, you
flip coin c, which comes up heads with probability pc and tails with probability 1 − pc .
Again, you win a dollar if the coin comes up heads and lose a dollar if it comes up tails.
This game is more complicated, so let us consider a specific example. Suppose coin
b comes up heads with probability pb = 0.09 and tails with probability 0.91 and that
coin c comes up heads with probability pc = 0.74 and tails with probability 0.26. At
first glance, it might seem that game B is in your favor. If we use coin b for the 1/3 of
the time that your winnings are a multiple of 3 and use coin c the other 2/3 of the time,
then your probability w of winning is
w = (1/3)(9/100) + (2/3)(74/100) = 157/300 > 1/2.
The problem with this line of reasoning is that coin b is not necessarily used 1/3 of
the time! To see this intuitively, consider what happens when you first start the game,
when your winnings are 0. You use coin b and most likely lose, after which you use
coin c and most likely win. You may spend a great deal of time going back and forth
between having lost one dollar and breaking even before either winning one dollar or
losing two dollars, so you may use coin b more than 1/3 of the time.
In fact, the specific example for game B is a losing game for you. One way to show
this is to suppose that we start playing game B when your winnings are 0, continuing
until you either lose three dollars or win three dollars. If you are more likely to lose
than win in this case, by symmetry you are more likely to lose three dollars than win
three dollars whenever your winnings are a multiple of 3. On average, then, you would
obviously lose money on the game.
One way to determine if you are more likely to lose than win is to analyze the absorb-
ing states. Consider the Markov chain on the state space consisting of the integers
{−3, . . . , 3}, where the states represent your winnings. We want to know, when you
start at 0, whether or not you are more likely to reach −3 before reaching 3. We can
determine this by setting up a system of equations. Let zi represent the probability you
will end up having lost three dollars before having won three dollars when your current
winnings are i dollars. We calculate all the probabilities z−3 , z−2 , z−1 , z0 , z1 , z2 , and z3 ,
although what we are really interested in is z0 . If z0 > 1/2, then we are more likely to
lose three dollars than win three dollars starting from 0. Here z−3 = 1 and z3 = 0; these
are boundary conditions. We also have the following equations:
z−2 = (1 − pc )z−3 + pc z−1 ,
z−1 = (1 − pc )z−2 + pc z0 ,
z0 = (1 − pb )z−1 + pb z1 ,
z1 = (1 − pc )z0 + pc z2 ,
z2 = (1 − pc )z1 + pc z3 .
194
7.5 parrondo’s paradox
This is a system of five equations with five unknowns, and hence it can be solved easily.
The general solution for z0 is
z0 = (1 − pb)(1 − pc)² / ((1 − pb)(1 − pc)² + pb pc²).
For the specific example here, the solution yields z0 = 15,379/27,700 ≈ 0.555, show-
ing that one is much more likely to lose than win playing this game over the long run.
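The closed form can be checked directly. The following Python sketch (our own illustration; the trial count is an arbitrary choice) compares the formula for z0 against a Monte Carlo simulation of game B's absorbing chain on {−3, . . . , 3}:

```python
# A minimal sketch: closed-form z0 vs. direct simulation of game B.
import random

pb, pc = 0.09, 0.74
closed_form = ((1 - pb) * (1 - pc) ** 2) / ((1 - pb) * (1 - pc) ** 2 + pb * pc ** 2)

def lose_first(trials=200_000):
    losses = 0
    for _ in range(trials):
        w = 0
        while -3 < w < 3:
            p = pb if w % 3 == 0 else pc   # coin b on multiples of 3, else coin c
            w += 1 if random.random() < p else -1
        losses += (w == -3)
    return losses / trials

print(closed_form)   # 15,379/27,700, about 0.555
print(lose_first())  # the empirical frequency should be close to that
```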
Instead of solving these equations directly, there is a simpler way of determining
the relative probability of reaching −3 or 3 first. Consider any sequence of moves that
starts at 0 and ends at 3 before reaching −3. For example, a possible sequence is
s = 0, 1, 2, 1, 2, 1, 0, −1, −2, −1, 0, 1, 2, 1, 2, 3.
We create a one-to-one and onto mapping of such sequences with the sequences that
start at 0 and end at −3 before reaching 3 by negating every number starting from the
last 0 in the sequence. In this example, s maps to f (s), where
f (s) = 0, 1, 2, 1, 2, 1, 0, −1, −2, −1, 0, −1, −2, −1, −2, −3.
It is simple to check that this is a one-to-one mapping of the relevant sequences.
The following lemma provides a useful relationship between s and f (s).
Lemma 7.18: For any sequence s of moves that starts at 0 and ends at 3 before reach-
ing −3, we have
Pr(s occurs) / Pr( f (s) occurs) = pb pc² / ((1 − pb)(1 − pc)²).
Proof: For any given sequence s satisfying the properties of the lemma, let t1 be the
number of transitions from 0 to 1; t2 , the number of transitions from 0 to −1; t3 , the
sum of the number of transitions from −2 to −1, −1 to 0, 1 to 2, and 2 to 3; and t4 ,
the sum of the number of transitions from 2 to 1, 1 to 0, −1 to −2, and −2 to −3. Then
the probability that the sequence s occurs is pb^t1 (1 − pb)^t2 pc^t3 (1 − pc)^t4 .
Now consider what happens when we transform s to f (s). We change one transition
from 0 to 1 into a transition from 0 to −1. After this point, in s the total number of
transitions that move up 1 is two more than the number of transitions that move down
1, since the sequence ends at 3. In f (s), then, the total number of transitions that move
down 1 is two more than the number of transitions that move up 1. It follows that
the probability that the sequence f (s) occurs is pb^(t1−1) (1 − pb)^(t2+1) pc^(t3−2) (1 − pc)^(t4+2) . The
lemma follows.
By letting S be the set of all sequences of moves that start at 0 and end at 3 before
reaching −3, it immediately follows that
Pr(3 is reached before −3) / Pr(−3 is reached before 3) = ∑_{s∈S} Pr(s occurs) / ∑_{s∈S} Pr( f (s) occurs) = pb pc² / ((1 − pb)(1 − pc)²).
If this ratio is less than 1, then you are more likely to lose than win. In our specific
example, this ratio is 12,321/15,379 < 1.
Finally, yet another way to analyze the problem is to use the stationary distribution.
Consider the Markov chain on the states {0, 1, 2}, where here the states represent the
remainder when our winnings are divided by 3. (That is, the state keeps track of w − ℓ
mod 3.) Let πi be the stationary probability of this chain. The probability that we win
a dollar in the stationary distribution, which is the limiting probability that we win a
dollar if we play long enough, is then
pb π0 + pc π1 + pc π2 = pb π0 + pc (1 − π0 )
= pc − (pc − pb )π0 .
Again, we want to know if this is greater than or less than 1/2.
The equations for the stationary distribution are easy to write:
π0 + π1 + π2 = 1,
pb π0 + (1 − pc )π2 = π1 ,
pc π1 + (1 − pb )π0 = π2 ,
pc π2 + (1 − pc )π1 = π0 .
Indeed, since there are four equations and only three unknowns, one of these equations
is actually redundant. The system is easily solved to find
π0 = (1 − pc + pc²) / (3 − 2pc − pb + 2pb pc + pc²),
π1 = (pb pc − pc + 1) / (3 − 2pc − pb + 2pb pc + pc²),
π2 = (pb pc − pb + 1) / (3 − 2pc − pb + 2pb pc + pc²).
Recall that you lose if the probability of winning in the stationary distribution is less
than 1/2 or, equivalently, if pc − (pc − pb)π0 < 1/2. In our specific example, π0 =
673/1759 ≈ 0.3826, and

pc − (pc − pb)π0 = 86,421/175,900 < 1/2.

Again, we find that game B is a losing game in the long run.
We have now completely analyzed game A and game B. Next let us consider what
happens when we try to combine these two games. In game C, we repeatedly perform
the following bet. We start by flipping a fair coin, call it coin d. If coin d is heads, we
proceed as in game A: we flip coin a, and if the coin is heads, you win. If coin d is tails,
we then proceed to game B: if your current winnings are a multiple of 3, we flip coin b;
otherwise, we flip coin c, and if the coin is heads then you win. It would seem that this
must be a losing game for you. After all, game A and game B are both losing games,
and this game just flips a coin to decide which of the two games to play.
In fact, game C is exactly like game B, except the probabilities are slightly different.
If your winnings are a multiple of 3, then the probability that you win is p*b = (1/2)pa + (1/2)pb.
Otherwise, the probability that you win is p*c = (1/2)pa + (1/2)pc. Using p*b and p*c in
place of pb and pc, we can repeat any of the foregoing analyses we used for game B.
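A short sketch (ours) makes the paradox concrete. The value pa for game A's coin does not appear in this excerpt; we assume a slightly losing coin with pa = 0.49 purely for illustration:

```python
# A minimal sketch: plug the game C probabilities into the ratio from the
# Lemma 7.18 analysis. pa = 0.49 is an assumed, illustrative value.
pa, pb, pc = 0.49, 0.09, 0.74
pb_star = 0.5 * pa + 0.5 * pb   # win probability when winnings are a multiple of 3
pc_star = 0.5 * pa + 0.5 * pc   # win probability otherwise

# Ratio Pr(reach 3 first) / Pr(reach -3 first); > 1 means the game favors you.
ratio = (pb_star * pc_star ** 2) / ((1 - pb_star) * (1 - pc_star) ** 2)
print(pb_star, pc_star, ratio)  # ratio comes out above 1: game C is winning
```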
7.6. Exercises
Exercise 7.1: Consider a Markov chain with state space {0, 1, 2, 3} and a transition
matrix
P =
⎡ 0     3/10  1/10  3/5  ⎤
⎢ 1/10  1/10  7/10  1/10 ⎥
⎢ 1/10  7/10  1/10  1/10 ⎥
⎣ 9/10  1/10  0     0    ⎦ ,
so P0,3 = 3/5 is the probability of moving from state 0 to state 3.
(a) Find the stationary distribution of the Markov chain.
(b) Find the probability of being in state 3 after 32 steps if the chain begins at state 0.
(c) Find the probability of being in state 3 after 128 steps if the chain begins at a state
chosen uniformly at random from the four states.
(d) Suppose that the chain begins in state 0. What is the smallest value of t for which
max_s |P^t_{0,s} − πs| ≤ 0.01? Here π̄ is the stationary distribution. What is the smallest
value of t for which max_s |P^t_{0,s} − πs| ≤ 0.001?
Exercise 7.2: Consider the two-state Markov chain with the following transition
matrix
P =
⎡ p      1 − p ⎤
⎣ 1 − p  p     ⎦ .

Find a simple expression for P^t_{0,0}.
Exercise 7.3: Consider a process X0 , X1 , X2 , . . . with two states, 0 and 1. The process
is governed by two matrices, P and Q. If k is even, the values Pi, j give the probability
of going from state i to state j on the step from Xk to Xk+1 . Likewise, if k is odd then
the values Qi, j give the probability of going from state i to state j on the step from Xk to
Xk+1 . Explain why this process does not satisfy Definition 7.1 of a (time-homogeneous)
Markov chain. Then give a process with a larger state space that is equivalent to this
process and satisfies Definition 7.1.
Exercise 7.4: Prove that the communicating relation deines an equivalence relation.
Exercise 7.5: Prove that if one state in a communicating class is transient (respectively,
recurrent) then all states in that class are transient (respectively, recurrent).
walk stays at 0. Everywhere else the random walk moves either up or down 1, each with
probability 1/2. Find the expected number of moves to reach n, starting from position
i and using a random walk with a partially reflecting boundary.
Exercise 7.7: Suppose that the 2-SAT Algorithm 7.1 starts with an assignment chosen
uniformly at random. How does this affect the expected time until a satisfying assign-
ment is found?
Exercise 7.8: Generalize the randomized algorithm for 3-SAT to k-SAT. What is the
expected time of the algorithm as a function of k?
Exercise 7.9: In the analysis of the randomized algorithm for 3-SAT, we made the pes-
simistic assumption that the current assignment Ai and the truth assignment S differ on
just one variable in the clause chosen at each step. Suppose instead that, independently
at each step, the two assignments disagree on one variable in the clause with probabil-
ity p and at least two variables with probability 1 − p. What is the largest value of p
for which you can prove that the expected number of steps before Algorithm 7.2 termi-
nates is polynomial in the number of variables n? Give a proof for this value of p and
give an upper bound on the expected number of steps in this case.
Exercise 7.11: An n × n matrix P with entries Pi, j is called stochastic if all entries are
nonnegative and if the sum of the entries in each row is 1. It is called doubly stochastic
if, additionally, the sum of the entries in each column is 1. Show that the uniform
distribution is a stationary distribution for any Markov chain represented by a doubly
stochastic matrix.
Exercise 7.12: Let Xn be the sum of n independent rolls of a fair die. Show that, for
any k ≥ 2,
lim_{n→∞} Pr(Xn is divisible by k) = 1/k.
Exercise 7.13: Consider a finite Markov chain on n states with stationary distribution
π̄ and transition probabilities Pi, j . Imagine starting the chain at time 0 and running it for
m steps, obtaining the sequence of states X0 , X1 , . . . , Xm . Consider the states in reverse
order, Xm , Xm−1 , . . . , X0 .
(a) Argue that given Xk+1 , the state Xk is independent of Xk+2 , Xk+3 , . . . , Xm . Thus the
reverse sequence is Markovian.
(b) Argue that for the reverse sequence, the transition probabilities Qi, j are given by
Q_{i,j} = πj Pj,i / πi.
(c) Prove that if the original Markov chain is time reversible, so that πi Pi, j = π j Pj,i ,
then Qi, j = Pi, j . That is, the states follow the same transition probabilities whether
viewed in forward order or reverse order.
Exercise 7.14: Prove that the Markov chain corresponding to a random walk on an
undirected, non-bipartite graph that consists of one component is time reversible.
Exercise 7.15: Let P^t_{i,i} be the probability that a Markov chain returns to state i when
started in state i after t steps. Prove that

∑_{t=1}^{∞} P^t_{i,i}

is unbounded if and only if state i is recurrent.
Exercise 7.17: Consider the following Markov chain, which is similar to the 1-dimen-
sional random walk with a completely reflecting boundary at 0. Whenever position 0
is reached, with probability 1 the walk moves to position 1 at the next step. Otherwise,
the walk moves from i to i + 1 with probability p and from i to i − 1 with probability
1 − p. Prove that:
(a) if p < 1/2, each state is positive recurrent;
(b) if p = 1/2, each state is null recurrent;
(c) if p > 1/2, each state is transient.
Exercise 7.18: (a) Consider a random walk on the 2-dimensional integer lattice, where
each point has four neighbors (up, down, left, and right). Is each state transient, null
recurrent, or positive recurrent? Give an argument.
(b) Answer the problem in (a) for the 3-dimensional integer lattice.
Exercise 7.19: Consider the gambler’s ruin problem, where a player plays until she
loses ℓ1 dollars or wins ℓ2 dollars. Prove that the expected number of games played is
ℓ1 ℓ2 .
Exercise 7.20: We have considered the gambler’s ruin problem in the case where the
game is fair. Consider the case where the game is not fair; instead, the probability of
losing a dollar each game is 2/3 and the probability of winning a dollar each game is
1/3. Suppose that you start with i dollars and finish either when you reach n or lose it
all. Let Wt be the amount you have gained after t rounds of play.
Exercise 7.21: Consider a Markov chain on the states {0, 1, . . . , n}, where for i < n
we have Pi,i+1 = 1/2 and Pi,0 = 1/2. Also, Pn,n = 1/2 and Pn,0 = 1/2. This process
can be viewed as a random walk on a directed graph with vertices {0, 1, . . . , n}, where
each vertex has two directed edges: one that returns to 0 and one that moves to the
vertex with the next higher number (with a self-loop at vertex n). Find the stationary
distribution of this chain. (This example shows that random walks on directed graphs
are very different than random walks on undirected graphs.)
Exercise 7.22: A cat and a mouse each independently take a random walk on a con-
nected, undirected, non-bipartite graph G. They start at the same time on different
nodes, and each makes one transition at each time step. The cat eats the mouse if they
are ever at the same node at some time step. Let n and m denote, respectively, the
number of vertices and edges of G. Show an upper bound of O(m2 n) on the expected
time before the cat eats the mouse. (Hint: Consider a Markov chain whose states are
the ordered pairs (a, b), where a is the position of the cat and b is the position of the
mouse.)
(a) Explain how this problem can be viewed in terms of Markov chains.
(b) Determine a method for computing the probability that j hosts have received the
message after round k given that i hosts have received the message after round
k − 1. (Hint: There are various ways of doing this. One approach is to let P(i, j, c)
be the probability that j hosts have the message after the first c of the i hosts have
made their choices in a round; then find a recurrence for P.)
(c) As a computational exercise, write a program to determine the number of rounds
required for a message starting at one host to reach all other hosts with probability
0.9999 when n = 128.
Exercise 7.24: The lollipop graph on n vertices is a clique on n/2 vertices connected
to a path on n/2 vertices, as shown in Figure 7.3. The node u is a part of both the clique
and the path. Let v denote the other end of the path.
(a) Show that the expected covering time of a random walk starting at v is Θ(n²).
(b) Show that the expected covering time for a random walk starting at u is Θ(n³).
The game ends when a player reaches the goal position, 36.
(a) Let Xi be a random variable representing the number of rolls needed to get to 36
from position i for 0 ≤ i ≤ 35. Give a set of equations that characterize E[Xi ].
(b) Using a program that can solve systems of linear equations, find E[Xi ] for 0 ≤ i ≤
35.
Exercise 7.26: Let n equidistant points be marked on a circle. Without loss of gen-
erality, we think of the points as being labeled clockwise from 0 to n − 1. Initially, a
wolf begins at 0 and there is one sheep at each of the remaining n − 1 points. The wolf
takes a random walk on the circle. For each step, it moves with probability 1/2 to one
neighboring point and with probability 1/2 to the other neighboring point. At the first
visit to a point, the wolf eats a sheep if there is still one there. Which sheep is most
likely to be the last eaten?
Exercise 7.27: Suppose that we are given n records, R1 , R2 , . . . , Rn . The records are
kept in some order. The cost of accessing the jth record in the order is j. Thus, if we
had four records ordered as R2 , R4 , R3 , R1 , then the cost of accessing R4 would be 2
and the cost of accessing R1 would be 4.
Suppose further that, at each step, record R j is accessed with probability p j , with
each step being independent of other steps. If we knew the values of the p j in advance,
we would keep the R j in decreasing order with respect to p j . But if we don’t know the
p j in advance, we might use the “move to front” heuristic: at each step, put the record
that was accessed at the front of the list. We assume that moving the record can be done
with no cost and that all other records remain in the same order. For example, if the
order was R2 , R4 , R3 , R1 before R3 was accessed, then the order at the next step would
be R3 , R2 , R4 , R1 .
In this setting, the order of the records can be thought of as the state of a Markov
chain. Give the stationary distribution of this chain. Also, let Xk be the cost for accessing
the kth requested record. Determine an expression for limk→∞ E[Xk ]. Your expression
should be easily computable in time that is polynomial in n, given the p j .
Exercise 7.28: Consider the following variation of the discrete time queue. Time is
divided into fixed-length steps. At the beginning of each time step, a customer arrives
with probability λ. At the end of each time step, if the queue is nonempty then the
customer at the front of the line completes service with probability μ.
(a) Explain how the number of customers in the queue at the beginning of each time
step forms a Markov chain, and determine the corresponding transition probabili-
ties.
(b) Explain under what conditions you would expect a stationary distribution π̄ to
exist.
(c) If a stationary distribution exists, then what should be the value of π0 , the proba-
bility that no customers are in the queue at the beginning of the time step? (Hint:
Consider that, in the long run, the rate at which customers enter the queue and the
rate at which customers leave the queue must be equal.)
(d) Determine the stationary distribution and explain how it corresponds to your con-
ditions from part (b).
(e) Now consider the variation where we change the order of incoming arrivals and
service. That is: at the beginning of each time step, if the queue is nonempty then
a customer is served with probability μ; and at the end of a time step a customer
arrives with probability λ. How does this change your answers to parts (a)–(d)?
Exercise 7.29: Prove that the Markov chain from Lemma 7.14 where the states are
the 2|E| directed edges of the graph has a uniform stationary distribution.
Exercise 7.30: We consider the covering time for the standard random walk on a
hypercube with N = 2^n nodes. (See Definition 4.3 if needed to recall the definition of
a hypercube.) Let (u, v) be an edge in the hypercube.
(a) Prove that the expected time between traversals of the edge (u, v) from u to v is
Nn.
(b) We consider the time between transitions from u to v in a different way. After
moving from u to v, the walk must irst return to u. When it returns to u, the walk
might next move to v, or it might move to another neighbor of u, in which case it
must return to u again before moving to v for there to be a transition from u to v.
Use symmetry and the above description to prove the following recurrence:
Nn = ∑_{i=1}^{∞} (1/n)((n − 1)/n)^{i−1} · i(h_{u,v} + 1) = n(h_{u,v} + 1).
chapter eight
Continuous Distributions and
the Poisson Process
This chapter introduces the general concept of continuous random variables, focusing
on two examples of continuous distributions: the uniform distribution and the expo-
nential distribution. We then proceed to study the Poisson process, a continuous time
counting process that is related to both the uniform and exponential distributions. We
conclude this chapter with basic applications of the Poisson process in queueing theory.
Let S(k) be a set of k distinct points in the range [0, 1), and let p be the probability that
any given point in [0, 1) is the outcome of the roulette experiment.
The expectation and higher moments of a random variable X with density function
f (x) are defined by the integrals

E[X^i] = ∫_{−∞}^{∞} x^i f (x) dx.
Lemma 8.1: Let X be a continuous random variable that takes on only nonnegative
values. Then
E[X] = ∫_{0}^{∞} Pr(X ≥ x) dx.
The interchange of the order of the integrals is justified because the expression being
integrated is nonnegative.
Again, we denote

f (x, y) = ∂²F (x, y) / ∂x ∂y

when the derivative exists. These definitions are generalized to joint distribution func-
tions over more than two variables in the obvious way.
Given a joint distribution function F (x, y) over X and Y, one may consider the
marginal distribution functions
FX (x) = Pr(X ≤ x), FY (y) = Pr(Y ≤ y),
and the corresponding marginal density functions fX (x) and fY (y).
Definition 8.2: The random variables X and Y are independent if, for all x and y,
Pr((X ≤ x) ∩ (Y ≤ y)) = Pr(X ≤ x) Pr(Y ≤ y).
From the definition, two random variables are independent if and only if their joint
distribution function is the product of their marginal distribution functions:
F (x, y) = FX (x)FY (y).
It follows from taking the derivatives with respect to x and y that, if X and Y are inde-
pendent, then
f (x, y) = fX (x) fY (y),
and this condition is sufficient as well.
As an example, let a and b be positive constants, and consider the joint distribution
function for two random variables X and Y given by
F (x, y) = 1 − e−ax − e−by + e−(ax+by)
over the range x, y ≥ 0. We can compute that
FX (x) = F (x, ∞) = 1 − e−ax ,
and similarly FY (y) = 1 − e−by . Alternatively, we could compute
f (x, y) = abe−(ax+by) ,
We obtain F (x, y) = FX(x)FY(y), so X and Y are independent. Alternatively, working with the density
functions we verify their independence by checking that f (x, y) = fX(x) fY(y).

For conditioning on continuous random variables, the standard definition

Pr(E | F ) = Pr(E ∩ F ) / Pr(F )

applies when the conditioning event has nonzero probability; for example,

Pr(X ≤ 3 | Y ≤ 6) = Pr((X ≤ 3) ∩ (Y ≤ 6)) / Pr(Y ≤ 6).

We might also want to compute a probability such as Pr(X ≤ 3 | Y = 4),
but since Pr(Y = 4) is an event with probability 0, the definition is not applicable.
If we did apply the definition, it would yield

Pr(X ≤ 3 | Y = 4) = Pr((X ≤ 3) ∩ (Y = 4)) / Pr(Y = 4).

Both the numerator and denominator are zero, suggesting that we should be taking a
limit as they both approach zero. The natural choice is

Pr(X ≤ 3 | Y = 4) = lim_{ε→0} Pr(X ≤ 3 | 4 ≤ Y ≤ 4 + ε) = ∫_{x=−∞}^{3} ( f (x, 4)/fY(4)) dx.

Here we have assumed that we can interchange the limit with the integration and that
fY(y) ≠ 0.
The value

fX|Y(x, y) = f (x, y) / fY(y)

is also called a conditional density function. We may similarly use

fY|X(x, y) = f (x, y) / fX(x).
Our definition yields the natural interpretation that, in order to compute Pr(X ≤ x |
Y = y), we integrate the corresponding conditional density function over the appropri-
ate range. You can check that this definition yields the standard definition for Pr(X ≤ x |
Y ≤ y) through appropriate integration. Similarly, we may compute the conditional
expectation

E[X | Y = y] = ∫_{x=−∞}^{∞} x fX|Y(x, y) dx.
When a random variable X assumes values in the interval [a, b] such that all subintervals
of equal length have equal probability, we say that X has the uniform distribution over
the interval [a, b] or alternatively that it is uniform over the interval [a, b]. We denote
such a random variable by U[a, b]. We may also talk about uniform distributions over
the interval [a, b), (a, b], or (a, b). Indeed, since the probability of taking on any specific
value is 0 when b > a, the distributions are essentially the same.
The probability distribution function of such an X is

F (x) = 0 for x ≤ a;  F (x) = (x − a)/(b − a) for a ≤ x ≤ b;  and F (x) = 1 for x ≥ b.
Lemma 8.2: Let X be a uniform random variable on [a, b]. Then, for c ≤ d,

Pr(X ≤ c | X ≤ d) = (c − a)/(d − a).
That is, conditioned on the fact that X ≤ d, X is uniform on [a, d].
Proof:
Pr(X ≤ c | X ≤ d) = Pr((X ≤ c) ∩ (X ≤ d)) / Pr(X ≤ d)
= Pr(X ≤ c) / Pr(X ≤ d)
= (c − a)/(d − a).
It follows that X, conditioned on being less than or equal to d, has a distribution function
that is exactly that of a uniform random variable on [a, d].
Figure 8.3: A correspondence between random points on a circle and random points on a line.
The exponential distribution with parameter θ is given by the probability distribution function

F (x) = 1 − e^{−θx} for x ≥ 0, and F (x) = 0 otherwise.
An important property of the exponential distribution is that it is memoryless: for an exponential random variable X and any s, t ≥ 0, Pr(X > s + t | X > t ) = Pr(X > s).

Proof:

Pr(X > s + t | X > t ) = Pr(X > s + t ) / Pr(X > t )
= (1 − Pr(X ≤ s + t )) / (1 − Pr(X ≤ t ))
= e^{−θ(s+t)} / e^{−θt}
= e^{−θs}
= Pr(X > s).
Lemma 8.5: Let X1, . . . , Xn be independent exponential random variables, where Xi has parameter θi. Then min(X1, . . . , Xn) is exponentially distributed with parameter ∑_{i=1}^{n} θi, and the probability that the minimum is Xj is θj / ∑_{i=1}^{n} θi.

Proof: It suffices to prove the statement for two exponential random variables; the gen-
eral case then follows by induction. Let X1 and X2 be independent exponential random
variables with parameters θ1 and θ2. Then

Pr(min(X1, X2) > x) = Pr(X1 > x) Pr(X2 > x) = e^{−θ1 x} e^{−θ2 x} = e^{−(θ1+θ2)x},

so min(X1, X2) is exponentially distributed with parameter θ1 + θ2.
For example, suppose that an airline ticket counter has n service agents, where the time
that agent i takes per customer has an exponential distribution with parameter θi . You
stand at the head of the line at time T0 , and all of the n agents are busy. What is the
average time you wait for an agent?
Because service times are exponentially distributed, it does not matter for how long
each agent has been helping another customer before time T0 ; the remaining time for
each customer is still exponentially distributed. This is a feature of the memoryless
property of the exponential distribution. Lemma 8.5 therefore applies. The time until
the first agent becomes free is exponentially distributed with parameter ∑_{i=1}^{n} θi, so the
expected waiting time is 1/∑_{i=1}^{n} θi. Indeed, you can even determine the probability that
each agent is the first to become free; the jth agent is first with probability θj / ∑_{i=1}^{n} θi.
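A brief Python simulation sketch (ours; the rates are arbitrary illustrative values) confirms both claims:

```python
# A minimal sketch: the wait is the minimum of n exponentials, with mean
# 1/sum(theta), and agent j finishes first with probability theta_j/sum(theta).
import random

theta = [0.5, 1.0, 2.0]
trials = 200_000
waits, first_counts = 0.0, [0] * len(theta)
for _ in range(trials):
    remaining = [random.expovariate(t) for t in theta]  # memoryless remainders
    j = min(range(len(theta)), key=lambda i: remaining[i])
    first_counts[j] += 1
    waits += remaining[j]

print(waits / trials, 1 / sum(theta))                     # both about 1/3.5
print([c / trials for c in first_counts],
      [t / sum(theta) for t in theta])                    # matching proportions
```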
Theorem 8.6: Under any starting conditions, if p > 1 then with probability 1 there
exists a number c such that one of the two bins gets no more than c balls.
Note the careful wording of the theorem. We are not saying that there is some fixed c
(perhaps dependent on the initial conditions) such that one bin gets no more than c balls.
(If we meant this, we would say that there exists a number c such that, with probabil-
ity 1, one bin gets no more than c balls.) Instead, we are saying that, with probability 1,
at some point (which we do not know ahead of time) one bin stops receiving balls.
Proof: For convenience, assume that both bins start with one ball; this does not affect
the result.
Figure 8.5: In the setup where the time between ball arrivals is exponentially distributed, each bin
can be considered separately; an outcome of the original process is obtained by simply combining
the timelines of the two bins.
We start by considering a very closely related process. Consider two bins that start
with one ball at time 0. Balls arrive at each of the bins. If bin 1 obtains its zth ball
at time t then it obtains its next ball at a time t + Tz , where Tz is a random variable
exponentially distributed with parameter z^p. Similarly, if bin 2 obtains its zth ball at
time t then it obtains its next ball at a time t + Uz , where Uz is also a random variable
exponentially distributed with parameter z^p. All values of Tz and Uz are independent.
Each bin can be considered independently in this setup; what happens at one bin does
not affect the other.
Although this process may not seem related to the original problem, we now claim
that it mimics it exactly. Consider the point at which a ball arrives, leaving x balls in
bin 1 and y balls in bin 2. By the memoryless nature of the exponential distribution,
it does not matter which bin the most recently arrived ball has landed in; the time for
the next ball to land in bin 1 is exponentially distributed with mean x^{−p} and the time
for the next ball to land in bin 2 is exponentially distributed with mean y^{−p}. Moreover,
by Lemma 8.5, the next ball lands in bin 1 with probability x^p/(x^p + y^p) and in bin 2
with probability y^p/(x^p + y^p). Therefore, this setup mimics exactly what happens in
the original problem. See Figure 8.5.

Let us define the saturation time F1 for bin 1 by F1 = ∑_{j=1}^{∞} Tj, and similarly
F2 = ∑_{j=1}^{∞} Uj. The saturation time represents the first time at which the total number of balls
received by a bin is unbounded. It is not clear that saturation times are well-defined
random variables: What if the sum does not converge, and thus its value is infinity? It
is here that we make use of the fact that p > 1. We have
E[F1] = E[∑_{j=1}^{∞} Tj] = ∑_{j=1}^{∞} E[Tj] = ∑_{j=1}^{∞} 1/j^p.
Here we used linearity of expectations for a countably infinite summation of random
variables, which holds if ∑_{j=1}^{∞} E[|Tj|] converges. (Chapter 2 discusses the applicabil-
ity of the linearity of expectations to countably infinite summations; see in particular
Exercise 2.29.) It suffices to show that ∑_{j=1}^{∞} 1/j^p converges to a finite number when-
ever p > 1. This follows from bounding the summation by the appropriate integral:
∑_{j=1}^{∞} 1/j^p ≤ 1 + ∫_{u=1}^{∞} (1/u^p) du = 1 + 1/(p − 1).
Indeed, all of the integral moments converge to a finite number. It follows that both F1
and F2 are, with probability 1, finite and hence well-defined.
Furthermore, F1 and F2 are distinct with probability 1. To see this, suppose that the
values for all of the random variables Tz and Uz are given except for T1 . Then, for F1 to
equal F2 , it must be the case that
T1 = ∑_{j=1}^{∞} Uj − ∑_{j=2}^{∞} Tj.
But the probability that T1 takes on any specific value is 0, just as the probability that
our roulette wheel takes on any specific value is 0. Hence, F1 ≠ F2 with probability 1.
Suppose that F1 < F2 . Then we must have for some n that
∑_{j=1}^{n} Uj < F1 < ∑_{j=1}^{n+1} Uj,
which means that bin 1 has obtained infinitely many balls before bin 2 has obtained its (n + 1)th
ball. Since our new process corresponds exactly to the original balls-and-bins process,
this is also what happens in the original process. But this means that, once bin 2 has
n balls, it does not receive any others; they all go to bin 1. The argument is the same
if F2 < F1 . Hence, with probability 1, there exists some n such that one bin obtains no
more than n balls.
When p is close to 1 or when the bins start with a large and nearly equal number
of balls, it can take a long time before one bin dominates enough to obtain such a
monopoly. On the other hand, monopoly happens quickly when p is much greater than
1 (such as p = 2) and the bins start with just one ball each. You are asked to simulate
this process in Exercise 8.25.
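A short Python sketch in the spirit of Exercise 8.25 (our own illustration; the exponent p and the number of balls are arbitrary choices):

```python
# A minimal sketch of the feedback process: a ball lands in bin 1 with
# probability x^p / (x^p + y^p), where x and y are the current bin counts.
import random

def simulate(p=2.0, balls=10_000):
    x = y = 1
    for _ in range(balls):
        if random.random() < x ** p / (x ** p + y ** p):
            x += 1
        else:
            y += 1
    return x, y

print(simulate())  # with p = 2, one bin typically ends up with almost everything
```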
The Poisson process is an important counting process that is related to both the uniform
and the exponential distribution. Consider a sequence of random events, such as arrivals
of customers to a queue or emissions of alpha particles from a radioactive material. Let
N(t ) denote the number of events in the interval [0, t]. The process {N(t ), t ≥ 0} is a
stochastic counting process.
Definition 8.4: A Poisson process with parameter (or rate) λ is a stochastic counting
process {N(t ), t ≥ 0} such that the following statements hold.
1. N(0) = 0.
2. The process has independent and stationary increments. That is, for any t, s > 0, the
distribution of N(t + s) − N(s) is identical to the distribution of N(t), and for any
two disjoint intervals [t1 , t2 ] and [t3 , t4 ], the distribution of N(t2 ) − N(t1 ) is inde-
pendent of the distribution of N(t4 ) − N(t3 ).
3. lim_{t→0} Pr(N(t) = 1)/t = λ. That is, the probability of a single event in a short inter-
val t tends to λt.
4. lim_{t→0} Pr(N(t) ≥ 2)/t = 0. That is, the probability of more than one event in a short
interval t tends to zero.
The surprising fact is that this set of broad, relatively natural conditions defines a unique
process. In particular, the number of events in a given time interval follows the Poisson
distribution defined in Section 5.3.
Theorem 8.7: Let {N(t) | t ≥ 0} be a Poisson process with parameter λ. For any t, s ≥
0 and any integer n ≥ 0,
Pn(t) = Pr(N(t + s) − N(s) = n) = e^{−λt} (λt)^n / n!.
Proof: We first observe that Pn(t) is well-defined since, by the second property of Def-
inition 8.4, the distribution of N(t + s) − N(s) depends only on t and is independent
of s.
To compute P0 (t), we note that the number of events in the intervals [0, t] and
(t, t + h] are independent random variables and therefore
P0 (t + h) = P0 (t)P0 (h).
We now write

(P0(t + h) − P0(t))/h = P0(t) (P0(h) − 1)/h
= P0(t) (1 − Pr(N(h) = 1) − Pr(N(h) ≥ 2) − 1)/h
= P0(t) (−Pr(N(h) = 1) − Pr(N(h) ≥ 2))/h.
Taking the limit as h → 0 and applying properties 2–4 of Definition 8.4, we obtain

P0′(t) = lim_{h→0} (P0(t + h) − P0(t))/h
= lim_{h→0} P0(t) (−Pr(N(h) = 1) − Pr(N(h) ≥ 2))/h
= −λP0(t).
To solve

P0′(t) = −λP0(t),

we rewrite it as

P0′(t)/P0(t) = −λ.
Integrating with respect to t gives
ln P0 (t) = −λt + C,
or

P0(t) = e^{−λt+C}.

Since P0(0) = 1, we conclude that

P0(t) = e^{−λt}. (8.1)
For n ≥ 1, we write
Pn(t + h) = ∑_{k=0}^{n} P_{n−k}(t)Pk(h)
= Pn(t)P0(h) + P_{n−1}(t)P1(h) + ∑_{k=2}^{n} P_{n−k}(t) Pr(N(h) = k).
To solve

Pn′(t) = −λPn(t) + λP_{n−1}(t)

we write

e^{λt}(Pn′(t) + λPn(t)) = e^{λt} λP_{n−1}(t),

which gives

d/dt (e^{λt} Pn(t)) = λe^{λt} P_{n−1}(t). (8.2)
Using Eqn. (8.1) then yields

d/dt (e^{λt} P1(t)) = λe^{λt} P0(t) = λ,
implying

P1(t) = (λt + c)e^{−λt}.

Since P1(0) = 0, we conclude that

P1(t) = λte^{−λt}. (8.3)
We continue by induction on n to prove that, for all n ≥ 0,

Pn(t) = e^{−λt} (λt)^n / n!.
Using Eqn. (8.2) and the induction hypothesis, we have
d/dt (e^{λt} Pn(t)) = λe^{λt} P_{n−1}(t) = λ^n t^{n−1} / (n − 1)!.
Integrating and using the fact that Pn (0) = 0 gives the result.
The parameter λ is also called the rate of the Poisson process, since (as we have proved)
the number of events during any time period of length t is a Poisson random variable
with expectation λt.
The reverse is also true. That is, we could equivalently have deined the Poisson
process as a process with Poisson arrivals, as follows.
Theorem 8.8: Let {N(t) | t ≥ 0} be a stochastic process such that:
1. N(0) = 0;
2. the process has independent increments (i.e., the number of events in disjoint time
intervals are independent events); and
3. the number of events in an interval of length t has a Poisson distribution with mean
λt.
Then {N(t) | t ≥ 0} is a Poisson process with rate λ.
Proof: The process clearly satisfies conditions 1 and 2 of Definition 8.4. To prove
condition 3, we have

lim_{t→0} Pr(N(t) = 1)/t = lim_{t→0} e^{−λt} λt / t = λ.
Condition 4 follows from

lim_{t→0} Pr(N(t) ≥ 2)/t = lim_{t→0} ∑_{k≥2} e^{−λt} (λt)^k / (k! t) = 0.
Using the fact that the Poisson process has independent and stationary increments, we
can prove the following stronger result. Here Xi denotes the length of the ith interarrival
interval, that is, the time between the (i − 1)th and the ith event of the process.
Theorem 8.10: The random variables Xi , i = 1, 2, . . . , are independent, identically
distributed, exponential random variables with parameter λ.
Proof: The distribution of Xi is given by
Pr(Xi > ti | (X0 = t0) ∩ (X1 = t1) ∩ ··· ∩ (X_{i−1} = t_{i−1}))
= Pr(N(∑_{k=0}^{i−1} tk + ti) − N(∑_{k=0}^{i−1} tk) = 0)
= e^{−λti}.
Thus, the distribution of Xi is exponential with parameter λ, and it is independent of
other interarrival values.
Theorem 8.10 states that, if we have a Poisson arrival process, then the interarrival times
are identically distributed exponential random variables. In fact, it is easy to check that
the reverse is also true (this is left as Exercise 8.17).
Theorem 8.11: Let {N(t) | t ≥ 0} be a stochastic process such that:
1. N(0) = 0; and
2. the interarrival times are independent, identically distributed, exponential random
variables with parameter λ.
Then {N(t) | t ≥ 0} is a Poisson process with rate λ.
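A small Python sketch (ours; the parameters are arbitrary illustrative values) illustrates Theorem 8.11 by generating exponential interarrival times and checking that the resulting counts look Poisson:

```python
# A minimal sketch: exponential interarrivals give Poisson(lam * t) counts,
# so the sample mean and variance of the counts should both be near lam * t.
import random

lam, t, trials = 2.0, 3.0, 100_000
counts = []
for _ in range(trials):
    s, n = random.expovariate(lam), 0
    while s <= t:                       # accumulate arrivals up to time t
        n += 1
        s += random.expovariate(lam)
    counts.append(n)

mean = sum(counts) / trials
var = sum((c - mean) ** 2 for c in counts) / trials
print(mean, var)                        # both should be close to lam * t = 6
```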
Theorem 8.12: Let N1 (t) and N2 (t) be independent Poisson processes with parame-
ters λ1 and λ2 , respectively. Then N1 (t) + N2 (t) is a Poisson process with parameter
λ1 + λ2 , and each event of the process N1 (t) + N2 (t) arises from the process N1 (t) with
probability λ1 /(λ1 + λ2 ).
Proof: Clearly N1 (0) + N2 (0) = 0, and since the two processes are independent and
each has independent increments, the sum of the two processes also has independent
increments. The number of arrivals N1 (t) + N2 (t) is a sum of two independent Poisson
random variables, which (as we saw in Lemma 5.2) has a Poisson distribution with
parameter λ1 + λ2 . Thus, by Theorem 8.8, N1 (t) + N2 (t) is a Poisson process with rate
λ1 + λ2 .
By Theorem 8.9, the interarrival time for N1 (t) + N2 (t) is exponentially distributed
with parameter λ1 + λ2 , and by Lemma 8.5 an event in N1 (t) + N2 (t) comes from the
process N1 (t) with probability λ1 /(λ1 + λ2 ).
Theorem 8.13: Suppose that we have a Poisson process N(t) with rate λ. Each event
is independently labeled as being type 1 with probability p or type 2 with probabil-
ity 1 − p. Then the type-1 events form a Poisson process N1 (t) of rate λp, the type-2
events form a Poisson process N2 (t) of rate λ(1 − p), and the two Poisson processes
are independent.
Proof: We first show that the type-1 events in fact form a Poisson process. Clearly
N1(0) = 0, and since the process N(t) has independent increments, so does the process
N1(t). Next we show that N1(t) has a Poisson distribution:

Pr(N1(t) = k) = ∑_{j=k}^{∞} Pr(N1(t) = k | N(t) = j) Pr(N(t) = j)
= ∑_{j=k}^{∞} (j choose k) p^k (1 − p)^{j−k} e^{−λt} (λt)^j / j!
= (e^{−λpt} (λpt)^k / k!) ∑_{j=k}^{∞} e^{−λt(1−p)} (λt(1 − p))^{j−k} / ( j − k)!
= e^{−λpt} (λpt)^k / k!.
Thus, by Theorem 8.8, N1 (t) is a Poisson process with rate λp.
To show independence, we need to show that N1(t) and N2(u) are independent for
any t and u. In fact, it suffices to show that N1(t) and N2(t) are independent for any t;
we can then show that N1(t) and N2(u) are independent for any t and u by taking advan-
tage of the fact that Poisson processes have independent and stationary increments (see
Exercise 8.18).
Theorem 8.14: Given that N(t) = n, the n arrival times have the same distribution as
the order statistics of n independent random variables with uniform distribution over
[0, t].
Proof: We first compute the distribution of the order statistics of n independent obser-
vations X1, X2, . . . , Xn drawn from a uniform distribution in [0, t]. Let Y(1), . . . , Y(n)
denote the order statistics. Let E denote the event that

Y(1) ≤ s1, Y(2) ≤ s2, . . . , Y(n) ≤ sn;

we want an expression for Pr(E). For any permutation i1, i2, . . . , in of the numbers from 1 to n, let E_{i1,i2,...,in} be the event
that

X_{i1} ≤ s1, X_{i1} ≤ X_{i2} ≤ s2, . . . , X_{i_{n−1}} ≤ X_{in} ≤ sn.

The events E_{i1,i2,...,in} are disjoint, except for the cases where X_{ij} = X_{i_{j+1}} for some j. Since
two uniform random variables are equal with probability 0, the total probability of
such events is 0 and can be ignored. By symmetry, all events E_{i1,i2,...,in} have the same
probability. Also, E = ∪ E_{i1,i2,...,in}, so that

Pr(E) = ∑ Pr(E_{i1,i2,...,in}) = n! Pr(E_{1,2,...,n}),

where the sum is over all n! permutations. If we now think of ui as
representing the value taken on by Xi , then
Pr(X1 ≤ s1, X1 ≤ X2 ≤ s2, . . . , X_{n−1} ≤ Xn ≤ sn)
= ∫_{u1=0}^{s1} ∫_{u2=u1}^{s2} ··· ∫_{un=u_{n−1}}^{sn} (1/t)^n dun ··· du1,
where we use the fact that the density function of a uniform random variable on [0, t]
is f (x) = 1/t. This gives
Pr(Y(1) ≤ s1, Y(2) ≤ s2, . . . , Y(n) ≤ sn) = (n!/t^n) ∫_{u1=0}^{s1} ∫_{u2=u1}^{s2} ··· ∫_{un=u_{n−1}}^{sn} dun ··· du1.
We now consider the distribution of the arrival times for a Poisson process, condi-
tioned on N(t) = n. Let S1 , . . . , Sn+1 be the first n + 1 arrival times. Also, let T1 = S1
and Ti = Si − Si−1 be the length of the interarrival intervals. By Theorem 8.10, we
know that (a) without the condition N(t) = n, the distributions of the random variables
T1 , . . . , Tn are independent, and (b) for each i, Ti has an exponential distribution with
parameter λ. Recalling that the density function of the exponential distribution is λe−λt ,
we have

Pr(S1 ≤ s1, S2 ≤ s2, . . . , Sn ≤ sn, N(t) = n)
= Pr(T1 ≤ s1, T2 ≤ s2 − T1, . . . , Tn ≤ sn − ∑_{i=1}^{n−1} Ti, T_{n+1} > t − ∑_{i=1}^{n} Ti)
= ∫_{t1=0}^{s1} ∫_{t2=0}^{s2−t1} ··· ∫_{tn=0}^{sn−∑_{i=1}^{n−1} ti} ∫_{t_{n+1}=t−∑_{i=1}^{n} ti}^{∞} λ^{n+1} e^{−λ(∑_{i=1}^{n+1} ti)} dt_{n+1} ··· dt1.

Integrating out t_{n+1} leaves the constant integrand λ^n e^{−λt}. Thus,

Pr(S1 ≤ s1, S2 ≤ s2, . . . , Sn ≤ sn, N(t) = n)
= λ^n e^{−λt} ∫_{t1=0}^{s1} ∫_{t2=0}^{s2−t1} ··· ∫_{tn=0}^{sn−∑_{i=1}^{n−1} ti} dtn ··· dt1
= λ^n e^{−λt} ∫_{u1=0}^{s1} ∫_{u2=u1}^{s2} ··· ∫_{un=u_{n−1}}^{sn} dun ··· du1,

where the last equation is obtained by substituting ui = ∑_{j=1}^{i} tj.
Since

Pr(N(t) = n) = e^{−λt} (λt)^n / n!
and because the number of events in an interval of length t has a Poisson distribution
with parameter λt, the conditional probability computation gives
Pr(S1 ≤ s1, S2 ≤ s2, . . . , Sn ≤ sn | N(t) = n)
= Pr(S1 ≤ s1, S2 ≤ s2, . . . , Sn ≤ sn, N(t) = n) / Pr(N(t) = n)
= (n!/t^n) ∫_{u1=0}^{s1} ∫_{u2=u1}^{s2} ··· ∫_{un=u_{n−1}}^{sn} dun ··· du1.
This is exactly the distribution function of the order statistics, proving the theorem.
In Chapter 7 we studied discrete time and discrete space Markov chains. With the intro-
duction of continuous random variables, we can now study the continuous time ana-
logue of Markov chains, where the process spends a random interval of time in a state
before moving to the next one. To distinguish between the discrete and continuous
processes, when dealing with continuous time we speak of Markov processes.
The definition says that the distribution of the state of the system at time s + t, namely
X(s + t), conditioned on the history up to time t, depends only on the state X(t) and is
independent of the particular history that led the process to state X(t).
Restricting our discussion to discrete space, continuous time Markov processes,
there is another equivalent way of formulating such processes that is more convenient
for analysis. Recall that a discrete time Markov chain is determined by a transition
matrix P = (Pi, j ), where Pi, j is the probability of a transition from state i to state j in
one step. A continuous time Markov process can be expressed as a combination of two
random processes as follows.
1. A transition matrix P = (pi, j ), where pi, j is the probability that the next state is
j given that the current state is i. (We use lowercase letters here for the transition
probabilities in order to distinguish them from the transition probabilities for cor-
responding discrete time processes.) The matrix P is the transition matrix for what
is called the embedded or skeleton Markov chain of the corresponding Markov pro-
cess.
2. A vector of parameters (θ1 , θ2 , . . . ) such that the distribution of time that the process
spends in state i before moving to the next step is exponential with parameter θi . The
distribution of time spent at a given state must be exponential in order to satisfy the
memoryless requirement of the Markov process.
A formal treatment of continuous time Markov processes is more involved than their
discrete counterparts, and a full discussion is beyond the scope of this book. We limit
our discussion to the question of computing the stationary distribution (also called
equilibrium distribution) for discrete space, continuous time processes, assuming that
a stationary distribution exists. As for the discrete time case, the value πi in a stationary
distribution π̄ gives the limiting probability that the Markov process will be in state i
infinitely far out in the future, regardless of the initial state. That is, if we let Pj,i(t)
be the probability of being in state i at time t when starting from state j at time 0,
then

lim_{t→∞} Pj,i(t) = πi.
Similarly, πi gives the long-term proportion of the time the process is in state i. Further-
more, if the initial state j is chosen from the stationary distribution, then the probability
of being in state i at time t is πi for all t.
2 Technically, as with the discrete time Markov chains, this is a time-homogeneous Markov process; this will be
the only type we study in this book.
Otherwise, Pj,i (t) would not converge to a stationary value. Hence, in the stationary
distribution π̄ we have the following rate equations:

πi θi = ∑_k πk θk pk,i. (8.4)

This set of equations has a nice interpretation. The expression on the left, πi θi, is the
rate at which transitions occur out of state i. The expression on the right, ∑_k πk θk pk,i,
is the rate at which transitions occur into state i. (A transition that goes from state i
back to state i is counted both as a transition into and as a transition out of state i.)
At the stationary distribution, these rates must be equal, so that the long-term rates
of transitions into and out of the state are equal. This equalization of rates into and
out of every state provides a simple, intuitive way to find stationary distributions for
continuous Markovian processes. This observation can be generalized to sets of states,
showing that a result similar to the cut-set equations of Theorem 7.9 for discrete time
Markov chains can be formulated for continuous time Markov processes.
If the exponential distributions governing the time spent in all of the states have the
same parameter, so that all the θi are equal, then Eqn. (8.4) becomes
πi = ∑_k πk pk,i.
This corresponds to
π̄ = π̄ P,
where P is the transition matrix of the embedded Markov chain. We can conclude that
the stationary distribution of the continuous time process is the same as the stationary
distribution of the embedded Markov chain in this case.
3 Again, the proof that the system indeed converges relies on renewal theory and is beyond the scope of this book.
Since ∑_{k≥0} πk = 1, we must have

∑_{k≥0} (λ/μ)^k π0 = 1. (8.7)
If λ > μ, then the summation in Eqn. (8.7) does not converge and, in fact, the system
does not reach a stationary distribution. This is intuitively clear; if the rate of arrival of
new customers is larger than the rate of service completions, then the system cannot
reach a stationary distribution. If λ = μ, the system also cannot reach an equilibrium
distribution, as discussed in Exercise 8.23.
To compute the expected number of customers in the system in equilibrium, which
we denote by L, we write

L = ∑_{k=0}^{∞} k πk
= (λ/μ) ∑_{k=1}^{∞} k (1 − λ/μ) (λ/μ)^{k−1}
= (λ/μ) · 1/(1 − λ/μ)
= λ/(μ − λ),
where in the third equation we used the fact that the sum is the expectation of a geo-
metric random variable with parameter 1 − λ/μ.
It is interesting that we have nowhere used the fact that the service rule was to serve
the customer that had been waiting the longest. Indeed, since all service times are expo-
nentially distributed and since the exponential distribution is memoryless, all customers
appear equivalent to the queue in terms of the distribution of the service time required
until they leave, regardless of how long they have already been served. Thus, our equa-
tions for the equilibrium distribution and the expected number of customers in the sys-
tem hold for any service rule that serves some customer whenever at least one customer
is in the queue.
Next we compute the expected time a customer spends in the system when the sys-
tem is in equilibrium, denoted by W, assuming a FIFO queue. Let L(k) denote the event
that a new customer finds k customers in the queue. We can write

W = ∑_{k=0}^{∞} E[W | L(k)] Pr(L(k)).
Since the service times are independent, memoryless, and have expectation 1/μ, it
follows that
E[W | L(k)] = (k + 1)/μ.
To compute Pr(L(k)), we observe that if the system is in equilibrium then the rate
of transitions out of state k is πk θk , where θ0 = λ and θk = λ + μ for k ≥ 1. Applying
Lemma 8.5, the probability that the next transition from state k is caused by the arrival
of a new customer is λ/θk. Therefore, the rate at which customers arrive and find k
customers already in the queue is

πk θk (λ/θk) = πk λ.
Since the total rate of new arrivals to the system is λ, we conclude that the probability
that a new arrival finds k customers in the system is

Pr(L(k)) = πk λ / λ = πk.
This is an example of the PASTA principle, which states that Poisson Arrivals See
Time Averages. That is, if a Markov process with Poisson arrivals has a stationary
distribution and if the fraction of time the system is in state k is πk , then πk is also
the proportion of arrivals that find the system in state k when they arrive. The PASTA
principle, which is due to the independence and memoryless properties of the Poisson
process, is a useful tool that often simplifies analysis. A proof of the PASTA principle
for more general situations is beyond the scope of this book.
We can now compute

W = ∑_{k=0}^{∞} E[W | L(k)] Pr(L(k))
= ∑_{k=0}^{∞} ((k + 1)/μ) πk
= (1/μ) (1 + ∑_{k=0}^{∞} k πk)
= (1/μ)(1 + L)
= (1/μ)(1 + λ/(μ − λ))
= 1/(μ − λ)
= L/λ.
The relationship L = λW is known as Little’s result, and it holds not only for M/M/1
queues but for any stable queueing system. The proof of this fundamental result is
beyond the scope of this book.
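A simple event-driven simulation sketch (our own illustration, with arbitrary rates λ < μ) checks both L = λ/(μ − λ) and Little's result:

```python
# A minimal sketch of an M/M/1 queue: track the time-average queue length L
# and compare it with lam/(mu - lam); Little's law then gives W = L/lam.
import random

lam, mu, horizon = 1.0, 2.0, 100_000.0
t, q = 0.0, 0                       # current time, number in system
next_arrival = random.expovariate(lam)
next_departure = float("inf")
area = 0.0                          # integral of q over time
while t < horizon:
    t_next = min(next_arrival, next_departure)
    area += q * (t_next - t)
    t = t_next
    if t == next_arrival:           # an arrival event
        q += 1
        next_arrival = t + random.expovariate(lam)
        if q == 1:                  # server was idle: start a service
            next_departure = t + random.expovariate(mu)
    else:                           # a departure event
        q -= 1
        next_departure = t + random.expovariate(mu) if q > 0 else float("inf")

L = area / t
print(L, lam / (mu - lam))          # both about 1
print(L / lam)                      # W via Little's law, about 1/(mu - lam)
```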
Although the M/M/1 queue represents a very simple process, it can be useful for
studying more complicated processes. For example, suppose that we have several types
of customers entering a queue, with each type arriving according to a Poisson process,
and that all customers have exponentially distributed service times with parameter μ. Since
Poisson processes combine, the arrival process to the queue is Poisson, and this can be
modeled as an M/M/1 queue. Similarly, suppose that we have a single Poisson arrival
process, and we establish a separate queue for each type of customer. If each arriving
customer is of type i with some fixed probability pi , then the Poisson process splits
into independent Poisson processes for each type of customer, and hence the queue for
each type is an M/M/1 queue. This type of splitting might occur, for example, if we
use separate processors for different types of jobs in a computer network.
service of one of the k current customers or the arrival of a new customer. Thus, the
time to the first event is the minimum of k + 1 independent exponentially distributed
random variables; k of these variables have parameter μ, and one has parameter λ.
Applying Lemma 8.5 shows that, when there are k customers in the system, the time
to the first event has an exponential distribution with parameter kμ + λ. Furthermore,
the lemma implies that, given that an event occurs, the probability that the event is an
arrival of a new customer is

pk,k+1 = λ/(λ + kμ),

and when k ≥ 1 the probability that the event is the departure of a customer is

pk,k−1 = kμ/(λ + kμ).
Plugging these values into (8.4), we have that the stationary distribution π̄ satisfies
π0 λ = π1 μ
and, for k ≥ 1,
πk (λ + kμ) = πk−1 λ + πk+1 (k + 1)μ. (8.8)
We rewrite (8.8) as
πk+1 (k + 1)μ = πk (λ + kμ) − πk−1 λ
= πk λ + πk kμ − πk−1 λ.
A simple induction yields that

πk kμ = π_{k−1} λ,

and therefore

π_{k+1} = πk λ/(μ(k + 1)).

Now, again a simple induction yields

πk = π0 (λ/μ)^k (1/k!),

and therefore

1 = ∑_{k=0}^{∞} π0 (λ/μ)^k (1/k!) = π0 e^{λ/μ},

so that π0 = e^{−λ/μ} and πk = e^{−λ/μ} (λ/μ)^k / k!; that is, in equilibrium the number of
customers in the system has a Poisson distribution with parameter λ/μ.
We now proceed with our second approach: computing the distribution of the num-
ber of customers in the system at time t, denoted by M(t), and then considering the limit
of M(t) as t goes to infinity. Let N(t) be the total number of users that have joined the
network in the interval [0, t]. Since N(t) has a Poisson distribution, we can condition
on this value and write

Pr(M(t) = j) = ∑_{n=0}^{∞} Pr(M(t) = j | N(t) = n) e^{−λt} (λt)^n / n!. (8.9)
If a user joins the network at time x, then the probability that she is still connected
at time t is e−μ(t−x) . From Section 8.4.3, we know that the arrival time of an arbitrary
user is uniform on [0, t]. Thus, the probability that an arbitrary user is still connected
at time t is given by
t
1
dx
p= e−μ(t−x) = (1 − e−μt ).
0 t μt
Because the events for different users are independent, for j ≤ n we have

Pr(M(t) = j | N(t) = n) = (n choose j) p^j (1 − p)^{n−j}.
Thus, the number of users at time t has a Poisson distribution with parameter λtp.
Since

lim_{t→∞} λtp = lim_{t→∞} λt (1/(μt))(1 − e^{−μt}) = λ/μ,
it follows that, in the limit, the number of customers has a Poisson distribution with
parameter λ/μ, matching our previous calculation.
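A short simulation sketch (ours; the parameters are arbitrary illustrative values) of this arrivals-and-departures model confirms the limiting mean λ/μ:

```python
# A minimal sketch: users arrive as a Poisson process with rate lam and stay
# an Exp(mu) time; the number connected at a large time t is ~ Poisson(lam/mu).
import random

lam, mu, t, trials = 3.0, 1.0, 50.0, 50_000
counts = []
for _ in range(trials):
    s, connected = random.expovariate(lam), 0
    while s <= t:
        if s + random.expovariate(mu) > t:  # this user is still connected at t
            connected += 1
        s += random.expovariate(lam)
    counts.append(connected)

print(sum(counts) / trials, lam / mu)       # sample mean near lam/mu = 3
```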
8.7. Exercises
Exercise 8.1: Let X and Y be independent, uniform random variables on [0, 1]. Find
the density function and distribution function for X + Y .
Exercise 8.3: Let X be a uniform random variable on [0, 1]. Determine Pr(X ≤ 1/2 |
1/4 ≤ X ≤ 3/4) and Pr(X ≤ 1/4 | (X ≤ 1/3) ∪ (X ≥ 2/3)).
Exercise 8.4: We agree to try to meet between 12 and 1 for lunch at our favorite
sandwich shop. Because of our busy schedules, neither of us is sure when we’ll arrive;
we assume that, for each of us, our arrival time is uniformly distributed over the hour.
So that neither of us has to wait too long, we agree that we will each wait exactly 15
minutes for the other to arrive, and then leave. What is the probability we actually meet
each other for lunch?
Exercise 8.5: In Lemma 8.3, we found the expectation of the smallest of n independent
uniform random variables over [0, 1] by directly computing the probability that it was
larger than y for 0 ≤ y ≤ 1. Perform a similar calculation to find the probability that
the kth smallest of the n random variables is larger than y, and use this to show that its
expected value is k/(n + 1).
Exercise 8.8: Consider a complete graph on n vertices. Each edge is assigned a weight
chosen independently and uniformly at random from the real interval [0, 1]. We propose
the following greedy method for finding a small-weight Hamiltonian cycle in the graph.
At each step, there is a head vertex. Initially the head is vertex 1. At each step, we find
the edge of least weight between the current head vertex and a new vertex that has
never been the head. We add this edge to the cycle and set the head vertex to the new
vertex. After n − 1 steps, we have a Hamiltonian path, which we complete to make
a Hamiltonian cycle by adding the edge from the last head vertex back to vertex 1.
What is the expected weight of the Hamiltonian cycle found by this greedy approach?
Also, find the expectation when each edge is independently assigned a weight from an
exponential distribution with parameter 1.
Exercise 8.9: You would like to write a simulation that uses exponentially dis-
tributed random variables. Your system has a random number generator that produces
independent, uniformly distributed numbers from the real interval (0, 1). Give a proce-
dure that transforms a uniform random number as given to an exponentially distributed
random variable with parameter λ.
Exercise 8.10: Let n points be placed uniformly at random on the boundary of a circle
of circumference 1. These n points divide the circle into n arcs. Let Zi for 1 ≤ i ≤ n
be the length of these arcs in some arbitrary order.
(a) Prove that all Zi are at most c ln n/(n − 1) with probability at least 1 − 1/n^{c−1}.
(b) Prove that, for sufficiently large n, there exists a constant c′ such that at least one
Zi is at least c′ ln n/n with probability at least 1/2. (Hint: Use the second moment
method.)
(c) Prove that all Zi are at least 1/(2n²) with probability at least 1/2.
(d) Prove that, for sufficiently large n, there exists a constant c′ such that at least one
Zi is at most c′/n² with probability at least 1/2. (Hint: Use the second moment
method.)
(e) Explain how these results relate to the following problem: X1 , X2 , . . . , Xn−1 are
values chosen independently and uniformly at random from the real interval [0, 1].
We let Y1 , Y2 , . . . , Yn−1 represent these values in increasing sorted order, and we
also define Y0 = 0 and Yn = 1. The points Yi break the unit interval into n segments.
What can we say about the shortest and longest of these segments?
Exercise 8.11: Bucket sort is a simple sorting algorithm discussed in Section 5.2.2.
(a) Explain how to implement Bucket sort so that its expected running time is O(n)
when the n elements to be sorted are independent, uniform random numbers that
are chosen from [0, 1].
(b) We now consider how to implement Bucket sort when the elements to be sorted
are not necessarily uniform over an interval. Specifically, suppose the elements to
be sorted are numbers of the form X + Y , where (for each element) X and Y are
independent, uniform random numbers chosen from [0, 1]. How can you modify
the buckets so that Bucket sort still has expected running time O(n)? What if the
elements to be sorted were numbers of the form max(X, Y ) instead of X + Y ?
Exercise 8.12: Let n points be placed uniformly at random on the boundary of a circle
of circumference 1. These n points divide the circle into n arcs. Let Zi for 1 ≤ i ≤ n
be the length of these arcs in some arbitrary order, and let X be the number of Zi that
are at least 1/n. Find E[X] and Var[X].
Exercise 8.13: A digital camera needs two batteries. You buy a pack of n batteries,
labeled 1 to n. Initially, you install batteries 1 and 2. Whenever a battery is drained,
you immediately replace the drained battery with the lowest numbered unused battery.
Assume that each battery lasts for an amount of time that is exponentially distributed
with mean μ before being drained, independent of all other batteries. Eventually, all
the batteries but one will be drained.
(a) Find the probability that the battery numbered i is the one that is not eventually
drained.
(b) Find the expected time your camera will be able to run with this pack of batteries.
Exercise 8.16: There are n tasks that are given to n processors. Each task has two
phases, and the time for each phase is given by an exponentially distributed random
variable with parameter 1. The times for all phases and for all tasks are independent.
We say that a task is half-done if it has finished one of its two phases.
(a) Derive an expression for the probability that there are k tasks that are half-done at
the instant when exactly one task becomes completely done.
(b) Derive an expression for the expected time until exactly one task becomes com-
pletely done.
(c) Explain how this problem is related to the birthday paradox.
Exercise 8.18: Complete the proof of Theorem 8.13 by showing formally that, if N1 (t)
and N2 (t) are independent, then so are N1 (t) and N2 (u) for any t, u > 0.
Exercise 8.19: You are waiting at a bus stop to catch a bus across town. There are
actually n different bus lines you can take, each following a different route. Which bus
you decide to take will depend on which bus gets to the bus stop first. As long as you are
waiting, the time you have to wait for a bus on the ith line is exponentially distributed
with mean μi minutes. Once you get on a bus on the ith line, it will take you ti minutes
to get across town.
Design an algorithm for deciding – when a bus arrives – whether or not you should
get on the bus, assuming your goal is to minimize the expected time to cross town.
(Hint: You want to determine the set of buses that you want to take as soon as they
arrive. There are 2^n possible sets, which is far too large for an efficient algorithm. Argue
that you need only consider a small number of these sets.)
Exercise 8.20: Given a discrete space, continuous time Markov process X (t), we can
derive a discrete time Markov chain Z(t) by considering the states the process visits.
That is, let Z(0) = X(0), let Z(1) be the state that process X(t) first moves to after
time t = 0, let Z(2) be the next state process X(t) moves to, and so on. (If the Markov
process X(t) makes a transition from state i to state i, which can occur when pi,i ≠ 0
in the associated transition matrix, then the Markov chain Z(t) should also make a
transition from state i to state i.)
(a) Suppose that, in the process X (t), the time spent in state i is exponentially dis-
tributed with parameter θi = θ (which is the same for all i). Further suppose that
the process X (t) has a stationary distribution. Show that the Markov chain Z(t) has
the same stationary distribution.
(b) Give an example showing that, if the θi are not all equal, then the stationary distri-
butions for X (t) and Z(t) may differ.
Exercise 8.21: The Ehrenfest model is a basic model used in physics. There are n
particles moving randomly in a container. We consider the number of particles in the
left and right halves of the container. A particle in one half of the container moves to
the other half after an amount of time that is exponentially distributed with parameter 1,
independently of all other particles. See Figure 8.6.
(a) Find the stationary distribution of this process.
(b) What state has the highest probability in the stationary distribution? Can you sug-
gest an explanation for this?
Exercise 8.22: The following type of geometric random graph arises in the analy-
sis of dynamic wireless and sensor networks. We have n points uniformly distributed
in a square S of area n. Each point is connected to its k closest points in the square.
Denote this random graph by Gn,k . We show that there is a constant c > 0 such that if
k = c log n then with probability at least 1 − 1/n the graph Gn,k is connected. Consider
tessellating (tiling with smaller squares) the square S with n/(b log n) squares of area
b log n each, for some constant b; we assume that b log n divides n for convenience.
(a) Choose constants b and c1 such that, with sufficiently high probability, every square
has at least 1 point and at most c1 log n points.
(b) Conclude that with c ≥ 25c1 , the graph is connected with probability at least 1 −
1/n.
Exercise 8.23: We can obtain a discrete time Markov chain from the M/M/1 queueing
process in the manner described in Exercise 8.20. The discrete time chain tracks the
number of customers in the queue. It is useful to allow departure events to occur with
rate μ at the queue even when it is empty; this does not affect the queue behavior, but
it gives transitions from state 0 to state 0 in the corresponding Markov chain.
(a) Describe the possible transitions of this discrete-time chain and give their proba-
bilities.
(b) Show that the stationary distribution of this chain when λ < μ is the same as for
the M/M/1 process.
(c) Show that, in the case λ = μ, there is no valid stationary distribution for the Mar-
kov chain.
Exercise 8.25: Write a program to simulate the model of balls and bins with feedback.
(a) Start your simulation with 51 balls in bin 1 and 49 balls in bin 2, using p = 2. Run
your program 100 times, having it stop each time one bin has 60% of the balls. On
average, how many balls are in the bins when the program stops? How often does
bin 1 have the majority?
(b) Perform the same experiment as in part (a) but start with 52 balls in bin 1 and 48
balls in bin 2. How much does this change your answers?
(c) Perform the same experiment as in part (a) but start with 102 balls in bin 1 and 98
balls in bin 2. How much does this change your answers?
(d) Perform the same experiment as in part (a), but now use p = 1.5. How much does
this change your answers?
Exercise 8.26: We consider here one approach for studying a FIFO queue with a
constant service time of duration 1 and Poisson arrivals with parameter λ < 1. We
replace the constant service time by k exponentially distributed service stages, each
of mean duration 1/k. A customer must pass through all k stages before leaving the
queue, and once one customer begins going through the k stages, no other customer
can receive service until that customer finishes.
(a) Derive Chernoff bounds for the probability that the total time taken in k exponentially
distributed stages, each of mean 1/k, deviates significantly from 1.
(b) Derive a set of equations that deine the stationary distribution for this situation.
(Hint: Try letting πj be the limiting probability of having j stages of service left to
be served in the queue. Each waiting customer requires k stages; the one being served
requires between 1 and k stages.) You should not try to solve these equations to give
a closed form for π j .
(c) Use these equations to numerically determine the average number of customers
in the queue in equilibrium, say for λ = 0.8 and for k = 10, 20, 30, 40, and 50.
Discuss whether your results seem to be converging as k increases, and compare
the expected number of customers to an M/M/1 queue with arrival rate λ < 1 and
expected service time μ = 1.
Exercise 8.27: Write a simulation for a bank of n M/M/1 FIFO queues, each with
Poisson arrivals of rate λ < 1 per second and each with service times exponentially
distributed with mean 1 second. Your simulation should run for t seconds and return
the average amount of time spent in the system per customer who completed service.
You should present results for your simulations for n = 100 and for t = 10,000 seconds
with λ = 0.5, 0.8, 0.9, and 0.99.
A natural way to write the simulation that we now describe is to keep a priority
queue of events. Such a queue stores the times of all pending events, such as the next
time a customer will arrive or the next time a customer will finish service at a queue.
A priority queue can answer queries of the form, “What is the next event?” Priority
queues are often implemented as heaps, for example.
When a customer bound for queue k arrives, the arrival time for the next customer
to queue k must then be calculated and entered in the priority queue. If queue k is
empty, the time that the arriving customer will complete service should be put in the
priority queue. If queue k is not empty, the customer is put at the tail of the queue. If
a queue is not empty after completing service for a customer, then the time that the
next customer (at the head of the queue) will complete service should be calculated
and put in the priority queue. You will have to track each customer’s arrival time and
completion time.
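As a minimal sketch of this event loop, specialized to a single queue (the function and variable names here are ours, and most details of the full exercise are omitted), consider the following:

```python
import heapq
import random

def simulate_mm1(lam, t_max, seed=0):
    """Event-driven simulation of a single M/M/1 FIFO queue with
    Poisson arrivals of rate lam and Exp(1) service times; returns the
    average time in system over customers completing by t_max."""
    rng = random.Random(seed)
    events = [(rng.expovariate(lam), 'arrival')]  # priority queue of (time, kind)
    queue = []                      # arrival times of customers in the system (FIFO)
    server_busy = False
    total_time, completed = 0.0, 0
    while events:
        t, kind = heapq.heappop(events)
        if t > t_max:
            break
        if kind == 'arrival':
            heapq.heappush(events, (t + rng.expovariate(lam), 'arrival'))
            queue.append(t)
            if not server_busy:     # idle server begins service immediately
                server_busy = True
                heapq.heappush(events, (t + rng.expovariate(1.0), 'departure'))
        else:                       # departure of the customer at the head
            total_time += t - queue.pop(0)
            completed += 1
            if queue:               # start serving the next waiting customer
                heapq.heappush(events, (t + rng.expovariate(1.0), 'departure'))
            else:
                server_busy = False
    return total_time / max(completed, 1)

print(simulate_mm1(0.8, 10000.0))   # expect about 1/(1 - 0.8) = 5 for M/M/1
```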
You may find ways to simplify this general scheme. For example, instead of con-
sidering a separate arrival process for each queue, you can combine them into a single
arrival process based on what we know from Section 8.4.2. Explain whatever
simplifications you use.
You may wish to use Exercise 8.9 to help construct exponentially distributed random
variables for your simulation.
Modify your simulation so that, instead of service times being exponentially dis-
tributed with mean 1 second, they are always exactly 1 second. Again present results
for your simulation for n = 100 and for t = 10,000 seconds with λ = 0.5, 0.8, 0.9,
and 0.99. Do customers complete more quickly with exponentially distributed service
times or constant service times?
chapter nine
The Normal Distribution
The normal (or Gaussian) distribution plays a central role in probability theory and
statistics. Empirically, many real-world observable quantities, such as height, weight,
grades, and measurement error, are often well approximated by the normal distribution.
Furthermore, the central limit theorem states that under very general conditions the dis-
tribution of the average of a large number of independent random variables converges
to the normal distribution.
In this chapter we introduce the basic properties of univariate and multivariate nor-
mal random variables, prove the central limit theorem, compute maximum likelihood
estimates for the parameters of the normal distribution, and demonstrate the applica-
tion of the Expectation Maximization (EM) algorithm to the analysis of a mixture of
Gaussian distributions.1
1 Following the conventions of the different communities we use the term normal distribution in the context of
probability theory and the term Gaussian distribution in the context of machine learning.
9.1. The Normal Distribution
Since the density of a standard normal random variable Z is symmetric with respect
to z = 0, it follows that E[Z] = 0. The variance of Z can be found by using integration
by parts (where the parts used below are u = x and dv = x e^{−x²/2} dx):

Var[Z] = E[Z²] = (1/√(2π)) ∫_{−∞}^{∞} x² e^{−x²/2} dx
= (1/√(2π)) ∫_{−∞}^{∞} (x)(x e^{−x²/2}) dx
= −(1/√(2π)) x e^{−x²/2} |_{−∞}^{∞} + (1/√(2π)) ∫_{−∞}^{∞} e^{−x²/2} dx = 1.
In the last equality, the first term is 0, and we have already observed that the second
term equals 1.
More generally, a normal random variable X with parameters μ and σ has density function

fX(x) = (1/(√(2π) σ)) e^{−((x−μ)/σ)²/2}.
Table 9.1: Standard normal distribution table, giving Φ(z) = Pr(Z ≤ z) = ∫_{−∞}^{z} φ(t) dt for Z ∼ N(0, 1) over z from −3 to 3. For z < 0 use Φ(z) = 1 − Φ(−z). [Table entries omitted.]
These expressions generalize the corresponding expressions for standard normal ran-
dom variables, where μ = 0 and σ = 1.
Indeed, let X be a normal random variable with parameters μ and σ , and let Z =
(X − μ)/σ . Then
Pr(Z ≤ z) = Pr(X ≤ σz + μ) = (1/(√(2π) σ)) ∫_{−∞}^{σz+μ} e^{−((t−μ)/σ)²/2} dt.
Substituting x = (t − μ)/σ and using dt = σ dx, we find the distribution function of
the standard normal distribution,
Pr(Z ≤ z) = (1/√(2π)) ∫_{−∞}^{z} e^{−x²/2} dx = Φ(z).
We see that the normal distribution X is a linear transformation of the standard normal
distribution. That is, if X is a normal random variable with parameters μ and σ , then
Z = (X − μ)/σ is a standard normal random variable (with mean 0 and variance 1),
and similarly, if Z is a standard normal random variable, then X = σ Z + μ is a normal
random variable with parameters μ and σ . We have shown the following.
Lemma 9.1: A random variable has a normal distribution if and only if it is a linear
transformation of a standard normal random variable.
Since a random variable X from the distribution N(μ, σ 2 ) has the same distribution
as σ Z + μ, we have that E[X] = μ and Var[X] = σ 2 , so μ and σ are indeed the mean
and standard deviation.
Completing the square shows that the moment generating function of X is MX(t) =
E[e^{tX}] = e^{t²σ²/2+μt}; the remaining integral in that computation is the distribution
function of a normal random variable with mean tσ and standard deviation 1, and hence
when x goes to infinity the integral is 1.
We can use the moment generating function to verify our computation of the expec-
tation and variance of the normal distribution from Section 9.1.2.
M′X(t) = (μ + tσ²) e^{t²σ²/2+μt}
and
M″X(t) = (μ + tσ²)² e^{t²σ²/2+μt} + σ² e^{t²σ²/2+μt}.
Thus, E[X] = M′X(0) = μ, E[X²] = M″X(0) = μ² + σ², and Var[X] = E[X²] − (E[X])² = σ².
Another important property of the normal distribution is that a linear combination
of normal random variables has a normal distribution:
Theorem 9.2: Let X and Y be independent random variables with distributions
N(μ1, σ1²) and N(μ2, σ2²), respectively. Then X + Y is distributed according to the
normal distribution N(μ1 + μ2, σ1² + σ2²).
Proof: The moment generating function of a sum of independent random variables is
the product of their moment generating functions (Theorem 4.3). Thus,
MX+Y(t) = MX(t) MY(t) = e^{t²σ1²/2+μ1t} e^{t²σ2²/2+μ2t} = e^{t²(σ1²+σ2²)/2+(μ1+μ2)t},
which is the moment generating function of N(μ1 + μ2, σ1² + σ2²); since the moment
generating function determines the distribution, the result follows.
Using the moment generating function we can also obtain a large deviation bound
for a normal random variable.
Theorem 9.3: Let X be a normal random variable with distribution N(μ, σ²). Then for
any a > 0,
Pr(|X − μ| ≥ aσ) ≤ 2e^{−a²/2}.
Proof: Let Z = (X − μ)/σ, so Z has distribution N(0, 1). Using the general technique
presented in Section 4.2, for any t > 0,
Pr(Z ≥ a) = Pr(e^{tZ} ≥ e^{ta}) ≤ E[e^{tZ}]/e^{ta} = e^{t²/2−ta} ≤ e^{−a²/2},
where in the last inequality we set t = a. The case Z ≤ −a similarly yields the same
bound, proving the claim.
9.2.∗ Limit of the Binomial Distribution

A function similar to the density of the normal distribution appeared first (around 1738)
in the De Moivre–Laplace approximation of the binomial distribution. De Moivre used
it to approximate the number of heads in a sequence of coin tosses; Laplace extended the
result to a sequence of Bernoulli trials. We present this result here since it gives insight
into the density function of the normal distribution and into the central limit theorem
presented in the next section.
Theorem 9.4: Let p, with 0 < p < 1, be fixed, let q = 1 − p, and let k = np ± O(√(npq))
be an integer. Then

C(n, k) p^k (1 − p)^{n−k} = (1 ± o(1)) (1/√(2πnpq)) e^{−(k−np)²/(2npq)}.

Proof: Using Stirling's formula, n! = √(2πn) (n/e)^n (1 ± o(1)), we have

C(n, k) p^k (1 − p)^{n−k} = (n!/(k!(n − k)!)) p^k (1 − p)^{n−k}
= (√(2πn) e^k e^{n−k} n^n)/(√(2πk) √(2π(n − k)) e^n k^k (n − k)^{n−k}) p^k (1 − p)^{n−k} (1 ± o(1))
= (1/√(2πn (k/n)((n − k)/n))) (k/(np))^{−k} ((n − k)/(nq))^{−(n−k)} (1 ± o(1)).
For k = np ± O(√(npq)),
lim_{n→∞} 1/√(2πn (k/n)((n − k)/n)) = 1/√(2πnpq).
Let t = (k − np)/√(npq) = O(1). Then k = np + t√(npq) and n − k = nq − t√(npq).
Thus
(k/(np))^{−k} ((n − k)/(nq))^{−(n−k)} = (1 + t√(q/(np)))^{−k} (1 − t√(p/(nq)))^{−(n−k)}.
Using the Taylor series expansion
ln(1 + x) = x − x²/2 + O(x³),
we have
ln[(k/(np))^{−k} ((n − k)/(nq))^{−(n−k)}]
= ln[(1 + t√(q/(np)))^{−k} (1 − t√(p/(nq)))^{−(n−k)}]
= −(np + t√(npq))(t√(q/(np)) − t²q/(2np)) − (nq − t√(npq))(−t√(p/(nq)) − t²p/(2nq)) + O(t³/√n)
= −t√(npq) + t²q/2 − t²q + t√(npq) + t²p/2 − t²p + O(t³/√n)
= −t²/2 + O(t³/√n).
Thus,
lim_{n→∞} (k/(np))^{−k} ((n − k)/(nq))^{−(n−k)} = e^{−t²/2}.
Combining the two limits we obtain
lim_{n→∞} C(n, k) p^k (1 − p)^{n−k} = (1/√(2πnpq)) e^{−t²/2} = (1/√(2πnpq)) e^{−(k−np)²/(2npq)}.
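A quick numerical look at this approximation (our own illustration, with parameter values chosen arbitrarily) compares the exact binomial probability with the approximating normal expression:

```python
from math import comb, exp, sqrt, pi

n, p = 1000, 0.3
q = 1 - p
for k in [280, 290, 300, 310, 320]:
    exact = comb(n, k) * p**k * q**(n - k)                 # binomial point probability
    approx = exp(-(k - n*p)**2 / (2*n*p*q)) / sqrt(2*pi*n*p*q)
    print(k, f"{exact:.6f}", f"{approx:.6f}")
```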
Theorem 9.4 estimates a discrete probability; it does not define a density function.
However, as n → ∞, it implies the following estimate, which is a simple version of the
central limit theorem for a sum of independent Bernoulli random variables:
lim_{n→∞} Pr(−a ≤ (k − np)/√(npq) ≤ b) = lim_{n→∞} Σ_{k=np−a√(npq)}^{np+b√(npq)} (1/√(2πnpq)) e^{−(k−np)²/(2npq)}
≈ ∫_{np−a√(npq)}^{np+b√(npq)} (1/√(2πnpq)) e^{−(k−np)²/(2npq)} dk
≈ (1/√(2π)) ∫_{−a}^{b} e^{−t²/2} dt,
where we have approximated the sum by an integral and again used the substitution
t = (k − np)/√(npq).
9.3. The Central Limit Theorem

The central limit theorem is one of the most fundamental results in probability theory,
giving the theoretical foundation for many statistical analysis techniques. The theorem
states that, under various mild conditions, the distribution of the average of a large num-
ber of independent random variables converges to the normal distribution, regardless
of the distribution of each of the random variables. The convergence is in distribution.
Definition 9.1: A sequence of distributions F1, F2, . . . converges in distribution to a
distribution F, denoted Fn →D F, if for any a ∈ R at which F is continuous,
lim_{n→∞} Fn(a) = F(a).
Convergence in distribution is a relatively weak notion of convergence. In particular,
it does not guarantee a uniform bound on the rate of convergence of Fn (a) for different
values of a.
We prove here a basic version of the central limit theorem for the average of inde-
pendent, identically distributed, random variables with finite mean and variance.
Theorem 9.5 [The Central Limit Theorem]: Let X1, . . . , Xn be n independent, identically
distributed random variables with mean μ and variance σ². Let X̄n = (1/n) Σ_{i=1}^{n} Xi.
Then for any a and b,
lim_{n→∞} Pr(a ≤ (X̄n − μ)/(σ/√n) ≤ b) = Φ(b) − Φ(a).
That is, the average X̄n converges in distribution to a normal distribution with the
appropriate mean and variance. Our proof of the central limit theorem uses the follow-
ing result, which we quote without proof.
Theorem 9.6 [Lévy's Continuity Theorem]: Let Y1, Y2, . . . be a sequence of random
variables, with Yi having distribution Fi and moment generating function Mi. Let
Y be a random variable with distribution F and moment generating function M. If
lim_{n→∞} Mn(t) = M(t) for all t, then Fn →D F at all t for which F(t) is continuous.
Proof of the Central Limit Theorem: Let Zi = (Xi − μ)/σ . Then Z1 , Z2 , . . . are inde-
pendent, identically distributed, random variables with expectation E[Zi ] = 0, variance
Var[Zi ] = 1, and
(X̄n − μ)/(σ/√n) = (√n/n) Σ_{i=1}^{n} (Xi − μ)/σ = Σ_{i=1}^{n} Zi/√n.
To apply Theorem 9.6 we show that the moment generating functions for the random
variables Yn = Σ_{i=1}^{n} Zi/√n converge to the moment generating function of the
standard normal distribution. That is, we need to show that
lim_{n→∞} E[e^{t Σ_{i=1}^{n} Zi/√n}] = e^{t²/2}
for all t.
Let M(t) = E[e^{tZi}] be the moment generating function of Zi, so that the moment
generating function of Zi/√n is
E[e^{tZi/√n}] = M(t/√n).
Since the Zi are independent and identically distributed,
E[e^{t Σ_{i=1}^{n} Zi/√n}] = (M(t/√n))^n.
Let L(t) = ln M(t). Since M(0) = 1, we have L(0) = 0 and L′(0) = M′(0)/M(0) =
E[Zi] = 0. We can also compute the second derivative:
L″(0) = (M(0)M″(0) − (M′(0))²)/(M(0))² = E[Zi²] = 1.
We need to show that (M(t/√n))^n → e^{t²/2}, or equivalently that nL(t/√n) → t²/2.
Applying L'Hôpital's rule (twice), we have
lim_{n→∞} L(t/√n)/n^{−1} = lim_{n→∞} (−L′(t/√n) n^{−3/2} t)/(−2n^{−2})
= lim_{n→∞} L′(t/√n) t/(2n^{−1/2})
= lim_{n→∞} (−L″(t/√n) n^{−3/2} t²)/(−2n^{−3/2})
= lim_{n→∞} L″(t/√n) t²/2
= t²/2.
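The theorem is easy to observe empirically. The following sketch (ours; exponential random variables with μ = σ = 1 are an arbitrary choice) compares the empirical distribution of the standardized average against Φ:

```python
import random
from math import erf, sqrt

def phi(x):
    """Standard normal CDF, via the error function."""
    return 0.5 * (1 + erf(x / sqrt(2)))

rng = random.Random(1)
n, trials = 100, 20000
# X_i ~ Exp(1) has mu = sigma = 1, so (avg - 1) * sqrt(n) should be near N(0, 1)
standardized = []
for _ in range(trials):
    avg = sum(rng.expovariate(1.0) for _ in range(n)) / n
    standardized.append((avg - 1.0) * sqrt(n))
for b in [-1.0, 0.0, 1.0]:
    empirical = sum(z <= b for z in standardized) / trials
    print(b, round(empirical, 3), round(phi(b), 3))
```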
The central limit theorem can be proven under a variety of conditions. For example,
in the following version of the theorem, for which we do not provide a proof, the random
variables are not required to be identically distributed.
Then
Pr(a ≤ (Σ_{i=1}^{n} (Xi − μi))/√(Σ_{i=1}^{n} σi²) ≤ b) →D Φ(b) − Φ(a).
While the central limit theorem only gives convergence in distribution, under some-
what more stringent conditions one can prove a uniform convergence result.
As an example of applying the normal approximation, suppose we poll n randomly
chosen people to estimate the fraction p of the population that supports a given party.
Given δ and γ, we want an estimate p̃ and a sample size n for which
Pr(p ∈ [p̃ − δ, p̃ + δ]) ≥ 1 − γ.
Let Xi = 1 if the ith polled person supports the party and Xi = 0 otherwise. The
fraction of support observed in the sample is given by
X̄n = (1/n) Σ_{i=1}^{n} Xi.
Solving
δ√n/√(p(1 − p)) ≥ 1.96,
where 1.96 is the value for which Φ(1.96) ≈ 0.975 (so that the normal approximation
yields a 95% confidence level, i.e., γ = 0.05), we obtain
n ≥ (1.96 √(p(1 − p))/δ)².
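For instance (a small computation of ours), with δ = 0.02 and the worst-case value p = 1/2, which maximizes p(1 − p), the bound gives n ≥ 2401:

```python
from math import ceil, sqrt

def poll_sample_size(delta, p=0.5, z=1.96):
    """Sample size from n >= (z * sqrt(p(1-p)) / delta)^2; p = 1/2 is
    the worst case since it maximizes p(1-p)."""
    return ceil((z * sqrt(p * (1 - p)) / delta) ** 2)

print(poll_sample_size(0.02))   # 2401
```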
9.4.∗ Multivariate Normal Distributions

Assume that we want to study the relationship between the heights of parents and their
children. More concretely, consider families with at least one daughter and one son,
and for each such family let (x1, x2, x3, x4) be the heights of the mother, father, first
daughter, and first son, respectively. We know that each component in this vector can
be approximated by a univariate normal distribution, but what is the joint distribution
of the four variables? It turns out that for many natural phenomena, such as this one,
the joint distribution is well approximated by a multivariate normal distribution.
We saw in Lemma 9.1 that a univariate normal variable is always a linear transfor-
mation of a standard normal random variable. Similarly, the multivariate normal distri-
bution is a linear transformation of a number of independent, standard normal random
variables.
Let X = (X1, . . . , Xn)^T denote a vector of n independent, standard normal random
variables, let A be an n × n matrix, and let μ̄ = (μ1, . . . , μn)^T be a vector of real
values. We define Y = AX + μ̄.
If A has full rank, then X = A^{−1}(Y − μ̄), and we can derive a density function for
the joint distribution:
Pr(Y ≤ ȳ) = Pr(Y − μ̄ ≤ ȳ − μ̄)
= Pr(AX ≤ ȳ − μ̄)
= Pr(X ≤ A^{−1}(ȳ − μ̄))
= (1/(2π)^{n/2}) ∫_{w̄ ≤ A^{−1}(ȳ−μ̄)} e^{−w̄^T w̄/2} dw1 · · · dwn.
Here |AA^T| denotes the determinant of AA^T, a term which arises under the multivariate
change of variables. Applying (A^{−1})^T A^{−1} = (A^T)^{−1} A^{−1} = (AA^T)^{−1} = Σ^{−1},
we can write the distribution function of Y as
Pr(Y ≤ ȳ) = (1/√((2π)^n |Σ|)) ∫_{−∞}^{y1} · · · ∫_{−∞}^{yn} e^{−(z̄−μ̄)^T Σ^{−1} (z̄−μ̄)/2} dz1 · · · dzn    (9.1)
where, again,
Σ = AA^T = E[(Y − μ̄)(Y − μ̄)^T].
In general we have the following definition.
Definition 9.2: A vector Y = (Y1, . . . , Yn)^T has a multivariate normal distribution,
denoted Y ∼ N(μ̄, Σ), if and only if there is an n × k matrix A, a vector X =
(X1, . . . , Xk)^T of k independent standard normal random variables, and a vector μ̄ =
(μ1, . . . , μn)^T, such that
Y = AX + μ̄.
If Σ = AA^T = E[(Y − μ̄)(Y − μ̄)^T] has full rank, then the density of Y is
(1/√((2π)^n |Σ|)) e^{−(1/2)(Y−μ̄)^T Σ^{−1} (Y−μ̄)}.
If Σ is not invertible then the joint distribution has no density function.²
Note that sometimes instead of saying that random variables have a multivariate
normal distribution, one says that they are jointly normal.
The corollary below follows readily, keeping in mind that a multivariate random
variable is a vector of random variables.
Corollary 9.9: Any linear combination of mutually independent multivariate normal
random variables of equal dimension has a multivariate normal distribution.
The special case of the bivariate normal density, with just two random variables,
has a simpler expression, using the correlation coefficient between the two random
variables.
Definition 9.3: The correlation coefficient between random variables X and Y is
ρXY = Cov(X, Y)/(σX σY).
As noted in Exercise 9.4, the correlation coefficient is always in [−1, 1].
If (X, Y)^T has a bivariate normal distribution with mean vector (μX, μY)^T and
covariance matrix
Σ = [ σX²          ρXY σX σY
      ρXY σX σY    σY²       ],
² In some settings multivariate normal distributions are defined so that Σ is required to be a symmetric positive
definite matrix and therefore invertible, but there are also settings where it is sensible to have Σ not invertible,
and therefore a distribution with no density function.
Figure 9.1: Bivariate normal density function f(X, Y) (top) and the contour of the function at
increasing values of f(X, Y) (bottom), with μX = μY = 0, σX = σY = 1, and ρ = −0.8, 0, 0.8.
then for any |ρXY| < 1 (and σX, σY > 0) the joint density function of X and Y is given by
f(X, Y) = (1/(2πσX σY √(1 − ρXY²))) e^{−((X−μX)²/(2σX²) + (Y−μY)²/(2σY²) − ρXY(X−μX)(Y−μY)/(σX σY))/(1−ρXY²)}.    (9.2)
Examples of bivariate normal distributions are shown in Figure 9.1.
(3) X1 and X2 are independent if and only if the matrix Σ12 = 0 (i.e., all its entries
are 0).
9.5. Application: Generating Normally Distributed Random Values

One natural question is how to produce random values that follow a normal distribution.
More specifically, given random values distributed uniformly in (0, 1), which is a useful
and standard primitive operation to assume that we have available, can we generate
random values that follow a normal distribution? As we have seen, it suffices to generate
random values with a standard normal distribution. These values can then be scaled to
follow a normal distribution with mean μ and variance σ².
The standard approach to convert a random variable X uniformly distributed in (0, 1)
to a random variable Z with cumulative distribution function F is to set Z = F^{−1}(X).
That is, for X = x we set Z = z where F(z) = x. Indeed, with this choice,
Pr(Z ≤ z) = Pr(F^{−1}(X) ≤ z) = Pr(X ≤ F(z)) = F(z),
where the last equality holds because X is uniform on (0, 1).
This method cannot be readily used for the normal distribution because the cumulative
distribution function Φ(X) does not have a closed form, and therefore we cannot
directly compute Φ^{−1}(X), although there are various means to approximate Φ^{−1}(X)
accurately at some expense in computation or memory.
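For a distribution whose distribution function does have a closed-form inverse, the method is immediate. A brief sketch of ours, for the exponential distribution with parameter λ, where F(z) = 1 − e^{−λz} (this is also the transformation sought in Exercise 8.9):

```python
import math
import random

def exponential_from_uniform(u, lam):
    """Inverse transform: solve F(z) = 1 - exp(-lam * z) = u for z."""
    return -math.log(1 - u) / lam

rng = random.Random(0)
samples = [exponential_from_uniform(rng.random(), lam=2.0) for _ in range(100000)]
print(sum(samples) / len(samples))   # should be close to 1/lam = 0.5
```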
One simple approach to approximating a standard normal random variable, sug-
gested by the central limit theorem, is to generate U1 , U2 , . . . , U12 , with the Ui inde-
pendent and uniform on (0, 1), and compute
X = Σ_{i=1}^{12} Ui − 6.
In this case X is obviously not distributed as a standard normal; in particular, it can only
take on values in the range (−6, 6). It is, however, a surprisingly good approximation,
as shown further in Exercise 9.6. You should note that X has both expectation 0 and
variance 1.
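A quick empirical comparison of this approximation against Φ (our own check, in the spirit of Exercise 9.6):

```python
import random
from math import erf, sqrt

rng = random.Random(0)
trials = 200000
# each sample: sum of 12 uniforms, shifted to mean 0 (variance is 12 * 1/12 = 1)
xs = [sum(rng.random() for _ in range(12)) - 6 for _ in range(trials)]
for z in [-2, -1, 0, 1, 2]:
    empirical = sum(x <= z for x in xs) / trials
    normal = 0.5 * (1 + erf(z / sqrt(2)))
    print(z, round(empirical, 4), round(normal, 4))
```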
It turns out, however, that a standard normal random variable can be generated
exactly using random numbers from (0, 1), with some additional mathematical oper-
ations. In fact, there are two natural related ways to do this. Both rely on the same
approach: instead of generating just one value, we produce two.
To start, consider the joint cumulative distribution for two independent standard nor-
mal random variables X and Y :
F(x′, y′) = ∫_{−∞}^{y′} ∫_{−∞}^{x′} (1/(2π)) e^{−(x²+y²)/2} dx dy.    (9.3)
The x² + y² term in the double integral actually helps us; it allows us to naturally move
the problem into polar coordinates.
We can think of X and Y as being plotted as a pair (X, Y) on the two-dimensional
plane. Then we let R² = X² + Y², where R is the radius of the circle centered at the
origin that the point (X, Y) lies on, and similarly let Θ = tan^{−1}(Y/X) be the angle the
point makes with the x-axis. Thus
X = R cos Θ,   Y = R sin Θ.
We use a change of variables in Eqn. (9.3) above; with x = r cos θ and y = r sin θ, we
have
F(r′, θ′) = ∫_{0}^{θ′} ∫_{0}^{r′} (1/(2π)) e^{−r²/2} r dr dθ.
Here we have used that under this change of variables, dx dy = r dr dθ. More formally,
from multivariable calculus, the Jacobian for the transformation is
∂(x, y)/∂(r, θ) = det [ ∂x/∂r  ∂x/∂θ ; ∂y/∂r  ∂y/∂θ ] = det [ cos θ  −r sin θ ; sin θ  r cos θ ]
= r(cos²θ + sin²θ) = r.
Hence the additional factor of r when we change variables. This expression integrates
nicely to
F(r′, θ′) = (θ′/(2π)) (1 − e^{−(r′)²/2}).
In particular, notice that F(r′, θ′) = G(r′)H(θ′), where H(θ′) = θ′/(2π) and G(r′) =
1 − e^{−(r′)²/2}. This means that the radius R and the angle Θ determined by X and Y
are independent random variables.
But now, thinking in the other direction, we can directly generate R and Θ to obtain
our independent standard normal random variables X and Y. We can see from the form
of H(θ′) that the angle Θ is uniformly distributed over [0, 2π]; this also follows naturally
from symmetry considerations. Hence, given a uniform random variable U on
(0, 1), we can take Θ = 2πU. For R, we have
Pr(R ≥ r′) = e^{−(r′)²/2},
while for a uniform (0, 1) random variable V,
Pr(V ≥ v) = 1 − v.
Setting the right-hand sides equal, we find r′ = √(−2 ln(1 − v)). Hence, we can take
R = √(−2 ln(1 − V)) to obtain a suitably distributed radius R. Since 1 − V and V both
have uniform (0, 1) distributions, equivalently we can take R = √(−2 ln V) instead. To
conclude, given two uniform (0, 1) random variables U and V, we can take
X = √(−2 ln V) cos(2πU),   Y = √(−2 ln V) sin(2πU)
to obtain two independent standard normal random variables. This is commonly
referred to as the Box–Muller transform.
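In code, the transform takes only a few lines (a sketch of ours; using 1 − V rather than V inside the logarithm is the equivalent variant mentioned above, chosen so the logarithm never sees 0):

```python
import math
import random

def box_muller(rng):
    """Return two independent standard normal samples built from two
    independent uniform (0, 1) samples."""
    u, v = rng.random(), rng.random()
    r = math.sqrt(-2.0 * math.log(1.0 - v))     # radius R
    return r * math.cos(2 * math.pi * u), r * math.sin(2 * math.pi * u)

rng = random.Random(0)
xs = [x for _ in range(50000) for x in box_muller(rng)]
mean = sum(xs) / len(xs)
var = sum(x * x for x in xs) / len(xs) - mean ** 2
print(round(mean, 3), round(var, 3))            # should be near 0 and 1
```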
A common variation avoids using the sine and cosine functions. Let U ′ and V ′ be
independent and uniform on (0, 1), and let U = 2U ′ − 1 and V = 2V ′ − 1, so that U
and V are independent and uniform on (−1, 1). If S = U² + V² ≥ 1, we throw these
values away and start over again. Otherwise, generate
X = U √((−2 ln S)/S),   Y = V √((−2 ln S)/S).
The result is that X and Y are independent standard normal random variables. Think
of the point (U, V) in the xy-plane, and note that U/√S and V/√S play the role of
cos Θ and sin Θ in the Box–Muller transform; note here that the corresponding Θ is
independent of the value of S itself. Also, S is uniformly distributed over [0, 1), because
(U, V) is uniform over the circle of radius 1 centered at (0, 0), and hence
Pr(S ≤ s) = Pr(U² + V² ≤ s)
is the probability that a random point in the unit circle lies within a circle of radius
√s around the origin. But this probability is just
Pr(S ≤ s) = Pr(U² + V² ≤ s) = πs/π = s,
so S has a uniform distribution. Hence √(−2 ln S) takes the role of √(−2 ln V) in the
Box–Muller transform. (As S takes on the value 0 with probability 0, we can equivalently
think of S as uniform on (0, 1).)
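A sketch of this variation in code (ours):

```python
import math
import random

def polar_normals(rng):
    """Polar variation of Box-Muller: rejection-sample a point in the
    unit circle, then rescale; no sine or cosine needed."""
    while True:
        u = 2.0 * rng.random() - 1.0
        v = 2.0 * rng.random() - 1.0
        s = u * u + v * v
        if 0.0 < s < 1.0:                    # reject points outside the circle
            factor = math.sqrt(-2.0 * math.log(s) / s)
            return u * factor, v * factor

rng = random.Random(0)
x, y = polar_normals(rng)
print(x, y)
```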
9.6. Maximum Likelihood Point Estimates

Normal distributions are often used to model observed data. We see a sample of data
points, such as the heights of individuals, and then we try to fit a normal distribution to
that data. How do we characterize and find the best fit? This is similar in some ways to
the question of the confidence interval. We have seen in Chapter 4 and in this chapter
that when obtaining a sample by polling for a yes–no question, the natural estimate is
to take the fraction of responders that answer “yes” and use that as our guess for the
fraction of the population that would answer yes to the question. We could also find
probabilistic guarantees that this fraction was within some interval of the true answer,
under assumptions about the independence of our samples.
Here we also consider the question of finding the best parameter for a distribution
given a collection of data points, but more generally. Our goal is to find the best
representative value for the parameter. Maximum likelihood (ML) estimators give such a
value.³
³ Here we follow a classical statistics approach, where the ML estimator assumes no prior knowledge of the
parameter and just reports the value that maximizes the probability or the density of the observed data, as
described in Definition 9.4. In contrast, the Bayesian statistics approach starts with a prior distribution on the
space of possible parameter choices and computes a maximum a posteriori estimate, which is the value of the
parameter conditioned on the observed data. The two estimators give the same value when the prior distribution
is uniform on the range of the parameter, as discussed further in Exercise 9.13.
Definition 9.4:
1. Let PX(x, θ) be the probability function of a discrete random variable X that depends
on a parameter θ. Let x1, . . . , xn be n independent observations of X. The Maximum
Likelihood (ML) estimator of θ is
arg max_θ Π_{i=1}^{n} PX(xi, θ).
2. Let fX(x, θ) be the density function of a continuous random variable X that depends
on a parameter θ. Let x1, . . . , xn be n independent observations of X. The Maximum
Likelihood (ML) estimator of θ is
arg max_θ Π_{i=1}^{n} fX(xi, θ).
Note that θ in Definition 9.4 can correspond to a single parameter (such as for the
exponential distribution) or a vector of parameters (such as the mean and variance for
the normal distribution).
As a simple example, consider a sequence of n independent, identically distributed
Bernoulli experiments with k successes. What is the maximum likelihood estimate for
the probability of success p? Intuitively, it seems that it should be the fraction of flips
that came up heads. We prove this intuition holds.
For a success probability of p, the probability of obtaining k heads is
f(p) = C(n, k) p^k (1 − p)^{n−k}.
We therefore need to compute
arg max_p f(p) = arg max_p C(n, k) p^k (1 − p)^{n−k}.
Computing the first derivative of f(p) yields
f′(p) = k C(n, k) p^{k−1} (1 − p)^{n−k} − (n − k) C(n, k) p^k (1 − p)^{n−k−1}.
We see that f ′ (p) = 0 when p = k/n. A further check shows that k/n provides the local
maximum, not a local minimum. It follows that the maximum likelihood value is the
fraction of successes, matching our intuition.
We turn now to computing an ML estimator for the expectation and the variance of
the normal distribution. In the case of continuous random variables it is often easier
to maximize the logarithm of the likelihood function, Σ_{i=1}^{n} ln(fX(xi, θ)), which is
referred to as the log-likelihood function. This is equivalent to maximizing the likelihood
function, since
arg max_θ Π_{i=1}^{n} fX(xi, θ) = arg max_θ Σ_{i=1}^{n} ln(fX(xi, θ)).
The derivative is 0 when σ² = Sn², where Mn = (1/n) Σ_{i=1}^{n} xi and Sn² =
(1/n) Σ_{i=1}^{n} (xi − Mn)². We therefore find that μ = Mn, σ² = Sn² provide the
maximum likelihood estimates of the parameters.
The estimator of a parameter is itself a random variable that is a function of the
observed data. It seems reasonable that its expectation would be equal to the value it
estimates, but this is not always the case.
An estimator Θn for a parameter θ, based on n observations, is unbiased if
E[Θn] = θ.
It is asymptotically unbiased if
lim_{n→∞} E[Θn] = θ.
For example, for samples Xi taken from any distribution with finite expectation,
the estimator Mn = (1/n) Σ_{i=1}^{n} Xi gives an unbiased estimate for the expectation.
However, the ML estimate for the variance of a normal distribution we found above,
Sn² = (1/n) Σ_{i=1}^{n} (Xi − Mn)², is not an unbiased estimator for the variance. Since
the Xi are independent and identically distributed, let E[X] = E[Xi], and note that for
i ≠ j, E[Xi Xj] = (E[X])². Then,
E[Sn²] = (1/n) Σ_{i=1}^{n} (E[Xi²] − 2E[Mn Xi] + E[(Mn)²])
= E[X²] − E[(Mn)²]
= E[X²] − (1/n²)(Σ_{i=1}^{n} E[Xi²] + Σ_{i≠j} E[Xi Xj])
= ((n − 1)/n)(E[X²] − (E[X])²)
= ((n − 1)/n) σ².
Thus, the ML estimator for the variance of the normal distribution is only
asymptotically unbiased. However, (n/(n − 1)) Sn² is an unbiased estimator for σ².
Indeed, in statistical analysis, when sampling one generally uses
(n/(n − 1)) Sn² = (1/(n − 1)) Σ_{i=1}^{n} (xi − Mn)²
as the estimate of the variance because it is unbiased. This quantity is called the sample
variance, while
(1/n) Σ_{i=1}^{n} (xi − Mn)²
is called the population variance. The population variance is the correct formula for
the variance when you have the entire population.
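The bias is easy to see numerically (an illustration of ours): averaging each estimator over many small samples shows the population-variance formula landing near ((n − 1)/n)σ² while the sample variance lands near σ².

```python
import random

rng = random.Random(0)
n, trials = 5, 200000
pop_est, samp_est = 0.0, 0.0
for _ in range(trials):
    xs = [rng.gauss(0.0, 2.0) for _ in range(n)]   # true variance is 4
    m = sum(xs) / n
    ss = sum((x - m) ** 2 for x in xs)
    pop_est += ss / n                              # ML / population variance
    samp_est += ss / (n - 1)                       # sample variance
print(round(pop_est / trials, 3))    # near (n-1)/n * 4 = 3.2
print(round(samp_est / trials, 3))   # near 4
```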
9.7. Application: EM Algorithm for a Mixture of Gaussians

To estimate the parameters of the two normal distributions for the sample x̄ =
(x1, . . . , xn), we want to compute a maximum likelihood estimator for the vector
(γ, μ1, μ2, σ1, σ2). The likelihood function is
L(x̄, γ, μ1, μ2, σ1, σ2) = Π_{i=1}^{n} (γ (1/√(2πσ1²)) e^{−(xi−μ1)²/(2σ1²)} + (1 − γ) (1/√(2πσ2²)) e^{−(xi−μ2)²/(2σ2²)}).
In practice the initialization can affect the running time. One approach would be to
choose two random observation values as the initial means, and calculate the initial
variance σ1² by assuming all observations came from a single Gaussian with mean μ1
(and similarly for σ2²). An initial value of γ = 1/2 is natural as it corresponds to each
point coming with equal probability from each distribution. More complex methods
for initialization can be used.
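A compact sketch of the iteration under the setup above (the code and its names are ours; the E-step computes the values p1(xi), and the M-step re-estimates γ, the means, and the variances as p1-weighted averages):

```python
import math

def normal_pdf(x, mu, sigma2):
    return math.exp(-(x - mu) ** 2 / (2 * sigma2)) / math.sqrt(2 * math.pi * sigma2)

def em_step(xs, gamma, mu1, mu2, s1, s2):
    """One EM iteration for a two-component Gaussian mixture
    (s1 and s2 are variances)."""
    # E-step: p1[i] = probability that x_i came from component 1
    p1 = []
    for x in xs:
        a = gamma * normal_pdf(x, mu1, s1)
        b = (1 - gamma) * normal_pdf(x, mu2, s2)
        p1.append(a / (a + b))
    n, w1 = len(xs), sum(p1)
    # M-step: weighted re-estimates of all five parameters
    gamma = w1 / n
    mu1 = sum(p * x for p, x in zip(p1, xs)) / w1
    mu2 = sum((1 - p) * x for p, x in zip(p1, xs)) / (n - w1)
    s1 = sum(p * (x - mu1) ** 2 for p, x in zip(p1, xs)) / w1
    s2 = sum((1 - p) * (x - mu2) ** 2 for p, x in zip(p1, xs)) / (n - w1)
    return gamma, mu1, mu2, s1, s2

# toy data: two clusters, near 0 and near 5
xs = [0.1, -0.3, 0.5, 4.8, 5.2, 5.1, -0.1, 4.9]
params = (0.5, 0.0, 1.0, 1.0, 1.0)
for _ in range(25):
    params = em_step(xs, *params)
print([round(p, 3) for p in params])
```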
The following theorem shows that the likelihood function is nondecreasing throughout
the execution of the algorithm. Thus, the EM algorithm always terminates at a local
maximum. The algorithm is not guaranteed to find the maximum likelihood estimate,
which is a global maximum. Nevertheless, the algorithm gives good results in practice.
Theorem 9.11: Let γ^t, μ1^t, μ2^t, σ1^t, σ2^t be the estimated parameters at the end of the
tth iteration of the EM algorithm for a mixture of two Gaussians, with the initial values
corresponding to t = 0. Then for all t ≥ 0,
L(x̄, γ^{t+1}, μ1^{t+1}, μ2^{t+1}, σ1^{t+1}, σ2^{t+1}) ≥ L(x̄, γ^t, μ1^t, μ2^t, σ1^t, σ2^t).
Proof: The proof has two parts. We first show that, given values μ1^t, μ2^t, σ1^t, σ2^t, the
algorithm's choice of γ^{t+1} maximizes the likelihood, so that
γ^{t+1} = arg max_γ L(x̄, γ, μ1^t, μ2^t, σ1^t, σ2^t).
Thus
L(x̄, γ, μ1, μ2, σ1, σ2) = Π_{i=1}^{n} (γ f(xi, μ1, σ1) + (1 − γ) f(xi, μ2, σ2)).
Now
λ = γλ + (1 − γ)λ
= Σ_{i=1}^{n} γ f(xi, μ1, σ1)/(γ f(xi, μ1, σ1) + (1 − γ) f(xi, μ2, σ2))
+ Σ_{i=1}^{n} (1 − γ) f(xi, μ2, σ2)/(γ f(xi, μ1, σ1) + (1 − γ) f(xi, μ2, σ2))
= n.
Since λ = n,
n = Σ_{i=1}^{n} f(xi, μ1, σ1)/(γ f(xi, μ1, σ1) + (1 − γ) f(xi, μ2, σ2)),
and hence
γ = (1/n) Σ_{i=1}^{n} γ f(xi, μ1, σ1)/(γ f(xi, μ1, σ1) + (1 − γ) f(xi, μ2, σ2)) = (1/n) Σ_{i=1}^{n} p1(xi).
Thus, the choice of probabilities p1 (xi ) in each iteration maximizes the likelihood
function with respect to γ .
or
μ1 = (Σ_{i=1}^{n} p1(xi) xi)/(Σ_{i=1}^{n} p1(xi)).
Similar computations show that the choices used in the algorithm for μ1, μ2, σ1, and σ2
all individually maximize the likelihood function over that iteration given γ^{t+1} and
regardless of the values of the other parameters. We conclude that the likelihood is
nondecreasing from iteration to iteration, as claimed.
9.8. Exercises
Exercise 9.2: Let X be a standard normal random variable. Prove that E[X^n] = 0 for
odd n ≥ 1, and E[X^n] ≥ 1 for even n ≥ 2. (Hint: you can use integration by parts to
derive an expression for E[X^n] in terms of E[X^{n−2}].)
Exercise 9.3: For 1 ≤ i ≤ n, let Yi = Σ_{k=1}^{n} ai,k Xk + μi. Here the ai,k and μi are
constants, and the Xi are independent standard normal random variables. Prove that
Cov(Yi, Yj) = Σ_{k=1}^{n} ai,k aj,k.
Exercise 9.4: Let ρXY = Cov(X, Y)/(σX σY) be the correlation coefficient of X and Y.
(a) Prove that for any two random variables X and Y , |ρXY | ≤ 1.
(b) Prove that if X and Y are independent then ρXY = 0.
(c) Give an example of two random variables X and Y that are not independent but
ρXY = 0.
Exercise 9.5: Prove that for the bivariate normal distribution the right hand side of
Eqn. (9.1) and the expression given in (9.2) are equal.
Exercise 9.6: Let X = Σ_{i=1}^{12} Ui − 6, where the Ui are independent and uniform
on (0, 1). Let Y be a random variable from N(0, 1). Using a computer program or other
tools, find as good an approximation as you can to max_z |Pr(X ≤ z) − Pr(Y ≤ z)|.
If you can, try to ensure that your approximation
is an upper bound.
Exercise 9.7: Write a program that generates independent uniform (0, 1) random vari-
ables U and V , and compute
X = √(−2 ln V) cos(2πU),   Y = √(−2 ln V) sin(2πU)
to obtain two values according to the Box–Muller method. Repeat this experiment
100,000 times, and plot the corresponding distribution function for the 200,000 sam-
ples. Also, determine how many sampled values x satisfy |x| ≤ k for k = 1, 2, 3, 4. Do
your results seem reasonable? Explain.
Exercise 9.8: Write a program that generates independent uniform (0, 1) random vari-
ables U and V, and repeats this process until finding a pair such that S = U² + V² < 1.
Then compute
X = U √((−2 ln S)/S),   Y = V √((−2 ln S)/S)
to obtain two values. Repeat this experiment 100,000 times, and plot the correspond-
ing distribution function for the 200,000 samples. Also, determine how many sampled
values x satisfy |x| ≤ k for k = 1, 2, 3, 4. Do your results seem reasonable? Explain.
Exercise 9.9: Suppose that X is normally distributed with expectation 1 and standard
deviation 0.25, Y is normally distributed with expectation 1.5 and standard deviation
0.4, and X and Y have correlation coefficient 0.4. Calculate the following probabilities:
(a) Pr(X + Y ≥ 2);
(b) Pr(X + Y ≥ 3);
(c) Pr(Y ≤ X);
(d) Perform the same calculations as above but with correlation coefficient 0.6;
(e) How do these probabilities seem to change as the correlation coefficient increases?
Explain.
(This is relatively easy to show for the bivariate distribution, and more challenging for
a higher-dimension multivariate distribution.)
(c) Show that when Σ12 = 0 the joint density function of X1 and X2 can be written
as a product of the marginal densities of X1 and X2, proving that the two random
variables are independent.
Exercise 9.11: Assume that (x1, y1), . . . , (xn, yn) are n independent samples from a
mixture of two bivariate normal distributions. Write and analyze an EM algorithm for
computing a maximum likelihood estimate of the parameters of the mixed distribution.
Exercise 9.14: Let X be a standard normal random variable, and let Y = XZ, where
Z is a random variable independent of X that takes on the value 1 with probability 1/2
and the value −1 with probability 1/2.
(a) Show that Y is also a standard normal random variable.
(b) Explain why X and Y are not independent.
(c) Provide an argument showing that X and Y are not jointly normal.
(d) Calculate the correlation coefficient of X and Y.
chapter ten
Entropy, Randomness, and Information
Suppose that we have two biased coins. One comes up heads with probability 3/4, and
the other comes up heads with probability 7/8. Which coin produces more randomness
per flip? In this chapter, we introduce the entropy function as a universal measure of
randomness. In particular, we show that the number of independent unbiased random
bits that can be extracted from a sequence of biased coin flips corresponds to the entropy
of the coin. Entropy also plays a fundamental role in information and communication.
To demonstrate this role, we examine some basic results in compression and coding
and see how they relate to entropy. The main result we prove is Shannon's coding
theorem for the binary symmetric channel, one of the fundamental results of the field
of information theory. Our proof of Shannon's theorem uses several ideas that we have
developed in previous chapters, including Chernoff bounds, Markov's inequality, and
the probabilistic method.
10.1. The Entropy Function

The entropy of a random variable is a function of its distribution that, as we shall see,
gives a measure of the randomness of the distribution.
Definition 10.1:
1. The entropy in bits of a discrete random variable X is given by
H(X) = −Σ_x Pr(X = x) log2 Pr(X = x),
where the summation is over all values x in the range of X. Equivalently, we may
write
H(X) = E[log2(1/Pr(X))].
2. The binary entropy function H(p) for a random variable that assumes only two
possible outcomes, one of which occurs with probability p, is
H(p) = −p log2 p − (1 − p) log2(1 − p).
We define H(0) = H(1) = 0, so the binary entropy function is continuous in the
interval [0, 1]. The function is drawn in Figure 10.1.
For our two biased coins, the entropy of the coin that comes up heads with probability
3/4 is
H(3/4) = −(3/4) log2(3/4) − (1/4) log2(1/4) = 2 − (3/4) log2 3 ≈ 0.8113,
while the entropy of the coin that comes up heads with probability 7/8 is
H(7/8) = −(7/8) log2(7/8) − (1/8) log2(1/8) = 3 − (7/8) log2 7 ≈ 0.5436.
Hence the coin that comes up heads with probability 3/4 has a larger entropy.
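These values are easy to reproduce (a small check of ours):

```python
from math import log2

def binary_entropy(p):
    """H(p) = -p log2 p - (1-p) log2 (1-p), with H(0) = H(1) = 0."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * log2(p) - (1 - p) * log2(1 - p)

print(binary_entropy(3 / 4))   # about 0.8113
print(binary_entropy(7 / 8))   # about 0.5436
```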
Taking the derivative of H(p),
dH(p)/dp = −log2 p + log2(1 − p) = log2((1 − p)/p),
we see that H(p) is maximized when p = 1/2 and that H(1/2) = 1 bit. One way of
interpreting this statement is to say: each time we flip a two-sided coin, we get out at
most 1 bit worth of randomness, and we obtain exactly 1 bit of randomness when the
coin is fair. Although this seems quite clear, it is not yet clear in what sense H(3/4) =
2 − (3/4) log2 3 means that we obtain H(3/4) random bits each time we flip a coin that
lands heads with probability 3/4. We clarify this later in the chapter.
As another example, a standard six-sided die that comes up on each side with
probability 1/6 has entropy log2 6. In general, a random variable that has n equally
likely outcomes has entropy
−Σ_{i=1}^{n} (1/n) log2(1/n) = log2 n.
The entropy of an eight-sided die is therefore 3 bits. This result should seem quite
natural; if the faces of the die were numbered from 0 to 7 written in binary, then the
outcome of the die roll would give a sequence of 3 bits uniform over the set {0, 1}³,
which is equivalent to 3 bits generated independently and uniformly at random.
It is worth emphasizing that the entropy of a random variable X depends not on the
values that X can take but only on the probability distribution of X over those values.
The entropy of an eight-sided die does not depend on what numbers are on the faces of
the die; it only matters that all eight sides are equally likely to come up. This property
does not hold for the expectation or variance of X, but it does make sense for a measure
of randomness. To measure the randomness in a die, we should not care about what
numbers are on the faces but only about how often the die comes up on each side.
Often in this chapter we consider the entropy of a sequence of independent random
variables, such as the entropy of a sequence of independent coin flips. For such
situations, the following lemma allows us to consider the entropy of each random variable
to find the entropy of the sequence.
Lemma 10.1: Let X1 and X2 be independent random variables, and let Y = (X1 , X2 ).
Then
H(Y ) = H(X1 ) + H(X2 ).
Of course, the lemma is trivially extended by induction to the case where Y is any finite
sequence of independent random variables.
Proof: In what follows, the summations are to be taken over all possible values that
can be assumed by X1 and X2 . The result follows by using the independence of X1 and
X2 to simplify the expression:
H(Y) = −Σ_{x1,x2} Pr((X1, X2) = (x1, x2)) log2 Pr((X1, X2) = (x1, x2))
= −Σ_{x1,x2} Pr(X1 = x1) Pr(X2 = x2) log2(Pr(X1 = x1) Pr(X2 = x2))
= −Σ_{x1,x2} Pr(X1 = x1) Pr(X2 = x2)(log2 Pr(X1 = x1) + log2 Pr(X2 = x2))
= −Σ_{x1} Σ_{x2} Pr(X2 = x2) Pr(X1 = x1) log2 Pr(X1 = x1)
− Σ_{x2} Σ_{x1} Pr(X1 = x1) Pr(X2 = x2) log2 Pr(X2 = x2)
= −Σ_{x1} Pr(X1 = x1) log2 Pr(X1 = x1) Σ_{x2} Pr(X2 = x2)
− Σ_{x2} Pr(X2 = x2) log2 Pr(X2 = x2) Σ_{x1} Pr(X1 = x1)
= −Σ_{x1} Pr(X1 = x1) log2 Pr(X1 = x1) − Σ_{x2} Pr(X2 = x2) log2 Pr(X2 = x2)
= H(X1) + H(X2).
10.2. Entropy and Binomial Coefficients

Lemma 10.2: Suppose that nq is an integer in the range [0, n]. Then
2^{nH(q)}/(n + 1) ≤ C(n, nq) ≤ 2^{nH(q)}.
Proof: The statement is trivial if q = 0 or q = 1, so assume that 0 < q < 1. To prove
the upper bound, notice that by the binomial theorem we have
C(n, nq) q^{qn} (1 − q)^{(1−q)n} ≤ Σ_{k=0}^{n} C(n, k) q^k (1 − q)^{n−k} = (q + (1 − q))^n = 1.
Hence,
C(n, nq) ≤ q^{−qn} (1 − q)^{−(1−q)n} = 2^{−qn log2 q} 2^{−(1−q)n log2(1−q)} = 2^{nH(q)}.
For the lower bound, we know that C(n, nq) q^{qn} (1 − q)^{(1−q)n} is one term of the
expression Σ_{k=0}^{n} C(n, k) q^k (1 − q)^{n−k}. We show that it is the largest term.
Consider the difference between two consecutive terms as follows:
C(n, k) q^k (1 − q)^{n−k} − C(n, k + 1) q^{k+1} (1 − q)^{n−k−1}
= C(n, k) q^k (1 − q)^{n−k} (1 − ((n − k)/(k + 1)) (q/(1 − q))).
This difference is nonnegative whenever
1 − ((n − k)/(k + 1)) (q/(1 − q)) ≥ 0
or (equivalently, after some algebra) whenever
k ≥ qn − 1 + q.
The terms are therefore increasing up to k = qn and decreasing after that point. Thus
k = qn gives the largest term in the summation. Since the summation has n + 1 terms
and since C(n, nq) q^{qn} (1 − q)^{(1−q)n} is the largest term, we have
C(n, nq) q^{qn} (1 − q)^{(1−q)n} ≥ 1/(n + 1)
or
C(n, nq) ≥ q^{−qn} (1 − q)^{−(1−q)n}/(n + 1) = 2^{nH(q)}/(n + 1).
For 1/2 ≤ q ≤ 1,
2^{nH(q)}/(n + 1) ≤ C(n, ⌊nq⌋);    (10.3)
2^{nH(q)}/(n + 1) ≤ C(n, ⌈nq⌉).    (10.4)
Proof: We first prove Eqn. (10.1); the proof of Eqn. (10.2) is entirely similar. When
0 ≤ q ≤ 1/2,
C(n, ⌊nq⌋) q^{qn} (1 − q)^{(1−q)n} ≤ C(n, ⌊nq⌋) q^{⌊qn⌋} (1 − q)^{n−⌊qn⌋} ≤ Σ_{k=0}^{n} C(n, k) q^k (1 − q)^{n−k} = 1,
so that C(n, ⌊nq⌋) ≤ q^{−qn}(1 − q)^{−(1−q)n} = 2^{nH(q)}. For the lower bounds, applying
Lemma 10.2 with ⌊nq⌋ in place of nq gives
C(n, ⌊nq⌋) ≥ 2^{nH(⌊nq⌋/n)}/(n + 1) ≥ 2^{nH(q)}/(n + 1),
where the last inequality uses H(⌊nq⌋/n) ≥ H(q) for 1/2 ≤ q ≤ 1.
Although these bounds are loose, they are sufficient for our purposes. The relation
between the combinatorial coefficients and the entropy function arises repeatedly in
the proofs of this chapter when we consider a sequence of biased coin tosses, where
the coin lands heads with probability p > 1/2. Applying the Chernoff bound, we know
that, for sufficiently large n, the number of heads will almost always be close to np.
Thus the sequence will almost always be one of roughly C(n, np) ≈ 2^{nH(p)} sequences,
where the approximation follows from Lemma 10.2. Moreover, each such sequence
occurs with probability roughly
p^{np} (1 − p)^{(1−p)n} = 2^{−nH(p)}.
Hence, when we consider the outcome of n flips with a biased coin, we can essentially
restrict ourselves to the roughly 2^{nH(p)} outcomes that occur with roughly equal
probability.
10.3. Entropy: A Measure of Randomness

One way of interpreting the entropy of a random variable is as a measure of how many
unbiased, independent bits can be extracted, on average, from one instantiation of the
random variable. We consider this question in the context of a biased coin, showing
that, for sufficiently large n, the expected number of bits that can be extracted from n
flips of a coin that comes up heads with probability p > 1/2 is essentially nH(p). In
other words, on average, one can generate approximately H(p) independent bits from
each flip of a coin with entropy H(p). This result can be generalized to other random
variables, but we focus on the specific case of biased coins here (and throughout this
chapter) to keep the arguments more transparent.
We begin with a definition that clarifies what we mean by extracting random bits.
Definition 10.2: Let |y| be the number of bits in a sequence of bits y. An extraction
function Ext takes as input the value of a random variable X and outputs a sequence
of bits y such that
Pr(Ext(X) = y | |y| = k) = 1/2^k
whenever Pr(|y| = k) > 0.
In the case of a biased coin, the input X is the outcome of n flips of our biased coin.
The number of bits in the output is not fixed but instead can depend on the input. If
the extraction function outputs k bits, we can think of these bits as having been generated
independently and uniformly at random, since each sequence of k bits is equally
likely to appear. Also, there is nothing in the definition that requires that the extraction
function be efficient to compute. We do not concern ourselves with efficiency here,
although we do consider an efficient extraction algorithm in Exercise 10.12.
As a first step toward proving our results about extracting unbiased bits from biased
coins, we consider the problem of extracting random bits from a uniformly distributed
integer random variable. For example, let X be an integer chosen uniformly at ran-
dom from {0, . . . , 7}, and let Y be the sequence of 3 bits obtained when we write
X as a binary number. If X = 0 then Y = 000, and if X = 7 then Y = 111. It is
easy to check that every sequence of 3 bits is equally likely to arise, so we have
a trivial extraction function Ext by associating any input X with the corresponding
output Y.
Things are slightly harder when X is uniform over {0, . . . , 11}. If X ≤ 7, then we can
again let Y be the sequence of 3 bits obtained when we write X in binary. This leaves the
case X ∈ {8, 9, 10, 11}. We can associate each of these four possibilities with a distinct
sequence of 2 bits, for example, by letting Y be the sequence of 2 bits obtained from
writing X − 8 as a binary number. Thus, if X = 8 then Y = 00, and if X = 11 then
Figure 10.2: Extraction functions for numbers that are chosen uniformly at random from {0, . . . , 7}
and {0, . . . , 11}.
Y = 11. The entire extraction function is shown in Figure 10.2. Every 3-bit sequence
arises with the same probability 1/12, and every 2-bit sequence arises with the same
probability 1/12, so Definition 10.2 is satisfied.
We generalize from these examples to the following theorem.
Theorem 10.4: Suppose that the value of a random variable X is chosen uniformly
at random from the integers {0, . . . , m − 1}, so that H(X ) = log2 m. Then there is an
extraction function for X that outputs on average at least ⌊log2 m⌋ − 1 = ⌊H(X )⌋ − 1
independent and unbiased bits.
Proof: If m > 1 is a power of 2, then the extraction function can simply output the
bit representation of the input X using log2 m bits. (If m = 1, then it outputs nothing
or, equivalently, an empty sequence.) All output sequences have log2 m bits, and all
sequences of log2 m bits are equally likely to appear, so this satisfies Definition 10.2. If
m is not a power of 2, then matters become more complicated. We describe the extrac-
tion function recursively. (A nonrecursive description is given in Exercise 10.8.) Let
α = ⌊log2 m⌋. If X ≤ 2^α − 1, then the function outputs the α-bit binary representation
of X; all sequences of α bits are equally likely to be output in this case. If X ≥ 2^α,
then X − 2^α is uniformly distributed in the set {0, . . . , m − 2^α − 1}, which is smaller
than the set {0, . . . , m − 1}. The extraction function can then recursively produce the
output from the extraction function for the variable X − 2^α.
The recursive extraction function maintains the property that, for every k, each of the
2^k sequences of k bits is output with the same probability. We claim by induction that
the expected number of unbiased, independent bits produced by this extraction function
is at least ⌊log2 m⌋ − 1. The cases where m is a power of 2 are trivial. Otherwise, by
induction, the number of bits Y in the output satisfies

E[Y] ≥ (2^α/m)·α + ((m − 2^α)/m)·(⌊log2(m − 2^α)⌋ − 1)
     = α − ((m − 2^α)/m)·(α − ⌊log2(m − 2^α)⌋ + 1).
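(To make the recursion concrete, here is a small Python sketch of the procedure from the proof; the function name is ours, and it is an illustration rather than an optimized implementation.)

    def extract_uniform(x, m):
        """Extract unbiased bits from x, uniform on {0, ..., m-1} (recursive)."""
        if m == 1:
            return ""                      # no randomness to extract
        alpha = m.bit_length() - 1         # alpha = floor(log2 m)
        if m == 2 ** alpha or x < 2 ** alpha:
            return format(x, "0%db" % alpha)   # output alpha bits of x
        # Otherwise x - 2^alpha is uniform on {0, ..., m - 2^alpha - 1}; recurse.
        return extract_uniform(x - 2 ** alpha, m - 2 ** alpha)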
probability p^j (1 − p)^{n−j} of occurring. For each value of j, 0 ≤ j ≤ n, we map each of
the C(n, j) sequences with j heads to a unique integer in the set {0, . . . , C(n, j) − 1}. When
j heads come up, we map the sequence to the corresponding number. Conditioned on
there being j heads, this number is uniform on the integers {0, . . . , C(n, j) − 1}, and hence
we can apply the extraction function of Theorem 10.4 designed for this case. Let Z be
a random variable representing the number of heads flipped, and let B be the random
variable representing the number of bits our extraction function produces. Then

E[B] = Σ_{k=0}^{n} Pr(Z = k) E[B | Z = k]
|Ext(x)| ≤ log2(1/q) bits. This is because all sequences with |Ext(x)| bits would have
probability at least q, so

2^{|Ext(x)|} q ≤ 1,

giving the desired bound on |Ext(x)|. Given any extraction function, if B is a random
variable representing the number of bits our extraction function produces on input X,
then

E[B] = Σ_x Pr(X = x) |Ext(x)|
     ≤ Σ_x Pr(X = x) log2(1/Pr(X = x))
     = E[log2(1/Pr(X))]
     = H(X).
Another natural question to ask is how we can generate biased bits from an unbiased
coin. This question is partially answered in Exercise 10.11.
10.4. Compression
A second way of interpreting the entropy value comes from compression. Again suppose
we have a coin that comes up heads with probability p > 1/2 and that we flip
it n times, keeping track of which flips are heads and which flips are tails. We could
represent every outcome by using one bit per flip, with 0 representing heads and 1
representing tails, and use a total of n bits. If we take advantage of the fact that the coin is
biased, we can do better on average. For example, suppose that p = 3/4. For a pair of
consecutive flips, we use 0 to represent that both flips were heads, 10 to represent that
the first flip was heads and the second tails, 110 to represent that the first flip was tails
and the second heads, and 111 to represent that both flips were tails. Then the average
number of bits we use per pair of flips is

1·(9/16) + 2·(3/16) + 3·(3/16) + 3·(1/16) = 27/16 < 2.

Hence, on average, we can use less than the 1 bit per flip of the standard scheme by
breaking a sequence of n flips into pairs and representing each pair in the manner shown.
This is an example of compression.
It is worth emphasizing that the representation that we used here has a special property:
if we write the representation of a sequence of flips, it can be uniquely decoded
simply by parsing it from left to right. For example, the sequence

011110

corresponds to two heads, followed by two tails, followed by a heads and then a tails.
There is no ambiguity, because no other sequence of flips could produce this output.
Our representation has this property because no bit sequence we use to represent a pair
of flips is the prefix of another bit sequence used in the representation. Representations
with this property are called prefix codes, which are discussed further in Exercise 10.15.
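(As a concrete illustration, here is a minimal Python sketch of the pair code for p = 3/4 described above; the names are ours. Prefix-freeness is exactly what makes the left-to-right decoding unambiguous.)

    CODE = {"HH": "0", "HT": "10", "TH": "110", "TT": "111"}
    DECODE = {bits: pair for pair, bits in CODE.items()}

    def compress_pairs(flips):
        """Compress a string of 'H'/'T' flips of even length."""
        return "".join(CODE[flips[i:i + 2]] for i in range(0, len(flips), 2))

    def decompress_pairs(bits):
        """Scan left to right, emitting a pair whenever a codeword completes."""
        out, cur = [], ""
        for b in bits:
            cur += b
            if cur in DECODE:
                out.append(DECODE[cur])
                cur = ""
        return "".join(out)

    assert compress_pairs("HHTTHT") == "011110"   # the example from the text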
Compression continues to be a subject of considerable study. When storing or transmitting
information, saving bits usually corresponds to saving resources, so finding
ways to reduce the number of used bits by taking advantage of the data's structure is
often worthwhile.
We consider here the special case of compressing the outcome of a sequence of
biased coin flips. For a biased coin with entropy H(p), we show (a) that the outcome
of n flips of the coin can be represented by approximately nH(p) bits on average and
(b) that approximately nH(p) bits on average are necessary. In particular, any representation
of the outcome of n flips of a fair coin essentially requires n bits. The entropy is
therefore a measure of the average number of bits generated by each coin flip after com-
pression. This argument can be generalized to any discrete random variable X, so that
n independent, identically distributed random variables X1 , X2 , . . . , Xn with the same
distribution X can be represented using approximately nH(X ) bits on average. In the
setting of compression, entropy can be viewed as measuring the amount of information
in the input sequence. The larger the entropy of the sequence, the more bits are needed
in order to represent it.
We begin with a definition that clarifies what we mean by compression in this context.
Definition 10.3: A compression function Com takes as input a sequence of n coin
flips, given as an element of {H, T}^n, and outputs a sequence of bits such that each
input sequence of n flips yields a distinct output sequence.
Definition 10.3 is rather weak, but it will prove sufficient for our purposes. Usually,
compression functions must satisfy stronger requirements; for example, we may require
a prefix code to simplify decoding. Using this weaker definition makes our lower-bound
proof stronger. Also, though we are not concerned here with the efficiency of compressing
and decompressing procedures, there are very efficient compression schemes that
perform nearly optimally in many situations. We will consider an efficient compression
scheme in Exercise 10.17.
The following theorem formalizes the relationship between the entropy of a biased
coin and compression.
Theorem 10.6: Consider a coin that comes up heads with probability p > 1/2. For
any constant δ > 0, when n is sufficiently large:
1. there exists a compression function Com such that the expected number of bits output
by Com on an input sequence of n independent coin flips is at most (1 + δ)nH(p);
and
2. the expected number of bits output by any compression function on an input
sequence of n independent coin flips is at least (1 − δ)nH(p).
Theorem 10.6 is quite similar to Theorem 10.5. The lower bound on the expected number
of bits output by any compression function is slightly weaker. In fact, we could raise
this lower bound to nH(p) if we insisted that the code be a prefix code – so that no output
is the prefix of any other – but we do not prove this here. The compression function
we design to prove an upper bound on the expected number of output bits does yield
a prefix code. Our construction of this compression function follows roughly the same
intuition as Theorem 10.5. We know that, with high probability, the outcome from the
n flips will be one of roughly 2^{nH(p)} sequences with roughly np heads. We can use
about nH(p) bits to represent each one of these sequences, yielding the existence of an
appropriate compression function.
Proof of Theorem 10.6: We first show that there exists a compression function as guaranteed
by the theorem. Let ε > 0 be a suitably small constant with p − ε > 1/2. Let
X be the number of heads in n flips of the coin. The first bit output by the compression
function we use as a flag. We set it to 0 if there are at least n(p − ε) heads in the
sequence and to 1 otherwise. When the first bit is a 1, the compression function uses
the expensive default scheme, using 1 bit for each of the n flips. This requires that n + 1
total bits be output; however, by the Chernoff bound of Eqn. (4.5), the probability that
this case happens is bounded by

Pr(X < n(p − ε)) ≤ e^{−nε²/2p}.
Now let us consider the case where there are at least n(p − ε) heads. The number of
coin-flip sequences of this form is

Σ_{j=⌈n(p−ε)⌉}^{n} C(n, j) ≤ (n/2) C(n, ⌈n(p − ε)⌉) ≤ (n/2) 2^{nH(p−ε)}.

The first inequality arises because the binomial terms are decreasing as long as j > n/2,
and the second is a consequence of Corollary 10.3. For each such sequence of coin
flips, the compression function can assign a unique sequence of exactly ⌊nH(p − ε) +
log2 n⌋ bits to represent it, since

2^{⌊nH(p−ε)+log2 n⌋} ≥ (n/2) 2^{nH(p−ε)}.

Including the flag bit, it therefore takes at most nH(p − ε) + log2 n + 1 bits to represent
the sequences of coin flips with this many heads.
Totaling these results, we find that the expected number of bits required by the compression
function is at most

e^{−nε²/2p}(n + 1) + (1 − e^{−nε²/2p})(nH(p − ε) + log2 n + 1) ≤ (1 + δ)nH(p),

where the inequality holds by first taking ε sufficiently small and then taking n sufficiently
large in a manner similar to that of Theorem 10.5.
We now show the lower bound. To begin, recall that the probability that a specific
sequence with k heads is flipped is p^k (1 − p)^{n−k}. Because p > 1/2, if sequence S1 has
more heads than another sequence S2, then S1 is more likely to appear than S2. Also,
we have the following lemma.
Lemma 10.7: If sequence S1 is more likely than S2, then the compression function that
minimizes the expected number of bits in the output assigns a bit sequence to S2 that is
at least as long as the bit sequence it assigns to S1.
Proof: Suppose that a compression function assigns a bit sequence to S2 that is shorter
than the bit sequence it assigns to S1 . We can improve the expected number of bits
output by the compression function by switching the output sequences associated with
S1 and S2 , and therefore this compression function is not optimal.
Hence sequences with more heads should get shorter strings from an optimal compres-
sion function.
We also make use of the following simple fact. If the compression function assigns
distinct sequences of bits to represent each of s coin-flip sequences, then one of the output
bit sequences for the s input sequences must have length at least log2 s − 1 bits. This
is because there are at most 1 + 2 + 4 + · · · + 2^b = 2^{b+1} − 1 distinct bit sequences
with up to b bits, so if each of s sequences of coin flips is assigned a bit sequence of at
most b bits, then we must have 2^{b+1} > s and hence b > log2 s − 1.
Fix a suitably small ε > 0 and count the number of input sequences that have
⌊(p + ε)n⌋ heads. There are C(n, ⌊(p + ε)n⌋) sequences with ⌊(p + ε)n⌋ heads and, by
Corollary 10.3,

C(n, ⌊(p + ε)n⌋) ≥ 2^{nH(p+ε)}/(n + 1).
Hence any compression function must output at least log2(2^{nH(p+ε)}/(n + 1)) − 1 =
nH(p + ε) − log2(n + 1) − 1 bits on at least one of the sequences of coin flips with
⌊(p + ε)n⌋ heads. The compression function that minimizes the expected output length
must therefore use at least this many bits to represent any sequence with fewer heads,
by Lemma 10.7.
By the Chernoff bound of Eqn. (4.2), the number of heads X satisfies

Pr(X ≥ ⌊n(p + ε)⌋) ≤ Pr(X ≥ n(p + ε − 1/n)) ≤ e^{−n(ε−1/n)²/3p} ≤ e^{−nε²/12p}

as long as n is sufficiently large (specifically, n > 2/ε). We thus obtain, with probability
at least 1 − e^{−nε²/12p}, an input sequence with fewer than ⌊n(p + ε)⌋ heads, and by our
previous reasoning the compression function that minimizes the expected output length
must still output at least nH(p + ε) − log2(n + 1) − 1 bits in this case. The expected
number of output bits is therefore at least

(1 − e^{−nε²/12p})(nH(p + ε) − log2(n + 1) − 1).

This can be made to be at least (1 − δ)nH(p) by first taking ε to be sufficiently small
and then taking n to be sufficiently large.
10.5.∗ Coding: Shannon's Theorem

We have seen how compression can reduce the expected number of bits required to
represent data by changing the representation of the data. Coding also changes the
representation of the data. Instead of reducing the number of bits required to represent
the data, however, coding adds redundancy in order to protect the data against loss
or errors.
In coding theory, we model the information being passed from a sender to a receiver
through a channel. The channel may introduce noise, distorting the value of some of
the bits during the transmission. The channel can be a wired connection, a wireless
connection, or a storage network. For example, if I store data on a recordable medium
and later try to read it back, then I am both the sender and the receiver, and the storage
medium acts as the channel. In this section, we focus on one specific type of channel.
Definition 10.4: The input to a binary symmetric channel with parameter p is a
sequence of bits x1, x2, . . . , and the output is a sequence of bits y1, y2, . . . , such that
Pr(xi = yi) = 1 − p independently for each i. Informally, each bit sent is flipped to the
wrong value independently with probability p.
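(In simulation, such a channel is straightforward to model; the following Python sketch, with names ours, flips each transmitted bit independently with probability p.)

    import random

    def binary_symmetric_channel(bits, p):
        """Flip each bit (0/1) independently with probability p."""
        return [b ^ (random.random() < p) for b in bits]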
To get useful information out of the channel, we may introduce redundancy to help
protect against the introduction of errors. As an extreme example, suppose the sender
wants to send the receiver a single bit of information over a binary symmetric channel.
To protect against the possibility of error, the sender and receiver agree to repeat the bit
n times. If p < 1/2, a natural decoding scheme for the receiver is to look at the n bits
received and decide that the value that was received more frequently is the bit value
the sender intended. The larger n is, the more likely the receiver determines the correct
bit; by repeating the bit enough times, the probability of error can be made arbitrarily
small. This example is considered more extensively in Exercise 10.18.
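(As a quick check on this intuition, the following Python sketch — names ours — computes the exact probability that majority decoding fails for the repetition scheme; it anticipates the calculation asked for in Exercise 10.18(d).)

    from math import comb

    def repetition_error(p, k):
        """Pr(majority decoding errs) when a bit is repeated n = 2k+1 times."""
        n = 2 * k + 1
        # An error occurs iff at least k+1 of the n bits are flipped.
        return sum(comb(n, j) * p ** j * (1 - p) ** (n - j)
                   for j in range(k + 1, n + 1))

    print(repetition_error(0.1, 3))   # shrinks rapidly as k grows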
Coding theory studies the trade-off between the amount of redundancy required and
the probability of a decoding error over various types of channels. For the binary sym-
metric channel, simply repeating bits may not be the best use of redundancy. Instead
we consider more general encoding functions.
Definition 10.5: A (k, n) encoding function Enc: {0,1}^k → {0,1}^n takes as input
a sequence of k bits and outputs a sequence of n bits. A (k, n) decoding function
Dec: {0,1}^n → {0,1}^k takes as input a sequence of n bits and outputs a sequence of
k bits.
With coding, the sender takes a k-bit message and encodes it into a block of n ≥ k
bits via the encoding function. These bits are then sent over the channel. The receiver
examines the n bits received and attempts to determine the original k-bit message using
the decoding function.
Given a binary channel with parameter p and a target encoding length of n, we wish
to determine the largest value of k so that there exist (k, n) encoding and decoding
functions with the property that, for any input sequence of k bits, with suitably large
probability the receiver decodes the correct input from the corresponding n-bit encod-
ing sequence after it has been distorted by the channel.
Let m ∈ {0,1}^k be the message to be sent and Enc(m) the sequence of bits sent over
the channel. Let the random variable X denote the sequence of received bits. We require
that Dec(X) = m with probability at least 1 − γ for all possible messages m and a prechosen
constant γ. If there were no noise, then we could send the original k bits over
the channel. The noise reduces the information that the receiver can extract from each
bit sent, and so the sender can reliably send messages of only about k = n(1 − H(p))
bits within each block of n bits. This result is known as Shannon’s theorem, which we
prove in the following form.
Theorem 10.8: For a binary symmetric channel with parameter p < 1/2 and for any
constants δ, γ > 0, when n is sufficiently large:
1. for any k ≤ n(1 − H(p) − δ), there exist (k, n) encoding and decoding functions
such that the probability the receiver fails to obtain the correct message is at most
γ for every possible k-bit input message; and
2. there are no (k, n) encoding and decoding functions with k ≥ n(1 − H(p) + δ) such
that the probability of decoding correctly is at least γ for a k-bit input message
chosen uniformly at random.
Proof: We first prove the existence of suitable (k, n) encoding and decoding functions
when k ≤ n(1 − H(p) − δ) by using the probabilistic method. In the end, we want our
encoding and decoding functions to have error probability at most γ on every possi-
ble input. We begin with a weaker result, showing that there exist appropriate coding
functions when the input is chosen uniformly at random from all k-bit inputs.
The encoding function assigns to each of the 2^k strings an n-bit codeword chosen
independently and uniformly at random from the space of all n-bit sequences. Label
these codewords X0, X1, . . . , X_{2^k−1}. The encoding function simply outputs the codeword
assigned to the k-bit message using a large lookup table containing an entry for
each k-bit string. (You may be concerned that two codewords may turn out to be the
same; the probability of this is very small and is handled in the analysis that follows.)
To describe the decoding function, we provide a decoding algorithm based on the
lookup table for the encoding function, which we may assume the receiver possesses.
The decoding algorithm makes use of the fact that the receiver expects the channel to
make roughly pn errors. The receiver therefore looks for a codeword that differs from
the n bits received in between (p − ε)n and (p + ε)n places for some suitably small
constant ε > 0. If just one codeword has this property, then the receiver will assume
that this was the codeword sent and will recover the message accordingly. If more than
one codeword has this property, the decoding algorithm fails. The decoding algorithm
described here requires exponential time and space. As in the rest of this chapter, we
are not now concerned with efficiency issues.
The corresponding (k, n) decoding function can be obtained from the algorithm by
simply running through all possible n-bit sequences. Whenever a sequence decodes
properly with the foregoing algorithm, the output of the decoding function for that
sequence is set to the k-bit sequence associated with the corresponding codeword.
Whenever the algorithm fails, the output for the sequence can be any arbitrary sequence
of k bits. For the decoding function to fail, at least one of the two following events must
occur:
• the channel does not make between (p − ε)n and (p + ε)n errors; or
• when a codeword is sent, the received sequence differs from some other codeword in
between (p − ε)n and (p + ε)n places.
The path of the proof is now clear. A Chernoff bound can be used to show that, with
high probability, the channel does not make too few or too many errors. Conditioning
on the number of errors being neither too few nor too many, the question becomes
how large k can be while ensuring that, with the required probability, the received
sequence does not differ from multiple codewords in between (p − ε)n and (p + ε)n
places.
Now that we have described the encoding and decoding functions, we establish the
notation to be used in the analysis. Let R be the received sequence of bits. For sequences
s1 and s2 of n bits, we write Δ(s1, s2) for the number of positions where these sequences
differ. This value Δ(s1, s2) is referred to as the Hamming distance between the two
strings. We say that the pair (s1, s2) has weight

w(s1, s2) = p^{Δ(s1,s2)} (1 − p)^{n−Δ(s1,s2)}.

The weight corresponds to the probability that s2 is received when s1 is sent over
the channel. We introduce random variables S0, S1, . . . , S_{2^k−1} and W0, W1, . . . , W_{2^k−1}
defined as follows. The set Si is the set of all received sequences that decode to Xi. The
value Wi is given by

Wi = Σ_{r∉Si} w(Xi, r).
The Si and Wi are random variables that depend only on the random choices of
X0, X1, . . . , X_{2^k−1}. The variable Wi represents the probability that, when Xi is sent, the
received sequence R does not lie in Si and hence is decoded incorrectly. It is also helpful
to express Wi in the following way: letting I_{i,s} be an indicator random variable that is 1
if s ∉ Si and 0 otherwise, we can write

Wi = Σ_r I_{i,r} w(Xi, r).
We start by bounding E[Wi]. By symmetry, E[Wi] is the same for all i, so we bound
E[W0]. Now

E[W0] = E[Σ_r I_{0,r} w(X0, r)] = Σ_r E[w(X0, r) I_{0,r}].
We split the sum into two parts. Let T1 = {s : |Δ(X0, s) − pn| > εn} and T2 = {s :
|Δ(X0, s) − pn| ≤ εn}, where ε > 0 is some constant to be determined. Then

Σ_r E[w(X0, r) I_{0,r}] = Σ_{r∈T1} E[w(X0, r) I_{0,r}] + Σ_{r∈T2} E[w(X0, r) I_{0,r}].
We first bound

Σ_{r∈T1} E[w(X0, r) I_{0,r}] ≤ Σ_{r∈T1} w(X0, r)
 = Σ_{r:|Δ(X0,r)−pn|>εn} p^{Δ(X0,r)} (1 − p)^{n−Δ(X0,r)}
 = Pr(|Δ(X0, R) − np| > εn).
That is, to bound the first term, we simply bound the probability that the receiver fails
to decode correctly and the number of errors is not in the range [(p − ε)n, (p + ε)n]
by the probability that the number of errors is not in this range. Equivalently, we obtain
our bound by assuming that, whenever there are too many or too few errors introduced
by the channel, we fail to decode correctly. This probability is very small, as we can
see by using the Chernoff bound of Eqn. (4.6):

Pr(|Δ(X0, R) − np| > εn) ≤ 2e^{−ε²n/3p}.

For any ε > 0, we can choose n sufficiently large so that this probability, and hence
Σ_{r∈T1} E[w(X0, r) I_{0,r}], is less than γ/2.
We now find an upper bound for Σ_{r∈T2} E[w(X0, r) I_{0,r}]. For every r ∈ T2, the decoding
algorithm will be successful when r is received unless r differs from some other
codeword Xi in between (p − ε)n and (p + ε)n places. Hence I_{0,r} will be 1 only if such
an Xi exists, and thus for any values of X0 and r ∈ T2 we have

E[w(X0, r) I_{0,r}] ≤ w(X0, r) Pr(for some Xi with 1 ≤ i ≤ 2^k − 1, |Δ(Xi, r) − pn| ≤ εn).
To obtain this upper bound, we recall that the other codewords X1, X2, . . . , X_{2^k−1} are
chosen independently and uniformly at random. The probability that any other specific
codeword Xi, i > 0, differs from any given string r of length n in between (p − ε)n and
(p + ε)n places is therefore at most

Σ_{j=⌈n(p−ε)⌉}^{⌊n(p+ε)⌋} C(n, j)/2^n ≤ n·C(n, ⌊n(p + ε)⌋)/2^n.

Here we have bounded the summation by n times its largest term; C(n, j) is largest when
j = ⌊n(p + ε)⌋ over the range of j in the summation, as long as ε is chosen so that
p + ε < 1/2.
Hence the probability that any specific Xi matches a string r on a number of bits so as
to cause a decoding failure is at most n2^{−n(1−H(p+ε))}. By a union bound, the probability
that any of the 2^k − 1 other codewords cause a decoding failure when X0 is sent is at
most

(2^k − 1) n 2^{−n(1−H(p+ε))} ≤ n 2^{n(H(p+ε)−H(p)−δ)},

where we have used the fact that k ≤ n(1 − H(p) − δ). By choosing ε small enough
so that H(p + ε) − H(p) − δ is negative and then choosing n sufficiently large, we can
make this term as small as desired, and in particular we can make it less than γ/2.
By summing the bounds over the two sets T1 and T2, which correspond to the two
types of error in the decoding algorithm, we find that E[W0] ≤ γ.
We can bootstrap this result to show that there exists a specific code such that, if
the k-bit message to be sent is chosen uniformly at random, then the code fails with
probability at most γ. We use the linearity of expectations and the probabilistic method. We
have that

Σ_{j=0}^{2^k−1} E[Wj] = E[Σ_{j=0}^{2^k−1} Wj] ≤ 2^k γ,

where again the expectation is over the random choices of the codewords X0, X1, . . . ,
X_{2^k−1}. By the probabilistic method, there must exist a specific set of codewords
x0, x1, . . . , x_{2^k−1} such that

Σ_{j=0}^{2^k−1} Wj ≤ 2^k γ.
Without loss of generality, let us assume that the xi are sorted in increasing order of
Wi. Suppose that we remove the half of the codewords that have the largest values Wi;
that is, we remove the codewords that have the highest probability of yielding an error
when being sent. We claim that each xi, i < 2^{k−1}, must satisfy Wi ≤ 2γ. Otherwise we
would have

Σ_{j=2^{k−1}}^{2^k−1} Wj > 2^{k−1}(2γ) = 2^k γ,

a contradiction. (We used similar reasoning in the proof of Markov's inequality in Section
3.1.)
We can set up new encoding and decoding functions on all (k − 1)-bit strings using
just these 2^{k−1} codewords, and now the error probability for every codeword is simultaneously
at most 2γ. Hence we have shown that, when k − 1 ≤ n(1 − H(p) − δ), there
exists a code such that the probability that the receiver fails to obtain the correct message
is at most 2γ for any message that is sent. Since δ and γ were arbitrary constants,
we see that this implies the first half of the theorem.
Having completed the first half of the theorem, we now move to the second
half: for any constants δ, γ > 0 and for n sufficiently large, there do not exist (k, n)
encoding and decoding functions with k ≥ n(1 − H(p) + δ) such that the probability
of decoding correctly is at least γ for a k-bit input message chosen uniformly at
random.
Before giving the proof, let us first consider some helpful intuition. We know that
the number of errors introduced by the channel is, with high probability, between
⌈(p − ε)n⌉ and ⌊(p + ε)n⌋ for a suitable constant ε > 0. Suppose that we try to set
up the decoding function so that each codeword is decoded properly whenever the
number of errors is between (p − ε)n and (p + ε)n. Then each codeword is associated
with

Σ_{k=⌈n(p−ε)⌉}^{⌊n(p+ε)⌋} C(n, k) ≥ C(n, ⌈np⌉) ≥ 2^{nH(p)}/(n + 1)

bit sequences by the decoding function; the last inequality follows from Corollary 10.3.
But there are 2^k different codewords, and

2^k · 2^{nH(p)}/(n + 1) ≥ 2^{n(1−H(p)+δ)} · 2^{nH(p)}/(n + 1) > 2^n

when n is sufficiently large. Since there are only 2^n possible bit sequences that can be
received, we cannot create a decoding function that always decodes properly whenever
the number of errors is between (p − ε)n and (p + ε)n.
We now need to extend the argument for any encoding and decoding functions.
This argument is more complex, since we cannot assume that the decoding function
necessarily tries to decode properly whenever the number of errors is between
(p − ε)n and (p + ε)n, even though this would seem to be the best strategy to
pursue.
Given any fixed encoding function with codewords x0, x1, . . . , x_{2^k−1} and any fixed
decoding function, let z be the probability of successful decoding. Define Si to be the
set of all received sequences that decode to xi. Then
z = Σ_{i=0}^{2^k−1} Σ_{s∈Si} Pr((xi is sent) ∩ (R = s))
  = Σ_{i=0}^{2^k−1} Σ_{s∈Si} Pr(xi is sent) Pr(R = s | xi is sent)
  = (1/2^k) Σ_{i=0}^{2^k−1} Σ_{s∈Si} Pr(R = s | xi is sent)
  = (1/2^k) Σ_{i=0}^{2^k−1} Σ_{s∈Si} w(xi, s).
The second line follows from the definition of conditional probability. The third line
uses the fact that the message sent and hence the codeword sent is chosen uniformly
at random from all codewords. The fourth line is just the definition of the weight
function.
To bound this last line, we again split the summation Σ_{i=0}^{2^k−1} Σ_{s∈Si} w(xi, s) into two
parts. Let S_{i,1} = {s ∈ Si : |Δ(xi, s) − pn| > εn} and S_{i,2} = {s ∈ Si : |Δ(xi, s) − pn| ≤
εn}, where again ε > 0 is some constant to be determined. Then

Σ_{s∈Si} w(xi, s) = Σ_{s∈S_{i,1}} w(xi, s) + Σ_{s∈S_{i,2}} w(xi, s).
Now

Σ_{s∈S_{i,1}} w(xi, s) ≤ Σ_{s:|Δ(xi,s)−pn|>εn} w(xi, s),

which can be bounded using Chernoff bounds. The summation on the right is simply
the probability that the number of errors introduced by the channel is not between
(p − ε)n and (p + ε)n, which we know from previous arguments is at most 2e^{−ε²n/3p}.
This bound is equivalent to assuming that decoding is successful even if there are too
many or too few errors introduced by the channel; since the probability of too many or
too few errors is small, this assumption still yields a good bound.
To bound Σ_{s∈S_{i,2}} w(xi, s), we note that w(xi, s) is decreasing in Δ(xi, s). Hence, for
s ∈ S_{i,2},

w(xi, s) ≤ p^{(p−ε)n} (1 − p)^{(1−p+ε)n} = 2^{−H(p)n} ((1 − p)/p)^{εn}.

Therefore,

Σ_{s∈S_{i,2}} w(xi, s) ≤ Σ_{s∈S_{i,2}} 2^{−H(p)n} ((1 − p)/p)^{εn}
                     = 2^{−H(p)n} ((1 − p)/p)^{εn} |S_{i,2}|.
We continue with

z = (1/2^k) Σ_{i=0}^{2^k−1} Σ_{s∈Si} w(xi, s)
  = (1/2^k) Σ_{i=0}^{2^k−1} ( Σ_{s∈S_{i,1}} w(xi, s) + Σ_{s∈S_{i,2}} w(xi, s) )
  ≤ (1/2^k) Σ_{i=0}^{2^k−1} ( 2e^{−ε²n/3p} + 2^{−H(p)n} ((1 − p)/p)^{εn} |S_{i,2}| )
  = 2e^{−ε²n/3p} + (1/2^k) 2^{−H(p)n} ((1 − p)/p)^{εn} Σ_{i=0}^{2^k−1} |S_{i,2}|
  ≤ 2e^{−ε²n/3p} + (1/2^k) 2^{−H(p)n} ((1 − p)/p)^{εn} 2^n.
In this last line, we have used the important fact that the sets of bit sequences Si and
hence all the S_{i,2} are disjoint, so their total size is at most 2^n. This is where the fact
that we are using a decoding function comes into play, allowing us to establish a useful
bound.
To conclude,

z ≤ 2e^{−ε²n/3p} + 2^{n−(1−H(p)+δ)n−H(p)n} ((1 − p)/p)^{εn}
  = 2e^{−ε²n/3p} + (((1 − p)/p)^ε 2^{−δ})^n.

By choosing ε small enough that ((1 − p)/p)^ε 2^{−δ} < 1, both terms go to 0 as n grows,
so z < γ when n is sufficiently large, completing the second half of the theorem.
Shannon's theorem demonstrates the existence of codes that transmit arbitrarily closely
to the capacity of the binary symmetric channel over long enough blocks. It does not
give explicit codes, nor does it say that such codes can be encoded and decoded efficiently.
It took decades after Shannon's original work before practical codes with near-optimal
performance were found.
10.6. Exercises
for integers k = 1, . . . , 10. Find H(X).
(c) Consider Sα = Σ_{k=1}^{10} 1/k^α, where α > 1 is a constant. Consider random variables
Xα such that Pr(Xα = k) = 1/(Sα k^α) for integers k = 1, . . . , 10. Give an intuitive
explanation of whether H(Xα) is increasing or decreasing with α and why.
Exercise 10.2: Consider an n-sided die, where the ith face comes up with probability
pi . Show that the entropy of a die roll is maximized when each face comes up with
equal probability 1/n.
Exercise 10.3: (a) A fair coin is repeatedly flipped until the first heads occurs. Let X
be the number of flips required. Find H(X).
(b) Your friend flips a fair coin repeatedly until the first heads occurs. You want
to determine how many flips were required. You are allowed to ask a series of yes–no
questions of the following form: you give your friend a set of integers, and your
friend answers "yes" if the number of flips is in that set and "no" otherwise. Describe
a strategy so that the expected number of questions you must ask before determining
the number of flips is H(X).
(c) Give an intuitive explanation of why you cannot come up with a strategy that
would allow you to ask fewer than H(X) questions on average.
is finite.
Exercise 10.5: Suppose p is chosen uniformly at random from the real interval [0, 1].
Calculate E[H(p)].
Exercise 10.8: We have shown in Theorem 10.4 that we can use a recursive procedure
to extract, on average, at least ⌊log2 m⌋ − 1 independent, unbiased bits from a
number X chosen uniformly at random from S = {0, . . . , m − 1}. Consider the following
extraction function: let α = ⌊log2 m⌋, and write

m = βα 2^α + βα−1 2^{α−1} + · · · + β0 2^0,

where each βi is either 0 or 1.
Let k be the number of values of i for which βi equals 1. Then we split S into k disjoint
subsets in the following manner: there is one set for each value of βi that equals 1, and
the set for this i has 2^i elements. The assignment of S to sets can be arbitrary, as long
as the resulting sets are disjoint. To get an extraction function, we map the elements
of the subset with 2^i elements in a one-to-one manner with the 2^i binary strings of
length i.
Show that this mapping is equivalent to the recursive extraction procedure given in
Theorem 10.4 in that both produce i bits with the same probability for all i.
Exercise 10.9: We have shown that we can extract, on average, at least ⌊log2 m⌋ − 1
independent, unbiased bits from a number chosen uniformly at random from
{0, . . . , m − 1}. It follows that if we have k numbers chosen independently and uni-
formly at random from {0, . . . , m − 1} then we can extract, on average, at least
k⌊log2 m⌋ − k independent, unbiased bits from them. Give a better procedure that
extracts, on average, at least k⌊log2 m⌋ − 1 independent, unbiased bits from these num-
bers.
Exercise 10.10: Suppose that we have a means of generating independent, fair coin
flips.
(a) Give an algorithm using the coin to generate a number uniformly from
{0, 1, . . . , n − 1}, where n is a power of 2, using exactly log2 n flips.
(b) Argue that, if n is not a power of 2, then no algorithm can generate a number
uniformly from {0, 1, . . . , n − 1} using exactly k coin flips for any fixed k.
(c) Argue that, if n is not a power of 2, then no algorithm can generate a number
uniformly from {0, 1, . . . , n − 1} using at most k coin flips for any fixed k.
(d) Give an algorithm using the coin to generate a number uniformly from
{0, 1, . . . , n − 1}, even when n is not a power of 2, using at most 2⌈log2 n⌉ expected
flips.
Exercise 10.11: Suppose that we have a means of generating independent, fair coin
flips.
(a) Give an algorithm using the fair coin that simulates flipping a biased coin that
comes up heads with probability p. The expected number of flips your algorithm
uses should be at most 2. (Hint: Think of p written as a decimal in binary, and use
the fair coin to generate binary decimal digits.)
(b) Give an algorithm using the coin to generate a number uniformly from
{0, 1, . . . , n − 1}. The expected number of flips your algorithm uses should be at
most ⌈log2 n⌉ + 2.
(a) Show that the bits extracted are independent and unbiased.
(b) Show that the expected number of extracted bits is ⌊n/2⌋ · 2p(1 − p) ≈ np(1 − p).
(c) We can derive another set of flips Y = y1, y2, . . . from the sequence X as follows.
Start with j, k = 1. Repeat the following operations until j = ⌊n/2⌋: if
a_j = (heads, heads), set y_k to heads and increment j and k; if a_j = (tails, tails),
set y_k to tails and increment j and k; otherwise, increment j. See Figure 10.3 for an
example.
The intuition here is that we take some of the randomness that A was unable
to use effectively and re-use it. Show that the bits produced by running A on Y
Figure 10.3: After running A on the input sequence X, we can derive further sequences Y and Z; after
running A on each of Y and Z, we can derive further sequences from them; and so on.
are independent and unbiased, and further argue that they are independent of those
produced from running A on X.
(d) We can derive a second set of flips Z = z1, z2, . . . , z_{⌊n/2⌋} from the sequence X as
follows: let z_i be heads if a_i = (heads, heads) or (tails, tails), and let z_i be tails
otherwise. See Figure 10.3 for an example. Show that the bits produced by running
A on Z are independent and unbiased, and further argue that they are independent
of those produced from running A on X and Y.
(e) After we derive and run A on Y and Z, we can recursively derive two further
sequences from each of these sequences in the same way, run A on those, and
so on. See Figure 10.3 for an example. Let A(p) be the average number of bits
extracted for each flip (with probability p of coming up heads) in the sequence X,
in the limit as the length of the sequence X goes to infinity. Argue that A(p) satisfies
the recurrence

A(p) = p(1 − p) + ((p² + q²)/2) A(p²/(p² + q²)) + (1/2) A(p² + (1 − p)²),

where q = 1 − p.
(f) Show that the entropy function H(p) satisies this recurrence for A(p).
(g) Implement the recursive extraction procedure explained in part (e). Run it 1000
times on sequences of 1024 bits generated by a coin that comes up heads with
probability p = 0.7. Give the distribution of the number of bits extracted over the
1000 runs and discuss how close your results are to 1024 · H(0.7).
Exercise 10.13: Suppose that, instead of a biased coin, we have a biased six-sided
die with entropy h > 0. Modify our extraction function for the case of biased coins
so that it extracts, on average, almost h random bits per roll from a sequence of die
rolls. Prove formally that your extraction function works by modifying Theorem 10.5
appropriately.
Exercise 10.14: Suppose that, instead of a biased coin, we have a biased six-sided die
with entropy h > 0. Modify our compression function for the case of biased coins so
that it compresses a sequence of n die rolls to almost nh bits on average. Prove formally
that your compression function works by modifying Theorem 10.6 appropriately.
(a) Explain how this property can be used to easily decompress the string created by
the compression algorithm when reading the bits sequentially.
(b) Prove that the ℓi must satisfy

Σ_{i=1}^{n} 2^{−ℓi} ≤ 1.
and let the ith codeword be the first ⌈log2(1/pi)⌉ bits of Ti. Start with an empty string,
and consider the X j in order. If X j takes on the ith value, append the ith codeword to
the end of the string.
H(X ) ≤ z ≤ H(X ) + 1.
p(X )) with ⌈log2 (1/p(X ))⌉ + 1 binary decimal digits to represent X in such a way
that no codeword is the prefix of any other codeword.
(d) Given a codeword chosen as in (c), explain how to decompress it to determine the
corresponding sequence (X1 , X2 , . . . , Xn ).
(e) Using a Chernoff bound, argue that log2 (1/p(X )) is close to nH(p) with high prob-
ability. Hence this approach yields an effective compression scheme.
Exercise 10.18: Alice wants to send Bob the result of a fair coin flip over a binary
symmetric channel that flips each bit with probability p < 1/2. To avoid errors in transmission,
she encodes heads as a sequence of 2k + 1 zeroes and tails as a sequence of
2k + 1 ones.
(a) Consider the case where k = 1, so heads is encoded as 000 and tails as 111. For
each of the eight possible sequences of 3 bits that can be received, determine the
probability that Alice flipped a heads conditioned on Bob receiving that sequence.
(b) Bob decodes by examining the 3 bits. If two or three of the bits are 0, then Bob
decides the corresponding coin flip was a heads. Prove that this rule minimizes the
probability of error for each flip.
(c) Argue that, for general k, Bob minimizes the probability of error by deciding the
flip was heads if at least k + 1 of the bits are 0.
(d) Give a formula for the probability that Bob makes an error that holds for general
k. Evaluate the formula for p = 0.1 and k ranging from 1 to 6.
(e) Give a bound on the probability computed in part (d) using Chernoff bounds.
Exercise 10.19: Consider the following channel. The sender can send a symbol from
the set {0, 1, 2, 3, 4}. The channel introduces errors; when the symbol k is sent, the
recipient receives k + 1 mod 5 with probability 1/2 and receives k − 1 mod 5 with
probability 1/2. The errors are mutually independent when multiple symbols are sent.
Let us define encoding and decoding functions for this channel. A (j, n) encoding
function Enc maps a number in {0, 1, . . . , j − 1} into sequences from {0, 1, 2, 3, 4}^n,
and a (j, n) decoding function Dec maps sequences from {0, 1, 2, 3, 4}^n back into
{0, 1, . . . , j − 1}. Notice that this definition is slightly different than the one we used
for bit sequences over the binary symmetric channel.
There are (2, 1) encoding and decoding functions with zero probability of error. The
encoding function maps 0 to 0 and 1 to 1. When a 0 is sent, the receiver will receive
either a 1 or 4, so the decoding function maps 1 and 4 back to 0. When a 1 is sent, the
receiver will receive either a 2 or 0, so the decoding function maps 2 and 0 back to 1.
This guarantees that no error is made. Hence at least one bit can be sent without error
per channel use.
(a) Show that there are (5, 2) encoding and decoding functions with zero probability
of error. Argue that this means more than one bit of information can be sent per
use of the channel.
(b) Show that if there are ( j, n) encoding and decoding functions with zero probability
of error, then n ≥ log2 j/(log2 5 − 1).
Exercise 10.20: A binary erasure channel transfers a sequence of n bits. Each bit
either arrives successfully without error or fails to arrive successfully and is replaced
by a '?' symbol, denoting that it is not known if that bit is a 0 or a 1. Failures occur
independently with probability p. We can define (k, n) encoding and decoding functions
for the binary erasure channel in a similar manner as for the binary symmetric channel,
except here the decoding function Dec: {0, 1, ?}^n → {0, 1}^k must handle sequences
with the '?' symbol.
Prove that, for any p > 0 and any constants δ, γ > 0, if n is sufficiently large then
there exist (k, n) encoding and decoding functions with k ≤ n(1 − p − δ) such that the
probability that the receiver fails to obtain the correct message is at most γ for every
possible k-bit input message.
chapter eleven
The Monte Carlo Method
The Monte Carlo method refers to a collection of tools for estimating values through
sampling and simulation. Monte Carlo techniques are used extensively in almost all
areas of physical sciences and engineering. In this chapter, we first present the basic
idea of estimating a value through sampling, using a simple experiment that gives an
estimate of the value of the constant π. Estimating through sampling is often more complex
than this simple example suggests. We demonstrate the potential difficulties that
can arise in devising an efficient sampling procedure by considering how to appropriately
sample in order to estimate the number of satisfying assignments of a disjunctive
normal form (DNF) Boolean formula.
We then move to more general considerations, demonstrating a general reduction
from almost uniform sampling to approximate counting of combinatorial objects. This
leads us to consider how to obtain almost uniform samples. One method is the Markov
chain Monte Carlo (MCMC) technique, introduced in the last section of this chapter.
11.1. The Monte Carlo Method

Consider the following approach for estimating the value of the constant π. Let (X, Y)
be a point chosen uniformly at random in a 2 × 2 square centered at the origin (0, 0).
This is equivalent to choosing X and Y independently from a uniform distribution on
[−1, 1]. The circle of radius 1 centered at (0, 0) lies inside this square and has area π.
If we let

Z = 1 if √(X² + Y²) ≤ 1, and Z = 0 otherwise,

then – because the point was chosen uniformly from the 2 × 2 square – the probability
that Z = 1 is exactly the ratio of the area of the circle to the area of the square. See
Figure 11.1. Hence

Pr(Z = 1) = π/4.
Figure 11.1: A point chosen uniformly at random in the square has probability π /4 of landing in the
circle.
Assume that we run this experiment m times (with X and Y chosen independently
among the runs), with Zi being the value of Z at the ith run. If W = Σ_{i=1}^{m} Zi, then

E[W] = E[Σ_{i=1}^{m} Zi] = Σ_{i=1}^{m} E[Zi] = mπ/4,

and hence W′ = (4/m)W is a natural estimate for π. Applying the Chernoff bound of
Eqn. (4.6), we compute

Pr(|W′ − π| ≥ επ) = Pr(|W − mπ/4| ≥ εmπ/4)
                  = Pr(|W − E[W]| ≥ εE[W])
                  ≤ 2e^{−mπε²/12}.
Therefore, by using a sufficiently large number of samples we can obtain, with high
probability, as tight an approximation of π as we wish.
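(The experiment is simple to carry out; here is a minimal Python sketch, with names ours.)

    import random

    def estimate_pi(m):
        """Sample m uniform points from [-1,1]^2; return 4 * (fraction in circle)."""
        inside = 0
        for _ in range(m):
            x = 2 * random.random() - 1
            y = 2 * random.random() - 1
            if x * x + y * y <= 1:
                inside += 1
        return 4 * inside / m

    print(estimate_pi(10 ** 6))   # typically within a few thousandths of pi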
This method for approximating π is an example of a more general class of approximation
algorithms that we now characterize.
Definition 11.1: A randomized algorithm gives an (ε, δ)-approximation for the value
V if the output X of the algorithm satisfies

Pr(|X − V| ≤ εV) ≥ 1 − δ.

Our method for estimating π gives an (ε, δ)-approximation, as long as ε < 1 and we
choose m large enough to make

2e^{−mπε²/12} ≤ δ.

Algebraic manipulation yields that choosing

m ≥ (12 ln(2/δ))/(πε²)

is sufficient.
We may generalize the idea behind our technique for estimating π to provide a
relation between the number of samples and the quality of the approximation. We use
the following simple application of the Chernoff bound throughout this chapter.
Xi = p(Y1 , . . . , Yk ).
We can then use the Xi to estimate the expected future price E[p(Y1 , . . . , Yk )] with the
Monte Carlo method. That is, by simulating the possible future outcomes of the Yi many
times, we can estimate the desired expectation.
Here c(F) ≤ |U|, since an assignment can satisfy more than one clause and thus appear
in more than one pair in U.
To estimate c(F), we define a subset S of U with size c(F). We construct this set
by selecting, for each satisfying assignment of F, exactly one pair in U that has this
assignment; specifically, we can use the pair with the smallest clause index number,
giving

S = {(i, a) | 1 ≤ i ≤ t, a ∈ SC_i, a ∉ SC_j for j < i}.
Since we know the size of U, we can estimate the size of S by estimating the ratio
|S|/|U|. We can estimate this ratio efficiently if we sample uniformly at random from
U using our previous approach, choosing pairs uniformly at random from U and counting
how often they are in S. We can avoid the problem we encountered when simply
sampling assignments at random, because S is relatively dense in U. Specifically, since
each assignment can satisfy at most t different clauses, |S|/|U| ≥ 1/t.
The only question left is how to sample uniformly from U. Suppose that we first
choose the first coordinate, i. Because the ith clause has |SC_i| satisfying assignments, we
should choose i with probability proportional to |SC_i|. Specifically, we should choose
i with probability

|SC_i| / Σ_{i=1}^{t} |SC_i| = |SC_i|/|U|.

We then can choose a satisfying assignment uniformly at random from SC_i. This is easy
to do; we choose the value True or False independently and uniformly at random for
each literal not in clause i. Then the probability that we choose the pair (i, a) is

(|SC_i|/|U|) · (1/|SC_i|) = 1/|U|,

so each pair in U is chosen with the same probability.
Proof: Step 2(a) of the algorithm chooses an element of U uniformly at random. The
probability that this element belongs to S is at least 1/t. Fix any ε > 0 and δ > 0,
and let

m = ⌈(3t/ε²) ln(2/δ)⌉.

Then m is polynomial in t, 1/ε, and ln(1/δ), and the processing time of each sample is
polynomial in t. By Theorem 11.1, with this number of samples, X/m gives an (ε, δ)-approximation
of c(F)/|U| and hence Y gives an (ε, δ)-approximation of c(F).
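(A minimal Python sketch of this sampling scheme follows; the representation and names are ours. It assumes each clause is a list of (variable, sign) literals with no variable repeated within a clause, so that |SC_i| = 2^{n − (number of literals in clause i)}.)

    import random

    def dnf_count(clauses, n, m):
        """Estimate c(F) for a DNF formula with t clauses over n variables."""
        sizes = [2 ** (n - len(c)) for c in clauses]   # |SC_i|
        total = sum(sizes)                             # |U|
        hits = 0
        for _ in range(m):
            # Choose clause i with probability |SC_i| / |U|.
            i = random.choices(range(len(clauses)), weights=sizes)[0]
            # Choose a uniform satisfying assignment a of clause i.
            a = [random.random() < 0.5 for _ in range(n)]
            for var, positive in clauses[i]:
                a[var] = positive
            # (i, a) is in S iff no earlier clause is satisfied by a.
            if not any(all(a[v] == s for v, s in clauses[j]) for j in range(i)):
                hits += 1
        return total * hits / m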
We let Ω(Gi) denote the set of independent sets in Gi. The number of independent
sets in G can then be expressed as

|Ω(G)| = (|Ω(Gm)|/|Ω(Gm−1)|) × (|Ω(Gm−1)|/|Ω(Gm−2)|) × · · · × (|Ω(G1)|/|Ω(G0)|) × |Ω(G0)|.

Since G0 has no edges, every subset of V is an independent set and |Ω(G0)| = 2^n. In
order to estimate |Ω(G)|, we need good estimates for the ratios

r_i = |Ω(Gi)|/|Ω(Gi−1)|,  i = 1, . . . , m.

More formally, we will develop estimates r̃i for the ratios ri, and then our estimate for
the number of independent sets in G will be

2^n Π_{i=1}^{m} r̃i.
Pr(|R − 1| ≤ ε) ≥ 1 − δ.
Estimating ri :
holds for all i with probability at least 1 − δ. When these bounds hold for all i, we can
combine them to obtain

1 − ε ≤ (1 − ε/2m)^m ≤ Π_{i=1}^{m} (r̃i/ri) ≤ (1 + ε/2m)^m ≤ 1 + ε,
Hence all we need is a method for obtaining an (ε/2m, δ/m)-approximation for the
ri . We estimate each of these ratios by a Monte Carlo algorithm that uses the FPAUS
for sampling independent sets. To estimate ri , we sample independent sets in Gi−1 and
compute the fraction of these sets that are also independent sets in Gi , as described in
Algorithm 11.3. The constants in the procedure were chosen to facilitate the proof of
Lemma 11.4.
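(In code, the estimation step is just a sampling loop; the sketch below, with names ours, assumes sampler() returns an almost uniform independent set of G_{i−1} as a Python set.)

    def estimate_ratio(sampler, edge, M):
        """Estimate r_i: the fraction of samples still independent in G_i."""
        u, v = edge                       # the edge added to form G_i
        hits = 0
        for _ in range(M):
            s = sampler()
            if not (u in s and v in s):   # s avoids the new edge
                hits += 1
        return hits / M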
Lemma 11.4: When m ≥ 1 and 0 < ε ≤ 1, the procedure for estimating ri yields an
(ε/2m, δ/m)-approximation for ri .
Proof: We first show that ri is not too small, avoiding the problem that we found in
Section 11.2.1. Suppose that Gi−1 and Gi differ in that edge (u, v) is present in Gi but
not in Gi−1. An independent set in Gi is also an independent set in Gi−1, so

Ω(Gi) ⊆ Ω(Gi−1).

An independent set in Ω(Gi−1) \ Ω(Gi) contains both u and v. To bound the size of
the set Ω(Gi−1) \ Ω(Gi), we associate each I ∈ Ω(Gi−1) \ Ω(Gi) with an independent
set I \ {v} ∈ Ω(Gi). In this mapping an independent set I′ ∈ Ω(Gi) is associated with
no more than one independent set I′ ∪ {v} ∈ Ω(Gi−1) \ Ω(Gi), and thus |Ω(Gi−1) \
Ω(Gi)| ≤ |Ω(Gi)|. It follows that

ri = |Ω(Gi)|/|Ω(Gi−1)| = |Ω(Gi)|/(|Ω(Gi)| + |Ω(Gi−1) \ Ω(Gi)|) ≥ 1/2.
Now consider our M samples, and let Xk = 1 if the kth sample is in Ω(Gi) and 0
otherwise. Because our samples are generated by an (ε/6m)-uniform sampler, by Definition
11.3 each Xk must satisfy

|Pr(Xk = 1) − |Ω(Gi)|/|Ω(Gi−1)|| ≤ ε/6m.

Since the Xk are indicator random variables, it follows that

|E[Xk] − |Ω(Gi)|/|Ω(Gi−1)|| ≤ ε/6m

and further, by linearity of expectations,

|E[(1/M) Σ_{k=1}^{M} Xk] − |Ω(Gi)|/|Ω(Gi−1)|| ≤ ε/6m.

We therefore have

|E[r̃i] − ri| = |E[(1/M) Σ_{k=1}^{M} Xk] − |Ω(Gi)|/|Ω(Gi−1)|| ≤ ε/6m.
We now complete the lemma by combining (a) the fact just shown that E[r̃i] is close
to ri and (b) the fact that r̃i will be close to E[r̃i] for a sufficiently large number of
samples. Using ri ≥ 1/2, we have

E[r̃i] ≥ ri − ε/6m ≥ 1/2 − ε/6m ≥ 1/3.

Applying Theorem 11.1 yields that, if the number of samples M satisfies

M ≥ (3 ln(2m/δ))/((ε/12m)²(1/3)) = 1296 m² ε^{−2} ln(2m/δ),

then

Pr(|r̃i/E[r̃i] − 1| ≥ ε/12m) = Pr(|r̃i − E[r̃i]| ≥ (ε/12m) E[r̃i]) ≤ δ/m.
Equivalently, with probability 1 − δ/m,

1 − ε/12m ≤ r̃i/E[r̃i] ≤ 1 + ε/12m.    (11.1)

As |E[r̃i] − ri| ≤ ε/6m, we have that

1 − ε/(6m ri) ≤ E[r̃i]/ri ≤ 1 + ε/(6m ri).

Using that ri ≥ 1/2 then yields

1 − ε/3m ≤ E[r̃i]/ri ≤ 1 + ε/3m.    (11.2)
Combining Eqns. (11.1) and (11.2), it follows that, with probability 1 − δ/m,

1 − ε/2m ≤ (1 − ε/3m)(1 − ε/12m) ≤ r̃i/ri ≤ (1 + ε/3m)(1 + ε/12m) ≤ 1 + ε/2m.

This gives the desired (ε/2m, δ/m)-approximation.
The number of samples M is polynomial in m, 1/ε, and ln δ^{−1}, and the time for each sample
is polynomial in the size of the graph and ln ε^{−1}. We therefore have the following
theorem.
Theorem 11.5: Given a fully polynomial almost uniform sampler (FPAUS) for inde-
pendent sets in any graph, we can construct a fully polynomial randomized approxi-
mation scheme (FPRAS) for the number of independent sets in a graph G.
In fact, this theorem is more often used in the following form.
Theorem 11.6: Given a fully polynomial almost uniform sampler (FPAUS) for independent
sets in any graph with maximum degree at most Δ, we can construct a fully
polynomial randomized approximation scheme (FPRAS) for the number of independent
sets in a graph G with maximum degree at most Δ.
This version of the theorem follows from our previous argument, since our graphs Gi
are subgraphs of the initial graph G. Hence, if we start with a graph of maximum degree
at most Δ, then our FPAUS need only work on graphs with maximum degree at most
Δ. In the next chapter, we will see how to create an FPAUS for graphs with maximum
degree 4.
This technique can be applied to a broad range of combinatorial counting problems.
For example, in Chapter 12 we consider its application to finding proper colorings of
a graph G. The only requirement is that we can construct a sequence of refinements of
the problem, starting with an instance that is easy to count (the number of independent
sets in a graph with no edges, in our example) and ending with the actual counting
problem, and such that the ratio between the counts in successive instances is at most
polynomial in the size of the problem.
11.4. The Markov Chain Monte Carlo Method

The Monte Carlo method is based on sampling. It is often difficult to generate a random
sample with the required probability distribution. For example, we saw in the previous
section that we can count the number of independent sets in a graph if we can generate
an almost uniform sample from the set of independent sets. But how can we generate
an almost uniform sample?
The Markov chain Monte Carlo (MCMC) method provides a very general approach
to sampling from a desired probability distribution. The basic idea is to define an
ergodic Markov chain whose set of states is the sample space and whose stationary distribution
is the required sampling distribution. Let X0, X1, . . . , Xn be a run of the chain.
The Markov chain converges to the stationary distribution from any starting state X0
and so, after a sufficiently large number of steps r, the distribution of the state Xr will be
close to the stationary distribution, so it can be used as a sample. Similarly, repeating
this argument with Xr as the starting point, we can use X2r as a sample, and so on. We
can therefore use the sequence of states Xr, X2r, X3r, . . . as almost independent samples
from the stationary distribution of the Markov chain. The efficiency of this approach
depends on (a) how large r must be to ensure a suitably good sample and (b) how much
computation is required for each step of the Markov chain. In this section, we focus on
finding efficient Markov chains with the appropriate stationary distribution and ignore
the issue of how large r needs to be. Coupling, which is one method for determining
the relationship between the value of r and the quality of the sample, is discussed in
the next chapter.
In the simplest case, the goal is to construct a Markov chain with a stationary distribution
that is uniform over the state space Ω. The first step is to design a set of moves
that ensures the state space is irreducible under the Markov chain. Let us call the set of
states reachable in one step from a state x (but excluding x) the neighbors of x, denoted
by N(x). We adopt the restriction that if y ∈ N(x) then also x ∈ N(y). Generally N(x)
will be a small set, so that performing each move is simple computationally.
We again use the setting of independent sets in a graph G = (V, E ) as an example.
The state space is all of the independent sets of G. A natural neighborhood framework
is to say that states x and y, which are independent sets, are neighbors if they differ
in just one vertex. That is, x can be obtained from y by adding or deleting just one
vertex. This neighbor relationship guarantees that the state space is irreducible, since all
independent sets can reach (respectively, can be reached from) the empty independent
set by a sequence of vertex deletions (respectively, vertex additions).
Once the neighborhoods are established, we need to establish transition probabil-
ities. One natural approach to try would be performing a random walk on the graph
of the state space. This might not lead to a uniform distribution, however. We saw in
Theorem 7.13 that, in the stationary distribution of a random walk, the probability of
a vertex is proportional to the degree of the vertex. Nothing in our previous discussion
requires all states to have the same number of neighbors, which is equivalent to all
vertices in the graph of the state space having the same degree.
The following lemma shows that, if we modify the random walk by giving each
vertex an appropriate self-loop probability, then we can obtain a uniform stationary
distribution.
Lemma 11.7: For a finite state space Ω and neighborhood structure {N(x) | x ∈ Ω},
let N = max_{x∈Ω} |N(x)|. Let M be any number such that M ≥ N. Consider a Markov
chain where

P_{x,y} = 1/M            if x ≠ y and y ∈ N(x),
P_{x,y} = 0              if x ≠ y and y ∉ N(x),
P_{x,y} = 1 − |N(x)|/M   if x = y.

If this chain is irreducible and aperiodic, then the stationary distribution is the uniform
distribution.
Proof: We show that the chain is time reversible and then apply Theorem 7.10. For
any x ≠ y, if πx = πy then

πx P_{x,y} = πy P_{y,x},

since P_{x,y} = P_{y,x} = 1/M. It follows that the uniform distribution πx = 1/|Ω| is the
stationary distribution.
Consider now the following simple Markov chain, whose states are independent sets
in a graph G = (V, E).
1. X0 is an arbitrary independent set in G.
2. To compute Xi+1:
(a) choose a vertex v uniformly at random from V;
(b) if v ∈ Xi, set Xi+1 = Xi \ {v};
(c) if v ∉ Xi and if adding v to Xi still gives an independent set, set Xi+1 = Xi ∪ {v};
(d) otherwise, set Xi+1 = Xi.
This chain has the property that the neighbors of a state Xi are all independent sets
that differ from Xi in just one vertex. Since every state can reach and is reachable
from the empty set, the chain is irreducible. Assuming that G has at least one edge
(u, v), then the state {v} has a self-loop (P_{v,v} > 0), and the chain is aperiodic. Further,
when y ≠ x, it follows that P_{x,y} = 1/|V| or 0. Lemma 11.7 therefore applies, and the
stationary distribution is the uniform distribution.
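(One step of this chain is easy to implement; the Python sketch below, with names ours, represents an independent set as a Python set and the graph as a dictionary from each vertex to the set of its neighbors.)

    import random

    def chain_step(current, graph):
        """One move of the uniform independent-set chain described above."""
        v = random.choice(list(graph))          # choose a vertex uniformly
        if v in current:
            return current - {v}                # deleting keeps independence
        if not (graph[v] & current):            # no neighbor of v is occupied
            return current | {v}
        return current                          # otherwise, self-loop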
Lemma 11.8: For a finite state space Ω and neighborhood structure {N(x) | x ∈ Ω},
let N = max_{x∈Ω} |N(x)|. Let M be any number such that M ≥ N. For all x ∈ Ω, let πx >
0 be the desired probability of state x in the stationary distribution. Consider a Markov
chain where

P_{x,y} = (1/M) min(1, πy/πx)   if x ≠ y and y ∈ N(x),
P_{x,y} = 0                     if x ≠ y and y ∉ N(x),
P_{x,y} = 1 − Σ_{y≠x} P_{x,y}   if x = y.

Then, if this chain is irreducible and aperiodic, the stationary distribution is given by
the probabilities πx.
Proof: As in the proof of Lemma 11.7, we show that the chain is time reversible and
apply Theorem 7.10. For any x ≠ y, if πx ≤ πy then P_{x,y} = 1/M and P_{y,x} =
(1/M)(πx/πy). It follows that πx P_{x,y} = πy P_{y,x}. Similarly, if πx > πy then P_{x,y} =
(1/M)(πy/πx) and P_{y,x} = 1/M, and it follows that πx P_{x,y} = πy P_{y,x}. By Theorem 7.10,
the stationary distribution is given by the values πx.
As an example of how to apply Lemma 11.8, let us consider how to modify our previ-
ous Markov chains on independent sets. Let us suppose that now we want to create a
Markov chain where, in the stationary distribution, each independent set I has probabil-
ity proportional to λ^|I| for some constant parameter λ > 0. That is, πx = λ^{|I_x|}/B, where
I_x is the independent set corresponding to state x and where B = Σ_{x∈Ω} λ^{|I_x|}. When λ = 1,
this is the uniform distribution; when λ > 1, larger independent sets have a larger prob-
ability than smaller independent sets; and when λ < 1, larger independent sets have a
smaller probability than smaller independent sets.
Consider now the following variation on the previous Markov chain for independent
sets in a graph G = (V, E ).
1. X0 is an arbitrary independent set in G.
2. To compute Xi+1 :
(a) choose a vertex v uniformly at random from V ;
(b) if v ∈ Xi , set Xi+1 = Xi \ {v} with probability min(1, 1/λ);
(c) if v ∉ Xi and if adding v to Xi still gives an independent set, then put Xi+1 =
Xi ∪ {v} with probability min(1, λ);
(d) otherwise, set Xi+1 = Xi .
We now follow a two-step approach. We first propose a move by choosing a vertex v
to add or delete, where each vertex is chosen with probability 1/M; here M = |V |. This
proposal is then accepted with probability min(1, πy /πx ), where x is the current state
and y is the proposed state to which the chain will move. Here, πy /πx is λ if the chain
attempts to add a vertex and is 1/λ if the chain attempts to delete a vertex. This two-
step approach is the hallmark of the Metropolis algorithm: each neighbor is selected
with probability 1/M, and then it is accepted with probability min(1, πy /πx ). Using
this two-step approach, we naturally obtain that the transition probability Px,y is
$$P_{x,y} = \frac{1}{M} \min\left(1, \frac{\pi_y}{\pi_x}\right),$$

so Lemma 11.8 applies.
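To make the two-step structure concrete, here is a minimal Python sketch of this Metropolis sampler for independent sets weighted by λ^|I|. It is an illustration under our own choices (an adjacency-set dictionary for the graph, and the function name), not code from the text.

```python
import random

def metropolis_independent_set(adj, steps, lam=1.0, seed=None):
    """Sample an independent set of the graph adj (a dict mapping each
    vertex to the set of its neighbors), approximately according to the
    distribution pi(I) proportional to lam ** |I|."""
    rng = random.Random(seed)
    vertices = list(adj)
    current = set()  # X_0: the empty set is always independent
    for _ in range(steps):
        v = rng.choice(vertices)   # propose v with probability 1/M, M = |V|
        if v in current:
            # proposed move deletes v; the ratio pi_y / pi_x is 1/lam
            if rng.random() < min(1.0, 1.0 / lam):
                current.remove(v)
        elif not (adj[v] & current):
            # proposed move adds v (still independent); the ratio is lam
            if rng.random() < min(1.0, lam):
                current.add(v)
        # otherwise the proposal is invalid and X_{i+1} = X_i
    return current

# Example: a 4-cycle; with lam > 1, larger independent sets are favored.
cycle = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
print(metropolis_independent_set(cycle, steps=1000, lam=2.0, seed=1))
```

Note that, just as the text emphasizes, the code never computes the normalizing constant B; only the ratios λ and 1/λ appear.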
It is important that, in designing this Markov chain, we never needed to know B =
Σ_{x∈Ω} λ^{|I_x|}. A graph with n vertices can have exponentially many independent sets, and
calculating this sum directly would be too expensive computationally for many graphs.
Our Markov chain gives the correct stationary distribution by using the ratios πy /πx ,
which are much easier to deal with.
11.5. Exercises
Exercise 11.2: Another method for approximating π using Monte Carlo tech-
niques is based on Buffon’s needle experiment. Research and explain Buffon’s nee-
dle experiment, and further explain how it can be used to obtain an approximation
for π.
Exercise 11.3: Show that the following alternative definition is equivalent to the defi-
nition of an FPRAS given in the chapter: A fully polynomial randomized approximation
scheme (FPRAS) for a problem is a randomized algorithm for which, given an input x
and any parameter ε with 0 < ε < 1, the algorithm outputs an (ε, 1/4)-approximation
in time that is polynomial in 1/ε and the size of the input x. (Hint: To boost the prob-
ability of success from 3/4 to 1 − δ, consider the median of several independent runs
of the algorithm. Why is the median a better choice than the mean?)
Exercise 11.4: Suppose we have a class of instances of the DNF satisfiability prob-
lem, each with α(n) satisfying truth assignments for some polynomial α. Suppose we
apply the naïve approach of sampling assignments and checking whether they sat-
isfy the formula. Show that, after sampling 2^{n/2} assignments, the probability of find-
ing even a single satisfying assignment for a given instance is exponentially small
in n.
We have available a procedure that can, in one step, choose an element uniformly at
random from a set Si . Also, given an element x ∈ U, we can determine the number of
sets Si for which x ∈ Si . We call this number c(x).
Define pi to be

$$p_i = \frac{|S_i|}{\sum_{j=1}^{m} |S_j|}.$$
The jth trial consists of the following steps. We choose a set S j , where the probability of
each set Si being chosen is pi , and then we choose an element x j uniformly at random
from S j . In each trial the random choices are independent of all other trials. After t
312
11.5 exercises
Argue that this Markov chain has a uniform stationary distribution whenever
Σ_{i=1}^{n} a_i > b. Be sure to argue that the chain is irreducible and aperiodic.
(c) Argue that, if we have an FPAUS for the knapsack problem, then we can derive
an FPRAS for the problem. To set the problem up properly, assume without loss
of generality that a1 ≤ a2 ≤ · · · ≤ an. Let b0 = 0 and b_i = Σ_{j=1}^{i} a_j. Let Ω(b_i) be
the set of vectors (x1, x2, . . . , xn) ∈ {0, 1}^n that satisfy Σ_{i=1}^{n} a_i x_i ≤ b_i. Let k be the
smallest integer such that b_k ≥ b. Consider the equation

$$|\Omega(b)| = \frac{|\Omega(b)|}{|\Omega(b_{k-1})|} \times \frac{|\Omega(b_{k-1})|}{|\Omega(b_{k-2})|} \times \cdots \times \frac{|\Omega(b_1)|}{|\Omega(b_0)|} \times |\Omega(b_0)|.$$

You will need to argue that |Ω(b_{i−1})|/|Ω(b_i)| is not too small. Specifically, argue
that |Ω(b_i)| ≤ (n + 1)|Ω(b_{i−1})|.
Show that an ε-uniform sample under this definition yields an ε-uniform sample as
given in Definition 11.3.
Exercise 11.9: Recall the Bubblesort algorithm of Exercise 2.22. Suppose we have n
cards labeled 1 through n. The order of the cards X can be the state of a Markov chain.
Let f (X ) be the number of Bubblesort moves necessary to put the cards in increasing
sorted order. Design a Markov chain based on the Metropolis algorithm such that, in
the stationary distribution, the probability of an order X is proportional to λ f (X ) for a
given constant λ > 0. Pairs of states of the chain are connected if they correspond to
pairs of orderings that can be obtained by interchanging at most two adjacent cards.
314
11.6 an exploratory assignment on minimum spanning trees
Exercise 11.13: Suppose we have a program that takes as input a number x on the
real interval [0, 1] and outputs f(x) for some bounded function f taking on values in
the range [1, b]. We want to estimate

$$\int_{x=0}^{1} f(x)\,dx.$$

Assume that we have a random number generator that can generate independent uni-
form random variables X1, X2, . . . . Show that

$$\frac{\sum_{i=1}^{m} f(X_i)}{m}$$

provides an (ε, δ)-approximation to the integral when m is sufficiently large, and
determine an explicit bound on m in terms of b, ε, and δ.
11.6. An Exploratory Assignment on Minimum Spanning Trees

Consider a complete, undirected graph with n(n − 1)/2 edges. Each edge has a weight,
which is a real number chosen independently and uniformly at random from [0, 1].
• If you chose to throw away edges, how did you determine k(n), and how effective
was this approach?
• Can you give a rough explanation for your results? (The limiting behavior as n grows
large can be proven rigorously, but it is very difficult; you need not attempt to prove
any exact result.)
• Did you have any interesting experiences with the random number generator? Do
you trust it?
chapter twelve∗
Coupling of Markov Chains
In our study of discrete time Markov chains in Chapter 7, we found that ergodic Markov
chains converge to a stationary distribution. However, we did not determine how quickly
they converge, which is important in a number of algorithmic applications, such as
sampling using the Markov chain Monte Carlo technique. In this chapter, we introduce
the concept of coupling, a powerful method for bounding the rate of convergence of
Markov chains. We demonstrate the coupling method in several applications, including
card-shufling problems, random walks, and Markov chain Monte Carlo sampling of
independent sets and vertex coloring.
Consider the following method for shuffling n cards. At each step, a card is chosen
independently and uniformly at random and put on the top of the deck. We can think
of the shuffling process as a Markov chain, where the state is the current order of the
cards. You can check that the Markov chain is finite, irreducible, and aperiodic, and
hence it has a stationary distribution.
Let x be a state of the chain, and let N(x) be the set of states that can reach x in
one step. Here |N(x)| = n, since the top card in x could have been in n different places
in the previous step. If πy is the probability associated with state y in the stationary
distribution, then for any state x we have

$$\pi_x = \frac{1}{n} \sum_{y \in N(x)} \pi_y.$$

The uniform distribution satisfies these equations, and hence the unique stationary dis-
tribution is uniform over all possible permutations.
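A minimal simulation sketch of this move-to-top chain (representing the deck as a Python list, with position 0 as the top, is our own choice, not from the text):

```python
import random

def move_to_top_step(deck, rng=random):
    """One step of the chain: choose a card uniformly and move it to the top."""
    card = rng.choice(deck)
    deck.remove(card)
    deck.insert(0, card)

deck = list(range(1, 53))   # a 52-card deck, initially in sorted order
for _ in range(500):        # run the chain for a few hundred steps
    move_to_top_step(deck)
print(deck[:5])             # the top of the (approximately shuffled) deck
```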
We know that the stationary distribution is the limiting distribution of the Markov
chain as the number of steps grows to infinity. If we could run the chain “forever”,
then in the limit we would obtain a state that was uniformly distributed. In practice,
Figure 12.1: Example of variation distance. The areas shaded by upward diagonal lines correspond
to values x where D1 (x) < D2 (x); the areas shaded by downward diagonal lines correspond to values
x where D1 (x) > D2 (x). The total area shaded by upward diagonal lines must equal the total area
shaded by downward diagonal lines, and the variation distance equals one of these two areas.
we run the chain for a finite number of steps. If we want to use this Markov chain to
shuffle the deck, how many steps are necessary before we obtain a shuffle that is close
to uniformly distributed?
To quantify what we mean by “close to uniform”, we must introduce a distance
measure.
Definition 12.1: The variation distance between two distributions D1 and D2 on a
countable state space S is given by

$$\|D_1 - D_2\| = \frac{1}{2} \sum_{x \in S} |D_1(x) - D_2(x)|.$$
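A direct translation of this definition into code (a small sketch; representing distributions as dictionaries mapping state to probability is our own choice):

```python
def variation_distance(d1, d2):
    """Variation distance between two distributions given as dicts;
    states missing from a dict are taken to have probability 0."""
    states = set(d1) | set(d2)
    return 0.5 * sum(abs(d1.get(x, 0.0) - d2.get(x, 0.0)) for x in states)

# Example: a fair coin versus a biased coin.
print(variation_distance({'H': 0.5, 'T': 0.5}, {'H': 0.8, 'T': 0.2}))  # 0.3
```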
Lemma 12.1: For any A ⊆ S, let Di(A) = Σ_{x∈A} Di(x) for i = 1, 2. Then

$$\|D_1 - D_2\| = \max_{A \subseteq S} |D_1(A) - D_2(A)|.$$

A careful examination of Figure 12.1 helps make the proof of this lemma transparent.
Proof: Let S+ ⊆ S be the set of states such that D1 (x) ≥ D2 (x), and let S− ⊆ S be the
set of states such that D2 (x) > D1 (x).
Clearly,

$$\max_{A \subseteq S} \big(D_1(A) - D_2(A)\big) = D_1(S^+) - D_2(S^+)$$

and

$$\max_{A \subseteq S} \big(D_2(A) - D_1(A)\big) = D_2(S^-) - D_1(S^-).$$
Finally, since

$$|D_1(S^+) - D_2(S^+)| + |D_1(S^-) - D_2(S^-)| = \sum_{x \in S} |D_1(x) - D_2(x)| = 2\|D_1 - D_2\|,$$

we have

$$\max_{A \subseteq S} |D_1(A) - D_2(A)| = \|D_1 - D_2\|,$$

completing the proof.
As an application of Lemma 12.1, suppose that we run our shufling Markov chain until
the variation distance between the distribution of the chain and the uniform distribution
is less than ε. This is a strong notion of close to uniform, because every permutation
of the cards must have probability at most 1/n! + ε. In fact the bound on the variation
distance gives an even stronger statement: For any subset A ⊆ S, the probability that the
final permutation is from the set A is at most π(A) + ε. For example, suppose someone
is trying to make the top card in the deck an ace. If the variation distance from the
distribution to the uniform distribution is less than ε, we can safely say that the probability
that an ace is the first card in the deck is at most ε greater than if we had a perfect
shuffle.
As another example, suppose we take a 52-card deck and shuffle all the cards – but
leave the ace of spades on top. In this case, the variation distance between the resulting
distribution D1 and the uniform distribution D2 could be bounded by considering the
set B of states where the ace of spades is on the top of the deck:
$$\|D_1 - D_2\| = \max_{A \subseteq S} |D_1(A) - D_2(A)| \geq |D_1(B) - D_2(B)| = 1 - \frac{1}{52} = \frac{51}{52}.$$
The definition of variation distance coincides with the definition of an ε-uniform
sample (given in Definition 11.3). A sampling algorithm returns an ε-uniform sample
on Ω if and only if the variation distance between its output distribution D and the
uniform distribution U satisfies

$$\|D - U\| \leq \varepsilon.$$
Bounding the variation distance between the uniform distribution and the distribution
of the state of a Markov chain after some number of steps can therefore be a useful
way of proving the existence of efficient ε-uniform samplers, which (as we showed in
Chapter 11) can in turn lead to efficient approximate counting algorithms.
We now consider how to bound this variation distance after t steps. In what follows,
we assume that the Markov chains under consideration are ergodic discrete space and
discrete time chains with well-defined stationary distributions. The following defini-
tions will be useful.
Definition 12.2: Let π̄ be the stationary distribution of an ergodic Markov chain with
state space S. Let p_x^t represent the distribution of the state of the chain starting at state
x after t steps. We define

$$\Delta_x(t) = \|p_x^t - \bar{\pi}\|; \qquad \Delta(t) = \max_{x \in S} \Delta_x(t).$$

That is, Δx(t) is the variation distance between the stationary distribution and p_x^t, and
Δ(t) is the maximum of these values over all states x.
We also define

$$\tau_x(\varepsilon) = \min\{t : \Delta_x(t) \leq \varepsilon\}; \qquad \tau(\varepsilon) = \max_{x \in S} \tau_x(\varepsilon).$$

That is, τx(ε) is the first step t at which the variation distance between p_x^t and the
stationary distribution is at most ε, and τ(ε) is the maximum of these values over all
states x.
When τ (ε) is considered as a function of ε, it is generally called the mixing time of the
Markov chain. A chain is called rapidly mixing if τ (ε) is polynomial in log(1/ε) and
the size of the problem. The size of the problem depends on the context; in the shuffling
example, the size would be the number of cards.
12.2. Coupling
Coupling of Markov chains is a general technique for bounding the mixing time of a
Markov chain.
Definition 12.3: A coupling of a Markov chain Mt with state space S is a Markov
chain Zt = (Xt, Yt) on the state space S × S such that:
Pr(Xt+1 = x′ | Zt = (x, y)) = Pr(Mt+1 = x′ | Mt = x);
Pr(Yt+1 = y′ | Zt = (x, y)) = Pr(Mt+1 = y′ | Mt = y).
That is, a coupling consists of two copies of the Markov chain M running simultane-
ously. These two copies are not literal copies; the two chains are not necessarily in the
same state, nor do they necessarily make the same move. Instead, we mean that each
copy behaves exactly like the original Markov chain in terms of its transition probabil-
ities. One obvious way to obtain a coupling is simply to take two independent runs of
the Markov chain. As we shall see, such a coupling is generally not very useful for our
purposes.
Instead, we are interested in couplings that (a) bring the two copies of the chain to
the same state and then (b) keep them in the same state by having the two chains make
identical moves once they are in the same state. When the two copies of the chain reach
the same state, they are said to have coupled. The following lemma motivates why we
seek couplings that couple.
Lemma 12.2 [Coupling Lemma]: Let Zt = (Xt, Yt) be a coupling for a Markov chain
M on a state space S. Suppose that there exists a T such that, for every x, y ∈ S,

$$\Pr(X_T \neq Y_T \mid X_0 = x, Y_0 = y) \leq \varepsilon.$$
Then
τ (ε) ≤ T.
That is, for any initial state, the variation distance between the distribution of the state
of the chain after T steps and the stationary distribution is at most ε.
Proof: Consider the coupling when Y0 is chosen according to the stationary distribution
and X0 takes on any arbitrary value. For the given T and ε and for any A ⊆ S,
$$\begin{aligned}
\Pr(X_T \in A) &\geq \Pr((X_T = Y_T) \cap (Y_T \in A)) \\
&= 1 - \Pr((X_T \neq Y_T) \cup (Y_T \notin A)) \\
&\geq (1 - \Pr(Y_T \notin A)) - \Pr(X_T \neq Y_T) \\
&\geq \Pr(Y_T \in A) - \varepsilon \\
&= \pi(A) - \varepsilon.
\end{aligned}$$

Here the third line follows from the union bound. For the fourth line, we used the fact
that Pr(XT ≠ YT) ≤ ε for any initial states X0 and Y0; in particular, this holds when
Y0 is chosen according to the stationary distribution. For the last line, we used that
Pr(YT ∈ A) = π(A), since YT is also distributed according to the stationary distribu-
tion. The same argument for the set S − A shows that Pr(XT ∉ A) ≥ π(S − A) − ε, or
Pr(XT ∈ A) ≤ π(A) + ε.
It follows that

$$\max_{x, A} |p_x^T(A) - \pi(A)| \leq \varepsilon,$$
so by Lemma 12.1 the variation distance from the stationary distribution after the chain
runs for T steps is bounded above by ε.
For the shuffling chain, consider the coupling in which, at each step, the same card
is chosen in both copies of the chain and moved to the top of the deck in each. Each
copy viewed on its own is again a valid copy of the original chain, because in both
chains the probability a specific card is moved to the top at each step is 1/n. With this
coupling, it is easy to see by induction that, once a card C
is moved to the top, it is always in the same position in both copies of the chain. Hence,
the two copies are sure to become coupled once every card has been moved to the top
at least once.
Now our coupling problem for the shuffling Markov chain looks like a coupon col-
lector’s problem; to bound the number of steps until the chains couple, we simply bound
how many times cards must be chosen uniformly at random before every card is cho-
sen at least once. We know that when the Markov chain runs for n ln n + cn steps,
the probability that a specific card has not been moved to the top at least once is at
most

$$\left(1 - \frac{1}{n}\right)^{n \ln n + cn} \leq e^{-(\ln n + c)} = \frac{e^{-c}}{n},$$
and thus (by the union bound) the probability that any card has not been moved to
the top at least once is at most e−c . Hence, after only n ln n + n ln(1/ε) = n ln(n/ε)
steps, the probability that the chains have not coupled is at most ε. The coupling
lemma allows us to conclude that the variation distance between the uniform distribu-
tion and the distribution of the state of the chain after n ln(n/ε) steps is bounded above
by ε.
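The coupon-collector behavior underlying this bound is easy to check empirically. A small simulation sketch (our own illustration, not from the text):

```python
import math
import random

def steps_until_all_chosen(n, rng=random):
    """Steps of the move-to-top chain until every card has been chosen once."""
    seen, steps = set(), 0
    while len(seen) < n:
        seen.add(rng.randrange(n))  # the card chosen at this step
        steps += 1
    return steps

n = 52
avg = sum(steps_until_all_chosen(n) for _ in range(1000)) / 1000
print(avg, n * math.log(n))  # the average is n ln n plus lower-order terms
```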
argument, the probability is less than ε that after n ln(nε−1 ) steps the chains have not
coupled, and hence by the coupling lemma the mixing time satisfies
τ (ε) ≤ n ln(nε−1 ).
12.3. Application: Variation Distance Is Nonincreasing

We know that an ergodic Markov chain eventually converges to its stationary distribu-
tion. In fact, the variation distance between the state of a Markov chain and its stationary
distribution is nonincreasing in time. To show this, we start with an interesting lemma
that gives another useful property of the variation distance.
Lemma 12.3: Given distributions σX and σY on a state space S, let Z = (X, Y) be a
random variable on S × S, where X is distributed according to σX and Y is distributed
according to σY. Then

$$\Pr(X \neq Y) \geq \|\sigma_X - \sigma_Y\|, \qquad (12.1)$$

and there exists a joint distribution Z = (X, Y) for which equality holds.

Proof: For each state x we have Pr(X = Y = x) ≤ min(Pr(X = x), Pr(Y = x)), so

$$\Pr(X = Y) \leq \sum_{x \in S} \min(\Pr(X = x), \Pr(Y = x)),$$

and therefore

$$\begin{aligned}
\Pr(X \neq Y) &\geq 1 - \sum_{x \in S} \min(\Pr(X = x), \Pr(Y = x)) \\
&= \sum_{x \in S} \big(\Pr(X = x) - \min(\Pr(X = x), \Pr(Y = x))\big). \qquad (12.2)
\end{aligned}$$

But Pr(X = x) − min(Pr(X = x), Pr(Y = x)) = 0 when σX(x) < σY(x), and when
σX(x) ≥ σY(x) it is

$$\Pr(X = x) - \Pr(Y = x) = \sigma_X(x) - \sigma_Y(x).$$

If we let S+ be the set of all states for which σX(x) ≥ σY(x), then the right-hand side of
Eqn. (12.2) is equal to σX(S+) − σY(S+), which equals ‖σX − σY‖ from the argument
in Lemma 12.1. This gives the first part of the lemma.
Equality holds in Eqn. (12.1) if we take a joint distribution where X = Y as much as
possible. Specifically, let m(x) = min(Pr(X = x), Pr(Y = x)). If Σ_x m(x) = 1, then X
and Y have the same distribution and we are done. Otherwise, let Z = (X, Y) be defined
by

$$\Pr(X = x, Y = y) = \begin{cases} m(x) & \text{if } x = y; \\[4pt] \dfrac{(\sigma_X(x) - m(x))(\sigma_Y(y) - m(y))}{1 - \sum_z m(z)} & \text{otherwise.} \end{cases}$$

The idea behind this choice of Z is to first match X and Y as much as possible and then
force X and Y to behave independently if they do not match.
It remains to show that, for this choice of Z, Pr(X = x) = σX(x); the same argument
will hold for Pr(Y = y). If m(x) = σX(x) then Pr(X = x, Y = x) = m(x) and Pr(X =
x, Y = y) = 0 when x ≠ y, so Pr(X = x) = σX(x). If m(x) = σY(x), then

$$\begin{aligned}
\Pr(X = x) &= \sum_{y} \Pr(X = x, Y = y) \\
&= m(x) + \sum_{y \neq x} \frac{(\sigma_X(x) - m(x))(\sigma_Y(y) - m(y))}{1 - \sum_z m(z)} \\
&= m(x) + \frac{(\sigma_X(x) - m(x)) \sum_{y \neq x} (\sigma_Y(y) - m(y))}{1 - \sum_z m(z)} \\
&= m(x) + \frac{(\sigma_X(x) - m(x)) \left(1 - \sigma_Y(x) - \left(\sum_z m(z) - m(x)\right)\right)}{1 - \sum_z m(z)} \\
&= m(x) + (\sigma_X(x) - m(x)) \\
&= \sigma_X(x),
\end{aligned}$$
Recall that Δ(t) = max_x Δx(t), where Δx(t) is the variation distance between the sta-
tionary distribution and the distribution of the state of the Markov chain after t steps
when starting at state x. Using Lemma 12.3, we can prove that Δ(t) is nonincreasing
over time.
Proof: Let x be any given state, and let y be a state chosen from the stationary distri-
bution. Then

$$\Delta_x(T) = \|p_x^T - p_y^T\|.$$

By Lemma 12.3, there is a joint distribution (XT, YT), with XT distributed according
to p_x^T and YT according to p_y^T, such that Pr(XT ≠ YT) = ‖p_x^T − p_y^T‖. Extend this to a
coupling that makes the same move in both chains whenever they are in the same state
(a one-step coupling). Then

$$\begin{aligned}
\Delta_x(T) &= \Pr(X_T \neq Y_T) \\
&\geq \Pr(X_{T+1} \neq Y_{T+1}) \\
&\geq \|p_x^{T+1} - p_y^{T+1}\| \\
&= \Delta_x(T + 1).
\end{aligned}$$
The second line follows from the first because the one-step coupling assures XT+1 =
YT +1 whenever XT = YT . The result follows since the foregoing relations hold for every
state x.
12.4. Geometric Convergence

The following general result, derived from a trivial coupling, is useful for bounding the
mixing time of some Markov chains.
Theorem 12.5: Let P be the transition matrix for a finite, irreducible, aperiodic
Markov chain. Let mj be the smallest entry in the jth column of the matrix, and let
m = Σ_j mj. Then, for all x and t,

$$\|p_x^t - \bar{\pi}\| \leq (1 - m)^t.$$
Proof: If the minimum entry in column j is m j , then in one step the chain reaches state
j with probability at least m j from every state. Hence we can design a coupling where
the two copies of the chain both move to state j together with probability at least m j in
every step. Since this holds for all j, at each step the two chains can be made to couple
with probability at least m. Hence the probability they have not coupled after t steps
is at most (1 − m)^t, yielding the theorem via the coupling lemma.
Theorem 12.5 is not immediately helpful if there is a zero entry in each column, in
which case m = 0. In Exercise 12.6, we consider how to make it useful for any finite,
irreducible, aperiodic Markov chain. Theorem 12.5 shows that, under very general con-
ditions, Markov chains converge quickly to their stationary distributions, with the vari-
ation distance converging geometrically in the number of steps.
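As a small illustration of Theorem 12.5 (a sketch using a toy transition matrix of our own choosing):

```python
import numpy as np

# A toy 3-state transition matrix (rows sum to 1). Every column has a
# nonzero minimum entry, so the theorem gives a nontrivial bound.
P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.3, 0.3, 0.4]])

m = P.min(axis=0).sum()  # sum over columns of the smallest column entry
for t in (1, 5, 10, 20):
    # after t steps, the variation distance is at most (1 - m)**t
    print(t, (1 - m) ** t)
```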
A more general related result is the following. Suppose that we can obtain an upper
bound on τ(c) for some constant c < 1/2. For example, such a bound might be found
by a coupling. This is sufficient to bootstrap a bound for τ(ε) for any ε > 0.
Theorem 12.6: Let P be the transition matrix for a finite, irreducible, aperiodic
Markov chain Mt with τ(c) ≤ T for some c < 1/2. Then, for this Markov chain,
τ(ε) ≤ ⌈ln ε/ ln(2c)⌉T.
Proof: Consider any two initial states X0 = x and Y0 = y. By the definition of τ(c),
we have ‖p_x^T − π̄‖ ≤ c and ‖p_y^T − π̄‖ ≤ c. It follows that ‖p_x^T − p_y^T‖ ≤ 2c and hence,
by Lemma 12.3, there exists a random variable ZT,x,y = (XT, YT) with XT distributed
according to p_x^T and YT distributed according to p_y^T such that Pr(XT ≠ YT) ≤ 2c.
Now consider the Markov chain Mt′ given by the transition matrix PT , which corre-
sponds to a chain that takes T steps of Mt for each of its steps; the ZT,x,y give a coupling
for this new chain. That is, given two copies of the chain Mt′ in the paired state (x, y),
we can let the next paired state be given by the distribution ZT,x,y , which guarantees that
the probability the two states have not coupled in one step is at most 2c. The probability
that this coupling of the chain Mt′ has not coupled over k steps is then at most (2c)k by
induction. By the coupling lemma, Mt′ is within variation distance ε of its stationary
distribution after k steps whenever (2c)^k ≤ ε.
It follows that, after at most ⌈ln ε/ ln(2c)⌉ steps, Mt′ is within variation distance ε of its
stationary distribution. But Mt′ and Mt have the same stationary distribution, and each
step of Mt′ corresponds to T steps of Mt . Therefore,
$$\tau(\varepsilon) \leq \left\lceil \frac{\ln \varepsilon}{\ln(2c)} \right\rceil T$$
for the Markov chain Mt .
12.5. Application: Approximately Sampling Proper Colorings

A vertex coloring of a graph gives each vertex v a color from a set C, which we can
assume without loss of generality is the set {1, 2, . . . , c}. In a proper coloring, the two
endpoints of every edge are colored by two different colors. Any graph with maximum
degree Δ can be colored properly with Δ + 1 colors by the following procedure: choose
an arbitrary ordering of the vertices, and color them one at a time, labeling each vertex
with a color not already used by any of its neighbors.
Here we are interested in sampling almost uniformly at random a proper coloring
of a graph. We present a Markov chain Monte Carlo (MCMC) process that generates
such a sample and then use a coupling technique to show that it is rapidly mixing.
In the terminology of Chapter 11, this gives an FPAUS for proper colorings. Apply-
ing the general reduction from approximate counting to almost uniform sampling, as
in Theorem 11.5, we can use the FPAUS for sampling proper colorings to obtain an
FPRAS for the number of proper colorings. The details of this reduction are left as part
of Exercise 12.15.
To begin, we present a straightforward coupling that allows us to approximately
sample colorings efficiently when there are c > 4Δ + 1 colors. We then show how to
improve the coupling to reduce the number of colors necessary to 2Δ + 1.
Our Markov chain on proper colorings is the simplest one possible. At each step,
choose a vertex v uniformly at random and a color ℓ uniformly at random. Recolor
vertex v with color ℓ if the new coloring is proper (that is, v does not have a neighbor
colored ℓ), and otherwise let the state of the chain be unchanged. This finite Markov
chain is aperiodic because it has nonzero probability of staying in the same state. When
c ≥ Δ + 2, it is also irreducible. To see how from any state X we can reach any other
state Y, consider an arbitrary ordering of the vertices. Recolor the vertices in X to match
Y in this order. If there is a conflict at any step, it must arise because a vertex v that needs
to be colored is blocked by some other vertex v′ later in the ordering. But v′ can be
recolored to some other nonconflicting color, since c ≥ Δ + 2, allowing the process to
continue. Hence, when c ≥ Δ + 2, the Markov chain has a stationary distribution. The
fact that this stationary distribution is uniform over all proper colorings can be verified
by applying Lemma 11.7.
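One step of this chain is only a few lines of code. A minimal Python sketch (the adjacency-dictionary representation and names are our own illustration, not from the text):

```python
import random

def coloring_chain_step(adj, coloring, c, rng=random):
    """Choose a vertex and a color uniformly at random, and recolor the
    vertex only if the result is still a proper coloring."""
    v = rng.choice(list(adj))
    color = rng.randrange(c)  # colors are 0, 1, ..., c - 1 here
    if all(coloring[u] != color for u in adj[v]):
        coloring = dict(coloring)  # leave the caller's coloring unchanged
        coloring[v] = color
    return coloring
```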
When there are 4Δ + 1 colors, we use a trivial coupling on the pair of chains (Xt, Yt):
choose the same vertex and color on both chains at each step.
Theorem 12.7: For any graph with n vertices and maximum degree Δ, the mixing
time of the graph-coloring Markov chain satisfies

$$\tau(\varepsilon) \leq \left\lceil \frac{nc}{c - 4\Delta} \ln \frac{n}{\varepsilon} \right\rceil,$$

provided that c ≥ 4Δ + 1.
Proof: Let Dt be the set of vertices that have different colors in the two chains at time
t, and let dt = |Dt |. At each step in which dt > 0, either dt remains at the same value
or dt increases or decreases by at most 1. We show that dt is actually more likely to
decrease than increase; then we use this fact to bound the probability that dt is nonzero
for sufficiently large t.
Consider any vertex v that is colored differently in the two chains. Since the degree
of v is at most Δ, there are at least c − 2Δ colors that do not appear on the neighbors
of v in either of the two chains. If the vertex is recolored to one of these c − 2Δ colors,
it will have the same color in both chains. Hence

$$\Pr(d_{t+1} = d_t - 1 \mid d_t > 0) \geq \frac{d_t}{n} \cdot \frac{c - 2\Delta}{c}.$$
Now consider any vertex v that is colored the same in both chains. For v to be colored
differently at the next step, it must have some neighbor w that is differently colored in
the two chains; in that case, it is possible that trying to recolor v using a color that the
neighbor w has in one of the two chains will recolor the vertex v in one chain but not the
other. Every vertex colored differently in the two chains can affect at most Δ neighbors
in this way. Hence, when dt > 0,

$$\Pr(d_{t+1} = d_t + 1 \mid d_t > 0) \leq \frac{d_t}{n} \cdot \frac{2\Delta}{c}.$$
We find that

$$\begin{aligned}
E[d_{t+1} \mid d_t] &= d_t + \Pr(d_{t+1} = d_t + 1) - \Pr(d_{t+1} = d_t - 1) \\
&\leq d_t + \frac{d_t}{n} \cdot \frac{2\Delta}{c} - \frac{d_t}{n} \cdot \frac{c - 2\Delta}{c} \\
&= d_t \left(1 - \frac{c - 4\Delta}{nc}\right),
\end{aligned}$$

which also holds if dt = 0.
Using the conditional expectation equality, we have

$$E[d_{t+1}] = E[E[d_{t+1} \mid d_t]] \leq E[d_t] \left(1 - \frac{c - 4\Delta}{nc}\right).$$

By induction, we find

$$E[d_t] \leq d_0 \left(1 - \frac{c - 4\Delta}{nc}\right)^t.$$

Since d0 ≤ n and dt is a nonnegative integer,

$$\Pr(d_t \geq 1) \leq E[d_t] \leq n\left(1 - \frac{c - 4\Delta}{nc}\right)^t \leq ne^{-t(c - 4\Delta)/(nc)},$$

and hence the variation distance is at most ε after t = ⌈(nc/(c − 4Δ)) ln(n/ε)⌉ steps.
Assuming that each step of the Markov chain can be accomplished efficiently in time
that is polynomial in n, Theorem 12.7 gives an FPAUS for proper colorings.
Theorem 12.7 is rather wasteful. For example, when bounding the probability that
dt decreases, we used the loose bound c − 2Δ. The number of colors that decrease dt
could be much higher if some of the vertices around v have the same color in both
chains. By being a bit more careful and slightly more clever with the coupling, we can
improve Theorem 12.7 to hold for any c ≥ 2Δ + 1.
Theorem 12.8: Given an n-vertex graph with maximum degree Δ, the mixing time of
the graph-coloring Markov chain satisfies

$$\tau(\varepsilon) \leq \left\lceil \frac{nc}{c - 2\Delta} \ln \frac{n}{\varepsilon} \right\rceil,$$

provided that c ≥ 2Δ + 1.
Proof: As before, let Dt be the set of vertices that have different colors in the two
chains at time t, with |Dt | = dt . Let At be the set of vertices that have the same color in
the two chains at time t. For a vertex v in At , let d ′ (v) be the number of vertices adjacent
to v that are in Dt ; similarly, for a vertex w in Dt , let d ′ (w) be the number of vertices
adjacent to w that are in At. Note that

$$\sum_{v \in A_t} d'(v) = \sum_{w \in D_t} d'(w),$$

since the two sums both count the number of edges connecting vertices in At to vertices
in Dt. Denote this summation by m′.
Consider the following coupling: if a vertex v ∈ Dt is chosen to be recolored, we
simply choose the same color in both chains. That is, when v is in Dt, we are using
the same coupling we used before. The vertex v will have the same color whenever the
color chosen is different from any color on any of the neighbors of v in both copies of
the chain. There are c − 2Δ + d′(v) such colors; notice that this is a tighter bound than
we used in the proof of Theorem 12.7. Hence the probability that dt+1 = dt − 1 when
dt > 0 is at least

$$\frac{1}{n} \sum_{v \in D_t} \frac{c - 2\Delta + d'(v)}{c} = \frac{1}{cn}\left((c - 2\Delta)d_t + m'\right).$$
Assume now that the vertex to be recolored is v ∈ At . In this case we change the cou-
pling slightly. Recall that, in the previous coupling, recoloring a vertex v ∈ At results in
v becoming differently colored in the two chains if the randomly chosen color appears
Figure 12.2: (a), original coupling; (b), improved coupling. In the original coupling of part (a), the
gray vertex has the same color in both chains and has a neighbor with different colors in the two
chains, one black and one white. If an attempt is made to recolor the gray vertex black, then the move
will succeed in one chain but not the other, increasing dt . Similarly, if an attempt is made to recolor
the gray vertex white, then the move will succeed in one chain but not the other, giving a second move
that increases dt . In the improved coupling of part (b), if the gray vertex is recolored white in Xt then
the gray vertex is recolored black in Yt and vice versa, giving just one move that increases dt .
on a neighbor of v in one chain but not the other. For example: if v is colored green, and
a neighbor w is colored red in one chain and blue in the other, and no other neighbor of
v is colored red or blue in either chain, then attempting to color v either red or blue will
cause v to be recolored in one chain but not the other. Hence there are two potential
choices for v’s color that increase dt .
In this specific case where just one vertex w neighboring v has different colors in the
two chains, we could improve the coupling as follows: when we try to recolor v blue in
the first chain, we try to recolor it red in the second chain; and when we try to recolor it
red in the first chain, we try to recolor it blue in the second chain. Now v either changes
color in both chains or stays the same in both chains. By changing the coupling, we
have collapsed two potentially bad moves that increase dt into just one bad move. See
Figure 12.2 for an example.
More generally, if there are d ′ (v) differently colored vertices around v then we can
couple the colors so that at most d ′ (v) color choices cause dt to increase, instead of up
to 2d ′ (v) choices in the original coupling. Concretely, let S1 (v) be the set of colors on
neighbors of v in the first chain but not the second, and similarly let S2(v) be the set
of colors on neighbors of v in the second chain but not the first. Couple pairs of colors
c1 ∈ S1 (v) and c2 ∈ S2 (v) as much as possible, so that when c1 is chosen in one chain
c2 is chosen in the other. Then the total number of ways to color v that increases dt is
at most max(|S1 (v)|, |S2 (v)|) ≤ d ′ (v).
As a result, the probability that dt+1 = dt + 1 when dt > 0 is at most

$$\frac{1}{n} \sum_{v \in A_t} \frac{d'(v)}{c} = \frac{m'}{cn}.$$
It follows that

$$E[d_{t+1} \mid d_t] \leq d_t + \frac{m'}{cn} - \frac{(c - 2\Delta)d_t + m'}{cn} = d_t\left(1 - \frac{c - 2\Delta}{nc}\right),$$

and so, by induction, since d0 ≤ n,

$$\Pr(d_t \geq 1) \leq E[d_t] \leq n\left(1 - \frac{c - 2\Delta}{nc}\right)^t \leq ne^{-t(c-2\Delta)/(nc)},$$
and the variation distance is at most ε after

$$t = \left\lceil \frac{nc}{c - 2\Delta} \ln \frac{n}{\varepsilon} \right\rceil$$

steps.
Hence we can use the Markov chain for proper colorings to give us an FPAUS whenever
c > 2Δ.
12.6. Path Coupling

In Section 11.3 we showed that, if we can obtain an FPAUS for independent sets for
graphs of degree at most Δ, then we can approximately count the number of inde-
pendent sets in such graphs. Here we present a Markov chain on independent sets,
together with a coupling argument, to prove that the chain gives such an FPAUS when
Δ ≤ 4. The coupling argument uses a further technique, path coupling. We demon-
strate this technique specifically for the Markov chain sampling independent sets in a
graph, although with appropriate definitions the approach can be generalized to other
problems.
Interestingly, it is very difficult to prove that the simple Markov chain for sampling
independent sets given in Section 11.4, which removes or attempts to add a random
vertex to the current independent set at each step, mixes quickly. Instead, we consider
here a different Markov chain that simplifies the analysis. We assume without loss of
generality that the graph consists of a single connected component. At each step, the
Markov chain chooses an edge (u, v) in the graph uniformly at random. If Xt is the
independent set at time t, then the move proceeds as follows (a short code sketch
follows the list).
• With probability 1/3, set Xt+1 = Xt − {u, v}. (This move removes u and v, if they are
in the set.)
• With probability 1/3, let Y = (Xt − {u}) ∪ {v}. If Y is an independent set, then Xt+1 =
Y; otherwise, Xt+1 = Xt. (This move tries to remove u if it is in the set and then add
v.)
• With probability 1/3, let Y = (Xt − {v}) ∪ {u}. If Y is an independent set, then Xt+1 =
Y; otherwise, Xt+1 = Xt. (This move tries to remove v if it is in the set and then add
u.)
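Here is a minimal Python sketch of one step of this edge-based chain, assuming the graph is given as a list of edges together with an adjacency dictionary (our own representation, not from the text):

```python
import random

def edge_chain_step(edges, adj, X, rng=random):
    """Pick a uniform edge (u, v) and apply one of the three moves,
    each with probability 1/3; X is the current independent set."""
    u, v = rng.choice(edges)
    move = rng.randrange(3)
    if move == 0:
        return X - {u, v}  # remove u and v, if they are in the set
    drop, keep = (u, v) if move == 1 else (v, u)
    Y = (X - {drop}) | {keep}
    if not (adj[keep] & Y):  # accept only if Y is still independent
        return Y
    return X
```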
It is easy to verify that the chain has a stationary distribution that is uniform on all
independent sets. We now use the path coupling argument to bound the mixing time of
the chain.
The idea of path coupling is to start with a coupling for pairs of states (Xt , Yt ) that
differ in just one vertex. This coupling is then extended to a general coupling over all
pairs of states. When it applies, path coupling is very powerful, because it is often much
easier to analyze the situation where the two states differ in a small way (here, in just
one vertex) than to analyze all possible pairs of states.
Consider a graph G = (V, E ). We say that a vertex is bad if it is an element of Xt or
Yt but not both; otherwise, the vertex is good. Let dt = |Xt − Yt | + |Yt − Xt |, so that dt
counts the number of bad vertices. Assume that Xt and Yt differ in exactly one vertex
(i.e., dt = 1). We apply a simple coupling, performing the same move in both states,
and show that under this coupling E[dt+1 | dt ] ≤ dt when dt = 1 or, equivalently, that
E[dt+1 − dt | dt = 1] ≤ 0.
Without loss of generality, let Xt = I and Yt = I ∪ {x}. A change in dt can occur only
when a move involves a neighbor of x. Thus, in analyzing this coupling, we can restrict
our discussion to moves in which the chosen random edge is adjacent to a neighbor of
x. Let δz = 1 if the vertex z ≠ x goes from good to bad between step t and step t + 1.
Similarly, let δx = −1 if the vertex x goes from bad to good between step t and step
t + 1. By linearity of expectations,

$$E[d_{t+1} - d_t \mid d_t = 1] = E\left[\sum_w \delta_w \,\Big|\, d_t = 1\right] = \sum_w E[\delta_w \mid d_t = 1].$$
As we shall see, in the summation we need only consider those w that are equal to x, a
neighbor of x, or a neighbor of a neighbor of x, since these are the only vertices that can
change from good to bad or bad to good in one step of the chain. We shall demonstrate
how to balance the moves in such a way that it becomes clear that E[dt+1 − dt | dt =
1] ≤ 0 as long as Δ ≤ 4.
Assume that x has k neighbors, and let y be one of these neighbors. For each vertex
y that is a neighbor of x, we consider all of the moves that choose an edge adjacent to
y. The subsequent analysis makes use of the restriction Δ ≤ 4. There are three cases,
as shown in Figure 12.3.
1. Suppose that y has two or more neighbors in the independent set I = Xt . Then no
move that involves y can increase the number of bad vertices, and hence dt+1 cannot
be larger than dt as a result of any such move.
2. Suppose that y has no neighbors in I. Then dt can increase by 1 if the edge (y, zi )
(where 1 ≤ i ≤ 3) is chosen and an attempt is made to add y and remove zi . These
moves are successful on Xt but not on Yt, and hence δy = 1 with probability at most
3 · 1/(3|E|) = 1/|E|. No other move involving y increases dt.
The possible gain from δy is balanced by moves that decrease δx . Any of the three
possible moves on the edge (x, y) match the vertex x, so that δx = −1, and no other
bad vertices are created. Hence δx = −1 with probability at least 1/|E|. We see that
Figure 12.3: Three cases for the independent set Markov chain. Vertices colored black are in both
independent sets of the coupling. Vertex x is colored gray, to represent that it is a member of the
independent set of one chain in the coupling but not the other.
the total effect of all of these moves on Σ_w E[δw | dt = 1] is

$$1 \cdot \frac{1}{|E|} - 1 \cdot \frac{1}{|E|} = 0,$$
so that the moves from this case do not increase E[dt+1 − dt | dt = 1].
3. Suppose that y has one neighbor in I. If the edge (x, y) is chosen, then two moves
can give δx = −1: the move that removes both x and y, or the move that removes y
and adds x. The third move, which tries to add y and remove x, fails in both chains
because y has a neighbor in I. Hence δx = −1 with probability at least (2/3)(1/|E|).
Let z be the neighbor of y in I. Both y and z can become bad in one step if the
edge (y, z) is chosen and an attempt is made to add y and remove z. This move is
successful on Xt but not on Yt , causing dt to increase by 2 since δy and δz both equal 1.
No other move increases dt. Hence the probability that the number of bad vertices
is increased in this case is 1/(3|E|), and the increase is by 2. Again, the total effect of
all of these moves on Σ_w E[δw | dt = 1] is

$$2 \cdot \frac{1}{3|E|} - 1 \cdot \frac{2}{3} \cdot \frac{1}{|E|} = 0,$$
so that the moves from this case do not increase E[dt+1 − dt | dt = 1].
The case analysis shows that if we consider moves that involve a specific neighbor y,
they balance so that every move that increases dt+1 − dt is matched by corresponding
moves that decrease dt+1 − dt. Summing over all vertices, we can conclude that

$$E[d_{t+1} - d_t \mid d_t = 1] = E\left[\sum_w \delta_w \,\Big|\, d_t = 1\right] = \sum_w E[\delta_w \mid d_t = 1] \leq 0.$$
We now use an appropriate coupling to argue that E[dt+1 | dt ] ≤ dt for any pair of
states (Xt , Yt ). The statement is trivial if dt = 0, and we have just shown it to be true
if dt = 1. If dt > 1, then create a chain of states Z0 , Z1 , . . . , Zdt as follows: Z0 = Xt ,
and each successive Zi is obtained from Zi−1 by either removing a vertex from Xt − Yt
or adding a vertex from Yt − Xt. This can be done, for example, by first removing all
vertices in Xt − Yt one by one and then adding vertices from Yt − Xt one by one. Our
coupling now arises as follows. When a move is made in Xt = Z0, the coupling for the
case when dt = 1 gives a corresponding move for the state Z1. This move in Z1 can
similarly be coupled with a move in state Z2, and so on, until the move in Zdt−1 yields
a move for Zdt = Yt. Let Z′i be the state after the move is made from state Zi, and let

$$d(Z'_{i-1}, Z'_i) = |Z'_{i-1} - Z'_i| + |Z'_i - Z'_{i-1}|.$$

Note that Z′0 = Xt+1 and Z′dt = Yt+1. We have shown that E[dt+1 − dt | dt = 1] ≤ 0, so
we can conclude that

$$E[d(Z'_{i-1}, Z'_i)] \leq 1;$$
that is, because the two states Zi−1 and Zi differ in just one vertex, the expected number
of vertices in which they differ after one step is at most 1. Using the triangle inequality
for sets,

$$|A - B| \leq |A - C| + |C - B|,$$

we obtain

$$|X_{t+1} - Y_{t+1}| + |Y_{t+1} - X_{t+1}| \leq \sum_{i=1}^{d_t} \left(|Z'_{i-1} - Z'_i| + |Z'_i - Z'_{i-1}|\right)$$

or

$$d_{t+1} = |X_{t+1} - Y_{t+1}| + |Y_{t+1} - X_{t+1}| \leq \sum_{i=1}^{d_t} d(Z'_{i-1}, Z'_i).$$
Hence,

$$E[d_{t+1} \mid d_t] \leq E\left[\sum_{i=1}^{d_t} d(Z'_{i-1}, Z'_i)\right] = \sum_{i=1}^{d_t} E[d(Z'_{i-1}, Z'_i)] \leq d_t.$$
In our earlier coupling arguments we had the stronger condition

$$E[d_{t+1} \mid d_t] \leq \beta d_t$$

for some β < 1, and we used this strict inequality to bound the mixing time. How-
ever, the weaker condition E[dt+1 | dt] ≤ dt that we have here is sufficient for rapid
mixing, as we shall see in Exercise 12.7. Thus, the Markov chain gives an FPAUS for
independent sets in graphs when the maximum degree is at most 4; as we showed in
Section 11.3, this can be used to obtain an FPRAS for this problem.
12.7. Exercises
Exercise 12.1: Write a program that takes as input two positive integers n1 and n2 and
two real numbers p1 , p2 with 0 ≤ p1 , p2 ≤ 1. The output of your program should be
the variation distance between the binomial random variables B(n1 , p1 ) and B(n2 , p2 ),
rounded to the nearest thousandth. Use your program to compute the variation distance
between the following pairs of distributions: B(20, 0.5) and B(20, 0.49); B(20, 0.5) and
B(21, 0.5); and B(21, 0.5) and B(21, 0.49).
Exercise 12.2: Consider the Markov chain for shuffling n cards, where at each step
a card is chosen uniformly at random and moved to the top. Suppose that, instead of
running the chain for a fixed number of steps, we stop the chain at the first step where
every card has been moved to the top at least once. Show that, at this stopping time, the
state of the chain is uniformly distributed on the n! possible permutations of the cards.
Exercise 12.3: Consider the Markov chain for shuffling n cards, where at each step
a card is chosen uniformly at random and moved to the top. Show that, if the chain is
run for only (1 − ε)n ln n steps for some constant ε > 0, then the variation distance is
1 − o(1).
Exercise 12.4: (a) Consider the Markov chain given by the transition matrix
$$P = \begin{pmatrix}
1/2 & 0 & 1/2 & 0 & 0 \\
0 & 1/2 & 1/2 & 0 & 0 \\
1/4 & 1/4 & 0 & 1/4 & 1/4 \\
0 & 0 & 1/2 & 1/2 & 0 \\
0 & 0 & 1/2 & 0 & 1/2
\end{pmatrix}.$$
Explain why Theorem 12.5 is not useful when applied directly to P. Then apply The-
orem 12.5 to the Markov chain with transition matrix P2 , and explain the implications
for the convergence of the original Markov chain to its stationary distribution.
(b) Consider the Markov chain given by the transition matrix
$$P = \begin{pmatrix}
1/2 & 0 & 1/2 & 0 & 0 \\
0 & 1/2 & 1/2 & 0 & 0 \\
1/5 & 1/5 & 1/5 & 1/5 & 1/5 \\
0 & 0 & 1/2 & 1/2 & 0 \\
0 & 0 & 1/2 & 0 & 1/2
\end{pmatrix}.$$
Apply Theorem 12.5 to P. Then apply Theorem 12.5 to the Markov chain with transi-
tion matrix P2 , and explain the implications for the convergence of the original Markov
chain to its stationary distribution. Which application gives better bounds on the vari-
ation distance?
Exercise 12.5: Suppose I repeatedly roll a standard six-sided die and obtain a sequence
of independent random variables X1 , X2 , . . . , where Xi is the outcome of the ith roll.
336
12.7 exercises
Let

$$Y_j = \left(\sum_{i=1}^{j} X_i\right) \bmod 10$$

be the sum of the first j rolls considered modulo 10. The sequence Yj forms a Markov
chain. Determine its stationary distribution, and determine a bound on τ (ε) for this
chain. (Hint: One approach is to use the method of Exercise 12.4.)
Exercise 12.6: Theorem 12.5 is useful only if at least one column of the transition
matrix P of the Markov chain has all of its entries nonzero. Argue that for any finite,
aperiodic, irreducible Markov chain, there exists a time T such that every entry of PT
is nonzero. Explain how this can be used in conjunction with Theorem 12.5.
Exercise 12.7: Suppose we have a coupling (Xt, Yt) for a Markov chain, with a distance
function dt between the two coupled states that equals 0 only when the chains have
coupled, such that

$$E[d_{t+1} \mid d_t] \leq \beta d_t.$$
(a) Under this condition, give an upper bound for τ (ε) in terms of β and d ∗ , where d ∗
is the maximum distance over all possible pairs of initial states for the coupling.
(b) Suppose that instead we have
E[dt+1 | dt ] ≤ dt .
Exercise 12.11: Show that the Markov chain for sampling all independent sets of size
exactly k ≤ n/(3Δ + 3) in a graph with n nodes and maximum degree Δ, as defined
in Section 12.2.3, is ergodic and has a uniform stationary distribution.
Exercise 12.12: We wish to improve the coupling technique used in Section 12.2.3 in
order to obtain a better bound. The improvement here is related to the technique used
to prove Theorem 12.8. As with the coupling in Section 12.2.3, if an attempt is made to
move v ∈ Xt − Yt to a vertex w then the same attempt is made with the matched vertex
in the other chain. If, however, an attempt is made to move a vertex v ∈ Xt ∩ Yt in both
chains, we no longer attempt to make the same move.
(a) Assume there exists a set S1 of exactly dt(Δ + 1) distinct vertices that are members
of or neighbors of vertices in Xt − Yt and, likewise, a set S2 of exactly dt(Δ + 1)
distinct vertices that are members of or neighbors of vertices in Yt − Xt; assume
further that S2 and S1 are disjoint. Suppose that we match up the vertices in S1 and
S2 in a one-to-one fashion. Argue that the moves can be coupled so that, when one
chain attempts and fails to move v to a vertex in S1 in one chain, it also attempts
and fails to move v to the matching vertex in S2 in the other chain. Similarly, argue
that the moves can be coupled so that, when one chain attempts and succeeds in
moving v to a vertex in S1 in one chain, it also attempts and succeeds in moving v
to the matching vertex in S2 in the other chain. Show that the coupling gives
$$\Pr(d_{t+1} = d_t + 1) \leq \frac{k - d_t}{k} \cdot \frac{d_t(\Delta + 1)}{n}.$$
(b) In the general case, S1 and S2 are not necessarily disjoint or of equal size. Show
that in this case, by pairing up failing moves as much as possible, the number of
choices for w that can increase dt is max(|S1|, |S2|) ≤ dt(Δ + 1). Then argue that

$$\Pr(d_{t+1} = d_t + 1) \leq \frac{k - d_t}{k} \cdot \frac{d_t(\Delta + 1)}{n}$$
holds in all cases.
(c) Use this coupling to obtain a polynomial bound on τ(ε) that holds for any k ≤
n/(2Δ + 2).
Exercise 12.13: Consider a Markov chain with state space S and a stationary dis-
tribution π̄, and recall the definitions of p_x^t and Δ(t) from Definition 12.2. For any
nonnegative integer t let

$$\bar{\Delta}(t) = \max_{x,y \in S} \|p_x^t - p_y^t\|.$$

(a) Prove Δ̄(s + t) ≤ Δ̄(s)Δ̄(t) for any positive integers s and t.
(b) Prove Δ(s + t) ≤ Δ(s)Δ̄(t) for any positive integers s and t.
(c) Prove

$$\Delta(t) \leq \bar{\Delta}(t) \leq 2\Delta(t)$$

for any positive integer t.
Exercise 12.14: Consider the following variation on shuffling for a deck of n cards.
At each step, two specific cards are chosen uniformly at random from the deck, and
their positions are exchanged. (It is possible both choices give the same card, in which
case no change occurs.)
(a) Argue that the following is an equivalent process: at each step, a specific card is
chosen uniformly at random from the deck, and a position from [1, n] is chosen
uniformly at random; then the card at the chosen position exchanges positions with
the specific card chosen.
(b) Consider the coupling where the two choices of card and position are the same
for both copies of the chain. Let Xt be the number of cards whose positions are
different in the two copies of the chain. Show that Xt is nonincreasing over time.
(c) Show that

$$\Pr(X_{t+1} \leq X_t - 1 \mid X_t > 0) \geq \left(\frac{X_t}{n}\right)^2.$$
(d) Argue that the expected time until Xt is 0 is O(n2 ), regardless of the starting state
of the two chains.
Exercise 12.15: Modify the arguments of Lemma 11.3 and Lemma 11.4 to show that,
if we have an FPAUS for proper colorings for any c ≥ Δ + 2, then we also have an
FPRAS for this value of c.
Exercise 12.16: Consider the following simple Markov chain whose states are inde-
pendent sets in a graph G = (V, E). To compute Xi+1 from Xi:
• choose a vertex v uniformly at random from V, and flip a fair coin;
• if the flip is heads and v ∈ Xi, then Xi+1 = Xi \ {v};
• if the flip is heads and v ∉ Xi, then Xi+1 = Xi;
• if the flip is tails, v ∉ Xi, and adding v to Xi still gives an independent set, then Xi+1 =
Xi ∪ {v};
• if the flip is tails and v ∈ Xi, then Xi+1 = Xi.
(a) Show that the stationary distribution of this chain is uniform over all independent
sets.
(b) We consider this Markov chain specifically on cycles and line graphs. For a line
graph with n vertices, the vertices are labeled 1 to n, and there is an edge from 1 to
2, 2 to 3, …, n − 1 to n. A cycle graph on n vertices is the same with the addition
of an edge from n to 1.
Devise a coupling (Xt , Yt ) for this Markov chain such that, on line graphs and
cycle graphs, if dt = |Xt − Yt | + |Yt − Xt | is the number of vertices on which the
two independent sets disagree, then at each step the coupling is at least as likely to
reduce dt as to increase dt .
(c) With the coupling from part (b), argue that you can use this chain to obtain an
FPAUS for independent sets on a cycle graph or line graph. You may want to use
Exercise 12.7.
(d) For the special cases of line graphs and cycle graphs, we can derive exact formulas
for the number of independent sets. Derive exact formulas for these cases and prove
that your formulas are correct. (Hint: You may want to express your results in terms
of Fibonacci numbers.)
Exercise 12.17: For integers a and b, an a × b grid is a graph whose vertices are all
ordered pairs of integers (x, y) with 0 ≤ x < a and 0 ≤ y < b. The edges of the graph
connect all pairs of distinct vertices (x, y) and (x′ , y′ ) such that |x − x′ | + |y − y′ | =
1. That is, every vertex is connected to the neighbors up, down, left, and right of it,
where vertices on the boundary are connected to the relevant points only. Consider the
following problems on the graph given by the 10 × 10 grid.
(a) Implement an FPAUS to generate an ε-uniform proper 10-coloring of the graph,
where ε is given as an input. Discuss how many steps your Markov chain runs for,
what your starting state is, and any other relevant details.
(b) Using your FPAUS as a subroutine, implement an FPRAS to generate an (ε, δ)-
approximation to the number of proper 10-colorings of the graph. Test your code
by running it to obtain a (0.1, 0.001)-approximation. (Note: This may take a sig-
nificant amount of time to run.) Discuss the ordering you choose on the edges, how
many samples are required at each step, how many steps of the Markov chain you
perform in total throughout the process, and any other relevant details.
Exercise 12.19: In Section 12.5, we considered a simple Markov chain for coloring.
Suppose that we can apply the path coupling technique. (You do not need to show this.)
In this case, we can just consider the case where dt = 1. Give a simpler argument that,
when dt = 1 and c > 2Δ, E[dt+1 | dt] ≤ βdt for some β < 1. Also show that, when
dt = 1 and c = 2Δ, E[dt+1 | dt] ≤ dt.
chapter thirteen
Martingales
Martingales are sequences of random variables satisfying certain conditions that arise
in numerous applications, such as random walks and gambling problems. We focus
here on three useful analysis tools related to martingales: the martingale stopping theo-
rem, Wald’s equation, and the Azuma–Hoeffding inequality. The martingale stopping
theorem and Wald’s equation are important tools for computing the expectation of
compound stochastic processes. The Azuma–Hoeffding inequality is a powerful tech-
nique for deriving Chernoff-like tail bounds on the values of functions of dependent
random variables. We conclude this chapter with applications of the Azuma–Hoeffding
inequality to problems in pattern matching, balls and bins, and random graphs.
13.1. Martingales
Definition 13.1: A sequence of random variables Z0, Z1, . . . is a martingale with
respect to the sequence X0, X1, . . . if, for all n ≥ 0, the following conditions hold:
• Zn is a function of X0, X1, . . . , Xn;
• E[|Zn|] < ∞;
• E[Zn+1 | X0, . . . , Xn] = Zn.
A martingale can have a finite or a countably infinite number of elements. The indexing
of the martingale sequence does not need to start at 0. In fact, in many applications it
is more convenient to start it at 1. When we say that Z0 , Z1 , . . . is a martingale with
respect to X1 , X2 , . . . , then we may consider X0 to be a constant that is omitted.
For example, consider a gambler who plays a sequence of fair games. Let Xi be the
amount the gambler wins on the ith game (Xi is negative if the gambler loses), and let
Zi be the gambler’s total winnings at the end of the ith game. Because each game is
fair, E[Xi] = 0 and

$$E[Z_{i+1} \mid X_1, \ldots, X_i] = Z_i + E[X_{i+1}] = Z_i,$$

so the sequence Z1, Z2, . . . is a martingale with respect to X1, X2, . . . .
A Doob martingale arises by revealing, one piece at a time, the random choices
underlying some random variable. For example, let G be a random graph on a fixed set
of n vertices in which each of the m = n(n − 1)/2 possible edges appears independently
with some probability p, and let Xj indicate whether the jth potential edge (under an
arbitrary fixed ordering of the edge slots) appears in G.
Consider any finite-valued function F defined over graphs; for example, let F(G) be
the size of the largest independent set in G. Now let Z0 = E[F (G)] and
Zi = E[F (G) | X1 , . . . , Xi ], i = 1, . . . , m.
The sequence Z0 , Z1 , . . . , Zm is a Doob martingale that represents the conditional
expectations of F (G) as we reveal whether each edge is in the graph, one edge at a
time. This process of revealing edges gives a martingale that is commonly called the
edge exposure martingale.
Similarly, instead of revealing edges one at a time, we could reveal the set of edges
connected to a given vertex, one vertex at a time. Fix an arbitrary numbering of the
vertices 1 through n, and let Gi be the subgraph of G induced by the first i vertices.
Then, setting Z0 = E[F (G)] and
Zi = E[F (G) | G1 , . . . , Gi ], i = 1, . . . , n,
gives a Doob martingale that is commonly called the vertex exposure martingale.
1 More formally, in the discrete case we define a “filtration” F0, F1, . . . such that the distribution of Z0 is fully
defined by events in F0, and the joint distribution of Z0, . . . , Zn is fully defined by events in Fn. The event T = n
is a stopping time if it is an event in Fn.
Theorem 13.2 [Martingale Stopping Theorem]: Let Z0, Z1, . . . be a martingale with
respect to X1, X2, . . . , and let T be a stopping time for X1, X2, . . . . Then

$$E[Z_T] = E[Z_0]$$

whenever one of the following conditions holds:
• there is a constant c such that, for all i, |Zi| ≤ c;
• T is bounded;
• E[T] < ∞, and there is a constant c such that E[|Zi+1 − Zi| | X1, . . . , Xi] < c.
We use the martingale stopping theorem to derive a simple solution to the gambler’s
ruin problem introduced in Section 7.2.1. Consider a sequence of independent, fair
gambling games. In each round, a player wins a dollar with probability 1/2 or loses a
dollar with probability 1/2. Let Z0 = 0, let Xi be the amount won on the ith game, and
let Zi be the total won by the player after i games (again, Xi and Zi are negative if the
player loses money). Assume that the player quits the game when she either loses ℓ1
dollars or wins ℓ2 dollars. What is the probability that the player wins ℓ2 dollars before
losing ℓ1 dollars?
Let the time T be the first time the player has either won ℓ2 or lost ℓ1. Then T is a
stopping time for X1 , X2 , . . . . The sequence Z0 , Z1 , . . . is a martingale and, since the
values of the Zi are clearly bounded, we can apply the martingale stopping theorem.
We therefore have
E[ZT ] = 0.
Let q be the probability that the gambler quits playing after winning ℓ2 dollars. Then
E[ZT ] = ℓ2 q − ℓ1 (1 − q) = 0,
giving

$$q = \frac{\ell_1}{\ell_1 + \ell_2},$$
matching the result found in Section 7.2.1.
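As a quick sanity check on this formula (a simulation sketch, not part of the text), one can estimate q empirically and compare it with ℓ1/(ℓ1 + ℓ2):

```python
import random

def ruin_probability(l1, l2, trials=100000, seed=0):
    """Estimate the probability of winning l2 dollars before losing l1."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        z = 0
        while -l1 < z < l2:
            z += 1 if rng.random() < 0.5 else -1
        wins += (z == l2)
    return wins / trials

print(ruin_probability(3, 7), 3 / (3 + 7))  # both should be close to 0.3
```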
Therefore,

$$E[X_k \mid X_0, \ldots, X_{k-1}] = E\left[\frac{S_{n-k}}{n-k} \,\Big|\, S_n, \ldots, S_{n-k+1}\right] = \frac{S_{n-k+1}}{n-k+1} = X_{k-1},$$
Case 1: Candidate A leads throughout the count. In this case, all Sn−k (and therefore
all Xk ) are positive for 0 ≤ k ≤ n − 1, T = n − 1, and
XT = Xn−1 = S1 = 1.
That S1 = 1 follows because candidate A must receive the first vote in the count to
be ahead throughout the count.
Case 2: Candidate A does not lead throughout the count. In that case we claim that for
some k < n − 1, Xk = 0. Candidate A clearly has more votes at the end. If candidate
B ever leads, then there must be some intermediate point k where Sk (and therefore
Xk ) is 0. In this case, T = k < n − 1 and XT = 0.
Observe that

$$E[X_T] = \frac{a - b}{a + b} = 1 \cdot \Pr(\text{Case 1}) + 0 \cdot \Pr(\text{Case 2}),$$
and thus the probability of Case 1, in which candidate A leads throughout the count, is
(a − b)/(a + b).
13.3. Wald’s Equation

Theorem 13.3 [Wald’s Equation]: Let X1, X2, . . . be nonnegative, independent, iden-
tically distributed random variables with distribution X. Let T be a stopping time for
this sequence. If T and X have bounded expectation, then

$$E\left[\sum_{i=1}^{T} X_i\right] = E[T] \cdot E[X].$$

In fact, Wald’s equation holds more generally; there are different proofs of the equality
that do not require the random variables X1, X2, . . . to be nonnegative.
Proof: For i ≥ 1, let Zi = Σ_{j=1}^{i} (Xj − E[X]). The sequence Z1, Z2, . . . is a martingale
with respect to X1, X2, . . . , and T is a stopping time for this sequence, so the martingale
stopping theorem gives

$$E[Z_T] = E[Z_1] = 0.$$

We now find
$$\begin{aligned}
E[Z_T] &= E\left[\sum_{j=1}^{T}\left(X_j - E[X]\right)\right] \\
&= E\left[\left(\sum_{j=1}^{T} X_j\right) - T\,E[X]\right] \\
&= E\left[\sum_{j=1}^{T} X_j\right] - E[T] \cdot E[X] \\
&= 0,
\end{aligned}$$

and Wald’s equation follows.
As a simple example, consider a gambling game in which a player first rolls one stan-
dard die. If the outcome of the roll is X then she rolls X new standard dice and her gain
Z is the sum of the outcomes of the X dice. What is the expected gain of this game?
For 1 ≤ i ≤ X, let Yi be the outcome of the ith die in the second round. Then

$$E[Z] = E\left[\sum_{i=1}^{X} Y_i\right].$$
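Completing the calculation with Wald’s equation: X is a stopping time for the sequence Y1, Y2, . . . (it is determined before any of the Yi are rolled), and each Yi is an independent roll of a fair die, so

$$E[Z] = E[X] \cdot E[Y_1] = \frac{7}{2} \cdot \frac{7}{2} = \frac{49}{4} = 12.25.$$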
You may check that N is independent of the ri , and N is bounded in expectation; hence
N is a stopping time for the sequence of ri .
13.4. Tail Inequalities for Martingales
Perhaps the most useful property of martingales for the analysis of algorithms is that
Chernoff-like tail inequalities can apply, even when the underlying random variables
are not independent. The main results in this area are Azuma’s inequality and Hoeff-
ding’s inequality. They are quite similar, so they are often together referred to as the
Azuma–Hoeffding inequality.
Theorem 13.4 [Azuma–Hoeffding Inequality]: Let X0, . . . , Xn be a martingale such
that

$$|X_k - X_{k-1}| \leq c_k.$$

Then, for all t ≥ 1 and any λ > 0,

$$\Pr(|X_t - X_0| \geq \lambda) \leq 2e^{-\lambda^2/\left(2\sum_{k=1}^{t} c_k^2\right)}.$$
Proof: The proof follows the same format as that for Chernoff bounds (Section 4.2).
We first derive an upper bound for E[e^{α(Xt−X0)}]. Toward that end, we define
Yi = Xi − Xi−1 , i = 1, . . . , t.
Note that |Yi | ≤ ci and, since X0 , X1 , . . . is a martingale,
E[Yi | X0 , X1 , . . . , Xi−1 ] = E[Xi − Xi−1 | X0 , X1 , . . . , Xi−1 ]
= E[Xi | X0 , X1 , . . . , Xi−1 ] − Xi−1 = 0.
Now consider
E[eαYi | X0 , X1 , . . . , Xi−1 ].
Writing

$$Y_i = -c_i\,\frac{1 - Y_i/c_i}{2} + c_i\,\frac{1 + Y_i/c_i}{2}$$

and using the convexity of e^{αYi}, we have that

$$e^{\alpha Y_i} \leq \frac{1 - Y_i/c_i}{2}\,e^{-\alpha c_i} + \frac{1 + Y_i/c_i}{2}\,e^{\alpha c_i} = \frac{e^{\alpha c_i} + e^{-\alpha c_i}}{2} + \frac{Y_i}{2c_i}\left(e^{\alpha c_i} - e^{-\alpha c_i}\right).$$

Since E[Yi | X0, X1, . . . , Xi−1] = 0, we have

$$E\left[e^{\alpha Y_i} \mid X_0, \ldots, X_{i-1}\right] \leq E\left[\frac{e^{\alpha c_i} + e^{-\alpha c_i}}{2} + \frac{Y_i}{2c_i}\left(e^{\alpha c_i} - e^{-\alpha c_i}\right) \,\Bigg|\, X_0, \ldots, X_{i-1}\right] = \frac{e^{\alpha c_i} + e^{-\alpha c_i}}{2} \leq e^{(\alpha c_i)^2/2}.$$

Here we have used the Taylor series expansion of e^x to find

$$\frac{e^{\alpha c_i} + e^{-\alpha c_i}}{2} \leq e^{(\alpha c_i)^2/2},$$

in a manner similar to the proof of Theorem 4.7. It follows that
$$\begin{aligned}
E\left[e^{\alpha(X_t - X_0)}\right] &= E\left[\prod_{i=1}^{t} e^{\alpha Y_i}\right] \\
&= E\left[\left(\prod_{i=1}^{t-1} e^{\alpha Y_i}\right) E\left[e^{\alpha Y_t} \mid X_0, X_1, \ldots, X_{t-1}\right]\right] \\
&\leq E\left[\prod_{i=1}^{t-1} e^{\alpha Y_i}\right] e^{(\alpha c_t)^2/2} \\
&\leq e^{\alpha^2 \sum_{k=1}^{t} c_k^2/2}.
\end{aligned}$$
Hence,
Pr(X_t − X_0 ≥ λ) = Pr(e^{α(X_t−X_0)} ≥ e^{αλ})
≤ E[e^{α(X_t−X_0)}] / e^{αλ}
≤ e^{α² Σ_{k=1}^t c_k²/2 − αλ}
≤ e^{−λ²/(2 Σ_{k=1}^t c_k²)},
where the last inequality comes from choosing α = λ/Σ_{k=1}^t c_k². A similar argument gives the bound for Pr(X_t − X_0 ≤ −λ), as can be seen for example by replacing X_i everywhere by −X_i, giving the theorem.
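To get a feel for the bound, one can compare it with the empirical tail of a fair ±1 random walk, which is a martingale with c_k = 1. The sketch below (ours, not part of the text's argument; the choices t = 100, λ = 25 and the trial count are illustrative) shows the bound holding with room to spare.

import math
import random

def azuma_bound(t, lam, c=1.0):
    """Azuma-Hoeffding: Pr(|X_t - X_0| >= lam) <= 2*exp(-lam^2 / (2*t*c^2))
    when each martingale difference is bounded by c."""
    return 2 * math.exp(-lam**2 / (2 * t * c**2))

def empirical_tail(t, lam, trials=100_000):
    """Empirical tail of a fair +/-1 random walk after t steps."""
    hits = 0
    for _ in range(trials):
        walk = sum(random.choice((-1, 1)) for _ in range(t))
        if abs(walk) >= lam:
            hits += 1
    return hits / trials

t, lam = 100, 25
print(azuma_bound(t, lam), empirical_tail(t, lam))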
We now present a more general form of the Azuma–Hoeffding inequality that yields
slightly tighter bounds in our applications.
Theorem 13.6 [Azuma–Hoeffding Inequality]: Let X0 , . . . , Xn be a martingale such
that
Bk ≤ Xk − Xk−1 ≤ Bk + dk
for some constants dk and for some random variables Bk that may be functions of
X0 , X1 , . . . , Xk−1 . Then, for all t ≥ 0 and any λ > 0,
Pr(|X_t − X_0| ≥ λ) ≤ 2e^{−2λ²/Σ_{k=1}^t d_k²}.
This version of the inequality generalizes the requirement of a bound on |Xk − Xk−1 |.
The key is the gap dk between the lower and upper bounds for Xk − Xk−1 . Notice that,
when we have the bound |Xk − Xk−1 | ≤ ck , this result is equivalent to Theorem 13.4
using Bk = −ck with a gap dk = 2ck . The proof is similar to that for Theorem 13.4 and
is left as Exercise 13.7.
Proof: We prove this for the case of discrete random variables (although the result
holds more generally). To ease the notation, we use Sk as shorthand for X1 , X2 , . . . , Xk ,
so that we write
E[ f (X̄ ) | Sk ]
for
E[ f (X̄ ) | X1 , X2 , . . . , Xk ].
That is, fk (X̄, x) is f (X̄ ) with the value x in the kth coordinate. We shall likewise write
(If we are dealing with random variables that can take on only a finite number of values,
we could use max and min in place of sup and inf.) Therefore, letting
if we can bound
Because the X_i are independent, the probability of any specific set of values for
Xk+1 through Xn does not depend on the values of X1 , . . . , Xk . Hence, for any values
x, y, z1 , . . . , zk−1 we have that
E[ fk (X̄, x) − fk (X̄, y) | X1 = z1 , . . . , Xk−1 = zk−1 ]
is equal to
Σ_{z_{k+1},...,z_n} Pr((X_{k+1} = z_{k+1}) ∩ · · · ∩ (X_n = z_n)) · ( f_k(z̄, x) − f_k(z̄, y)).
But
fk (z̄, x) − fk (z̄, y) ≤ c,
and hence
E[ fk (X̄, x) − fk (X̄, y) | Sk−1 ] ≤ c,
giving the required bound, so that we may apply Theorem 13.6 to conclude the proof.
We can obtain slightly better bounds by applying the general framework of Theo-
rem 13.6. Let F = f (X1 , X2 , . . . , Xn ). Then, by our preceding argument, changing the
value of any single X_i can change the value of F by at most k, and hence the function satisfies the Lipschitz condition with bound k. Theorem 13.6 then applies to give
Pr(|F − E[F]| ≥ ε) ≤ 2e^{−2ε²/(nk²)},
improving the value in the exponent by a factor of 4.
balls, then changing Xi so that the ith ball lands in an otherwise empty bin decreases F
by 1. In all other cases, changing Xi leaves F the same. We therefore obtain
Pr(|F − E[F]| ≥ ε) ≤ 2e^{−2ε²/m}.
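As an illustration (ours, not from the text), the following sketch estimates the tail probability for the number of empty bins empirically and compares it with the bound 2e^{−2ε²/m}; the values of m, n, ε, and the trial count are arbitrary choices.

import math
import random

def empty_bins(m, n):
    """Throw m balls into n bins uniformly at random; count empty bins."""
    occupied = set(random.randrange(n) for _ in range(m))
    return n - len(occupied)

m, n, eps, trials = 1000, 1000, 40, 10_000
samples = [empty_bins(m, n) for _ in range(trials)]
mean = sum(samples) / trials          # close to n * (1 - 1/n)**m
tail = sum(abs(s - mean) >= eps for s in samples) / trials
print(tail, 2 * math.exp(-2 * eps**2 / m))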
13.6. Exercises
Exercise 13.2: Let X_0, X_1, ... be a sequence of random variables, and let S_i = Σ_{j=1}^i X_j. Show that if S_0, S_1, ... is a martingale with respect to X_0, X_1, ..., then for all i ≠ j, E[X_i X_j] = 0.
Exercise 13.3: Let X0 = 0 and for j ≥ 0 let X j+1 be chosen uniformly over the real
interval [X j , 1]. Show that, for k ≥ 0, the sequence
Yk = 2k (1 − Xk )
is a martingale.
Exercise 13.5: Consider the gambler’s ruin problem, where a player plays a sequence
of independent games, either winning one dollar with probability 1/2 or losing one
dollar with probability 1/2. The player continues until either losing ℓ1 dollars or win-
ning ℓ_2 dollars. Let X_n be 1 if the player wins the nth game and −1 otherwise. Let
Z_n = (Σ_{i=1}^n X_i)² − n.
(a) Show that Z_1, Z_2, ... is a martingale.
(b) Let T be the stopping time when the player finishes playing. Determine E[Z_T].
(c) Calculate E[T]. (Hint: You can use what you already know about the probability that the player wins.)
Exercise 13.6: Consider the gambler’s ruin problem, where now the independent
games are such that the player either wins one dollar with probability p < 1/2 or loses
one dollar with probability 1 − p. As in Exercise 13.5, the player continues until either
losing ℓ1 dollars or winning ℓ2 dollars. Let Xn be 1 if the player wins the nth game and
−1 otherwise, and let Zn be the player’s total winnings after n games.
(a) Show that
A_n = ((1 − p)/p)^{Z_n}
is a martingale with mean 1.
(b) Determine the probability that the player wins ℓ2 dollars before losing ℓ1 dollars.
(c) Show that
Bn = Zn − (2p − 1)n
is a martingale with mean 0.
(d) Let T be the stopping time when the player finishes playing. Determine E[Z_T], and
use it to determine E[T ]. (Hint: You can use what you already know about the
probability that the player wins.)
Exercise 13.8: In the bin-packing problem, we are given items with sizes
a1 , a2 , . . . , an with 0 ≤ ai ≤ 1 for 1 ≤ i ≤ n. The goal is to pack them into the mini-
mum number of bins, with each bin being able to hold any collection of items whose
total sizes sum to at most 1. Suppose that each of the ai is chosen independently accord-
ing to some distribution (which might be different for each i). Let P be the number of
bins required in the best packing of the resulting items. Prove that
Pr(|P − E[P]| ≥ λ) ≤ 2e^{−2λ²/n}.
Exercise 13.10: In Chapter 4 we developed a tail bound for the sum of {0, 1} random
variables. We can use martingales to generalize this result for the sum of any random variables whose range lies in [0, 1]. Let X_1, X_2, ..., X_n be independent random variables such that Pr(0 ≤ X_i ≤ 1) = 1. If S_n = Σ_{i=1}^n X_i, show that
Pr(|S_n − E[S_n]| ≥ λ) ≤ 2e^{−2λ²/n}.
Exercise 13.11: A parking-lot attendant has mixed up n keys for n cars. The n car
owners arrive together. The attendant gives each owner a key according to a permutation
chosen uniformly at random from all permutations. If an owner receives the key to his
car, he takes it and leaves; otherwise, he returns the key to the attendant. The attendant
now repeats the process with the remaining keys and car owners. This continues until
all owners receive the keys to their cars.
Let R be the number of rounds until all car owners receive the keys to their cars. We
want to compute E[R]. Let Xi be the number of owners who receive their car keys in
the ith round. Prove that
Y_i = Σ_{j=1}^i (X_j − E[X_j | X_1, ..., X_{j−1}])
is a martingale.
Exercise 13.12: Alice and Bob play each other in a checkers tournament, where the first player to win four games wins the match. The players are evenly matched, so the
probability that each player wins each game is 1/2, independent of all other games.
The number of minutes for each game is uniformly distributed over the integers in the
range [30, 60], again independent of other games. What is the expected time they spend
playing the match?
Exercise 13.13: Consider the following extremely inefficient algorithm for sorting n numbers in increasing order. Start by choosing one of the n numbers uniformly at random, and placing it first. Then choose one of the remaining n − 1 numbers uniformly at random, and place it second. If the second number is smaller than the first, start over again from the beginning. Otherwise, next choose one of the remaining n − 2 numbers uniformly at random, place it third, and so on. The algorithm starts over from the beginning whenever it finds that the kth item placed is smaller than the (k − 1)th
item. Determine the expected number of times the algorithm tries to place a number,
assuming that the input consists of n distinct numbers.
Exercise 13.14: Suppose that you are arranging a chain of n dominos so that, once
you are done, you can have them all fall sequentially in a pleasing manner by knocking
down the lead domino. Each time you try to place a domino in the chain, there is some
chance that it falls, taking down all of the other dominos you have already carefully
placed. In that case, you must start all over again from the very first domino.
(a) Let us call each time you try to place a domino a trial. Each trial succeeds with probability p. Using Wald's equation, find the expected number of trials necessary before your arrangement is ready. Calculate this number of trials for n = 100 and p = 0.1.
(b) Suppose instead that you can break your arrangement into k components, each
of size n/k, in such a way so that once a component is complete, it will not fall
when you place further dominos. For example: if you have 10 components of size 10, then once the first component of 10 dominos is placed successfully it will not fall; misplacing a domino later might take down another component, but the first will remain ready. Find the expected number of trials necessary before your arrangement is ready in this case. Calculate the number of trials for n = 100, k = 10, and p = 0.1, and compare with your answer from part (a).
That is, N is the smallest number for which the sum of the first N of the X_i is larger than k. Use Wald's equation to determine E[N].
(b) Let X_1, X_2, ... be a sequence of independent uniform random variables on the interval (0, 1). Given a positive real number k with 0 < k < 1, let N be defined by
N = min{ n : Π_{i=1}^n X_i < k }.
That is, N is the smallest number for which the product of the first N of the X_i is smaller than k. Determine E[N]. (Hint: You may find Exercise 8.9 helpful.)
Exercise 13.16: A subsequence of a string s is any string that can be obtained by delet-
ing characters from s. Consider two strings x and y of length n, where each character
in each string is independently a 0 with probability 1/2 and a 1 with probability 1/2.
We consider the longest common subsequence of the two strings.
(a) Show that the expected length of the longest common subsequence is greater than
c_1 n and less than c_2 n for constants c_1 > 1/2 and c_2 < 1 when n is sufficiently large.
(Any constants c_1 and c_2 will do; as a challenge, you may attempt to find the best
constants c1 and c2 that you can.)
(b) Use a martingale inequality to show that the length of the longest common subse-
quence is highly concentrated around its mean.
Exercise 13.17: Given a bag with r red balls and g green balls, suppose that we uniformly sample n balls from the bag without replacement. Set up an appropriate martin-
gale and use it to show that the number of red balls in the sample is tightly concentrated
around nr/(r + g).
Exercise 13.18: We showed in Chapter 5 that the fraction of entries that are 0 in a Bloom filter is concentrated around
p′ = (1 − 1/n)^{km},
where m is the number of data items, k is the number of hash functions, and n is the size of the Bloom filter in bits. Derive a similar concentration result using a martingale inequality.
Exercise 13.19: Consider a random graph from Gn,N , where N = cn for some constant
c > 0. Let X be the number of isolated vertices (i.e., vertices of degree 0).
(a) Determine E[X].
(b) Show that
Pr(|X − E[X]| ≥ 2λ√(cn)) ≤ 2e^{−λ²/2}.
(Hint: Use a martingale that reveals the locations of the edges in the graph, one at
a time.)
Exercise 13.20: We improve our bound from the Azuma–Hoeffding inequality for
the problem where m balls are thrown into n bins. We let F be the number of empty
bins after the m balls are thrown and X_i the bin in which the ith ball lands. We define Z_0 = E[F] and Z_i = E[F | X_1, ..., X_i].
(a) Let Ai denote the number of bins that are empty after the ith ball is thrown. Show
that in this case
Z_{i−1} = A_{i−1} (1 − 1/n)^{m−i+1}.
(b) Show that, if the ith ball lands in a bin that is empty when it is thrown, then
Z_i = (A_{i−1} − 1)(1 − 1/n)^{m−i}.
(c) Show that, if the ith ball lands in a bin that is not empty when it is thrown, then
Z_i = A_{i−1} (1 − 1/n)^{m−i}.
(d) Show that the Azuma–Hoeffding inequality of Theorem 13.6 applies with d_i = (1 − 1/n)^{m−i}.
(e) Using part (d), prove that
Pr(|F − E[F]| ≥ λ) ≤ 2e^{−λ²(2n−1)/(n² − (E[F])²)}.
Exercise 13.21: Let f (X1 , X2 , . . . , Xn ) satisfy the Lipschitz condition so that, for any
i and any values x1 , . . . , xn and yi ,
| f (x1 , x2 , . . . , xi−1 , xi , xi+1 , . . . , xn ) − f (x1 , x2 , . . . , xi−1 , yi , xi+1 , . . . , xn )| ≤ c.
We set
Z0 = E[ f (X1 , X2 , . . . , Xn )]
and
Zi = E[ f (X1 , X2 , . . . , Xn ) | X1 , X2 , . . . , Xi ].
Give an example to show that, if the Xi are not independent, then it is possible that
|Zi − Zi−1 | > c.
chapter fourteen
Sample Complexity,
VC Dimension, and
Rademacher Complexity
Sampling is a powerful technique at the core of statistical data analysis and machine
learning. Using a finite, often small, set of observations, we attempt to estimate prop-
erties of an entire sample space. How good are estimates obtained from a sample? Any
rigorous application of sampling requires an understanding of the sample complexity
of the problem – the minimum size sample needed to obtain the required results. In this
chapter we focus on the sample complexity of two important applications of sampling:
range detection and probability estimation. Here a range is just a subset of the underly-
ing space. Our goal is to use one set of samples to detect a set of ranges or estimate the
probabilities of a set of ranges, where the set of possible ranges is large, in fact possibly
infinite. For detection, we mean that we want the sample to intersect with each range
in the set, while for probability estimation, we want the fraction of points in the sample
that intersect with each range in the set to approximate the probability associated with
that range.
As an example, consider a sample x1 , . . . , xm of m independent observations from
an unknown distribution D, where the values for our samples are in R. Given an
interval [a, b], if the probability of the interval is at least ǫ, i.e., Pr(x ∈ [a, b]) ≥ ǫ, then the probability that a sample of size m = (1/ǫ) ln(1/δ) intersects (or, in this context, detects) the interval [a, b] is at least 1 − (1 − ǫ)^m ≥ 1 − δ. Given a set of k intervals, each of which has probability at least ǫ, we can apply a union bound to show that the probability that a sample of size m′ = (1/ǫ) ln(k/δ) intersects each of the k intervals is at least 1 − k(1 − ǫ)^{m′} ≥ 1 − δ.
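These two sample sizes are easy to compute. The helper below (a sketch; the function name and the example parameters are ours, not the text's) evaluates m = ⌈(1/ǫ) ln(k/δ)⌉ and illustrates how mildly the size grows with the number of intervals k.

import math

def detection_sample_size(eps, delta, k=1):
    """Sample size m = ceil((1/eps) * ln(k/delta)); by the union bound a
    sample of this size intersects each of k intervals of probability
    at least eps with probability at least 1 - delta."""
    return math.ceil(math.log(k / delta) / eps)

# One interval versus a thousand: only a logarithmic increase in k.
print(detection_sample_size(0.1, 0.01), detection_sample_size(0.1, 0.01, k=1000))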
In many applications we need a sample that intersects with every interval that has
probability at least ǫ, and there can be an infinite number of such intervals. What sample size guarantees that? We cannot use a simple union bound to answer this question, as our above analysis does not make sense when k is infinite. However, if there are many such intervals, there can be significant overlap between them. For example, consider samples chosen uniformly over [0, 1] with ǫ = 1/10; there are infinitely many intervals [a, b] of length at least 1/10, but the largest number of disjoint intervals of size at least 1/10 is ten. A sample point may intersect with many intervals, and thus a small sample may be sufficient.
Indeed, the technique we will develop in this chapter will show that for any distribution D, a sample of size O((1/ǫ) ln(1/δ)), with probability at least 1 − δ, intersects all intervals of probability at least ǫ. Similarly, we will show that a sample of size O((1/ǫ²) ln(1/δ)), with probability at least 1 − δ, simultaneously estimates the probabilities of all intervals, where each probability is estimated within an additive error bounded by ǫ.
The above example shows that the set of intervals on a line corresponds to a set of
ranges that is easy to sample. In this chapter we develop general methods for evaluating
the sample complexity of sets of ranges. We will see an example of sets of ranges with
significantly larger sample complexity than the intervals example, and even sets of ranges with infinite sample complexity for either detection or probability estimation.
We also present applications of the theory to rigorous machine learning and data mining
analysis.
The study of sample complexity was motivated by statistical machine learning. To moti-
vate our discussion of these concepts, we show how the task of learning a binary classification can be framed as either a detection or a probability estimation problem.
As a starting example, suppose that we know that a publisher uses a certain rule when
determining whether to review or reject a book based on the submitted manuscript. The
rule is a conjunction over certain Boolean variables (or their negations); for example,
there could be a Boolean variable for whether the manuscript is over 100 pages, for
whether the topic was of wide interest, for whether the author had suitable experience,
and so on. As outsiders, we might not know the rule, and the question is whether we
can learn the rule after seeing enough examples.
A second example involves learning the range of temperatures in which some elec-
tronic equipment is functioning correctly. We test the equipment at various tempera-
tures: some are too low and some are too high, but in between there is an interval of
temperatures in which the equipment is functioning correctly. The question is to deter-
mine an appropriate range of temperatures where the equipment functions.
Here is a general model for this sort of problem; we formalize these definitions later. We have a universe U of objects that we wish to classify, and let c : U → {−1, 1} be the correct, unknown classification. Usually c(x) = 1 corresponds to x being a "positive" example, and c(x) = −1 corresponds to x being a "negative" example. The correct classification also can be thought of as the subset of the universe corresponding to the positive examples.
The learning algorithm receives a training set (x_1, c(x_1)), ..., (x_m, c(x_m)), where x_i ∈ U is chosen according to an unknown distribution D, and c(x_i) is the correct classification of x_i. The algorithm also receives a collection C of hypotheses, or possible classifications, to choose from. This collection of hypotheses can be referred to as the concept class. The output of the algorithm is a classification h ∈ C. In the context of binary classification, every h ∈ C is also a function h : U → {−1, 1}. Equivalently, each hypothesis is itself a subset of the universe, corresponding to the elements x with
h(x) = 1. The correctness of the chosen classification is evaluated with respect to its error in classifying new objects chosen according to the distribution D.
In our first example, C is the collection of all possible conjunctions of subsets of the Boolean variables or their negations. That is, each h ∈ C corresponds to a Boolean formula given by a conjunction of variables; h(x) is 1 if the Boolean expression evaluates to true on x, and −1 if it evaluates to false. In the second example, C is the set of all intervals in R, so that for each h ∈ C, h(x) = 1 if x is a point in the corresponding interval and h(x) = −1 otherwise.
Assume first that the correct classification c is included in the collection C of possible classifications. For any other h ∈ C let
Δ(c, h) = {x ∈ U | c(x) ≠ h(x)}
be the set of objects that are not classified correctly by classification h. The probability of a set Δ(c, h) is the probability that the distribution D generates an object in Δ(c, h). If our training set intersects with every set Δ(c, h) that has probability at least ǫ, then the learning algorithm can eliminate any classification h ∈ C that has error at least ǫ on input from D. Thus, a sample (training set) that with probability 1 − δ detects (or intersects with) all sets {Δ(c, h) | Pr_D(Δ(c, h)) ≥ ǫ, h ∈ C} guarantees that such an algorithm outputs with probability 1 − δ a classification that errs with probability bounded by ǫ.
A more realistic scenario is that no classification in C is perfectly correct. In that case, we require the algorithm to return a classification in C with an error probability that is no more than ǫ larger (with respect to D) than that of any classification in C. If our training set approximates all sets {Δ(c, h) | h ∈ C} to within an additive error ǫ/2, then the learning algorithm has sufficient information to eliminate any h ∈ C with error which is at least ǫ larger than the error of the best hypothesis in C.
Finally, we note a major difference between the two examples above. Since the number of possible conjunctions over a bounded number of variables or their complements is bounded, the set of possible classifications in the first example is finite, and we can use standard techniques (union bound and Chernoff bound) to bound the size of the required sample (training set), though the bound may be loose. In the second example, the size of the concept class is not bounded and we need more advanced techniques to obtain a bound on the sample complexity. We present here two major techniques to evaluate the sample complexity, VC dimension and Rademacher complexity.
14.2. VC Dimension
We begin with the formal definitions, using the setting of intervals on a line to help explain them, and then consider other examples.
The Vapnik–Chervonenkis (VC) dimension is defined on range spaces.
Definition 14.1: A range space is a pair (X, R) where:
1. X is a (finite or infinite) set of points;
2. R is a family of subsets of X, called ranges.
Figure 14.1: Let R be the collection of all closed intervals in R. Any 2 points can be shattered, but there is no interval that separates {2, 6} from {4}. The VC dimension of (R, R) is therefore 2.
If for example X = R is the set of real numbers, then R could be the family of all
closed intervals [a, b] in R.
Given a set S ⊆ X, one can obtain a subset of S by intersecting it with a range R ∈ R.
The projection of R on S corresponds to the collection of all subsets that can be obtained
in this way.
Definition 14.2: Let (X, R) be a range space and let S ⊆ X. The projection of R on
S is
RS = {R ∩ S | R ∈ R}.
For example, let X = R and R be the set of all closed intervals. Consider S = {2, 4}.
The intersection of S with the interval [0, 1] gives the empty set; the intersection of S
with the interval [1, 3] is {2}; the intersection of S with the interval [3, 5] is {4}; and
the intersection of S with the interval [1, 5] is {2, 4}. Hence the projection of R on S is
the set of all possible subsets of S in this case, and indeed the same is true for any set
of two distinct points.
Consider now a set S = {2, 4, 6}. You should convince yourself that the projection
of R on S includes seven of the eight subsets of S, but not {2, 6}. This is because an
interval containing 2 and 6 must also contain 4. More generally, the projection of R on
any set S of three distinct points would contain only seven of the eight possible subsets
of S.
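The projection of intervals on a finite point set can be enumerated directly, since an interval picks out a contiguous run of the sorted points. The sketch below (ours, for illustration) confirms that the projection on {2, 4, 6} has exactly seven subsets, missing {2, 6}.

def interval_projection(points):
    """All subsets of `points` obtainable by intersecting with a closed
    interval [a, b]: the empty set plus every contiguous run of the
    sorted points."""
    pts = sorted(points)
    subsets = {frozenset()}
    for i in range(len(pts)):
        for j in range(i, len(pts)):
            subsets.add(frozenset(pts[i:j + 1]))
    return subsets

proj = interval_projection({2, 4, 6})
print(len(proj))                   # 7 of the 8 subsets
print(frozenset({2, 6}) in proj)   # False: any interval with 2 and 6 contains 4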
We measure the complexity of a range space (X, R) by considering the largest subset S of X such that all subsets of S are contained in the projection of R on S. Formally, a set S ⊆ X is shattered by R if the projection R_S contains all 2^{|S|} subsets of S, and the VC dimension of (X, R) is the maximum cardinality of a set shattered by R.
We have shown that any set of two points is shattered by closed intervals on the real
number line, but that any set of three points is not. Of course, that argument also shows
that no larger set of points is shattered by closed intervals. Therefore, the VC dimension
of that range space is 2. Our example shows that a range space with an infinite set of points and an infinite number of ranges can have a bounded VC dimension. (See
Figure 14.1.)
An important subtlety in the definition is that the VC dimension of a range space is d
if there is some set of cardinality d that is shattered by R. It does not imply that all sets
of cardinality d are shattered by R. On the other hand, to show that the VC dimension
Figure 14.2: Let R be the collection of all half-space partitions on R². Any three points can be shattered, but there is no half-space partition that separates the two white points from the two black points. Thus, the VC dimension of (R², R) is 3.
is not d + 1 or larger, one must show that all sets of cardinality larger than d are not
shattered by R.
Linear half-spaces
Let X = R² and let R be the set of all half-spaces defined by a linear partition of the plane. That is, we consider all possible lines ax + by = c in the plane, and R consists of all half-spaces ax + by ≥ c. The VC dimension in this case is at least 3, since any
set of three points that do not lie on a line can be shattered. On the other hand, no set
of four points can be shattered. To see this, we need to consider several cases. First, if
any three points lie on a line they cannot be shattered, as we cannot separate the middle
point from the other two by any half-space. Hence we may assume no three points lie
on a line; this is often referred to as the points being in “general position”. Second, if
one point lies within the convex hull defined by the other three points, no half-space can separate that point from the other three. Finally, if the four points define a convex
hull, then there is no half-space that separates two non-neighboring points from the
other two. (See Figure 14.2.)
While harder to visualize, if X = R^d and R corresponds to all half-spaces in d dimensions, the VC dimension is d + 1. (See Exercise 14.7.)
Convex sets
Let X = R² and let R be the family of all closed convex sets on the plane. We show that this range space has infinite VC dimension by showing that for every n there exists a set of size n that can be shattered. Let S_n = {x_1, ..., x_n} be a set of n points on the boundary of a circle. Any subset Y ⊆ S_n, Y ≠ ∅, defines a convex set that does not include any point in S_n \ Y, and hence Y is included in the projection of R on S_n. The empty set is easily seen to be in the projection as well. Hence, for any number of points n, the set S_n is shattered and the VC dimension is therefore infinite. (See Figure 14.3.)
Figure 14.3: Let R be the set of all convex bodies in R². Any partition of the set of points on the circle can be defined by a convex body. Therefore, the VC dimension of (R², R) is infinite.
correspond to all possible truth assignments of the n variables in the natural way. For
each function f ∈ MCn let R f = {ā ∈ X : f (ā) = 1} be the set of inputs that satisfy
f , and let R = {R f | f ∈ MCn }. Consider the set S ⊆ X of n points:
(0, 1, 1, . . . , 1)
(1, 0, 1, . . . , 1)
(1, 1, 0, . . . , 1)
...
(1, 1, 1, . . . , 0).
We claim that each subset of S is equal to S ∩ R f for some R f . For example, the com-
plete set S corresponds to S ∩ R f for the trivial function that is always 1, i.e., f (ā) = 1.
More generally, the subset of S that has all points except those with a 0 in coordinates
i1 , i2 , . . . , i j is equal to S ∩ R f for f (ā) = yi1 ∧ yi2 ∧ · · · ∧ yi j . This set can therefore be
shattered by R and the VC dimension is at least n. The VC dimension cannot be larger
than n since |R| = |MC_n| = 2^n, and hence there can be at most 2^n distinct intersec-
tions of the form S ∩ R_f. If the VC dimension were larger than n, at least 2^{n+1} different
intersections would be needed.
Define the growth function
G(d, n) = Σ_{i=0}^d C(n, i),
where C(n, i) denotes the binomial coefficient.
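The growth function is straightforward to compute. As an illustrative check (ours, not from the text), with d = 2, as for closed intervals on the line, G(2, 3) = 1 + 3 + 3 = 7, matching the seven projections found on {2, 4, 6} above.

from math import comb

def growth(d, n):
    """The growth function G(d, n) = sum_{i=0}^{d} C(n, i)."""
    return sum(comb(n, i) for i in range(d + 1))

print(growth(2, 3))   # 7: intervals (VC dimension 2) on three points
print(growth(2, 10))  # 56, far smaller than 2**10 = 1024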
The growth function is related to the VC dimension through the following theorem.
Theorem 14.1 [Sauer–Shelah]: Let (X, R) be a range space with |X| = n and VC
dimension d. Then |R| ≤ G(d, n).
Proof: We prove the claim by induction on d, and for each d by induction on n. As
the base case, the claim clearly holds for d = 0 or n = 0, as in both of these cases
G(d, n) = 1, with the only possible R being the family containing only the empty set.
Assume that the claim holds for d − 1 and n − 1, and for d and n − 1. We may
therefore assume |X| = n > 0. For some x ∈ X, consider two range spaces on X \ {x}:
R1 = {R \ {x} | R ∈ R}
and
R2 = {R \ {x} | R ∪ {x} ∈ R and R \ {x} ∈ R} .
We first observe that |R| = |R_1| + |R_2|. Indeed, each set R ∈ R is mapped to a set R \ {x} ∈ R_1, but if both R ∪ {x} and R \ {x} are in R, then both sets are mapped to the same set R \ {x} ∈ R_1. By including that set again in R_2, we have |R| = |R_1| + |R_2|.
Now (X \ {x}, R1 ) is a range space on n − 1 items, and its VC dimension is bounded
above by d, the VC dimension of (X, R). To see this, assume that R1 shatters a set S
of size d + 1 in X \ {x}. Then S is also shattered by R, as for any R ∈ R1 , there is a
corresponding R′ in R that is either R or R ∪ {x}, and in either case the projection of
R on S contains S ∩ R′ = S ∩ R. But then R would shatter the set S, contradicting the
assumption that (X, R) has VC dimension d.
Similarly, (X \ {x}, R2 ) is a range space on n − 1 items, and its VC dimension is
bounded above by d − 1. To see this, assume that R2 shatters a set S of size d in X \ {x}.
Then consider the set S ∪ {x} in R. For any R ∈ R2 , both R and R ∪ {x} are in R, and
hence one can obtain both (S ∪ {x}) ∩ R = S ∩ R and (S ∪ {x}) ∩ (R ∪ {x}) = S ∪ {x}
in the projection of R on S. But then R would shatter the set S ∪ {x}, contradicting the
assumption that (X, R) has VC dimension d.
Applying the induction hypothesis we get
|R| = |R_1| + |R_2| ≤ G(d, n − 1) + G(d − 1, n − 1)
= Σ_{i=0}^d C(n−1, i) + Σ_{i=0}^{d−1} C(n−1, i)
= 1 + Σ_{i=0}^{d−1} ( C(n−1, i+1) + C(n−1, i) )
= Σ_{i=0}^d C(n, i) = G(d, n).
Definition 14.6: Let (X, R) be a range space, and let D be a probability distribution on X. A set S ⊆ X is an ǫ-sample for X with respect to D if for all sets R ∈ R,
| Pr_D(R) − |S ∩ R|/|S| | ≤ ǫ.
Again, by fixing the distribution D to be uniform over a finite set A ⊆ X, we obtain the combinatorial version of this concept.
Definition 14.7 [combinatorial definition]: Let (X, R) be a range space, and let A ⊆ X be a finite subset of X. A set N ⊆ A is a combinatorial ǫ-sample for A if for all sets R ∈ R,
| |A ∩ R|/|A| − |N ∩ R|/|N| | ≤ ǫ.
In what follows, we may say ǫ-net and ǫ-sample in place of the more exact terms
combinatorial ǫ-net and combinatorial ǫ-sample when the meaning should be clear
from context.
Our goal is to obtain ǫ-nets and ǫ-samples through sampling. We say that a set S is a
sample of size m from a distribution D if the m elements of S were chosen independently
with distribution D.
Definition 14.8: A range space (X, R) has the uniform convergence property if for
every ǫ, δ > 0 there is a sample size m = m(ǫ, δ) such that for every distribution D
over X, if S is a random sample from D of size m then, with probability at least 1 − δ,
S is an ǫ-sample for X with respect to D.
In the following sections we show that the minimum sample size that contains an
ǫ-net or an ǫ-sample for a range space can be bounded in terms of the VC dimension
of the range space, independent of the numbers of its points or ranges. In particular,
we will show that a range space has the uniform convergence property if and only if its
VC dimension is finite. These results show that the VC dimension is a concrete, useful measure of the complexity of a range space.
As a first step, we use a standard union bound argument to obtain bounds on the size
of a combinatorial ǫ-net via the probabilistic method.
Theorem 14.7: Let (X, R) be a range space with VC dimension d ≥ 2 and let A ⊆ X have size |A| = n. Then there exists a combinatorial ǫ-net N for A of size at most ⌈(d ln n)/ǫ⌉.
Proof: Consider the projection of the range space R on A; denote this by R′. By Theorem 14.1, the size of R′ is at most G(d, n) ≤ n^d.
Suppose we take a sample of k = ⌈(d ln n)/ǫ⌉ points of A independently and uniformly at random. For each set R ∈ R such that |R ∩ A| ≥ ǫ|A|, there is a corresponding set R′ ∈ R′. The probability that our sample misses a given set R′ is (1 − ǫ)^k, and there are
at most n^d possible sets R′ to consider. Applying a union bound, the probability that the sample misses at least one such R′ is at most
n^d (1 − ǫ)^k < n^d e^{−ǫk} ≤ n^d e^{−d ln n} = 1.
Since the probability that a random sample of size k = ⌈(d ln n)/ǫ⌉ misses at least one set R′ is strictly less than 1, by the probabilistic method there is a set of that size that misses no set R′ ∈ R′, and is therefore an ǫ-net for A.
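The proof suggests a direct experiment: draw ⌈(d ln n)/ǫ⌉ random points and check the ǫ-net condition. The sketch below (ours) does this for closed intervals (d = 2) over a finite point set; the set size, ǫ, and random seed are arbitrary choices. It suffices to check intervals whose endpoints are points of A.

import math
import random

def is_eps_net(A, N, eps):
    """Check the combinatorial eps-net condition for closed intervals:
    every interval containing at least eps*|A| points of A contains a
    point of N."""
    pts = sorted(A)
    n = len(pts)
    for i in range(n):
        for j in range(i, n):
            if j - i + 1 >= eps * n and not any(pts[i] <= x <= pts[j] for x in N):
                return False
    return True

random.seed(1)
A = [random.random() for _ in range(200)]
eps = 0.1
k = math.ceil(2 * math.log(len(A)) / eps)  # ceil((d ln n)/eps) with d = 2
N = random.sample(A, k)
print(k, is_eps_net(A, N, eps))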
We can, however, in general do much better than the bound of Theorem 14.7. Our
goal is to show that with high probability we can obtain an ǫ-net from a random sample
of elements where the size of the sample does not depend on n, as long as the VC dimen-
sion is finite. This may appear somewhat surprising; while O(1/ǫ) points on average are needed to hit any particular range, it is not clear how to hit all of them without some dependence on n. Essentially, we are finding that the union bound of Theorem 14.7 is
too weak an approach in this setting, and that the VC dimension provides a means to
avoid it.
The following theorem, whose proof takes a somewhat unusual path that we some-
times refer to as “double sampling”, provides our main results on ǫ-nets. The theorem
holds for our more general notion of ǫ-nets, not just combinatorial ǫ-nets.
Theorem 14.8: Let (X, R) be a range space with VC dimension d and let D be a
probability distribution on X. For any 0 < δ, ǫ ≤ 1/2, there is an
m = O( (d/ǫ) ln(d/ǫ) + (1/ǫ) ln(1/δ) )
such that a random sample from D of size greater than or equal to m is an ǫ-net for X
with probability at least 1 − δ.
In particular, Theorem 14.8 implies that there exists an ǫ-net of size O((d/ǫ) ln(d/ǫ)).
Proof: Let M be a set of m independent samples from X according to D, and let E1 be
the event that M is not an ǫ-net for X with respect to the distribution D, i.e.,
E1 = {∃ R ∈ R | PrD (R) ≥ ǫ and |R ∩ M| = 0}.
We want to show that Pr(E1 ) ≤ δ for a suitable m. Notice that for any particular R,
since PrD (R) ≥ ǫ, the expected size of |R ∩ M| would be at least ǫm, and hence it seems
natural that Pr(E1 ) is small. However, as the union bound argument of Theorem 14.7
is too weak to provide this strong a bound, we use an indirect means to bound Pr(E1 ).
To do this, we choose a second set T of m independent samples from X according
to D and define E_2 to be the event that some range R with Pr_D(R) ≥ ǫ has an empty
intersection with M but a reasonably large intersection with T :
E2 = {∃ R ∈ R | PrD (R) ≥ ǫ and |R ∩ M| = 0 and |R ∩ T | ≥ ǫm/2}.
Since T is a random sample and PrD (R) ≥ ǫ, the event |R ∩ T | ≥ ǫm/2 should occur
with nontrivial probability and therefore the events E1 and E2 should have similar prob-
ability. The following lemma formalizes this intuition:
Lemma: Pr(E_2) ≤ Pr(E_1) ≤ 2 Pr(E_2).
Proof: As the event E_2 is included in the event E_1, we have Pr(E_2) ≤ Pr(E_1). For the second inequality, note that if event E_1 holds, there is some particular R′ so that |R′ ∩ M| = 0 and Pr_D(R′) ≥ ǫ. We use the definition of conditional probability to obtain
Pr(E_2)/Pr(E_1) = Pr(E_1 ∩ E_2)/Pr(E_1) = Pr(E_2 | E_1) ≥ Pr(|T ∩ R′| ≥ ǫm/2).
Now for a fixed range R′ and a random sample T the random variable |T ∩ R′| has a binomial distribution B(m, Pr_D(R′)). Since Pr_D(R′) ≥ ǫ, by applying the Chernoff bound (Theorem 4.5), we have for m ≥ 8/ǫ,
Pr(|T ∩ R′| < ǫm/2) ≤ e^{−ǫm/8} ≤ 1/2.
Thus,
Pr(E_2)/Pr(E_1) = Pr(E_2 | E_1) ≥ Pr(|T ∩ R′| ≥ ǫm/2) ≥ 1/2,
giving Pr(E1 ) ≤ 2 Pr(E2 ) as desired.
The lemma above gives us an approach to showing that Pr(E1 ) is small. The intuition
is as follows: since M and T are both random samples of size m, it would be very
surprising to have |M ∩ R| = 0 but |T ∩ R| be large for some R. If we think of first
sampling the m items that form M and then sampling the m items that form T , we must
have somehow been very unlucky to have all the samples that intersect R come in the
second set of m samples, and none in the first.
Formally, we bound the probability of E_2 by the probability of a larger event E_2′:
E_2′ = {∃ R ∈ R | |R ∩ M| = 0 and |R ∩ T| ≥ ǫm/2}.
The event E2′ excludes the condition that PrD (R) ≥ ǫ; in some sense, that has been
replaced by the condition on the size of |R ∩ T |. The event E2′ now depends only on
the elements in M ∪ T .
Lemma: Pr(E_2) ≤ Pr(E_2′) ≤ (2m)^d 2^{−ǫm/2}.
Proof: Since M and T are random samples, we can assume that we first choose a set of 2m elements and then partition it randomly into two equal size sets M and T.
For a fixed R ∈ R and k = ǫm/2, let E_R be the event that |M ∩ R| = 0 and |R ∩ T| ≥ k.
To bound the probability of E_R we note that this event implies that M ∪ T has at least k elements of R, but all these elements were placed in T by the random partition. That is,
Pr(E_R) ≤ Pr(|M ∩ R| = 0 | |R ∩ (M ∪ T)| ≥ k)
= C(2m−k, m) / C(2m, m)
= ((2m − k)! m!) / ((2m)! (m − k)!)
= (m(m − 1) · · · (m − k + 1)) / ((2m)(2m − 1) · · · (2m − k + 1))
≤ 2^{−k} = 2^{−ǫm/2}.
Our bound on Pr(E_R) does not depend on the choice of the set T ∪ M, only on its random partition into T and M. By Theorem 14.1 the projection of R on M ∪ T has no more than (2m)^d ranges. Thus,
Pr(E_2′) ≤ (2m)^d 2^{−ǫm/2},
and it suffices to choose m so that 2(2m)^d 2^{−ǫm/2} ≤ δ. Equivalently, we require
ǫm/2 ≥ d ln(2m) + ln(2/δ).
Clearly it holds that ǫm/4 ≥ ln(2/δ), since m > (4/ǫ) ln(2/δ). It therefore suffices to show that ǫm/4 ≥ d ln(2m) to complete the proof.
Applying Lemma 14.3 with y = 2m ≥ (16d/ǫ) ln(16d/ǫ) and x = 16d/ǫ, we have
4m/ln(2m) ≥ 16d/ǫ,
so
ǫm/4 ≥ d ln(2m)
as required.
The above theorem gives a near tight bound, as shown by the following theorem (see
Exercise 14.13 for a proof).
Theorem 14.11: A random sample of a range space with VC dimension d that, with probability at least 1 − δ, is an ǫ-net must have size Ω(d/ǫ).
1 PAC learning is mainly concerned with the computational complexity of learning. In particular, a concept class C
is efficiently PAC learnable if the algorithm runs in time polynomial in the size of the problem, 1/ǫ and 1/δ. Such
an algorithm uses at most polynomially many samples. Here we are only interested in the sample complexity
of the learning process; however, we note that the computational complexity of the learning algorithm is not
necessarily polynomial in the sample size.
with m random samples is bounded above by (1 − ǫ)^m, and hence the probability that any bad hypothesis is consistent with m random samples is bounded above by
|C|(1 − ǫ)^m ≤ δ.
The result follows.
We can also apply the PAC learning framework to infinite concept classes. Let us
consider learning an interval [a, b] ∈ R. The concept class here is the collection of all
closed intervals in R:
C = {[x, y] | x ≤ y} ∪ {∅}.
Notice that we also include a trivial concept that corresponds to the empty interval.
Let c∗ ∈ C be the concept to be learned, and h be the hypothesis returned by our
algorithm. The training set is a collection of n points drawn from a distribution D on
R, where each point in the interval [a, b] is a positive example and each point outside
the interval is a negative example. If none of the sample points are positive examples,
then our algorithm returns the trivial hypothesis, where h(x) = −1 everywhere. If any
of the sample points are positive examples, then let c and d respectively be the smallest
and largest values of positive examples. Our algorithm then returns the interval [c, d]
as its hypothesis. (If there is only one positive example, the algorithm will return an
interval of the form [c, c].) By design, our algorithm can only make an error on an
input x if x ∈ [a, b]; our algorithm will not make an error outside this interval, because
it always returns −1 for points x ∉ [a, b].
We now determine the probability that our algorithm returns a bad hypothesis. Let
us first consider the case where Pr_D(x ∈ [a, b]) ≤ ǫ. Because our algorithm can only
return an incorrect answer on points in the interval [a, b], our algorithm always returns
a hypothesis with a probability of error at most ǫ in this case, and hence never returns
a bad hypothesis.
Now let us consider when PrD (x ∈ [a, b]) > ǫ. In this case, let a ′ ≥ a be the small-
est value such that PrD ([a, a ′ ]) ≥ ǫ/2. Similarly, let b ′ ≤ b be the largest value such
that PrD ([b ′ , b]) ≥ ǫ/2. Here a ′ ≤ b ′ since PrD (x ∈ [a, b]) > ǫ. For convenience, we
assume a ′ < b ′ ; the case a ′ = b ′ can be handled similarly. (If a ′ = b ′ , then the point
a ′ has nonzero probability of being selected, and we can divide up that probability
among the intervals [a, a ′ ] and [b ′ , b] so the probability of each is at least ǫ/2.) For our
algorithm to return a bad hypothesis with error at least ǫ, it must be the case that no
sample points fell either in the interval [a, a ′ ] or the interval [b ′ , b], or both. Otherwise,
our algorithm would return a range [c, d] that covers [a ′ , b ′ ], and correspondingly the
probability our hypothesis would be incorrect on a new input chosen from D would be
at most ǫ.
The probability that a training set of n points does not have any examples from either [a, a′] or [b′, b] is bounded above by
2(1 − ǫ/2)^n ≤ 2e^{−ǫn/2}.
Hence choosing n ≥ 2 ln(2/δ)/ǫ samples guarantees that the probability of choosing a
bad hypothesis is bounded above by δ, and therefore this concept class is PAC learnable.
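The learner itself is only a few lines. The sketch below (ours; the target interval, the uniform distribution, and the sample sizes are illustrative assumptions) returns the interval spanned by the positive examples and estimates its error on fresh data.

import random

def learn_interval(sample):
    """Return the interval [c, d] spanned by the positive examples,
    or None for the trivial always-negative hypothesis."""
    pos = [x for x, label in sample if label == 1]
    return (min(pos), max(pos)) if pos else None

def classify(h, x):
    return 1 if h is not None and h[0] <= x <= h[1] else -1

a, b = 0.3, 0.7          # target concept, unknown to the learner
n = 200                  # roughly (2/eps) * ln(2/delta) for eps = delta = 0.05
train = [(x, 1 if a <= x <= b else -1) for x in (random.random() for _ in range(n))]
h = learn_interval(train)

test = [random.random() for _ in range(100_000)]
err = sum(classify(h, x) != (1 if a <= x <= b else -1) for x in test) / len(test)
print(h, err)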
While the above example of learning intervals demonstrates an infinite concept class that is PAC learnable, the approach to this problem of considering intervals around the maximum and minimum sampled points appears ad hoc. The idea behind this approach, however, can be generalized. Observe that a concept class C over input set X defines
a range space (X, C). We show that the number of examples required to PAC learn a
concept class is the same as the number of samples needed to construct an ǫ-net for a
range space of VC dimension equal to the VC dimension of the range space defined by
the concept class.
Theorem 14.13: Let C be a concept class that defines a range space with VC dimension d. For any 0 < δ, ǫ ≤ 1/2, there is an
m = O( (d/ǫ) ln(d/ǫ) + (1/ǫ) ln(1/δ) )
such that C is PAC learnable with m samples.
Proof: Let X be the ground set of inputs and assume that c ∈ C is the correct classification. For any c′ ∈ C, c′ ≠ c, let Δ(c′, c) = {x | c(x) ≠ c′(x)}, where c(x) and c′(x) are the labeling functions for c and c′ respectively. Let Δ(c) = {Δ(c′, c) | c′ ∈ C}. That is, Δ(c) is a collection of all the possible sets of points of disagreement with the correct classification. The symmetric difference range space with respect to C and c is (X, Δ(c)). We prove the following lemma about the symmetric difference range space.
Lemma 14.14: The VC dimension of (X, Δ(c)) is equal to the VC dimension of (X, C).
Proof: For any set S ⊆ X we define a bijection from the projection of (X, C) on S, denoted by C_S, to the projection of (X, Δ(c)) on S, denoted by Δ(c)_S. The bijection maps each element c′ ∩ S ∈ C_S to Δ(c′ ∩ S, c ∩ S) ∈ Δ(c)_S. To show this is a bijection, we first consider two elements c′, c′′ ∈ C with c′ ∩ S ≠ c′′ ∩ S, and show that Δ(c′ ∩ S, c ∩ S) ≠ Δ(c′′ ∩ S, c ∩ S). If c′ ∩ S ≠ c′′ ∩ S, then there is an element y ∈ S such that c′(y) ≠ c′′(y). Without loss of generality, assume that c′(y) = c(y) but c′′(y) ≠ c(y). In that case y ∉ Δ(c′ ∩ S, c ∩ S) but y ∈ Δ(c′′ ∩ S, c ∩ S). Similarly, if for two elements c′, c′′ ∈ C we have Δ(c′ ∩ S, c ∩ S) ≠ Δ(c′′ ∩ S, c ∩ S), then there is an element y ∈ S such that c′(y) ≠ c′′(y), so c′ ∩ S ≠ c′′ ∩ S, proving the bijection.
Thus, for any S ⊆ X, |C_S| = |Δ(c)_S|, and S is shattered by C if and only if it is shattered by Δ(c). The two range spaces therefore have the same VC dimension.
Since the range space (X, Δ(c)) has VC dimension d, by Theorem 14.8 there is an
m = O( (d/ǫ) ln(d/ǫ) + (1/ǫ) ln(1/δ) )
so that any sample of size m or larger is, with probability at least 1 − δ, an ǫ-net for that range space, and therefore has a nonempty intersection with every set Δ(c′, c) that
has probability at least ǫ. Thus, with probability at least 1 − δ, our training set allows
the algorithm to exclude any hypothesis with error probability at least ǫ.
We saw in Section 14.2.1 that the VC dimension of the collection of closed intervals on R is 2. Applying Theorem 14.13 to the problem of learning an interval on the line gives an alternative proof of the result we saw in Section 14.4 that this range space can be learned with O((1/ǫ) ln(1/δ)) samples.
Recall that an ǫ-sample for a range space (X, R) maintains the relative probability
weight of all sets R ∈ R within a tolerance of ǫ (Definition 14.6), while an ǫ-net just
includes at least one element from each range with total probability at least ǫ. Surpris-
ingly, adding just another O(1/ǫ) factor to the sample size gives an ǫ-sample, again
with probability at least 1 − δ. The proof of this result uses the same “double sampling”
method as in the proof of the ǫ-net theorem, albeit with a somewhat more complicated
argument.
Theorem 14.15: Let (X, R) be a range space with VC dimension d and let D be a
probability distribution on X. For any 0 < ǫ, δ < 1/2, there is an
m = O( (d/ǫ²) ln(d/ǫ) + (1/ǫ²) ln(1/δ) )
such that a random sample from D of size greater than or equal to m is an ǫ-sample
for X with probability at least 1 − δ.
Proof: Let M be a set of m independent samples from X according to D, and let E_1 be the event that M is not an ǫ-sample for X with respect to the distribution D, i.e.,
E_1 = {∃ R ∈ R : | Pr_D(R) − |M ∩ R|/|M| | > ǫ}.
We want to show that Pr(E_1) ≤ δ for a suitable m. We choose a second set T of m independent samples from X according to D, and define E_2 to be the event that some range R is not well approximated by M but is reasonably well approximated by T:
E_2 = {∃ R ∈ R : | |R ∩ M|/|M| − Pr_D(R) | > ǫ and | |R ∩ T|/|T| − Pr_D(R) | ≤ ǫ/2}.
Lemma 14.16:
Pr(E2 ) ≤ Pr(E1 ) ≤ 2 Pr(E2 ).
Proof: Clearly the event E_2 is included in the event E_1, thus Pr(E_2) ≤ Pr(E_1). For the second inequality we again use conditional probability. If E_1 holds, there is some particular R′ so that | |R′ ∩ M|/|M| − Pr_D(R′) | > ǫ. Therefore,
Pr(E_2)/Pr(E_1) = Pr(E_1 ∩ E_2)/Pr(E_1) = Pr(E_2 | E_1) ≥ Pr( | |R′ ∩ T|/|T| − Pr_D(R′) | ≤ ǫ/2 ).
Now for a fixed range R′ and a random sample T, the random variable |T ∩ R′| has a binomial distribution B(m, Pr_D(R′)), and applying the Chernoff bound (Theorem 4.5)
we have
and
In that case
Lemma 14.17:
Pr(E_2) ≤ Pr(E_2′) ≤ (2m)^d e^{−ǫ²m/8}.
Proof: Since M and T are random samples, we can assume that we first choose a random sample of 2m elements Z = z_1, ..., z_{2m} and then partition it randomly into two
sets of size m each. Since Z is a random sample, any partition that is independent of the
actual values of the elements generates two random samples. We will use the following
partition: for each pair of sampled items z2i−1 and z2i , i = 1, . . . , m, with probability
1/2 (independent of other choices) we place z2i−1 in T and z2i in M, otherwise we place
z2i−1 in M and z2i in T .
For a fixed R ∈ R, let E_R be the event {| |R ∩ T| − |R ∩ M| | ≥ ǫm/2}. To bound the
probability of ER we consider the contribution of the assignment of each pair z2i−1 , z2i
to the value of ||R ∩ T | − |R ∩ M||. If the two items are both in R or the two items are
both not in R, the contribution of the pair is 0. If one item is in R and the other is not
in R then the contribution of the pair is 1 with probability 1/2 and −1 with probability
1/2. There are no more than m such pairs, so from the Chernoff bound in Theorem 4.7 we can conclude
Pr(E_R) ≤ e^{−ǫ²m/8}.
(The reverse triangle inequality is simply |x − y| ≥ ||x| − |y||, which follows easily from the triangle inequality.)
By Theorem 14.1 the projection of R on Z has no more than (2m)^d ranges. Thus, by the union bound we have
Pr(E_2′) ≤ (2m)^d e^{−ǫ²m/8}.
classification that is correct on all items in X, and in particular conforms with all examples in the training set. This assumption does not hold in most applications. First, the training set may have some errors. Second, we may not know any concept class that is guaranteed to include the correct classification and is also simple to represent and compute. In this section we extend our discussion of PAC learning to the case in which the concept class does not necessarily include a perfectly correct classification, which is referred to as the unrealizable case or agnostic learning. Since the concept class may not have a correct or even close to correct classification, the goal of the algorithm in this case is to select a classification c′ ∈ C with an error that is no more than ǫ larger than that of any other classification in C. Formally, let c be the correct classification (which may not be in C). We require the output classification c′ to satisfy the following inequality:
Pr_D(c′(x) ≠ c(x)) ≤ min_{h ∈ C} Pr_D(h(x) ≠ c(x)) + ǫ.
Recall from Section 14.4 that the symmetric difference range space with respect to the concept class C and the correct classification c is (X, Δ(c)). If the examples in the training set define an ǫ/2-sample for that range space then the algorithm has sufficiently many examples to estimate the error probability of each c′ ∈ C to within an additive error ǫ/2, and thus can select a classification that satisfies the above requirement.³
Applying Theorem 14.15, agnostic learning of a concept class with VC dimension d requires O( min{ |X|, (d/ǫ²) ln(d/ǫ²) + (1/ǫ²) ln(1/δ) } ) samples.
Finally, we state a general characterization of concept classes that are agnostic PAC
learnable.
3 Recall that we are only concerned here with the sampling complexity of the problem. Depending on the particular
concept class, the computation cost may not be practically feasible.
can give a significantly better bound. (Although, strictly speaking, here we need an (ǫ/2)-sample.)
For each subset s ⊆ I, let T(s) = {t ∈ T : s ⊆ t} denote the collection of all
transactions in the data set that include s. Let R = {T (s) | s ⊆ I}, and consider the
range space (T , R). We would like to bound the VC dimension of this range space
by a parameter that can be evaluated in one pass over the data (say when the data is
first loaded to the system). We first observe that the VC dimension is bounded by ℓ,
the maximum size of any transaction in the data set. Indeed, a transaction of size q has
2^q subsets and is therefore included in no more than 2^q ranges. Since no transaction can belong to more than 2^ℓ ranges, no set of more than ℓ transactions can be shattered.
Thus, by Theorem 14.15, with probability at least 1 − δ, a sample of size
O( (ℓ/ǫ²) ln(ℓ/ǫ) + (1/ǫ²) ln(1/δ) )   (14.1)
can guarantee that all itemsets are accurately determined to within ǫ/2 of their true
proportion with probability at least 1 − δ, and thus is sufficient for identifying all the
frequent itemsets. A better bound is proven in Exercise 14.12.
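The sampling scheme itself is simple to sketch: estimate every itemset's frequency from a random subset of the transactions and compare against the true frequencies. The code below (ours; the synthetic data, the itemset size limit, and the sample size are arbitrary illustrative choices) measures the worst estimation error over all small itemsets.

import random
from itertools import combinations

def itemset_frequencies(transactions, max_size=2):
    """Fraction of transactions containing each itemset of size <= max_size."""
    counts = {}
    for t in transactions:
        for size in range(1, max_size + 1):
            for s in combinations(sorted(t), size):
                counts[s] = counts.get(s, 0) + 1
    return {s: c / len(transactions) for s, c in counts.items()}

random.seed(0)
items = list(range(20))
data = [set(random.sample(items, 4)) for _ in range(50_000)]
sample = random.sample(data, 2_000)

true_f = itemset_frequencies(data)
est_f = itemset_frequencies(sample)
print(max(abs(true_f[s] - est_f.get(s, 0.0)) for s in true_f))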
The expression (1/m) Σ_{i=1}^m c(x_i)h(x_i) represents the correlation between c and h; if c and h always agree, the value of the expression is 1, and if they always disagree, the value is −1. The hypothesis that minimizes the training error is the hypothesis that maximizes the correlation.
Now, given a collection of sample points x_i, 1 ≤ i ≤ m, we consider how well our class of possible hypotheses C can align with all possible classifications of these sample points. To consider all possible classifications, we use the Rademacher variables: m independent random variables, σ = (σ_1, ..., σ_m), with Pr(σ_i = −1) = Pr(σ_i = 1) = 1/2. The hypothesis that aligns best with fixed values of the Rademacher variables σ is then the one that maximizes the value
(1/m) Σ_{i=1}^m σ_i h(x_i),
and our training error is
1/2 − max_{h∈C} (1/(2m)) Σ_{i=1}^m σ_i h(x_i).
To consider all possible sample points, we consider the expectation over all possible outcomes for σ, or
E_σ[ max_{h∈C} (1/m) Σ_{i=1}^m σ_i h(x_i) ].   (14.2)
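The expectation (14.2) can be estimated by Monte Carlo: draw random signs σ and record how well the best hypothesis aligns with them. The sketch below (ours; the point set, the small class of threshold hypotheses, and the trial count are illustrative assumptions) does exactly this.

import random

def empirical_rademacher(hyp_values, trials=10_000):
    """Monte Carlo estimate of E_sigma[ max_h (1/m) sum_i sigma_i h(x_i) ].
    `hyp_values` lists the vector (h(x_1), ..., h(x_m)) for each hypothesis h."""
    m = len(hyp_values[0])
    total = 0.0
    for _ in range(trials):
        sigma = [random.choice((-1, 1)) for _ in range(m)]
        total += max(sum(s * v for s, v in zip(sigma, h)) for h in hyp_values) / m
    return total / trials

# Threshold classifiers on m points in [0, 1]: h_t(x) = 1 if x <= t else -1.
random.seed(0)
xs = sorted(random.random() for _ in range(50))
hyps = [[1 if x <= t else -1 for x in xs] for t in (0.0, 0.25, 0.5, 0.75, 1.0)]
print(empirical_rademacher(hyps))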
function are defined according to a probability space with distribution D. Hence, for f ∈ F, when we refer to E[f], this would correspond to E[f(Z)] where Z is a random variable with distribution D. We generalize the expectation (14.2) as follows: the empirical Rademacher average of F with respect to a sample S = {z_1, ..., z_m} is
R̃_m(F, S) = E_σ[ sup_{f∈F} (1/m) Σ_{i=1}^m σ_i f(z_i) ],
where the expectation is taken over the distribution of the Rademacher variables σ = (σ_1, ..., σ_m).
We remark that we use sup instead of max since we are dealing with a family of
real-valued functions, so the maximum technically may not exist.
For a fixed assignment of values to the Rademacher variables the value of sup_{f∈F} (1/m) Σ_{i=1}^m σ_i f(z_i) represents the best correlation between any function in F and the vector (σ_1, ..., σ_m), generalizing the correlation for binary classifications. The
empirical Rademacher average therefore measures how well one can correlate random
partitions of the sample with some function in the set F, which provides a measure
of how expressive the set is. We therefore use the terms empirical Rademacher aver-
age and empirical Rademacher complexity interchangeably (both terms are used in the
literature).
Now let us look at the empirical Rademacher average in a different way. For large m, an average (1/m) Σ_{i=1}^m f(z_i) over a random sample S = {z_1, ..., z_m} should provide a good approximation to E[f]. Multiplying by the Rademacher variables, the expression (1/m) Σ_{i=1}^m σ_i f(z_i) corresponds to splitting the sample S into two subsamples, correspond-
ing to the values of i where σi = 1 and the values of i where σi = −1. If S is a random
sample then the expression is similar to the difference between the average of the two
random subsamples, and hence the expectation
E_σ[ (1/m) Σ_{i=1}^m σ_i f(z_i) ]
captures the expected difference between the two subsample averages. The empirical Rademacher average considers the supremum of this expectation over all functions in F. Intuitively, if the
empirical Rademacher average with respect to a sample of size m is small, then we
expect m to be sufficiently large for a sample to provide a good estimate for all functions
in F. We formulate and prove this intuition in Theorem 14.20.
To remove the dependency on a particular sample we can take an expectation over
the distribution of all samples S of size m, where the samples are taken from the distri-
bution D.
We similarly use the terms Rademacher average and Rademacher complexity inter-
changeably.
The following theorem bounds this error in terms of the Rademacher complexity of F.
Theorem 14.20:
E_S[ sup_{f∈F} ( E_D[f(z)] − (1/m) Σ_{i=1}^m f(z_i) ) ] ≤ 2R_m(F).
= 2Rm (F ).
The first equality holds because the expectation from the sample S′ is the expectation of f. The first inequality, in which the order of the expectation with respect to S′ and the operation sup_{f∈F} is interchanged, follows from Jensen's inequality (Theorem 2.4) and the fact that the supremum is a convex function. For the second equality, we use the fact that multiplying f(z_i) − f(z_i′) by a Rademacher variable σ_i does not change the expectation of the sum. If σ_i = 1 there is clearly no change, and if σ_i = −1 this is equivalent to switching z_i and z_i′ between the two samples, which does not change the expectation. For the second inequality, we use that σ_i and −σ_i have the same distribution, so we can change the sign to simplify the expression.
Next we show that for bounded functions the Rademacher complexity is well
approximated by the empirical Rademacher complexity, and the estimation error is well
approximated by twice the Rademacher complexity, thereby obtaining a probabilistic
bound on the estimation error of any bounded function in F from a sample.
Theorem 14.21: Let F be a set of functions such that for any f ∈ F and for any
two values x and y in the domain of f , | f (x) − f (y)| ≤ c for some constant c. Let
Rm (F ) be the Rademacher complexity, and R̃m (F, S) the empirical Rademacher com-
plexity of the set F, with respect to a random sample S = {z1 , . . . , zm } of size m from a
distribution D.
Proof: To prove the first part of the theorem we observe that R̃_m(F, S) is a function of m random variables, z_1, ..., z_m, and any change in one of these variables can change the value of R̃_m(F, S) by no more than c/m. Since E_S[R̃_m(F, S)] = R_m(F) we can apply Theorem 13.7 to obtain
Pr(|R̃_m(F, S) − R_m(F)| ≥ ǫ) ≤ 2e^{−2mǫ²/c²}.
to obtain
Pr( sup_{f∈F} ( E_D[f(z)] − (1/m) Σ_{i=1}^m f(z_i) ) ≥ 2R_m(F) + ǫ ) ≤ e^{−2mǫ²/c²}.   (14.3)
From the first part of the theorem we know that R_m(F) ≤ R̃_m(F, S) + ǫ with probability at least 1 − e^{−2mǫ²/c²}. Combining this with Eqn. (14.3), we have the second part of the theorem,
Pr( sup_{f∈F} ( E_D[f(z)] − (1/m) Σ_{i=1}^m f(z_i) ) ≥ 2R̃_m(F, S) + 3ǫ ) ≤ 2e^{−2mǫ²/c²}.
≤ Σ_{f∈F} E_σ[ e^{s Σ_{i=1}^m σ_i f(z_i)} ]
= Σ_{f∈F} E_σ[ Π_{i=1}^m e^{sσ_i f(z_i)} ]
= Σ_{f∈F} Π_{i=1}^m E_σ[ e^{sσ_i f(z_i)} ].
Here the first line follows from Jensen's inequality, and the second line is just a rearrangement of terms. The third line bounds the supremum by a summation, which is possible since all the terms are positive. The fourth line changes the sum in the exponent to a product, and the last line arises from the independence of the sample values.
Since E[σ_i f(z_i)] = 0 and −f(z_i) ≤ σ_i f(z_i) ≤ f(z_i), we can apply Hoeffding's Lemma (Lemma 4.13) to obtain
E[ e^{sσ_i f(z_i)} ] ≤ e^{s²(2f(z_i))²/8} = e^{s² f(z_i)²/2}.
Thus,
e^{s m R̃_m(F,S)} = e^{s E_σ[ sup_{f∈F} Σ_{i=1}^m σ_i f(z_i) ]}
≤ Σ_{f∈F} Π_{i=1}^m e^{s² f(z_i)²/2}
= Σ_{f∈F} e^{s² Σ_{i=1}^m f(z_i)²/2}
≤ |F| e^{s²B²/2}.
E[f_{h′}] ≥ sup_{f_h ∈ F} E[f_h] − ǫ.
14.7. Exercises
Exercise 14.1: Consider a range space (X, C) where X = {1, 2, . . . , n} and C is the
set of all subsets of X of size k for some k < n. What is the VC dimension of C?
Exercise 14.3: Consider a range space (R2 , C) of all axis-aligned squares in R2 . Show
that the VC dimension of (R2 , C) is equal to 3.
Exercise 14.4: Consider a range space (R2 , C) of all squares (that need not be axis-
aligned) in R2 . Show that the VC dimension of (R2 , C) is equal to 5.
Exercise 14.5: Consider a range space (R3, C) of all axis-aligned rectangular boxes in R3. Find the VC dimension of (R3, C); you should both exhibit a largest set of points that can be shattered and show that no larger set can be shattered.
Exercise 14.6: Prove that the VC dimension of the collection of all closed disks on
the plane is 3.
Exercise 14.7: Prove that the VC dimension of the range space (Rd , R), where R
is the set of all half-spaces in Rd , is at least d + 1, by showing that the set consisting
of the origin (0, 0, . . . , 0) and the d unit points (1, 0, 0, . . . , 0), (0, 1, 0, . . . , 0), . . . ,
(0, 0, . . . , 1) is shattered by R.
Exercise 14.8: Let S = (X, R) and S ′ = (X, R ′ ) be two range spaces. Prove that if
R ′ ⊆ R then the VC dimension of S ′ is no larger than the VC dimension of S.
Exercise 14.9: Show that for n ≥ 2d and d ≥ 1 the growth function satisfies
$$G(d, n) = \sum_{i=0}^{d}\binom{n}{i} \le 2\left(\frac{ne}{d}\right)^{d}.$$
Exercise 14.10: Use the bound of Exercise 14.9 to improve the result of Theorem 14.4
to show the VC dimension of the range space (X, R f ) is O(kd ln k).
Exercise 14.11: Use the bound of Exercise 14.9 to improve the result of Theorem 14.8 to show that there is an
$$m = O\left(\frac{d}{\epsilon}\ln\frac{1}{\epsilon} + \frac{1}{\epsilon}\ln\frac{1}{\delta}\right)$$
such that a random sample from D of size greater than or equal to m suffices to obtain the required ε-net with probability at least 1 − δ. (Hint: Use Lemma 14.3 with x = O(1/ε) and y = 2m/d.)
Exercise 14.12: (a) Improve the result in Eqn. (14.1) by showing that the VC dimen-
sion of the frequent-itemsets range space is bounded by the maximum number q such
that the data set has q different transactions all of size at least q.
(b) Show how to compute an upper bound on the number q defined in (a) in one pass
over the data.
Exercise 14.13: Prove Theorem 14.11 using the following hints. Let (X, R) be a range
space with VC dimension d. Let Y = {y1 , . . . , yd } ⊆ X be a set of d elements that is
shattered by R. Define a probability distribution D on X as follows: Pr(y1) = 1 − 16ε, Pr(y2) = Pr(y3) = · · · = Pr(yd) = 16ε/(d − 1), and all other elements have probability 0. Consider a sample of size m = (d − 1)/(64ε). Show that with probability at least 1/2 the sample does not include at least half of the elements in {y2, . . . , yd}. Conclude that with probability δ ≥ 1/2 the output classification has error at least ε.
Exercise 14.14: Given a set of functions F and constants a, b ∈ R, consider the set
of functions
Fa,b = {a f + b | f ∈ F}.
Let Rm () and R̃m () denote the Rademacher complexity and the empirical Rademacher
complexity, respectively. Prove that
(a) R̃m (Fa,b , S) = |a|R̃m (F, S),
(b) Rm (Fa,b ) = |a|Rm (F ).
Exercise 14.15: We apply Theorem 14.21 to compute a bound on the sample complexity of agnostically learning a binary classification. Assume a concept class with VC dimension d and a sample size m.
(a) Find a sample size m1 such that the empirical Rademacher average of the corresponding set of functions is at most ε/4.
(b) Use Theorem 14.21 to find a sample size m such that with probability at least 1 − δ the expectations of all the functions are estimated within error ε.
(c) Compare your bound to the result obtained in Section 14.5.1.
chapter fifteen
Pairwise Independence and
Universal Hash Functions
Mutual independence is often too much to ask for. Here, we examine a more limited
notion of independence that proves useful in many contexts: k-wise independence.
15.1. Pairwise Independence
Definition 15.1:
1. A set of events E1, E2, . . . , En is k-wise independent if, for any subset I ⊆ [1, n] with |I| ≤ k,
$$\Pr\left(\bigcap_{i \in I} E_i\right) = \prod_{i \in I}\Pr(E_i).$$
Proof: We first show that, for any nonempty set Sj, the random bit
$$Y_j = \bigoplus_{i \in S_j} X_i$$
is uniform. This follows easily using the principle of deferred decisions (see Section 1.3). Let z be the largest element of Sj. Then
$$Y_j = \left(\bigoplus_{i \in S_j - \{z\}} X_i\right) \oplus X_z.$$
Suppose we reveal the values for Xi for all i ∈ S j − {z}. Then it is clear that the value
of Xz determines the value of Y j and that Y j will take on the values 0 and 1 with equal
probability.
Now consider any two variables Yk and Yℓ with their corresponding sets Sk and Sℓ .
Without loss of generality, let z be an element of Sℓ that is not in Sk and consider, for
any values c, d ∈ {0, 1},
Pr(Yℓ = d | Yk = c).
We claim, again by the principle of deferred decisions, that this probability is 1/2. For
suppose that we reveal the values for Xi for all i in (Sk ∪ Sℓ ) − {z}. Even though this
determines the value of Yk , the value of Xz will determine Yℓ . The conditioning on the
value of Yk therefore does not change that Yℓ is equally likely to be 0 or 1. Hence
$$\Pr(Y_\ell = d \mid Y_k = c) = \frac{1}{2} = \Pr(Y_\ell = d).$$
Since this holds for any values of c, d ∈ {0, 1}, we have proven pairwise independence.
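As a concrete illustration of this construction, the sketch below generates n pairwise independent bits from b independent uniform bits by XORing over nonempty subsets; the enumeration order of the subsets is our own choice.

```python
import random
from itertools import combinations

def pairwise_independent_bits(n):
    """Produce n pairwise independent, uniform bits Y_1,...,Y_n from b
    independent uniform bits, where 2^b - 1 >= n; Y_j is the XOR of the
    X_i indexed by the j-th nonempty subset S_j of {1,...,b}."""
    b = 1
    while 2 ** b - 1 < n:
        b += 1
    x = [random.randrange(2) for _ in range(b)]
    ys = []
    for r in range(1, b + 1):
        for s in combinations(range(b), r):
            if len(ys) == n:
                return ys
            bit = 0
            for i in s:
                bit ^= x[i]  # XOR the seed bits indexed by the subset
            ys.append(bit)
    return ys

print(pairwise_independent_bits(7))
```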
Let a and b be the two vertices adjacent to the ith edge. Then
$$\Pr(Z_i = 1) = \Pr(Y_a \ne Y_b) = \frac{1}{2},$$
where we have used the pairwise independence of Ya and Yb. Hence E[Zi] = 1/2, and it follows that E[Z] = m/2.
Now let our n pairwise independent bits Y1, . . . , Yn be generated from b independent, uniform random bits X1, X2, . . . , Xb in the manner of Lemma 15.1 (here b = ⌈log2(n + 1)⌉). Then E[Z] = m/2 for the resulting cut, where the sample space is just all the possible choices for the initial b random bits. By the probabilistic method (specifically, Lemma 6.2), there is some setting of the b bits that gives a cut with value at least m/2. We can try all possible 2^b settings for the bits to find such a cut. Since 2^b is O(n) and since, for each cut, the number of crossing edges can easily be calculated in O(m) time, it follows that we can find a cut with at least m/2 crossing edges deterministically in O(mn) time.
Although this approach does not appear to be as efficient as the derandomization of Section 6.3, one redeeming feature of the scheme is that it is trivial to parallelize. If we have sufficiently many processors available, then each of the Θ(n) possibilities for the random bits X1, X2, . . . , Xb can be assigned to a single processor, with each possibility giving a cut. The parallelization reduces the running time by a factor of Θ(n) using O(n) processors. In fact, using O(mn) processors, we can assign a processor for each combination of a specific edge with a specific sequence of random bits and then determine, in constant time, whether the edge crosses the cut for that setting of the random bits. After that, only O(log n) time is necessary to collect the results and find the large cut.
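A sketch of the sequential derandomization just described follows; the graph here is illustrative, and vertices are assigned sides by the parity construction of Lemma 15.1.

```python
from itertools import combinations

def large_cut(n, edges):
    """Deterministically find a cut with at least m/2 crossing edges by
    trying all 2^b settings of the b seed bits; since the cuts average
    m/2 over the seeds, some seed must achieve at least m/2."""
    b = 1
    while 2 ** b - 1 < n:
        b += 1
    subsets = [s for r in range(1, b + 1)
               for s in combinations(range(b), r)][:n]
    best_side, best_value = None, -1
    for seed in range(2 ** b):
        # Vertex v's side is the parity of the seed bits indexed by S_v.
        side = [sum((seed >> i) & 1 for i in s) % 2 for s in subsets]
        value = sum(1 for u, v in edges if side[u] != side[v])
        if value > best_value:
            best_side, best_value = side, value
    return best_side, best_value

print(large_cut(4, [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]))
```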
Lemma 15.2: Let p be prime, let X1 and X2 be chosen independently and uniformly at random from {0, 1, . . . , p − 1}, and for 0 ≤ i ≤ p − 1 let Yi = (X1 + iX2) mod p. The variables Y0, Y1, . . . , Yp−1 are pairwise independent uniform random variables over {0, 1, . . . , p − 1}.
Proof: It is clear that each Yi is uniform over {0, 1, . . . , p − 1}, again by applying the
principle of deferred decisions. Given X2 , the p distinct possible values for X1 give p
distinct possible values for Yi modulo p, each of which is equally likely.
Now consider any two variables Yi and Yj with i ≠ j. We wish to show that, for any a, b ∈ {0, 1, . . . , p − 1},
$$\Pr((Y_i = a) \cap (Y_j = b)) = \frac{1}{p^2},$$
which implies pairwise independence. The event Yi = a and Yj = b is equivalent to
$$X_1 + iX_2 = a \bmod p \quad\text{and}\quad X_1 + jX_2 = b \bmod p.$$
This is a system of two equations and two unknowns with just one solution:
$$X_2 = \frac{b - a}{j - i} \bmod p \quad\text{and}\quad X_1 = a - \frac{i(b - a)}{j - i} \bmod p.$$
Since X1 and X2 are independent and uniform over {0, 1, . . . , p − 1}, the result follows.
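In code, the construction is only a couple of lines; here is a sketch (p = 11 is an arbitrary small prime for illustration):

```python
import random

def pairwise_independent_values(p):
    """Generate Y_0, ..., Y_{p-1}, pairwise independent and uniform over
    {0, ..., p-1}, from just two independent uniform values X1 and X2;
    p must be prime."""
    x1 = random.randrange(p)
    x2 = random.randrange(p)
    return [(x1 + i * x2) % p for i in range(p)]

print(pairwise_independent_values(11))
```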
This proof can be extended to the following useful result: given 2n independent, uniform random bits, one can construct up to 2^n pairwise independent and uniform strings of n bits. The extension requires knowledge of finite fields, so we only sketch the result here. The setup and proof are exactly the same as for Lemma 15.2 except that, instead of working modulo p, we perform all arithmetic in a fixed finite field with 2^n elements (such as the field GF(2^n) of all polynomials with coefficients in GF(2) modulo some irreducible polynomial of degree n). That is, we assume a fixed one-to-one mapping f from strings of n bits, which can also be thought of as numbers in {0, 1, . . . , 2^n − 1}, to field elements. We let
$$Y_i = f^{-1}\left(f(X_1) + f(i)\cdot f(X_2)\right),$$
where X1 and X2 are chosen independently and uniformly over {0, 1, . . . , 2^n − 1}, i runs over the values {0, 1, . . . , 2^n − 1}, and the addition and multiplication are performed over the field. The Yi are then pairwise independent.
Theorem 15.3: Let X = Σ_{i=1}^n Xi, where the Xi are pairwise independent random variables. Then
$$\mathrm{Var}[X] = \sum_{i=1}^{n} \mathrm{Var}[X_i].$$
Proof: We have
$$\mathrm{Var}[X] = \sum_{i=1}^{n} \mathrm{Var}[X_i] + 2\sum_{i<j}\mathrm{Cov}(X_i, X_j),$$
where Cov(Xi, Xj) = E[XiXj] − E[Xi]E[Xj] = 0, since Xi and Xj are pairwise independent. Therefore,
$$\mathrm{Var}[X] = \sum_{i=1}^{n} \mathrm{Var}[X_i].$$
Corollary 15.4: Let X = Σ_{i=1}^n Xi, where the Xi are pairwise independent random variables. Then
$$\Pr(|X - E[X]| \ge a) \le \frac{\mathrm{Var}[X]}{a^2} = \frac{\sum_{i=1}^{n}\mathrm{Var}[X_i]}{a^2}.$$
$$\Pr\left(\bar{f} \in [\tilde{f} - \varepsilon, \tilde{f} + \varepsilon]\right) \ge 1 - \delta.$$
It follows that
$$\frac{1}{2^n}\sum_{x \in \{0,1\}^n} f(x) - \frac{1}{2^n} \;\le\; \int_{x=0}^{1} g(x)\,dx \;\le\; \frac{1}{2^n}\sum_{x \in \{0,1\}^n} f(x) + \frac{1}{2^n}.$$
Although the exact choice of m depends on the Chernoff bound used, in general this straightforward approach requires Θ(ln(1/δ)/ε²) samples to achieve the desired bounds.
A possible problem with this approach is that it requires a large number of random bits to be available. Each sample of f requires n independent bits, so applying Theorem 15.5 means that we need at least Θ(n ln(1/δ)/ε²) independent, uniform random bits to obtain an approximation that has additive error at most ε with probability at least 1 − δ.
A related problem arises when we need to record how the samples were obtained, so that the work can be reproduced and verified at a later time. In this case, we also need to store the random bits used for archival purposes, so using fewer random bits would lessen the storage requirements.
We can use pairwise independent samples to obtain a similar approximation using less randomness. Let X1, . . . , Xm be pairwise independent points chosen from {0, 1}^n, and let Y = (Σ_{i=1}^m f(Xi))/m. Then E[Y] = f̄, and we can apply Chebyshev's
inequality to obtain
$$\begin{aligned}
\Pr(|Y - \bar{f}| \ge \varepsilon) &\le \frac{\mathrm{Var}[Y]}{\varepsilon^2} \\
&= \frac{\mathrm{Var}\left[\left(\sum_{i=1}^{m} f(X_i)\right)/m\right]}{\varepsilon^2} \\
&= \frac{\sum_{i=1}^{m}\mathrm{Var}[f(X_i)]}{m^2\varepsilon^2} \\
&\le \frac{m}{m^2\varepsilon^2} = \frac{1}{m\varepsilon^2},
\end{aligned}$$
since Var[f(Xi)] ≤ E[(f(Xi))²] ≤ 1. We therefore find Pr(|Y − f̄| ≥ ε) ≤ δ when m = 1/(δε²). (In fact, one can prove that Var[f(Xi)] ≤ 1/4, giving a slightly better bound; this is left as Exercise 15.4.)
Using pairwise independent samples requires more samples: Θ(1/(δε²)) instead of the Θ(ln(1/δ)/ε²) samples when they are independent. But recall from Section 15.1.3 that we can obtain up to 2^n pairwise independent samples with just 2n uniform independent bits. Hence, as long as 1/(δε²) < 2^n, just 2n random bits suffice; this is much less than the number required when using completely independent samples. Usually ε and δ are fixed constants independent of n, and this type of estimation is quite efficient in terms of both the number of random bits used and the computational cost.
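A sketch of this sampling scheme appears below. For simplicity it estimates the mean of a function over {0, . . . , p − 1} using the mod-p construction of Lemma 15.2, rather than over {0, 1}^n with the field construction; the function chosen is only an example.

```python
import random

def estimate_mean(f, p, m):
    """Estimate (1/p) * sum_x f(x) over {0, ..., p-1} (p prime) using m
    pairwise independent sample points Y_i = (X1 + i*X2) mod p; only two
    random values are consumed, no matter how large m <= p is."""
    x1, x2 = random.randrange(p), random.randrange(p)
    return sum(f((x1 + i * x2) % p) for i in range(m)) / m

# By Chebyshev, m = 1/(delta * eps^2) samples suffice for additive error
# eps with probability 1 - delta. The true mean here is roughly 1/3.
p = 10007
print(estimate_mean(lambda x: 1 if x % 3 == 0 else 0, p, m=2500))
```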
Up to this point, when studying hash functions we modeled them as being completely
random in the sense that, for any collection of items x1 , x2 , . . . , xk , the hash values
h(x1 ), h(x2 ), . . . , h(xk ) were considered uniform and independent over the range of the
hash function. This was the framework we used to analyze hashing as a balls-and-bins
problem in Chapter 5. The assumption of a completely random hash function simplifies the analysis for a theoretical study of hashing. In practice, however, completely random hash functions are too expensive to compute and store, so the model does not fully reflect reality.
Two approaches are commonly used to implement practical hash functions. In many
cases, heuristic or ad hoc functions designed to appear random are used. Although these
functions may work suitably for some applications, they generally do not have any
associated provable guarantees, making their use potentially risky. Another approach
is to use hash functions for which there are some provable guarantees. We trade away
the strong statements one can make about completely random hash functions for weaker
statements with hash functions that are efficient to store and compute.
We consider one of the computationally simplest classes of hash functions that pro-
vide useful provable performance guarantees: universal families of hash functions.
These functions are widely used in practice.
Definition 15.2: Let U be a universe with |U| ≥ n and let V = {0, 1, . . . , n − 1}. A family of hash functions H from U to V is said to be k-universal if, for any distinct elements u1, u2, . . . , uk and for h chosen uniformly at random from H,
$$\Pr(h(u_1) = h(u_2) = \cdots = h(u_k)) \le \frac{1}{n^{k-1}}.$$
Since our hash function is chosen from a 2-universal family, it follows that
$$E[X_{ij}] = \Pr(h(x_i) = h(x_j)) \le \frac{1}{n}$$
and hence
$$E[X] \le \binom{m}{2}\frac{1}{n} < \frac{m^2}{2n}. \tag{15.1}$$
By Markov's inequality,
$$\Pr\left(X \ge \frac{m^2}{n}\right) \le \Pr(X \ge 2E[X]) \le \frac{1}{2}.$$
If we now suppose that the maximum number of items in a bin is Y, then the number of collisions X must be at least $\binom{Y}{2}$. Therefore,
$$\Pr\left(\binom{Y}{2} \ge \frac{m^2}{n}\right) \le \Pr\left(X \ge \frac{m^2}{n}\right) \le \frac{1}{2},$$
which implies that
$$\Pr\left(Y - 1 \ge m\sqrt{2/n}\right) \le \frac{1}{2}.$$
In particular, in the case where m = n, the maximum load is at most $1 + \sqrt{2n}$ with probability at least 1/2.
This result is much weaker than the one for perfectly random hash functions, but
it is extremely general in that it holds for any 2-universal family of hash functions.
The result will prove useful for designing perfect hash functions, as we describe in
Section 15.3.3.
Consider hash functions of the form h_{a,b}(x) = ((ax + b) mod p) mod n, and let
$$H = \{h_{a,b} \mid 1 \le a \le p - 1,\; 0 \le b \le p - 1\}.$$
We show that H is 2-universal.
Proof: We count the number of functions in H for which two distinct elements x1 and x2 from U collide.
First we note that, for any x1 ≠ x2,
$$ax_1 + b \ne ax_2 + b \bmod p.$$
This follows because ax1 + b = ax2 + b mod p implies that a(x1 − x2) = 0 mod p, yet here both a and (x1 − x2) are nonzero modulo p.
In fact, for every pair of values (u, v) such that u ≠ v and 0 ≤ u, v ≤ p − 1, there exists exactly one pair of values (a, b) for which ax1 + b = u mod p and ax2 + b = v
mod p. This pair of equations has two unknowns, and its unique solution is given by
$$a = \frac{v - u}{x_2 - x_1} \bmod p, \qquad b = u - ax_1 \bmod p.$$
Since there is exactly one hash function for each pair (a, b), it follows that there is
exactly one hash function in H for which
ax1 + b = u mod p and ax2 + b = v mod p.
Therefore, in order to bound the probability that h_{a,b}(x1) = h_{a,b}(x2) when h_{a,b} is chosen uniformly at random from H, it suffices to count the number of pairs (u, v), 0 ≤ u, v ≤ p − 1, for which u ≠ v but u = v mod n. For each choice of u there are at most ⌈p/n⌉ − 1 possible appropriate values for v, giving at most p(⌈p/n⌉ − 1) ≤ p(p − 1)/n pairs. Each pair corresponds to one of p(p − 1) hash functions, so
$$\Pr(h_{a,b}(x_1) = h_{a,b}(x_2)) \le \frac{p(p-1)/n}{p(p-1)} = \frac{1}{n},$$
proving that H is 2-universal.
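This family is simple to implement; a minimal sketch follows (the prime and range sizes are arbitrary illustrations).

```python
import random

def make_hash(p, n):
    """Draw h_{a,b}(x) = ((a*x + b) mod p) mod n uniformly at random from
    the 2-universal family H = {h_{a,b} : 1 <= a <= p-1, 0 <= b <= p-1}.
    Requires p prime and larger than any element of the universe."""
    a = random.randrange(1, p)
    b = random.randrange(p)
    return lambda x: ((a * x + b) % p) % n

h = make_hash(p=10007, n=100)
print(h(42), h(43))  # any fixed pair collides with probability <= 1/n
```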
Hence only one choice of the pair (a, b) out of the p² possibilities results in x1 and x2 hashing to y1 and y2, proving that
$$\Pr((h_{a,b}(x_1) = y_1) \cap (h_{a,b}(x_2) = y_2)) = \frac{1}{p^2},$$
as required.
Although this gives a strongly 2-universal hash family, the restriction that the universe U and the range V be the same makes the result almost useless; usually we want to hash a large universe into a much smaller range. We can extend the construction in a natural way that allows much larger universes. Let V = {0, 1, 2, . . . , p − 1}, but now let U = {0, 1, 2, . . . , p^k − 1} for some integer k and prime p. We can interpret an element u in the universe U as a vector ū = (u0, u1, . . . , u_{k−1}), where 0 ≤ ui ≤ p − 1 for 0 ≤ i ≤ k − 1 and where Σ_{i=0}^{k−1} ui p^i = u. In fact, this gives a one-to-one mapping between vectors of this form and elements of U.
For any vector ā = (a0, a1, . . . , a_{k−1}) with 0 ≤ ai ≤ p − 1, 0 ≤ i ≤ k − 1, and for any value b with 0 ≤ b ≤ p − 1, let
$$h_{\bar{a},b}(u) = \left(\sum_{i=0}^{k-1} a_i u_i + b\right) \bmod p,$$
and let H be the family of all such functions. This family is strongly 2-universal.
Proof: We follow the proof of Lemma 15.7. For any two elements u1 and u2 with corresponding vectors ūi = (u_{i,0}, u_{i,1}, . . . , u_{i,k−1}) and for any two values y1 and y2 in V, we need to show that
$$\Pr((h_{\bar{a},b}(u_1) = y_1) \cap (h_{\bar{a},b}(u_2) = y_2)) = \frac{1}{p^2}.$$
Since u1 and u2 are different, they must differ in at least one coordinate. Without loss of generality let u_{1,0} ≠ u_{2,0}. For any given values of a1, a2, . . . , a_{k−1}, the condition that h_{ā,b}(u1) = y1 and h_{ā,b}(u2) = y2 is equivalent to:
$$a_0 u_{1,0} + b = \left(y_1 - \sum_{j=1}^{k-1} a_j u_{1,j}\right) \bmod p,$$
$$a_0 u_{2,0} + b = \left(y_2 - \sum_{j=1}^{k-1} a_j u_{2,j}\right) \bmod p.$$
For any given values of a1, a2, . . . , a_{k−1}, this gives a system with two equations and two unknowns (namely, a0 and b), which – as in Lemma 15.8 – has exactly one solution. Hence, for every a1, a2, . . . , a_{k−1}, only one choice of the pair (a0, b) out of the p² possibilities gives the two required hash values, and the family is strongly 2-universal.
Although we have described both the 2-universal and the strongly 2-universal hash families in terms of arithmetic modulo a prime number, we could extend these techniques to work over general finite fields – in particular, fields with 2^n elements represented by sequences of n bits. The extension requires knowledge of finite fields, so we just sketch the result here. The setup and proof are exactly the same as for Lemma 15.8 except that, instead of working modulo p, we perform all arithmetic in a fixed finite field with 2^n elements. We assume a fixed one-to-one mapping f from strings of n bits, which can also be thought of as numbers in {0, 1, . . . , 2^n − 1}, to field elements. We let
$$h_{\bar{a},b}(u) = f^{-1}\left(\sum_{i=0}^{k-1} f(a_i)\cdot f(u_i) + f(b)\right),$$
where the ai and b are chosen independently and uniformly over {0, 1, . . . , 2^n − 1} and where the addition and multiplication are performed over the field. This gives a strongly 2-universal hash function with a range of size 2^n.
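For concreteness, here is a sketch of the mod-p vector family defined above (not the finite-field variant just described); interpreting u by its base-p digits keeps the function usable on ordinary integers.

```python
import random

def make_vector_hash(p, k):
    """Draw h_{a,b}(u) = (sum_i a_i u_i + b) mod p from the strongly
    2-universal family mapping U = {0, ..., p^k - 1} into {0, ..., p-1};
    u is interpreted as its base-p digit vector (u_0, ..., u_{k-1})."""
    a = [random.randrange(p) for _ in range(k)]
    b = random.randrange(p)

    def h(u):
        total = b
        for i in range(k):
            u, digit = divmod(u, p)  # digit = u_i, least significant first
            total = (total + a[i] * digit) % p
        return total

    return h

h = make_vector_hash(p=10007, k=4)
print(h(123456789))
```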
Lemma 15.9: Assume that the m elements of a set S are hashed into an n-bin chain hashing table by using a hash function h chosen uniformly at random from a 2-universal family. For an arbitrary element x, let X be the number of items at the bin h(x). Then
$$E[X] \le \begin{cases} m/n & \text{if } x \notin S, \\ 1 + (m-1)/n & \text{if } x \in S. \end{cases}$$
Proof: Let Xi = 1 if the ith element of S (under some arbitrary ordering) is in the same
bin as x and 0 otherwise. Because the hash function is chosen from a 2-universal family,
it follows that
Pr(Xi = 1) ≤ 1/n.
where we have used the universality of the hash function to conclude that E[Xi] ≤ 1/n. Similarly, if x is an element of S then (without loss of generality) let it be the first element of S. Hence X1 = 1, and again
$$\Pr(X_i = 1) \le 1/n$$
when i ≠ 1. Therefore,
$$E[X] = E\left[\sum_{i=1}^{m} X_i\right] = 1 + \sum_{i=2}^{m} E[X_i] \le 1 + \frac{m-1}{n}.$$
Lemma 15.9 shows that the average performance of hashing when using a hash function from a 2-universal family is good, since the time to look through the bin of any item is bounded by a small number. For instance, if m = n then, when searching the hash table for x, the expected number of items other than x that must be examined is at most 1. However, this does not give us a bound on the worst-case time of a lookup. Some bin may contain √n elements or more, and a search for one of these elements requires a much longer lookup time.
This motivates the idea of perfect hashing. Given a set S, we would like to construct a hash table that gives excellent worst-case performance. Specifically, by perfect hashing we mean that only a constant number of operations are required to find an item in a hash table (or to determine that it isn't there).
We first show that perfect hashing is easy if we are given sufficient space for the hash table and a suitable 2-universal family of hash functions.
Lemma 15.10: If the m items of a set S are hashed into n ≥ m² bins using a hash function h chosen uniformly at random from a 2-universal family, then the probability that there is any collision is less than 1/2.
Proof: Let s1, s2, . . . , sm be the m items of S. Let Xij be 1 if h(si) = h(sj) and 0 otherwise. Let X = Σ_{1≤i<j≤m} Xij. Then, as we saw in Eqn. (15.1), the expected number of collisions when using a 2-universal hash function is
$$E[X] = E\left[\sum_{1 \le i < j \le m} X_{ij}\right] = \sum_{1 \le i < j \le m} E[X_{ij}] \le \binom{m}{2}\frac{1}{n} < \frac{m^2}{2n} \le \frac{1}{2}$$
when n ≥ m², and Markov's inequality then gives Pr(X ≥ 1) < 1/2.
To find a perfect hash function when n ≥ m², we may simply try hash functions chosen uniformly at random from the 2-universal family until we find one with no collisions. This gives a Las Vegas algorithm. On average we need to try at most two hash functions.
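A sketch of this Las Vegas search, using the 2-universal family from Section 15.3.2 with n = m² bins, is given below (the prime and item set are illustrative).

```python
import random

def perfect_hash(items, p):
    """Repeatedly draw h from the 2-universal family until it maps the m
    items into n = m^2 bins with no collisions; each attempt succeeds
    with probability at least 1/2, so two tries are expected on average."""
    m = len(items)
    n = m * m
    while True:
        a, b = random.randrange(1, p), random.randrange(p)
        h = lambda x, a=a, b=b: ((a * x + b) % p) % n
        if len({h(x) for x in items}) == m:  # all hash values distinct
            return h, n

items = [3, 1, 4, 15, 92, 65]
h, n = perfect_hash(items, p=10007)
print(n, [h(x) for x in items])
```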
We would like to have perfect hashing without requiring space for Θ(m²) bins to store the set of m items. We can use a two-level scheme that accomplishes perfect hashing using only O(m) bins. First, we hash the set into a hash table with m bins using a hash function from a 2-universal family. Some of these bins will have collisions. For each such bin, we provide a second hash function from an appropriate 2-universal family and an entirely separate second hash table. If the bin has k > 1 items in it then we use k² bins in the secondary hash table. We have already shown in Lemma 15.10 that with k² bins we can find a hash function from a 2-universal family that will give no collisions. It remains to show that, by carefully choosing the first hash function, we can guarantee that the total space used by the algorithm is only O(m).
Theorem 15.11: The two-level approach gives a perfect hashing scheme for m items
using O(m) bins.
Proof: As we showed in Lemma 15.10, the number of collisions X in the first stage satisfies
$$\Pr\left(X \ge \frac{m^2}{n}\right) \le \Pr(X \ge 2E[X]) \le \frac{1}{2}.$$
When n = m, this implies that the probability of having more than m collisions is at most 1/2. Using the probabilistic method, there exists a choice of hash function from the 2-universal family in the first stage that gives at most m collisions. In fact, such a hash function can be found efficiently by trying hash functions chosen uniformly at random from the 2-universal family, giving a Las Vegas algorithm. We may therefore assume that we have found a hash function for the first stage that gives at most m collisions.
Let ci be the number of items in the ith bin. Then there are $\binom{c_i}{2}$ collisions between items in the ith bin. For each bin with ci > 1 items, we find a second hash function that gives no collisions using space ci². Again, for each bin, this hash function can be found using a Las Vegas algorithm. The total number of bins used is then bounded above by
$$m + \sum_{i=1}^{m} c_i^2 \le m + 2\sum_{i=1}^{m}\binom{c_i}{2} + \sum_{i=1}^{m} c_i \le m + 2m + m = 4m.$$
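The whole two-level scheme fits in a short sketch; the names below are our own, and the first-level retry implements the Las Vegas search from the proof.

```python
import random

def two_level_perfect_hash(items, p):
    """Two-level perfect hashing: retry a first-level hash into m bins
    until it yields at most m colliding pairs, then give each bin with
    c_i > 1 items its own collision-free table of c_i^2 bins."""
    m = len(items)

    def draw(n):
        a, b = random.randrange(1, p), random.randrange(p)
        return lambda x: ((a * x + b) % p) % n

    while True:  # first level: at most m collisions, O(1) expected tries
        h1 = draw(m)
        bins = [[] for _ in range(m)]
        for x in items:
            bins[h1(x)].append(x)
        if sum(c * (c - 1) // 2 for c in map(len, bins)) <= m:
            break

    tables = []
    for bucket in bins:  # second level: c_i^2 bins per crowded bucket
        if len(bucket) <= 1:
            tables.append((None, bucket))
            continue
        while True:
            h2 = draw(len(bucket) ** 2)
            if len({h2(x) for x in bucket}) == len(bucket):
                break
        tables.append((h2, bucket))
    return h1, tables  # total bins used is at most 4m

h1, tables = two_level_perfect_hash([3, 1, 4, 15, 92, 65, 35, 89], p=10007)
```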
15.4. Application: Finding Heavy Hitters in Data Streams
A router forwards packets through a network. At the end of the day, a natural question
for a network administrator to ask is whether the number of bytes traveling from a
source s to a destination d that have passed through the router is larger than a pre-
determined threshold value. We call such a source–destination pair a heavy hitter.
When designing an algorithm for finding heavy hitters, we must keep in mind the restrictions of the router. Routers have very little memory and so cannot keep a count for each possible pair s and d, since there are simply too many such pairs. Also, routers must forward packets quickly, so the router must perform only a small number of computational operations for each packet. We present a randomized data structure that is appropriate even with these limitations. The data structure requires a threshold q; all source–destination pairs that are responsible for at least q total bytes are considered heavy hitters. Usually q is some fixed percentage, such as 1%, of the total expected daily traffic. At the end of the day, the data structure gives a list of possible heavy hitters. All true heavy hitters (responsible for at least q bytes) are listed, but some other pairs may also appear in the list. Two other input constants, ε and δ, are used to control what extraneous pairs might be put in the list of heavy hitters. Suppose that Q represents the total number of bytes over the course of the day. Our data structure has the guarantee that any source–destination pair that constitutes less than q − εQ bytes of traffic is listed with probability at most δ. In other words, all heavy hitters are listed; all pairs that are sufficiently far from being a heavy hitter are listed with probability at most δ; pairs that are close to heavy hitters may or may not be listed.
This router example is typical of many situations where one wants to keep a succinct
summary of a large data stream. In most data stream models, large amounts of data
arrive sequentially in small blocks, and each block must be processed before the next
block arrives. In the setting of network routers, each block is generally a packet. The
amount of data being handled is often so large and the time between arrivals is so
small that algorithms and data structures that use only a small amount of memory and
computation per block are required.
We can use a variation of a Bloom filter, discussed in Section 5.5.3, to solve this problem. Unlike our solution there, which assumed the availability of completely random hash functions, here we obtain strong, provable bounds using only a family of 2-universal hash functions. This is important, because efficiency in the router setting demands the use of only very simple hash functions that are easy to compute, yet at the same time we want provable performance guarantees.
We refer to our data structure as a count-min filter. The count-min filter processes a sequential stream of pairs X1, X2, . . . of the form Xt = (it, ct), where it is an item and ct > 0 is an integer count increment. In our routing setting, it would be the pair of source–destination addresses of a packet and ct would be the number of bytes in the packet. Let
$$\mathrm{Count}(i, T) = \sum_{t:\, i_t = i,\; 1 \le t \le T} c_t.$$
That is, Count(i, T) is the total count associated with an item i up to time T. In the routing setting, Count(i, T) would be the total number of bytes associated with packets with an address pair i up to time T. The count-min filter keeps a running approximation of Count(i, T) for all items i and all times T in such a way that it can track heavy hitters.
A count-min filter consists of m counters. We assume henceforth that our counters have sufficiently many bits that we do not need to worry about overflow; in many practical situations, 32-bit counters will suffice and are convenient for implementation. A count-min filter uses k hash functions. We split the counters into k disjoint groups G1, G2, . . . , Gk of size m/k. For convenience, we assume in what follows that k divides m evenly. We label the counters by Ca,j, where 1 ≤ a ≤ k and 0 ≤ j ≤ m/k − 1, so that Ca,j corresponds to the jth counter in the ath group. That is, we can think of our counters as being organized in a 2-dimensional array, with m/k counters per row and k rows. Our hash functions should map items from the universe into counters, so we have hash functions Ha for 1 ≤ a ≤ k, where Ha : U → [0, m/k − 1]. That is, each of the k hash functions takes an item from the universe and maps it into a number in [0, m/k − 1]. Equivalently, we can think of each hash function as taking an item i and mapping it to the counter Ca,Ha(i). The Ha should be chosen independently and uniformly at random from a 2-universal hash family.
We use our counters to keep track of an approximation of Count(i, T ). Initially, all
the counters are set to 0. To process a pair (it , ct ), we compute Ha (it ) for each a with
1 ≤ a ≤ k and increment Ca,Ha (it ) by ct . Let Ca, j (T ) be the value of the counter Ca, j after
processing X1 through XT . We claim that, for any item, the smallest counter associated
with that item is an upper bound on its count, and with bounded probability the smallest
counter associated with that item is off by no more than ε times the total count of all the
pairs (it, ct) processed up to that point. Specifically, we have the following theorem.
Theorem 15.12: For any i in the universe U and for any sequence (i1, c1), . . . , (iT, cT),
$$\min_{j = H_a(i),\, 1 \le a \le k} C_{a,j}(T) \ge \mathrm{Count}(i, T),$$
and
$$\Pr\left(\min_{j = H_a(i),\, 1 \le a \le k} C_{a,j}(T) \ge \mathrm{Count}(i, T) + \varepsilon\sum_{t=1}^{T} c_t\right) \le \left(\frac{k}{m\varepsilon}\right)^{k}.$$
Proof: The first bound is trivial. Each counter Ca,j with j = Ha(i) is incremented by ct when the pair (i, ct) is seen in the stream. It follows that the value of each such counter is at least Count(i, T) at any time T.
For the second bound, consider any specific i and T. We first consider the specific counter C1,H1(i) and then use symmetry. We know that the value of this counter is at least Count(i, T) after the first T pairs. Let the random variable Z1 be the amount the counter is incremented owing to items other than i. Let Xt be a random variable that is 1 if it ≠ i and H1(it) = H1(i), and 0 otherwise, so that Z1 = Σ_{t=1}^T ct Xt. Since H1 is chosen from a 2-universal family, E[Xt] ≤ k/m, and hence E[Z1] ≤ (k/m) Σ_{t=1}^T ct.
By Markov’s inequality,
T
k/m k
Pr Z1 ≥ ε ct ≤ = . (15.2)
t=1
ε mε
Let Z2, Z3, . . . , Zk be corresponding random variables for each of the other hash functions. By symmetry, all of the Zj satisfy the probabilistic bound of Eqn. (15.2). Moreover, the Zj are independent, since the hash functions are chosen independently from the family of hash functions. Hence
$$\begin{aligned}
\Pr\left(\min_{1 \le j \le k} Z_j \ge \varepsilon\sum_{t=1}^{T} c_t\right) &= \prod_{j=1}^{k}\Pr\left(Z_j \ge \varepsilon\sum_{t=1}^{T} c_t\right) && (15.3)\\
&\le \left(\frac{k}{m\varepsilon}\right)^{k}. && (15.4)
\end{aligned}$$
It is easy to check using calculus that (k/(mε))^k is minimized when k = mε/e, in which case
$$\left(\frac{k}{m\varepsilon}\right)^{k} = e^{-m\varepsilon/e}.$$
Of course, k needs to be chosen so that k and m/k are integers, but this does not substantially affect the probability bounds.
We can use a count-min filter to track heavy hitters in the routing setting as follows. When a pair (iT, cT) arrives, we update the count-min filter. If the minimum counter value associated with iT is at least the threshold q for heavy hitters, then we put the item into a list of potential heavy hitters. We do not concern ourselves with the details of performing operations on this list, but note that it can be organized to allow updates and searches in time logarithmic in its size by using standard balanced search-tree data structures; alternatively, it could be organized in a large array or a hash table.
Recall that we use Q to represent the total traffic at the end of the day.
Corollary 15.13: Suppose that we use a count-min filter with k = ln(1/δ) hash functions, m = ln(1/δ) · (e/ε) counters, and a threshold q. Then all heavy hitters are put on the list, and any source–destination pair that corresponds to fewer than q − εQ bytes is put on the list with probability at most δ.
Proof: Since counts increase over time, we can simply consider the situation at the end of the day. By Theorem 15.12, the count-min filter will ensure that all true heavy hitters are put on the list, since the smallest counter value for a true heavy hitter will be at least q. Further, by Theorem 15.12, the smallest counter value for any source–destination pair that corresponds to fewer than q − εQ bytes reaches q with probability at most
$$\left(\frac{k}{m\varepsilon}\right)^{k} \le e^{-\ln(1/\delta)} = \delta.$$
The count-min filter is very efficient in terms of using only limited randomness in its hash functions, only O((1/ε) ln(1/δ)) counters, and only O(ln(1/δ)) computations to process each item. (Additional computation and space might be required to handle the list of potential heavy hitters, depending on its representation.)
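The sketch below implements a count-min filter along the lines just described; the class name, the choice of prime, and the parameters are ours, and items are assumed to be integers smaller than the prime.

```python
import random

class CountMinFilter:
    """Count-min filter: k groups of m/k counters, each indexed by its
    own hash function drawn from a 2-universal family. The smallest
    counter an item hashes to upper-bounds its true total count."""

    def __init__(self, m, k, p=2**31 - 1):  # p prime, exceeds all items
        self.width, self.p = m // k, p
        self.hashes = [(random.randrange(1, p), random.randrange(p))
                       for _ in range(k)]
        self.counters = [[0] * self.width for _ in range(k)]

    def _cells(self, item):
        return [((a * item + b) % self.p) % self.width
                for a, b in self.hashes]

    def update(self, item, count):
        for a, j in enumerate(self._cells(item)):
            self.counters[a][j] += count

    def estimate(self, item):
        return min(self.counters[a][j]
                   for a, j in enumerate(self._cells(item)))

cmf = CountMinFilter(m=1024, k=4)
cmf.update(12345, 300)
cmf.update(77, 10)
print(cmf.estimate(12345))  # at least 300, and usually exactly 300
```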
Before ending our discussion of the count-min filter, we describe a simple improvement known as conservative update that often works well in practice, although it is difficult to analyze. When a pair (it, ct) arrives, our original count-min filter adds ct to each counter Ca,j that the item it hashes to, thereby guaranteeing that
$$\min_{j = H_a(i),\, 1 \le a \le k} C_{a,j}(T) \ge \mathrm{Count}(i, T)$$
holds for all i and T. In fact, this can often be guaranteed without adding ct to each counter. Consider the state after the (t − 1)th pair has been processed. Suppose that, inductively, up to that point we have, for all i,
$$\min_{j = H_a(i),\, 1 \le a \le k} C_{a,j}(t-1) \ge \mathrm{Count}(i, t-1).$$
Hence we can look at the minimum counter value v obtained from the k counters that it hashes to, add ct to that value, and increase to v + ct any counter that is smaller than v + ct. An example is given in Figure 15.1. An item arrives with a count of 3; at the time of arrival, the smallest counter associated with the item has value 4. It follows that the count for this item is at most 7, so we can update all associated counters to ensure they are all at least 7. In general, if all the counters it hashes to are equal, conservative
Figure 15.1: An item comes in, and 3 is to be added to the count. The initial state is on the left;
the shaded counters need to be updated. Using conservative update, the minimum counter value 4
determines that all corresponding counters need to be pushed up to at least 4 + 3 = 7. The resulting
state after the update is shown on the right.
update is equivalent to just adding ct to each counter. When the counters are not all equal, the conservative update improvement adds less to some of the counters, which will tend to reduce the errors that the filter produces.
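Continuing the hypothetical CountMinFilter sketch from above, conservative update is a small change to the update rule: raise each counter only as far as the new provable upper bound v + ct.

```python
# Conservative update for the CountMinFilter sketch above: compute the
# current minimum v over the item's counters, and lift every counter to
# at least v + count instead of adding count everywhere.
def conservative_update(self, item, count):
    cells = list(enumerate(self._cells(item)))
    bound = min(self.counters[a][j] for a, j in cells) + count
    for a, j in cells:
        self.counters[a][j] = max(self.counters[a][j], bound)

CountMinFilter.conservative_update = conservative_update
```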
15.5. Exercises
Exercise 15.1: A fair coin is flipped n times. Let Xij, with 1 ≤ i < j ≤ n, be 1 if the ith and jth flips landed on the same side; let Xij = 0 otherwise. Show that the Xij are pairwise independent but not independent.
Exercise 15.2: (a) Let X and Y be numbers that are chosen independently and uni-
formly at random from {0, 1, . . . , n}. Let Z be their sum modulo n + 1. Show that X,
Y, and Z are pairwise independent but not independent.
(b) Extend this example to give a collection of random variables that are k-wise
independent but not (k + 1)-wise independent.
Exercise 15.3: For any family of hash functions from a finite set U to a finite set V, show that, when h is chosen at random from that family of hash functions, there exists a pair of elements x and y such that
$$\Pr(h(x) = h(y)) \ge \frac{1}{|V|} - \frac{1}{|U|}.$$
This result should not depend on how the function h is chosen from the family.
Exercise 15.4: Show that, for any discrete random variable X that takes on values in
the range [0, 1], Var[X] ≤ 1/4.
Exercise 15.5: Suppose we have a randomized algorithm Test for testing whether a
string appears in a language L that works as follows. Given an input x, the algorithm
Test chooses a random integer r uniformly from the set S = {0, 1, . . . , p − 1} for some
prime p. If x is in the language, then Test(x, r) = 1 for at least half of the possible
values of r. A value of r such that Test(x, r) = 1 is called a witness for x. If x is not in
the language, then Test(x, r) = 0 always.
Exercise 15.6: Our analysis of Bucket sort in Section 5.2.2 assumed that n elements
were chosen independently and uniformly at random from the range [0, 2k ). Suppose
instead that n elements are chosen uniformly from the range [0, 2k ) in such a way that
they are only pairwise independent. Show that, under these conditions, Bucket sort still
requires linear expected time.
Exercise 15.7: (a) We have shown that the maximum load when n items are hashed into n bins using a hash function chosen from a 2-universal family of hash functions is at most 1 + √(2n) with probability at least 1/2. Generalize this argument to k-universal hash functions. That is, find a value such that the probability that the maximum load is larger than that value is at most 1/2.
(b) In Lemma 5.1 we showed that, under the standard balls-and-bins model, the maximum load when n balls are thrown independently and uniformly at random into n bins is at most 3 ln n/ ln ln n with probability 1 − 1/n. Find the smallest value of k such that the maximum load is at most 3 ln n/ ln ln n with probability at least 1/2 when choosing a hash function from a k-universal family.
Exercise 15.8: We can generalize the problem of finding a large cut to finding a large
k-cut. A k-cut is a partition of the vertices into k disjoint sets, and the value of a cut
is the weight of all edges crossing from one of the k sets to another. In Section 15.1.2
we considered 2-cuts when all edges had the same weight 1, and we showed how to
derandomize the standard randomized algorithm using collections of n pairwise inde-
pendent bits. Explain how this derandomization could be generalized to obtain a poly-
nomial time algorithm for 3-cuts, and give the running time for your algorithm. (Hint:
You may want to use a hash function of the type found in Section 15.3.2.)
Exercise 15.9: Suppose we are given m vectors v̄1, v̄2, . . . , v̄m ∈ {0, 1}^ℓ such that any k of the m vectors are linearly independent modulo 2. Let v̄i = (v_{i,1}, v_{i,2}, . . . , v_{i,ℓ}). Let ū be chosen uniformly at random from {0, 1}^ℓ, and let Xi = Σ_{j=1}^{ℓ} v_{i,j} u_j mod 2. Show that the Xi are uniform, k-wise independent bits.
Exercise 15.10: We examine a specific way in which 2-universal hash functions differ
from completely random hash functions. Let S = {0, 1, 2, . . . , k}, and consider a hash
function h with range {0, 1, 2, . . . , p − 1} for some prime p much larger than k. Con-
sider the values h(0), h(1), . . . , h(k). If h is a completely random hash function, then
the probability that h(0) is smaller than any of the other values is roughly 1/(k + 1).
(There may be a tie for the smallest value, so the probability that any h(i) is the unique
smallest value is slightly less than 1/(k + 1).) Now consider a hash function h chosen
H = {h_{a,b} | 0 ≤ a, b ≤ p − 1}
of Section 15.3.2. Estimate the probability that h(0) is smaller than h(1), . . . , h(k) by randomly choosing 10,000 hash functions from H and computing h(x) for all x ∈ S. Run this experiment for k = 32 and k = 128, using primes p = 5,023,309 and p = 10,570,849. Is your estimate close to 1/(k + 1)?
Exercise 15.11: In a multi-set, each element can appear multiple times. Suppose that
we have two multi-sets, S1 and S2 , consisting of positive integers. We want to test if
the two sets are the “same” – that is, if each item appears the same number of times in
each set. One way of doing this is to sort both sets and then compare the sets in sorted
order. This takes O(n log n) time if each multi-set contains n elements.
(a) Consider the following algorithm. Hash each element of S1 into a hash table with
cn counters; the counters are initially 0, and the ith counter is incremented each
time the hash value of an element is i. Using another table of the same size and
using the same hash function, do the same for S2. If the ith counter in the first table
matches the ith counter in the second table for all i, report that the sets are the same,
and otherwise report that the sets are different.
Analyze the running time and error probability of this algorithm, assuming that
the hash function is chosen from a 2-universal family. Explain how this algorithm
can be extended to a Monte Carlo algorithm, and analyze the trade-off between its
running time and its error probability.
(b) We can also design a Las Vegas algorithm for this problem. Now each entry in the
hash table corresponds to a linked list of counters. Each entry holds a list of the
number of occurrences of each element that hashes to that location; this list can be
kept in sorted order. Again, we create a hash table for S1 and a hash table for S2 ,
and we test after hashing if the resulting tables are equal.
Argue that this algorithm requires only linear expected time using only linear
space.
Exercise 15.12: Consider the 2-universal family of hash functions
$$H = \{h_{a,b} \mid 1 \le a \le p - 1,\; 0 \le b \le p - 1\}$$
from Section 15.3.2, and the related family H′ = {h_a | 1 ≤ a ≤ p − 1}, where h_a(x) = (ax mod p) mod n.
Give an example to show that H′ is not 2-universal. Then prove that H′ is almost 2-universal in the following sense: for any x, y ∈ {0, 1, 2, . . . , p − 1}, if h is chosen uniformly at random from H′ then
$$\Pr(h(x) = h(y)) \le \frac{2}{n}.$$
Exercise 15.13: In describing count-min filters, we assumed that the data stream consisted of pairs of the form (it, ct), where it was an item and ct > 0 an integer count increment. Suppose that one were also allowed to decrement the count for an item, so that the stream could include pairs of the form (it, ct) with ct < 0. We could require that the total count for an item i,
$$\mathrm{Count}(i, T) = \sum_{t:\, i_t = i,\; 1 \le t \le T} c_t,$$
always be positive.
Explain how you could modify or otherwise use count-min filters to find heavy hitters in this situation.
chapter sixteen
Power Laws and
Related Distributions
In this chapter, we explore some additional basic probability distributions that arise
in a number of computer science applications. One family of distributions we focus
on are called power law distributions. An interesting aspect of these distributions is
that, unlike many of the distributions we have seen, the variance of the distribution can
be extremely large – with some natural choices of parameters, the variance is infinite.
As a result, certain methods we usually rely on in probabilistic arguments, such as
concentration of the sum of random variables, may not apply.
Power laws and related distributions may initially appear surprising or unusual, but
in fact they are quite natural, and arise easily from a number of basic models. We
examine some of these models in the course of the chapter. Power laws may contrast
sharply with other distributions we have seen, such as Gaussian distributions, which
also appear quite frequently in real-world settings, but both types of distributions have
their uses and their place.
As some groundwork, suppose we want to consider the average height of women
in the United States. We could take a random sample of women, and we would expect
that a fairly small number of samples would quickly lead to a good estimate. (The U.S.
Census Bureau publishes data on height distribution; currently, the average woman’s
height is somewhere between 5’ 4” and 5’ 5”, although the range depends on the age
group you are considering.) This is because heights fall in a narrow range, with the
number of people of a certain height falling very quickly as you move away from
the average – very few women are more than 7 feet tall. On the other hand, suppose
we wanted to find the average number of times a word appears in all the books printed
in the United States in a year. Some common words, such as “the”, “of”, and “an”,
appear remarkably frequently, while most words would only appear at most a handful
of times. In fact, the distribution of words in literature has been studied in some detail,
and has been found to roughly follow a power law distribution. We consider later some
proposed arguments for why that might naturally be the case.
Many other phenomena share this property that the corresponding distribution is
not well concentrated around its mean, such as the sizes of cities, the strength of
earthquakes, and the distribution of wealth among families. For many such examples,
a power law distribution provides a plausible model for the distribution.
Before defining a power law distribution, it may help to give an example. A Pareto distribution with parameters α > 0 and minimum value m > 0 satisfies
$$\Pr(X \ge x) = \left(\frac{x}{m}\right)^{-\alpha}.$$
Here the minimum value m appropriately satisfies Pr(X ≥ m) = 1. The value α is
sometimes called the tail index. Correspondingly, the density function for the Pareto
distribution is
f (x) = αmα x−α−1 .
Let us try to examine the moments of this random variable. The mean E[X] is given by
$$E[X] = \int_{m}^{\infty} x f(x)\,dx = \int_{m}^{\infty} x\left(\alpha m^{\alpha} x^{-\alpha-1}\right)dx = \alpha m^{\alpha}\int_{m}^{\infty} x^{-\alpha}\,dx.$$
We already notice something unusual; the mean is not finite when α ≤ 1, as the integral in the expression above diverges. For α > 1, we can complete the calculation to find that the mean is given by
$$E[X] = \frac{\alpha m}{\alpha - 1}.$$
If we look at the jth moment E[X^j], we have
$$E[X^j] = \int_{m}^{\infty} x^j f(x)\,dx = \int_{m}^{\infty} x^j\left(\alpha m^{\alpha} x^{-\alpha-1}\right)dx = \alpha m^{\alpha}\int_{m}^{\infty} x^{j-1-\alpha}\,dx.$$
The jth moment is not finite when α ≤ j; for α > j, we have
$$E[X^j] = \frac{\alpha m^j}{\alpha - j}.$$
So, for example, when α ≤ 2, the second moment is infinite. Correspondingly, the variance is infinite when 1 < α ≤ 2; for α ≤ 1, since both the first and second moments are infinite, the variance is not well-defined.
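These effects are easy to see empirically. The sketch below samples from a Pareto distribution by inverse transform (if U is uniform on (0, 1], then mU^{−1/α} has the Pareto distribution above) and shows how erratically the sample mean behaves when the variance is infinite.

```python
import random

def pareto_sample(alpha, m):
    """Inverse-transform sampling: X = m * U**(-1/alpha) satisfies
    Pr(X >= x) = (x/m)**(-alpha) for x >= m."""
    u = 1.0 - random.random()  # uniform on (0, 1], avoids division by 0
    return m * u ** (-1.0 / alpha)

# With alpha = 1.5 and m = 1 the mean alpha*m/(alpha - 1) = 3 is finite,
# but the variance is infinite, so sample averages converge slowly.
random.seed(42)
for n in (10**3, 10**5, 10**6):
    print(n, sum(pareto_sample(1.5, 1.0) for _ in range(n)) / n)
```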
More generally, a nonnegative random variable X is said to have a power law distribution if
$$\Pr(X \ge x) \sim c x^{-\alpha}$$
for constants c > 0 and α > 0. Here f(x) ∼ g(x) represents that the ratio of f(x) and g(x) converges to 1 as x grows large. Roughly speaking, a power law distribution asymptotically behaves like a Pareto distribution. It is worth noting that the term is sometimes used slightly differently in other contexts. For example, sometimes people use power law distribution interchangeably with Pareto distribution, and refer to what we have called a power law distribution as an asymptotic power law distribution. Also, sometimes people use α + 1 in the definition where we have used α. (This convention yields that the density function, rather than the complementary cumulative distribution function, has parameter α.) Finally, sometimes one allows the ratio to converge not to 1, but to some slowly growing function.
A power law is best visualized on what is called a log–log plot, where both axes
are presented using logarithmic scales. On a log–log plot the relationship y = axb is
shown by presenting ln y = b ln x + ln a, so that the polynomial relationship appears as
a straight line whose slope depends on the exponent b. (Here we use a natural logarithm
for our log–log plot, but we could use any base for the logarithm and still obtain a
straight line.) For a Pareto distribution with parameters α > 0 and m > 0, a log–log
plot of F̄ (x) = Pr(X ≥ x) (which we recall is called the complementary cumulative
distribution function) therefore follows a straight line:
ln F̄ (x) = −α ln x + α ln m.
More generally, if X has a power law distribution, then in a log–log plot of the comple-
mentary cumulative distribution function, asymptotically the behavior will be a straight
line. This provides a simple empirical test for whether a random variable may behave
according to a power law given an appropriate sample; while a nearly straight line does
not guarantee a power law distribution, if the results are far from a straight line, a power
law is unlikely. (It is important to emphasize that the “straight-line” test on a log–log
plot is sometimes used to infer that a sample arises from a distribution that follows a
power law, but because many other distributions produce nearly linear outcomes on a
log–log plot, one must take more care to test for power laws.) On a log–log plot the
density function for the Pareto distribution also is a straight line:
ln f (x) = (−α − 1) ln x + α ln m + ln α.
Similarly, asymptotically the density function for a power law will approach a straight
line.
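The log-log test is straightforward to carry out; the following sketch, which assumes matplotlib is available, draws the empirical complementary cumulative distribution function of Pareto samples, which should appear as a line of slope −α.

```python
import random
import matplotlib.pyplot as plt  # assumed available for plotting

random.seed(1)
alpha, m = 2.0, 1.0
# Pareto samples via inverse transform, sorted for the empirical CCDF.
data = sorted(m * (1.0 - random.random()) ** (-1.0 / alpha)
              for _ in range(10000))
n = len(data)
ys = [(n - i) / n for i in range(n)]  # empirical Pr(X >= x)
plt.loglog(data, ys)
plt.xlabel("x")
plt.ylabel("Pr(X >= x)")
plt.show()  # a straight line of slope -alpha, up to sampling noise
```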
Thus far we have focused on the mathematical definitions for continuous power law distributions. But we could also consider discrete variations. For example, the zeta distribution with parameter s > 1 is defined for all positive integer values according to
$$\Pr(X = k) = \frac{k^{-s}}{\zeta(s)},$$
where $\zeta(s) = \sum_{k=1}^{\infty} k^{-s}$ is the Riemann zeta function.
Each of the d characters is hit with equal probability (1 − q)/d, and the space bar is hit with probability q. A space is used to separate words. We consider the frequency distribution of words.
It is clear that as the monkey types, each word with j characters occurs with probability
$$q_j = \left(\frac{1-q}{d}\right)^{j} q,$$
and there are d^j words of length j. (We allow the empty word of length 0 for convenience.) The words of longer length are less likely and hence occur lower in the rank order of word frequency. In particular, the words with frequency ranks 1 + (d^j − 1)/(d − 1) to (d^{j+1} − 1)/(d − 1) have j letters. Hence, the word with frequency rank k = d^j has length j = log_d k and occurs with probability
$$p_k = \left(\frac{1-q}{d}\right)^{\log_d k} q = k^{\log_d(1-q) - 1}\, q.$$
For other values of k, as in Section 16.2.2 it is reasonable to use as an approximation that the length of the kth most frequent word is log_d k, in which case p_k ≈ k^{log_d(1−q)−1} q, and the power law behavior is apparent.¹
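The calculation is easy to check by simulation; the sketch below types randomly with d = 4 characters and q = 0.2 (our arbitrary choices) and tabulates word frequencies by rank.

```python
import random
from collections import Counter

random.seed(0)
d, q = 4, 0.2
alphabet = "abcd"
words = Counter()
word = []
for _ in range(500000):  # keystrokes
    if random.random() < q:
        words["".join(word)] += 1  # a space ends the (possibly empty) word
        word = []
    else:
        word.append(random.choice(alphabet))

# The k-th most frequent word should occur with probability roughly
# proportional to k ** (log_d(1 - q) - 1).
ranked = sorted(words.items(), key=lambda kv: -kv[1])
for rank, (w, count) in enumerate(ranked[:10], 1):
    print(rank, repr(w), count)
```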
The power law associated with word frequency, although it naturally arises from
optimization, does not seem to actually require it. This result serves as something of a
warning; there are multiple ways that power law distributions can arise. Indeed, we next
turn to one of the most frequently used models that leads to power law distributions,
preferential attachment.
To describe preferential attachment, let us work with a very simple model of the World Wide Web. The World Wide Web consists of web pages and directed hyperlinks from one page to another. The World Wide Web can naturally be thought of as a graph, with pages corresponding to vertices and hyperlinks corresponding to directed edges. The graph grows and changes as pages and links are added to the Web.
Our model of the Web's growth will be very basic; our goal is not detailed accuracy, but a high-level understanding of what might be happening. Let us start with two pages, each linking to the other; the starting configuration does not make a substantial difference, so this configuration is chosen for convenience. At each time step, a new page appears, with just a single link. (One could try to be more accurate by having multiple links or a distribution on links, but having a single link per page simplifies our
¹ The attentive reader might note that technically the result above does not quite match a power law as we have defined it; the appropriate limit does not tend to 1 but is bounded above and below by a constant, because instead of a steady decrease in the frequency with the rank there are discrete jumps. That is, instead of taking a power law to be defined by Pr(X ≥ x) ∼ cx^{−α}, we instead have a case here where Pr(X ≥ x) is Θ(x^{−α}). This is a minor point; small amounts of noise in the frequency of how individual letters are chosen would lead to a smoother behavior. Also, in some settings, random variables where Pr(X ≥ x) is Θ(x^{−α}) are referred to as power laws.
analysis and yields the important insights.) How should we model what page the new
link points to?
The idea behind preferential attachment is that new links will tend to attach to popu-
lar pages. In the case of the Web graph, new links tend to go to pages that already have
links. We can model this by thinking of the new page as copying a random link, with
some probability. Specifically, with probability γ < 1, the link for the new page points
to a page chosen uniformly at random, but with probability 1 − γ , the new page copies
a random link, so that the new page points to an existing page chosen proportionally to
the indegree of that page. We point out that our preferential attachment model of the
World Wide Web is a Markov chain, as we do not care about the history of how links
attached when a new link is added. We only care about the number of links directed
into each page.
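The model is simple to simulate directly, as in the sketch below; the names and parameter choices are ours, and the measured fractions can be compared with the limiting constants c_j derived next.

```python
import random
from collections import Counter

def preferential_attachment(steps, gamma, seed=0):
    """Start with two pages linking to each other; each new page adds one
    link, pointing to a uniformly random page with probability gamma and
    otherwise copying a uniformly random existing link (which selects a
    target with probability proportional to its indegree)."""
    random.seed(seed)
    targets = [1, 0]  # targets[v] is the page that page v links to
    for v in range(2, steps):
        if random.random() < gamma:
            targets.append(random.randrange(v))           # uniform page
        else:
            targets.append(targets[random.randrange(v)])  # copy a link
    indegree = Counter(targets)
    return Counter(indegree[v] for v in range(steps))  # degree -> count

steps = 100000
dist = preferential_attachment(steps, gamma=0.5)
for j in range(6):
    print(j, dist[j] / steps)  # compare with the constants c_j
```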
Let us start with a not entirely rigorous argument that provides the intuition for how this model behaves. Let Xj(t) (or just Xj where the meaning is clear) be the number of pages with indegree j when there are t pages in the system. Then for j ≥ 1 the probability that Xj increases when the next page arrives is just
$$\gamma X_{j-1}(t)/t + (1-\gamma)(j-1)X_{j-1}(t)/t;$$
the first term is the probability a new link is chosen at random and chooses a page with indegree j − 1, and the second term is the probability that a new link is chosen proportionally to the indegrees and chooses a page with indegree j − 1. Similarly, the probability that Xj decreases is
$$\gamma X_j(t)/t + (1-\gamma)jX_j(t)/t.$$
In the steady state, we expect Xj(t) to grow as cj t for constants cj. Since each new page initially has indegree 0, considering the expected change in X0 at each step gives
$$c_0 = 1 - \gamma c_0,$$
so c0 = 1/(1 + γ). More generally, we find using Eqn. (16.1) that
$$c_j = \gamma c_{j-1} + (1-\gamma)(j-1)c_{j-1} - \gamma c_j - (1-\gamma)j c_j. \tag{16.3}$$
This gives the following recurrence for cj:
$$c_j = c_{j-1}\,\frac{\gamma + (j-1)(1-\gamma)}{1 + \gamma + j(1-\gamma)}. \tag{16.4}$$
This is enough for us to find the values for cj explicitly. If we focus on the asymptotics, we find that for large j
$$\frac{c_j}{c_{j-1}} = 1 - \frac{2-\gamma}{1 + \gamma + j(1-\gamma)} \sim 1 - \frac{2-\gamma}{1-\gamma}\cdot\frac{1}{j}.$$
Asymptotically, for the above to hold we have $c_j \sim j^{-\frac{2-\gamma}{1-\gamma}}$, giving a power law. To see this, we observe that $c_j \sim j^{-\frac{2-\gamma}{1-\gamma}}$ implies
$$\frac{c_j}{c_{j-1}} \sim \left(\frac{j-1}{j}\right)^{\frac{2-\gamma}{1-\gamma}} \sim 1 - \frac{2-\gamma}{1-\gamma}\cdot\frac{1}{j}.$$
only vertex degrees affected are those of v1 and v2. To see this, consider the evolution of graphs G1 and G2 arising after choosing v1 or v2, respectively, at the tth step. If the next step creates a link to a random vertex (the same random vertex in both graphs, as the Z_{t+1} would be the same), the degree of every vertex besides v1 and v2 remains the same in G1 and G2. Similarly, consider if the next step creates a link by copying a random link; that is, Z_{t+1} says to copy the link created at the ℓth step of the process for some ℓ. If ℓ ≠ t, then again the degree of every vertex besides v1 and v2 clearly remains the same in G1 and G2, since the same vertex receives a new link. If ℓ = t, then v1 obtains an extra link in G1 and v2 obtains an extra link in G2; however, this only affects the degrees of vertices v1 and v2, so the bound on |Y_{j,t} − Y_{j,t+1}| still holds.
$$\begin{aligned}
\delta(t+1) - \delta(t) &= E[X_{0,t}] - E[X_{0,t+1}] + \frac{1}{1+\gamma}\\
&= -1 + \frac{\gamma E[X_{0,t}]}{t} + \frac{1}{1+\gamma}\\
&= -\frac{\gamma\,\delta(t)}{t}.
\end{aligned}$$
It follows that δ(t ) is decreasing in t, but is always greater than 0. The lemma follows,
since δ(t ) < 2.
Lemma 16.3: Let cj be the constants given by Eqn. (16.3). For any constant j, there is a constant Bj such that for t ≥ 2
$$\left|\frac{E[X_{j,t}]}{t} - c_j\right| \le \frac{B_j}{t}.$$
Before beginning the proof, we outline the reasoning behind it. The idea is that if
E[X j,t ] is too far from c j t, at the next step there will be a push to reduce the difference.
If E[X j,t ] becomes too large, then it becomes more likely that at the next step a vertex
with degree j will gain a link to it and become a vertex of degree j + 1, reducing the
difference between E[X j,t ] and c j t. Similarly, if E[X j,t ] becomes too small, then it is
less likely that a vertex with degree j will gain a link and the difference is similarly
reduced. The complication is that X j,t+1 also depends on E[X j−1,t ], which may itself be
deviating from c j−1t, and therefore may also serve to push E[X j,t ] from c j t. Inductively,
however, those deviations are small, and as such their effect is overcome by the initial
effect that pushes E[X j,t ] to c j t.
Proof: We know the statement is true for j = 0, and we prove it by induction for larger j, by also performing an induction on the time t. For j ≥ 0 and t ≥ 2, let
$$\delta_j(t) = c_j t - E[X_{j,t}],$$
where cj is given by Eqn. (16.3). From Eqn. (16.1), we have for j ≥ 1
$$E[X_{j,t+1}] = E[X_{j,t}] + \frac{\gamma + (1-\gamma)(j-1)}{t}E[X_{j-1,t}] - \frac{\gamma + (1-\gamma)j}{t}E[X_{j,t}]. \tag{16.6}$$
Using Eqn. (16.6), we find for j ≥ 1
$$\begin{aligned}
\delta_j(t+1) &= c_j(t+1) - E[X_{j,t+1}]\\
&= c_j t + c_j - E[X_{j,t}] - \frac{\gamma + (1-\gamma)(j-1)}{t}E[X_{j-1,t}] + \frac{\gamma + (1-\gamma)j}{t}E[X_{j,t}]\\
&= c_j + \delta_j(t) - \frac{\gamma + (1-\gamma)(j-1)}{t}\left(c_{j-1}t - \delta_{j-1}(t)\right) + \frac{\gamma + (1-\gamma)j}{t}\left(c_j t - \delta_j(t)\right)\\
&= \delta_j(t) + \frac{\gamma + (1-\gamma)(j-1)}{t}\,\delta_{j-1}(t) - \frac{\gamma + (1-\gamma)j}{t}\,\delta_j(t).
\end{aligned}$$
Suppose inductively that |δ_{j−1}(t)| ≤ B_{j−1}. For t ≤ γ + (1 − γ)j we can find a constant Bj so that |δj(t)| ≤ Bj, since this is only over a constant number of steps. Let us also suppose that Bj ≥ B_{j−1}; if not, we could simply increase Bj to this value. For t > γ + (1 − γ)j, the right-hand side above has absolute value bounded by
$$\left(1 - \frac{\gamma + (1-\gamma)j}{t}\right)\left|\delta_j(t)\right| + \frac{\gamma + (1-\gamma)(j-1)}{t}\left|\delta_{j-1}(t)\right|.$$
As t > γ + (1 − γ)j implies (γ + (1 − γ)j)/t < 1, this expression is bounded above by
$$\left(1 - \frac{\gamma + (1-\gamma)j}{t}\right)B_j + \frac{\gamma + (1-\gamma)(j-1)}{t}B_{j-1} \le B_j,$$
where here we have inductively assumed |δj(t)| ≤ Bj, and the right-hand side then follows from (γ + (1 − γ)j)/t ≥ (γ + (1 − γ)(j − 1))/t and Bj ≥ B_{j−1}.
As a result we have |δj(t + 1)| ≤ Bj and by induction the lemma follows.
To summarize, we have shown that under appropriate initial conditions for the preferential attachment model for the World Wide Web, with high probability the fraction of pages with j other pages linking to them converges to cj, where the cj follow a power law given by $c_j \sim j^{-\frac{2-\gamma}{1-\gamma}}$. Here 1 − γ is the probability that a link for a new page is chosen by copying an existing link.
Although we have presented preferential attachment as a potential model for the
Web graph, our analysis applies generally to preferential attachment models. In fact,
the idea of preferential attachment arose much earlier than the World Wide Web; in
1925, Yule used a similar analysis to explain the distribution of species among gen-
era of plants, which had been shown empirically to satisfy a power law distribution.
Another development of how preferential attachment leads to a power law was given
by Simon in 1955. While Simon was a bit too early to provide a model for the graph
arising from the World Wide Web, he suggested several potential applications of this
type of preferential attachment model: distributions of word frequencies in documents,
distributions of numbers of papers published by scientists, distribution of cities by pop-
ulation, distribution of incomes, and distribution of species among genera.
16.4. Using the Power Law in Algorithm Analysis

Theorem 16.4: The Triangle Listing Algorithm runs in O(m^{3/2}) time (assuming
m ≥ n).
Proof: We first show that all triangles are listed exactly once. Consider a triangle
{x, y, z} with d∗(x) > d∗(y) > d∗(z). Let us say a vertex v is processed when we reach
it as we go through the vertices in decreasing order of the d∗ values. Then vertex x
is processed before vertices y and z. The triangle is listed precisely when vertex y is
processed, since vertex x was added to A[y] and A[z] when x was processed, and when
y is processed we have that z ∈ N(y) and d∗(z) < d∗(y), so the triangle will be output.
Moreover, this is the only time this triangle is output, since when vertex x is processed
neither y nor z is in A[x], and when vertex z is processed both d∗(x) and d∗(y)
are greater than d∗(z).
To bound the running time, we first see that calculating the vertex degrees is O(m)
and the initial sorting step is O(n log n), and these are each O(m^{3/2}). We claim that for
each of the O(m) edges, corresponding to the step “For each u ∈ N(v)”, we do at most
O(√m) work to calculate the intersection of A[u] and A[v]. Because A[u] and A[v] are
in sorted order with vertices ordered by d∗ as they are added to the list, the intersection
can be computed in time proportional to the maximum list size. But for any vertex x,
A[x] contains only vertices with degree at least as large as x’s degree. If x’s degree were
larger than 2√m, then all of x’s neighbors in A[x] would also have degree larger than
2√m, which would yield at least (2√m)²/2 = 2m edges in the graph. (We divide by
2 as each edge might be counted twice.) This contradicts that there are only m edges
in the graph, so every list A[x] has size at most O(√m), and the total running time is
bounded by O(m^{3/2}).
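The proof pins down the algorithm well enough to sketch it. The following Python rendering is a reconstruction from the proof’s description, not the book’s own code; the names are illustrative, and for brevity it uses set intersection rather than the sorted-list merge the running-time analysis assumes.

def list_triangles(adj):
    """adj: dict mapping each vertex to the set of its neighbors."""
    # d*: decreasing degree, ties broken by vertex label.
    order = sorted(adj, key=lambda v: (len(adj[v]), v), reverse=True)
    rank = {v: r for r, v in enumerate(order)}
    A = {v: [] for v in adj}    # A[v]: already-processed neighbors of v
    triangles = []
    for v in order:             # process vertices in decreasing d* order
        for u in adj[v]:
            if rank[u] > rank[v]:                  # u comes later in the order
                for w in set(A[u]) & set(A[v]):    # w closes a triangle
                    triangles.append((w, v, u))
                A[u].append(v)
    return triangles

adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
print(list_triangles(adj))   # [(2, 1, 0)]: the single triangle {0, 1, 2}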
Now let us consider this algorithm in the setting where the graph has a degree distribution
that is governed by a power law. There are various ways one could define
a degree distribution governed by a power law, but for our purposes here it
will suffice to assume we have a graph where the number of vertices of
degree at least j is at most cn j^{−α} for some constants c and α. Notice that if the number
of vertices of degree exactly j is at most c₂ n j^{−β} for some constants c₂ and β, then
this condition is satisfied, with α = β − 1. Such an assumption could hold with high
probability for a random graph produced by, for example, a preferential attachment
model.
Theorem 16.5: The Triangle Listing Algorithm runs in O(m n^{1/(1+α)}) time if the number
of vertices of degree at least j is at most cn j^{−α} for some constants c and α.

Theorem 16.5 offers an improved bound over Theorem 16.4 when α > 1. Notice
that such power law graphs are sparse, so in this case m = O(n); hence the running
time could also be expressed as O(n^{(2+α)/(1+α)}).
Proof: We again bound the work to calculate the intersections of A[u] and A[v]. For
any vertex x, |A[x]| ≤ d(x), since only neighbors of x are on the list A[x]. Also, A[x]
only contains vertices with degree at least d(x). Hence |A[x]| ≤ min(d(x), cn(d(x))^{−α}).
Equalizing the two terms in the minimization, we find |A[x]| ≤ (cn)^{1/(1+α)}. The theorem
follows.
16.5. Other Related Distributions

While power law distributions can often provide a natural model, there are other distributions
with similar behaviors that may provide better models in some situations.
Indeed, there can be controversy as to what is the best model in various situations, and
because the tail of a power law distribution corresponds to relatively rare events, the
choice of model can have significant implications regarding the importance of these
rare events. It could be important to have a good model, for example, of exactly how
rarely very strong earthquakes will occur. Here we examine some distributions that are
often suggested as alternatives to a power law distribution.
One common alternative is a power law distribution with an exponential cutoff, with
density proportional to x^{−α} e^{−λx} for some α, λ > 0. The idea behind using such a distribution is that, similar to a lognormal
distribution, it can roughly follow a power law distribution for much of the body
of the distribution when λ is small, but for sufficiently large values of x the exponential
term will dominate. The exponential cutoff can model power laws that eventually must
end because of resource limitations. For example, the distribution of wealth may be
better fit by a power law with an exponential cutoff than by a pure power law; eventually,
there are limits to the money to be had, and as such the exponential cutoff may better
model the tail of the distribution.
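A quick numerical illustration (not from the original text; the parameter values are arbitrary) shows the cutoff taking over:

import math

alpha, lam = 2.0, 0.01     # illustrative parameters
for x in [1, 10, 100, 1000]:
    pure = x ** -alpha                     # pure power law x^{-alpha}
    cutoff = pure * math.exp(-lam * x)     # x^{-alpha} e^{-lambda x}
    print(f"x={x:5d}  power law={pure:.3e}  with cutoff={cutoff:.3e}")

For x well below 1/λ the two densities nearly coincide, so the log–log plot looks straight; for x well above 1/λ the exponential factor drives the cutoff density far below the power law.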
16.6. Exercises
Exercise 16.1: (a) Pareto distributions are often said to be “scale invariant” in the
following sense. If X is a random variable that follows a Pareto distribution and has
density f (x), then the rescaled random variable having density g(x) = f (cx) has density
proportional to f (x). Prove this statement.
(b) An implication of scale invariance is that if we measure our random variable in
different units, it remains a Pareto distribution. For example, if we think wealth follows
a Pareto distribution and we rescale to measure wealth in millions of dollars instead of
dollars, we still have a Pareto distribution. Show that, under such a rescaling (where
g(x) = f (cx)), a Pareto distribution remains a straight line on a log–log plot, and is just
shifted up or down.
Exercise 16.2: Suppose that the time to finish a project in hours is given by a Pareto
distribution with parameter α = 2 and a minimum time of one hour. What is the
expected time to complete the project? Now suppose that the project is not completed
after three hours. If the time to complete the project is given by the initial Pareto dis-
tribution conditioned on the completion time being at least three hours, what is the
expected remaining time until the completion of the project? How does this compare
with the original expected time to complete the project?
Exercise 16.3: Consider a random variable X that has a Pareto distribution with
parameters α > 0 and minimum value m. Determine for x ≥ y ≥ m the conditional
distribution
Pr(X ≥ x | X ≥ y).
Exercise 16.4: Suppose that the time to finish a project in hours is given by a Pareto
distribution with parameter α and a minimum time of one hour. Pareto distributions
can have the property that the longer the project goes without completing, the longer
it is expected to take to complete. That is, if X is the time at which the project finishes, we are
concerned with

f(y) = E[X − y | X ≥ y].

Show that f is an increasing function when α > 1.
Exercise 16.6: Power law distributions are often described anecdotally by phrases
such as “20% of the population has 80% of the income.” If one assumes a Pareto dis-
tribution, this phrase determines a parameter α. What value of α corresponds to this
phrase? Your argument should explain why your result is independent of the minimum
value m.
Exercise 16.7: Consider the standard random walk X_0, X_1, X_2, . . . on the integers that
starts at 0 and moves from X_i to X_i + 1 with probability 1/2 at each step and from X_i to
X_i − 1 with probability 1/2 at each step. We are interested in the first return time to 0.
Note that this time must be even. Let f_t be the probability that the first time the walk
returns to 0 is at time 2t. Let u_t be the probability the walk is at 0 at time 2t.
(a) Prove that u_t = \binom{2t}{t} 2^{−2t}.
(b) Consider the probability Pr(X_1 > 0, X_2 > 0, . . . , X_{2t−1} > 0 | X_{2t} = 0). Show that
this probability is 1/(2t − 1). (Hint: this can be done using the Ballot Theorem from
Section 13.2.1.)
(c) Prove that f_t = u_t/(2t − 1).
(d) Using Stirling’s formula, show that f_t follows a power law.
Exercise 16.8: Consider the monkey typing randomly experiment with an alphabet
of two letters that are hit with differing probabilities: “a” occurs with probability q,
“b” occurs with probability q², and a space occurs with probability 1 − q − q². (Here
q satisfies 1 − q − q² > 0.)
(a) Show that every word the monkey can type occurs with probability q^j (1 − q − q²)
for some integer j.
(b) Let us say a word has pseudo-rank j if it occurs with probability q^j (1 − q − q²).
Show that the number of words with pseudo-rank j is the (j + 1)st Fibonacci number
F_{j+1} (where here we start with F_0 = 0 and F_1 = 1).
(c) Use facts about the Fibonacci numbers, such as \sum_{i=1}^{k} F_i = F_{k+2} − 1 and
F_k ≈ φ^k/√5 for large k, where φ = (1 + √5)/2, to show that the frequency of the jth most
frequent word behaves (roughly) like a power law, following a similar approach to
that used to analyze the setting of the monkeys typing randomly experiment with
equal character probabilities.
Exercise 16.9: Write a program to simulate the monkeys typing randomly experiment
of Section 16.2.3. Your simulation should consider the following two scenarios.
• You have an alphabet of 8 letters and space; the space is chosen with probability 0.2,
and the other letters are chosen with equal probability.
• You have an alphabet of 8 letters and space; the space is chosen with probability 0.2,
and the probability for each of the other letters is chosen uniformly at random, with
the constraint that the sum of their probabilities is 0.8.
For each scenario, generate 1 million words, and track the frequency for each word
that appears. Recall that the empty word should be treated as a word, and you will have
many fewer than 1 million distinct words to track.
(a) In practice, you should be able to represent each word seen in an experiment using
at most 256 bits (at least most of the time). Explain why this is the case.
(b) Plot the distribution of word frequencies for each scenario on a log–log plot. The x-
axis should be the rank of the word in terms of its frequency, and the y-axis should
be the frequency. Do the two plots differ?
(c) Do your plots appear to follow a power law? Explain.
Exercise 16.12: Derive the expressions for the mean and variance of a power law
distribution with an exponential cutoff with parameters α and λ. (You may assume α
is a positive integer.)
Exercise 16.13: Derive the expressions for the mean, median, and mode of a lognormal
distribution with parameters μ and σ². Recall the mean is e^{μ+σ²/2}, the median is
e^{μ}, and the mode is e^{μ−σ²}.
Exercise 16.14: Consider the count-min filter from Section 15.4. We show that the
bounds on its performance can be improved if the distribution of item counts follows a
power law distribution. Suppose we have a collection of N items, where the total count
associated with the ℓth most frequent item is given by f_ℓ = c/ℓ^z for a constant z > 1 and
a value c. (You may assume all the f_ℓ are suitably rounded integers for convenience.) As
described in Section 15.4, we assume we have k disjoint groups of counters, each with
m/k counters. We use the minimum counter C_{a,j} that an item hashes to as an estimate
for its count.
(a) Show that the tail of the total count for all items after removing the b ≥ 1 most
frequent items is bounded by

\sum_{i=b+1}^{N} f_i ≤ cb^{1−z}/(z − 1) ≤ F b^{1−z},

where F = \sum_{i=1}^{N} f_i.
(b) Consider now an element i with total count f_i, and let us consider a single group
of counters. Show that the probability that i collides with any of the m/(3k) items
with the largest count (besides possibly itself) is at most 1/3.
(c) Show that, conditioned on the event E that i does not collide with any of the m/(3k)
items with the largest count, the expected count for the counter C_{a,j} that i hashes
to is bounded by

E[C_{a,j} | E] ≤ f_i + F (m/(3k))^{1−z}/(m/k).

(d) Let γ = 3(m/(3k))^{1−z}/(m/k). Prove that C_{a,j} ≤ f_i + γF with probability at least 1/3.
(e) Explain why the above implies that the count-min filter produces an estimate for
f_i that is at most f_i + γF with probability at least 1 − (2/3)^k.
(f) Suppose we want to find all items with a count of at least q; when an item is hashed
into the count-min filter, it is put on a list if its minimum counter is at least q.
Prove that we can construct a count-min filter with O(⌈ln(1/δ)⌉) hash functions and
O(⌈ln(1/δ)⌉⌈ǫ^{−1/z}⌉) counters so that all items with count at least q are put on the list,
and any item that has a count of less than q − ǫF is put on the list with probability
at most δ. (This improves the result of Corollary 15.13 for this type of skewed
distribution of item counts.)
Chapter Seventeen∗

Balanced Allocations and Cuckoo Hashing
In this chapter, we examine simple and powerful variants of the classic balls-and-bins
paradigm, where each ball may have a choice of a small number of bins where it can
be placed. In our first setting, often referred to as balanced allocations, the balls have
choices, and a choice of where a ball is to be placed must be made once and for all when
the ball enters the system. In our second setting, referred to as cuckoo hashing, balls
may move to another choice after their initial placement under some circumstances.
Suppose that we sequentially place n balls into n bins by putting each ball into a bin
chosen independently and uniformly at random. We studied this classic balls-and-bins
problem in Chapter 5. There we showed that, at the end of the process, the most balls
in any bin – the maximum load – is Θ(ln n/ ln ln n) with high probability.
In a variant of the process, each ball comes with d possible destination bins, each
chosen independently and uniformly at random, and is placed in the least full bin among
the d possible locations at the time of the placement. The original balls-and-bins process
corresponds to the case where d = 1. Surprisingly, even when d = 2, the behavior is
completely different: when the process terminates, the maximum load is ln ln n/ ln 2 +
O(1) with high probability. Thus, an apparently minor change in the random allocation
process results in an exponential decrease in the maximum load. We may then ask what
happens if each ball has three choices; perhaps the resulting load is then O(ln ln ln n).
We shall consider the general case of d choices per ball and show that, when d ≥ 2, with
high probability the maximum load is ln ln n/ ln d + Θ(1). Although having more than
two choices does reduce the maximum load, for any constant d the reduction changes
it by only a constant factor, so it remains Θ(ln ln n) for a constant d.
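A short simulation makes the contrast easy to see (a minimal sketch for illustration; the function name and parameters are not from the original text).

import random

def max_load(n, d, seed=0):
    """Place n balls into n bins; each ball goes to the least loaded of d random bins."""
    rng = random.Random(seed)
    load = [0] * n
    for _ in range(n):
        choices = [rng.randrange(n) for _ in range(d)]
        best = min(choices, key=lambda b: load[b])   # ties: first minimum
        load[best] += 1
    return max(load)

n = 1_000_000
for d in [1, 2, 3]:
    print(d, max_load(n, d))

On typical runs the d = 1 maximum (which is Θ(ln n/ ln ln n)) is several times larger than the d = 2 maximum, while d = 3 improves on d = 2 only slightly, matching the constant-factor reduction described above.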
17.1. The Power of Two Choices

Theorem 17.1: Suppose that n balls are sequentially placed into n bins in the following
manner. For each ball, d ≥ 2 bins are chosen independently and uniformly at
random (with replacement). Each ball is placed in the least full of the d bins at the
time of the placement, with ties broken randomly. After all the balls are placed, the
maximum load of any bin is at most ln ln n/ ln d + O(1) with probability 1 − o(1/n).
The proof is rather technical, so before beginning we informally sketch the main points.
In order to bound the maximum load, we need to approximately bound the number of
bins with i balls for all values of i. In fact, for any given i, instead of trying to bound the
number of bins with load exactly i, it will be easier to bound the number of bins with
load at least i. The argument proceeds via what is, for the most part, a straightforward
induction. We wish to find a sequence of values β_i such that the number of bins with
load at least i is bounded above by βi with high probability.
Suppose that we knew that, over the entire course of the process, the number of bins
with load at least i was bounded above by βi . Let us consider how we would determine
an appropriate inductive bound for β_{i+1} that holds with high probability. Define the
height of a ball to be one more than the number of balls already in the bin in which
the ball is placed. That is, if we think of balls as being stacked in the bin by order of
arrival, the height of a ball is its position in the stack. The number of balls of height at
least i + 1 gives an upper bound for the number of bins with at least i + 1 balls.
A ball will have height at least i + 1 only if each of its d choices for a bin has load
at least i. If there are indeed at most βi bins with load at least i at all times, then the
probability that each choice yields a bin with load at least i is at most βi /n. Therefore,
the probability that a ball has height at least i + 1 is at most (βi /n)d . We can use a
Chernoff bound to conclude that, with high probability, the number of balls of height
at least i + 1 will be at most 2n(β_i/n)^d. That is, if everything works as sketched, then

β_{i+1}/n ≤ 2 (β_i/n)^d.
We examine this recursion carefully in the analysis and show that β_j becomes O(ln n)
when j = ln ln n/ ln d + O(1). At this point, we must be a bit more careful in our
analysis because Chernoff bounds will no longer be sufficiently useful, but the result is
easy to finish from there.
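Iterating the recursion numerically shows just how fast it collapses. This small sketch (ours for illustration) starts from β₄ = n/4, the starting point used in the proof below, and stops once β_i has dropped to O(ln n).

import math

n, d = 10**9, 2
frac, i = 0.25, 4                       # beta_4 / n = 1/4
while frac * n > 6 * math.log(n):       # stop once beta_i = O(ln n)
    frac = 2 * frac ** d                # beta_{i+1}/n = 2 (beta_i/n)^d
    i += 1
print(i, frac * n)                      # i is about ln ln n / ln d + O(1)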
The proof is technically challenging primarily because one must handle the condi-
tioning appropriately. In bounding βi+1 , we assumed that we had a bound on βi . This
assumption must be treated as a conditioning in the formal argument, which requires
some care.
We shall use the following notation: the state at time t refers to the state of the system
immediately after the tth ball is placed. The variable h(t ) denotes the height of the tth
ball, and νi (t ) and μi (t ) refer (respectively) to the number of bins with load at least i
and the number of balls with height at least i at time t. We use νi and μi for νi (n) and
μi (n) when the meaning is clear. An obvious but important fact, of which we make
frequent use in the proof, is that νi (t ) ≤ μi (t ), since every bin with load at least i must
contain at least one ball with height at least i.
Before beginning, we make note of two simple lemmas. First, we utilize a specific
Chernoff bound for binomial random variables, easily derived from Eqn. (4.2) by letting
δ = 1.
Lemma 17.2: Pr(B(n, p) ≥ 2np) ≤ e^{−np/3}.
The following lemma will help us cope with dependent random variables in the main
proof.

Lemma 17.3: Let X_1, X_2, . . . , X_n be a sequence of random variables in an arbitrary
domain, and let Y_1, Y_2, . . . , Y_n be a sequence of binary random variables with the
property that Y_i = Y_i(X_1, . . . , X_i). If

Pr(Y_i = 1 | X_1, . . . , X_{i−1}) ≤ p,

then

Pr(\sum_{i=1}^{n} Y_i > k) ≤ Pr(B(n, p) > k).
Proof: If we consider the Yi one at a time, then each Yi is less likely to take on the
value 1 than an independent Bernoulli trial with success probability p, regardless of
the values of the Xi . The result then follows by a simple induction.
Proof of Theorem 17.1: Following the earlier sketch, we shall construct values β_i such
that, with high probability, ν_i(n) ≤ β_i for all i. Let β_4 = n/4, and let β_{i+1} = 2β_i^d/n^{d−1}
for 4 ≤ i < i∗, where i∗ is to be determined. We let E_i be the event that ν_i(n) ≤ β_i. Note
that E_4 holds with probability 1; there cannot be more than n/4 bins with at least 4 balls
when there are only n balls. We now show that, with high probability, if E_i holds then
E_{i+1} holds for 4 ≤ i < i∗.
Fix a value of i in the given range. Let Y_t be a binary random variable such that

Y_t = 1 if and only if h(t) ≥ i + 1 and ν_i(t − 1) ≤ β_i.

That is, Y_t is 1 if the height of the tth ball is at least i + 1 and if, at time t − 1, there
are at most β_i bins with load at least i. The requirement that Y_t be 1 only if there are at
most β_i bins with load at least i may seem a bit odd; however, it makes handling the
conditioning much easier.
Specifically, let ω_j represent the bins selected by the jth ball. Then

Pr(Y_t = 1 | ω_1, . . . , ω_{t−1}) ≤ (β_i/n)^d.

That is, given the choices made by the first t − 1 balls, the probability that Y_t is 1 is
bounded by (β_i/n)^d. This is because, in order for Y_t to be 1, there must be at most β_i
bins with load at least i; and when this condition holds, the d choices of bins for the
tth ball all have load at least i with probability at most (β_i/n)^d. If we did not force Y_t to be 0 when
there are more than β_i bins with load at least i, then we would not be able to bound this
conditional probability in this way.
Let p_i = (β_i/n)^d. Then, from Lemma 17.3, we can conclude that

Pr(\sum_{t=1}^{n} Y_t > k) ≤ Pr(B(n, p_i) > k).

This holds independently of any of the events E_i, owing to our careful definition of Y_t.
(Had we not included the condition that Y_t = 1 only if ν_i(t − 1) ≤ β_i, the inequality
would not necessarily hold.)
Conditioned on E_i, we have \sum_{t=1}^{n} Y_t = μ_{i+1}. Since ν_{i+1} ≤ μ_{i+1}, a high-probability
bound on this sum yields a high-probability bound on ν_{i+1}.
Let i∗ be the smallest value of i such that p_i = (β_i/n)^d < 6 ln n/n. We show that i∗ is
ln ln n/ ln d + O(1). To do this, we prove inductively the bound

β_{i+4} = n / 2^{2d^i − \sum_{j=0}^{i−1} d^j}.
There are at most n² ways of choosing two balls, and for each pair the probability that both balls have height at
least i∗ + 2 is at most (18 ln n/n)^{2d}.
Removing the conditioning as before and then using Eqn. (17.5) yields

Pr(ν_{i∗+3} ≥ 1) ≤ Pr(μ_{i∗+2} ≥ 2)
≤ Pr(μ_{i∗+2} ≥ 2 | ν_{i∗+1} ≤ 18 ln n) Pr(ν_{i∗+1} ≤ 18 ln n) + Pr(ν_{i∗+1} > 18 ln n)
≤ (18 ln n)^{2d}/n^{2d−2} + (i∗ + 1)/n²,

showing that Pr(ν_{i∗+3} ≥ 1) is o(1/n) for d ≥ 2 and hence that the probability the maximum
bin load is more than i∗ + 3 = ln ln n/ ln d + O(1) is o(1/n).
Breaking ties randomly is convenient for the proof, but in practice any natural tie-breaking
scheme will suffice. For example, in Exercise 17.1 we show that if the bins
are numbered from 1 to n then breaking ties in favor of the smaller-numbered bin is
sufficient.
As an interesting variation, suppose that we split the n bins into two groups of equal
size. Think of half of the bins as being on the left and the other half on the right.
Each ball now chooses one bin independently and uniformly at random from each half.
Again, each ball is placed in the least loaded of the two bins – but now, if there is a
tie, the ball is placed in the bin on the left half. Surprisingly, by splitting the bins and
breaking ties in this fashion, we can obtain a slightly better bound on the maximum
load: ln ln n/(2 ln((1 + √5)/2)) + O(1). One can generalize this approach by splitting
the bins into d ordered equal-sized groups; in case of a tie for the least-loaded bin,
the bin in the lowest-ranked group obtains the ball. This variation is the subject of
Exercise 17.13.
17.2. Two Choices: The Lower Bound

In this section we demonstrate that the result of Theorem 17.1 is essentially tight by
proving a corresponding lower bound.
Theorem 17.4: Suppose that n balls are sequentially placed into n bins in the fol-
lowing manner. For each ball, d ≥ 2 bins are chosen independently and uniformly at
random (with replacement). Each ball is placed in the least full of the d bins at the
time of the placement, with ties broken randomly. After all the balls are placed, the
maximum load of any bin is at least ln ln n/ ln d − O(1) with probability 1 − o(1/n).
The proof is similar in spirit to the upper bound, but there are some key differences. As
with the upper bound, we wish to find a sequence of values γ_i such that the number of
bins with load at least i is bounded below by γ_i with high probability. In deriving the
upper bound, we used the number of balls with height at least i as an upper bound on
the number of bins with load at least i. We cannot do this in proving a lower bound,
however. Instead, we find a lower bound on the number of balls with height exactly i
and then use this as a lower bound on the number of bins with load at least i.
In a similar vein, for the proof of the upper bound we used that the number of bins
with at least i balls at time n was at least ν_i(t) for any time t ≤ n. This is not helpful
now that we are proving a lower bound; we need a lower bound on νi (t ), not an upper
bound, to determine the probability that the tth ball has height i + 1. To cope with this,
we determine a lower bound γi on the number of bins with load at least i that exist at
time n(1 − 1/2i ) and then bound the number of balls of height i + 1 that arise over the
interval (n(1 − 1/2i ), n(1 − 1/2i+1 )]. This guarantees that appropriate lower bounds
hold when we need them in the induction, as we shall clarify in the proof.
We state the lemmas that we need, which are similar to those for the upper bound.
Lemma 17.5: Pr(B(n, p) ≤ np/2) ≤ e^{−np/8}.   (17.6)
Lemma 17.6: Let X1 , X2 , . . . , Xn be a sequence of random variables in an arbitrary
domain, and let Y1 , Y2 , . . . , Yn be a sequence of binary random variables with the prop-
erty that Yi = Yi (X1 , . . . , Xi ). If
Pr(Yi = 1 | X1 , . . . , Xi−1 ) ≥ p,
then

Pr(\sum_{i=1}^{n} Y_i > k) ≥ Pr(B(n, p) > k).
Proof of Theorem 17.4: Let F_i be the event that ν_i(n(1 − 1/2^i)) ≥ γ_i, where γ_i is
given by:

γ_0 = n;
γ_{i+1} = (n/2^{i+3}) (γ_i/n)^d.
Clearly F0 holds with probability 1. We now show inductively that successive Fi hold
with suficiently high probability to obtain the desired lower bound.
We want to compute
Pr(¬Fi+1 | Fi ).
With this in mind, for t in the range R = [n(1 − 1/2^i), n(1 − 1/2^{i+1})], define the binary
random variable Z_t by
Zt = 1 if and only if h(t ) = i + 1 or νi+1 (t − 1) ≥ γi+1 .
Hence Zt is always 1 if νi+1 (t − 1) ≥ γi+1 .
The probability that the tth ball has height exactly i + 1 is

(ν_i(t − 1)/n)^d − (ν_{i+1}(t − 1)/n)^d.

The first term is the probability that all the d bins chosen by the tth ball have load at
least i. This is necessary for the tth ball to have height exactly i + 1.
However, we must subtract out the probability that all d choices have at least i + 1
balls, because in this case the height of the ball will be larger than i + 1.
Again letting ω_j represent the bins selected by the jth ball, we conclude that

Pr(Z_t = 1 | ω_1, . . . , ω_{t−1}, F_i) ≥ (γ_i/n)^d − (γ_{i+1}/n)^d.

This is because Z_t is automatically 1 if ν_{i+1}(t − 1) ≥ γ_{i+1}; hence we can consider the
probability in the case where ν_{i+1}(t − 1) ≤ γ_{i+1}. Also, conditioned on F_i, we have
ν_i(t − 1) ≥ γ_i.
From the definition of the γ_i we can further conclude that

Pr(Z_t = 1 | ω_1, . . . , ω_{t−1}, F_i) ≥ (γ_i/n)^d − (γ_{i+1}/n)^d ≥ (1/2)(γ_i/n)^d.
Let p_i = (1/2)(γ_i/n)^d. Applying Lemma 17.6 yields

Pr(\sum_{t∈R} Z_t < k | F_i) ≤ Pr(B(n/2^{i+1}, p_i) < k).
Now our choice of γ_i nicely satisfies

γ_{i+1} = (1/2) · (n/2^{i+1}) · p_i.

By the Chernoff bound,

Pr(B(n/2^{i+1}, p_i) < γ_{i+1}) ≤ e^{−np_i/(8·2^{i+1})},
which is o(1/n²) provided that p_i n/2^{i+1} ≥ 17 ln n. Let i∗ be a lower bound on the
largest integer for which this holds. We subsequently show that i∗ can be chosen to
be ln ln n/ ln d − O(1); for now let us assume that this is the case. Then, for i ≤ i∗, we
have shown that

Pr(\sum_{t∈R} Z_t < γ_{i+1} | F_i) ≤ Pr(B(n/2^{i+1}, p_i) < γ_{i+1}) = o(1/n²).
Further, by definition we have that \sum_{t∈R} Z_t < γ_{i+1} implies ¬F_{i+1}. Hence, for i ≤ i∗,

Pr(¬F_{i+1} | F_i) ≤ Pr(\sum_{t∈R} Z_t < γ_{i+1} | F_i) = o(1/n²).
Therefore, for sufficiently large n,

Pr(F_{i∗}) ≥ Pr(F_{i∗} | F_{i∗−1}) · Pr(F_{i∗−1} | F_{i∗−2}) · · · Pr(F_1 | F_0) · Pr(F_0)
≥ (1 − 1/n²)^{i∗}
= 1 − o(1/n).
All that remains is to demonstrate that ln ln n/ ln d − O(1) is indeed an appropriate
choice for i∗. It suffices to show that γ_i ≥ 17 ln n when i is ln ln n/ ln d − O(1). From
the recursion γ_{i+1} = γ_i^d/(2^{i+3} n^{d−1}), we find by a simple induction that

γ_i = n / 2^{\sum_{k=0}^{i−1} (i+2−k) d^k}.
17.3. Applications of the Power of Two Choices
17.3.1. Hashing
When we considered hashing in Chapter 5, we related it to the balls-and-bins paradigm
by assuming that the hash function maps the items being hashed to random entries in the
hash table. Subject to this assumption, we proved that (a) when O(n) items are hashed
to a table with n entries, the expected number of items hashed to each individual entry in
the table is O(1), and (b) with high probability, the maximum number of items hashed
to any entry in the table is Θ(ln n/ ln ln n).
These results are satisfactory for most applications, but for some they are not, since
the expected value of the worst-case lookup time over all items is Θ(ln n/ ln ln n). For
example, when storing a routing table in a router, the worst-case time for a lookup in
a hash table can be an important performance criterion, and the Θ(ln n/ ln ln n) result
is too large. Another potential problem is wasted memory. For example, suppose that
we design a hash table where each bin should fit in a single fixed-size cache line of
memory. Because the maximum load is so much larger than the average, we will have
to use a large number of cache lines and many of them will be completely empty. For
some applications, such as routers, this waste of memory is undesirable.
Applying the balanced allocation paradigm, we obtain a hashing scheme with O(1)
expected and O(ln ln n) maximum access time. The 2-way chaining technique uses two
random hash functions. The two hash functions define two possible entries in the table
for each item. The item is inserted into the location that is least full at the time of insertion.
Items in each entry of the table are stored in a linked list. If n items are sequentially
inserted into a table of size n, the expected insertion and lookup time is still O(1). (See
Exercise 17.3.) Theorem 17.1 implies that with high probability the maximum time to
find an item is O(ln ln n), versus the Θ(ln n/ ln ln n) time when a single random hash
function is used. This improvement does not come without cost. Since a search for an
item now involves a search in two bins instead of one, the improvement in the expected
maximum search time comes at the cost of roughly doubling the average search time.
This cost can be mitigated if the two bins can be searched in parallel.
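A minimal 2-way chaining table might look as follows (a sketch for illustration only; the class and method names are ours, and the two hash functions are simulated by salting Python’s built-in hash, which stands in for truly random hash functions).

class TwoWayChainingTable:
    """Each item has two candidate buckets; insert into the shorter chain."""

    def __init__(self, n):
        self.n = n
        self.buckets = [[] for _ in range(n)]

    def _choices(self, item):
        # Two "hash functions" derived by salting; illustrative only.
        return hash((0, item)) % self.n, hash((1, item)) % self.n

    def insert(self, item):
        b1, b2 = self._choices(item)
        shorter = b1 if len(self.buckets[b1]) <= len(self.buckets[b2]) else b2
        self.buckets[shorter].append(item)

    def lookup(self, item):
        b1, b2 = self._choices(item)
        return item in self.buckets[b1] or item in self.buckets[b2]

table = TwoWayChainingTable(1024)
for i in range(1024):
    table.insert(i)
print(max(len(b) for b in table.buckets))   # O(ln ln n) with high probability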
Figure 17.1: An example of a cuckoo hash table. For each placed item, the directed arrow shows
the other location where that item can be moved to. In the initial configuration (top image), item x is
inserted, but its choices contain items w and y. If x causes y to move and y causes z to move, then the
resulting configuration (bottom image) can hold all items. In the original configuration, if the choices
for x had been the locations containing items u and w, then x could not have been successfully placed.
As an exercise, you can check that without moves, when there are n bins, with high
probability you can place only O(n^{2/3}) items before a new item being placed finds
both its choices already hold another item. This is a simple variation of the birthday
paradox.
Now let us consider the power that comes from moving items. If, on inserting an
item x, there is no room for an item at either of its two choices, we instead move the
item y in one of those bins to the other of its two choices. If the other bin for y is empty,
then we are done – every item has a suitable place. However, there may be another item
z in y’s other location, in which case we may have to move z, and so on, until either we
ind an empty space, or we realize that there is no empty space to be found, which is a
possibility. See Figure 17.1 for an example.
This approach is referred to as cuckoo hashing, taking the name from the cuckoo
bird, which lays its eggs in the nests of other birds and whose young kick out the eggs
or other young residing in the nest. We would like to understand various things about
cuckoo hashing, namely:
• How many items can be successfully placed before an item cannot be placed?
• How long do we expect it to take to insert a new item?
• How can we know if we are in a situation where an item cannot be placed?
We address these issues by relating the cuckoo hashing process to a random graph
process. Let us treat the bins as vertices, and the items being hashed as edges. That
Figure 17.2: Items u, v, y, and z all reside in a bin, but their choices create a cycle in the cuckoo
graph. Adding item x would create a component with two cycles (when considering the edges as
undirected), which cannot be done. In simpler terms, a cuckoo hash table cannot store five items if
all of their choices fall into four bins.
is, since each item hashes to two possible hash locations, we can view it as an edge
connecting those two bins, or vertices, to which it hashes. As usual, we assume our
hash values are completely random. In that case the resulting graph may have parallel
edges, which are pairs of nodes connected by more than one edge, which occur when
different items hash to the same two locations (vertices). The graph may also have
self-loops, which are edges connecting a vertex to itself, which occur when both
locations (vertices) chosen for an item are the same. We call this graph the cuckoo
graph. We model the cuckoo graph corresponding to m items hashed into a table
of n entries by a random graph with n nodes and m edges, where each of the two
vertices of an edge is chosen independently and uniformly at random from the set of
n nodes.
We remark that self-loops can be eliminated by partitioning the table into two sub-
tables of equal size, and assigning each item a bin at random from each subtable. In that
case we have a random bipartite graph with n/2 vertices on each side and m random
edges, with each edge connecting two nodes, one chosen uniformly at random from
each side. The differences arising from these variations are minimal.
The load of our cuckoo hash table will be m/n, the ratio of the number of items
to the number of locations. Our main result is that if a cuckoo hash table with two
choices has load less than and bounded away from 1/2, placement will succeed with
high probability.
A key approach in studying cuckoo hashing is to look at the connected components
of the cuckoo graph. Recall that a connected component is simply a maximal group
of vertices that are all connected, or reachable, by traversing edges in the graph. We
show that as long as m/n ≤ (1 − ǫ)/2 for some constant ǫ > 0, the maximum-sized
connected component in the cuckoo graph has only O(log n) vertices with high proba-
bility, the expected number of vertices in a connected component for a given vertex v is
constant, and all components are trees or contain a single cycle with high probability.
(Here, a self-loop is considered a cycle on one vertex.) These facts about the cuckoo
graph translate directly into answers to our questions about cuckoo hashing.
It should be clear that an item cannot be placed if it falls into a component that,
after its placement, will have more items than bins to hold them, as shown by exam-
ple in Figure 17.2. On the other hand, when all components are trees or have just a
single cycle, every item can be placed successfully and efficiently. In fact we have the
following lemma:

Lemma 17.7: If every component of the cuckoo graph has at most one cycle, then every
item can be placed, and cuckoo hashing places each item in time proportional to the
size of the component containing it, visiting each bin at most twice. An item whose
component has more edges than vertices, and hence more than one cycle, cannot be
placed.
Proof: If the number of edges, or items, exceeds the number of vertices, or bins, in
the component, as is the case if there are two or more cycles, then an item cannot be
placed.
To analyze the allocations of items to locations, it can help to think of each edge,
or item, as being directed away from the vertex, or bin, in which it currently resides.
Since each bin can store only one item, a proper allocation of items to bins must
have no more than one edge directed out of each vertex. Keep in mind, however, that
when we discuss cycles in components in our analysis, we are considering the undi-
rected edges; the directed edges are just to help us keep track of how items can be
moved.
It is clear that as long as all components are trees or have just a single cycle, the
items can be placed successfully. For a tree, one can simply choose one vertex as a
root, and orient all edges toward that root. That assignment has only one edge directed
out of each vertex and the root of the tree is assigned no element. For a component with
a cycle, the edges around the cycle have to be oriented consistently, and all other edges
have to be directed toward the cycle.
Cuckoo hashing will place items if they can be placed, and each bin is visited at most
twice during an insertion. There are three main cases to consider. When the item, or
edge, is placed into a component that joins two existing tree components (one of which
might be just a single vertex) so that the resulting component remains a tree, directed
edges will be followed until we reach a vertex with no outgoing edge. When a directed
edge is followed, it is reversed, corresponding to the replacement of the old item with
the new item. (See Figure 17.3.)
When the item is placed so that both possible vertices already lie in the same component,
the behavior is similar to the first case. There is a unique path to an empty vertex,
corresponding to a bin that holds no item, and directed edges are followed and reversed
until that vertex is reached.
The last case is when the item to be placed joins a component that has a cycle with
a component that is a tree. It is possible in that case that placing the item will cause
the process to follow edges around the cycle, reversing the cycle orientation as it goes.
After going around the cycle and returning to the node where the new item was initially
placed, the new item will be kicked out, and then a path is followed to the empty location
in the tree component. It is important to see that while we can return once to the node
the insertion started at, we traverse each edge at most twice, once in each direction. We
never follow the same edge twice back into the cycle, because the edge will be flipped
to point away from the cycle. (See Figure 17.4.)
In each case, placement takes time proportional to the component size.
Figure 17.3: Item x is inserted into the cuckoo graph, and placed in the bin (or vertex) on the left
(top image). It kicks out the item already there, moving the item to the neighboring bin in the graph.
In terms of the graph, the vertex can only have one outgoing edge, so the other adjacent edge must
reverse, and so on until the process terminates. In the bottom image, the reversed edges are shown as
dashed.
Lemma 17.7 tells us that to understand how cuckoo hashing performs, we simply
need to understand the component structure of the cuckoo graph. When we place a
new item in the cuckoo hash table, we add a new edge to the graph, which lies in an
existing component or joins two components. If we show the maximum component
size is O(log n) with high probability, then we know from Lemma 17.7 that the max-
imum work needed to insert an item is O(log n) with high probability. Similarly, if
we show that the expected size of a component is constant, then since the insertion
of a new item joins two components, the expected time to insert an item is bounded
by a constant. Of course, it is important to keep in mind that while insertion of a new
item can take a logarithmic number of steps, a lookup of an item always takes constant
time, since it is in one of two locations; this feature remains the key beneit of cuckoo
hashing.
Finally, as we try to place an item by moving other items in the hash table, keeping
track of the corresponding vertices visited in the cuckoo graph allows one to tell if the
graph has a bad component with two cycles, in which case placement of a new item
fails. Alternatively, because the maximum component size is O(log n) with high prob-
ability, in practice in implementations one often allows at most c log n replacements
of items for a suitable constant c before declaring a failure. With this approach, one
does not have to keep track of the vertices seen, avoiding the use of memory during
placement.
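The insertion procedure just described, with a bounded number of replacements, can be sketched as follows (ours for illustration; max_kicks stands in for c log n, and the salted built-in hash again stands in for two random hash functions).

class CuckooHashTable:
    """One item per bin; each item has two candidate bins."""

    def __init__(self, n, max_kicks):
        self.n = n
        self.table = [None] * n
        self.max_kicks = max_kicks       # stand-in for c log n

    def _choices(self, item):
        return hash((0, item)) % self.n, hash((1, item)) % self.n

    def lookup(self, item):              # always two probes: constant time
        b1, b2 = self._choices(item)
        return self.table[b1] == item or self.table[b2] == item

    def insert(self, item):
        b1, b2 = self._choices(item)
        if self.table[b1] is None:
            self.table[b1] = item
            return True
        if self.table[b2] is None:
            self.table[b2] = item
            return True
        b = b1
        for _ in range(self.max_kicks):
            self.table[b], item = item, self.table[b]   # kick out the occupant
            c1, c2 = self._choices(item)
            b = c2 if b == c1 else c1                   # occupant's other bin
            if self.table[b] is None:
                self.table[b] = item
                return True
        return False     # too many moves: rehash, or fall back to a stash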
We turn now to analyzing the connected component structure of a random cuckoo
graph with n nodes and m = (1 − ǫ)n/2 edges, for any constant ǫ > 0. Based on our
Figure 17.4: Item x is inserted into the cuckoo graph, and placed in the top bin (or vertex) of its two
choices (top image). It kicks out the item already there, moving the item to the neighboring bin in the
graph. In terms of the graph, the vertex can only have one outgoing edge, so the other adjacent edge
must reverse, and so on. In this case, the process goes around the cycle and returns back to the original
vertex where x was placed (middle image). The item x is itself kicked out to its other location, and
the process terminates. Edges in the original graph that changed direction at least once are shown as
dashed; an edge can only change direction at most twice.
analysis thus far, our task now is to analyze the maximum size and the expected size
of connected components in the graph. Here size refers to the number of vertices in the
component. Our proof is based on a branching processes technique.
Lemma 17.8: Consider a cuckoo graph with n nodes and m = (1 − ǫ)n/2 edges for
some constant ǫ > 0.
(1) With high probability the largest connected component in the cuckoo graph has
size O(log n).
(2) The expected size of a connected component in the cuckoo graph is O(1).
Proof: Moving to a model with no parallel edges or self-loops can only increase the probability of having a large
connected component in the graph. This random graph model was introduced in
Section 5.6 as the G_{n,N} model. In our case the number of edges is N = m, and we refer
to graphs with m uniformly chosen edges as being chosen from G_{n,m}.
Our second observation transforms the analysis to the related random graph model
G_{n,p}, which we recall from Section 5.6 consists of graphs on n nodes with each of
the \binom{n}{2} possible edges included in the graph independently with probability p. We
recall that having a connected component of size at least k for any value of k is a
monotone increasing graph property; if a graph G = (V, E) has that property, then any
graph G′ = (V, E′) with E ⊆ E′ also has that property.
Since having a connected component of a given size is a monotone increasing graph
property, we can use Lemma 5.14. In particular, for any 0 < ǫ′ < 1, Lemma 5.14 allows
us to conclude that the probability that a graph drawn from G_{n,m} has a connected component
of size at least k is within e^{−O(m)} of the probability that a graph drawn from G_{n,p}
has a connected component of that size, where

p = (1 + ǫ′) m/\binom{n}{2} = (1 + ǫ′)(1 − ǫ)/(n − 1) = (1 − γ)/(n − 1).

This holds for any constant γ with 0 < γ < ǫ, by choosing a suitably small ǫ′. Thus,
our problem is reduced to bounding the maximum size of a connected component in a
graph drawn from G_{n,p}, with p = (1 − γ)/(n − 1).
Fix a vertex v. We explore the connected component containing vertex v by executing
a breadth first search from v. We start by placing node v in a queue and look at
the neighbors of v. We add these neighbors into a queue and look at the neighbors of
these neighbors, adding any new vertices to the queue, and so on. More formally, after
adding all the nodes at distance ℓ from the root v to the queue, we sequentially look at
the neighbors of each of these nodes, adding to the queue neighbors at distance ℓ + 1
from the root that are not yet in the queue. The process ends when there are no new
neighbors to add to the queue. Clearly, when the process ends, the queue stores all the
nodes in the connected component that includes v. Let v = v1 , v2 , . . . , vk be the nodes
in the queue at the termination of the process, in the order in which they entered the
queue.
Let Zi be the number of nodes added to the queue while looking at neighbors of
vi , i.e., Zi counts the neighbors of vi that are not neighbors of any node v j , j < i. The
key point in the analysis is that, conditioned on the neighborhoods of v_1, . . . , v_{i−1}, the
distribution of Z_i is stochastically dominated by a binomial random variable distributed
B(n − 1, (1 − γ)/(n − 1)).
Definition 17.1: A random variable X stochastically dominates a random variable Y
if for all a,
Pr(X ≥ a) ≥ Pr(Y ≥ a).
Equivalently, X stochastically dominates Y if for all a
FX (a) ≤ FY (a),
where FX and FY are the cumulative distribution functions of X and Y , respectively.
If the component containing v has size at least k, it cannot be that \sum_{i=1}^{k−1} Z_i < k − 1,
because then we would have found fewer than k − 1 additional vertices in exploring
the first k − 1 vertices. So we must have

\sum_{i=1}^{k−1} Z_i ≥ k − 1

for the breadth first search to reach k vertices. From our domination argument, the
probability that our breadth first search reaches k vertices is bounded above by

Pr(\sum_{i=1}^{k−1} Z_i ≥ k − 1) ≤ Pr(\sum_{i=1}^{k−1} B_i ≥ k − 1),

where the B_i are independent random variables, each with a binomial distribution
B(n − 1, (1 − γ)/(n − 1)).
Here we have used that the sum of binomials is itself binomial. We are now ready
to apply the standard Chernoff bound (4.2). Let S be a binomial B((k − 1)(n − 1),
(1 − γ)/(n − 1)) of mean E[S] = (1 − γ)(k − 1). Then

Pr(S ≥ k − 1) = Pr(S ≥ E[S]/(1 − γ))
≤ Pr(S ≥ E[S](1 + γ))
≤ e^{−(k−1)(1−γ)γ²/3}.
Here we have used that 1/(1 − γ) > 1 + γ. Setting k ≥ 1 + (9/(γ²(1 − γ))) ln n, we have
that the probability that v_1 is part of a connected component of size at least k is
bounded above by 1/n³, and by a union bound the probability that any vertex is part
of a connected component of size at least k is bounded above by 1/n². Now applying
Lemma 5.14, we can conclude that in the cuckoo graph with n nodes and m edges, the
probability that there is a connected component of size at least k is bounded above by
1/n² + e^{−O(m)} ≤ 2/n² for large enough n.
Next we bound the expected size of a connected component that includes a given
node v. Consider first a graph chosen from G_{n,p} with p = (1 − γ)/(n − 1). Let X be the
size of the component that includes vertex v in that graph. As we have seen, for a graph
chosen from G_{n,p}, we can view the breadth first search process as a branching process
where the number of offspring of node v_i is Z_i, which is stochastically dominated by
a random variable B_i distributed as B(n − 1, (1 − γ)/(n − 1)) and with expectation
1 − γ. As we showed in Section 2.3, a branching process where the expected number
of offspring of a node is bounded above by 1 − γ has an expected size of 1/γ. Hence,
in G_{n,p}, E[X] ≤ 1/γ.
Let Y be the size of the connected component that includes v in a graph chosen from
G_{n,m}. Then, for any v,

E[Y] = \sum_{k=1}^{n} Pr(Y ≥ k) ≤ \sum_{k=1}^{n} Pr(X ≥ k) + n e^{−O(m)} ≤ 1/γ + n e^{−O(m)} = O(1),

where in the first inequality we applied Lemma 5.14 and in the second inequality we
used the bound on E[X].
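The branching-process bound is easy to check empirically. This sketch (ours for illustration; the function name and the values of n and γ are arbitrary) samples G_{n,p} with p = (1 − γ)/(n − 1) and measures the largest component with a breadth first search.

import random
from collections import deque

def largest_component(n, gamma, seed=0):
    rng = random.Random(seed)
    p = (1 - gamma) / (n - 1)
    adj = [[] for _ in range(n)]
    for u in range(n):                   # sample each possible edge
        for v in range(u + 1, n):
            if rng.random() < p:
                adj[u].append(v)
                adj[v].append(u)
    seen, best = [False] * n, 0
    for s in range(n):
        if seen[s]:
            continue
        seen[s], size, queue = True, 0, deque([s])
        while queue:                     # breadth first search from s
            u = queue.popleft()
            size += 1
            for w in adj[u]:
                if not seen[w]:
                    seen[w] = True
                    queue.append(w)
        best = max(best, size)
    return best

for n in [1000, 2000, 4000]:
    print(n, largest_component(n, gamma=0.2))
# With gamma bounded away from 0, the largest component should grow only
# logarithmically in n, as the lemma predicts.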
Next we need to show that all connected components with more than one node in
the cuckoo graph are either trees or have a single cycle.
Lemma 17.9: Consider a cuckoo hashing graph with n nodes and m = (1 − ǫ)n/2
edges. For any constant ǫ > 0, with high probability all the connected components in
the graph are either single vertices, trees, or unicyclic.
Proof: For the proof, we need a bound on the number of ways k vertices can be
connected by a tree. We make use of the following combinatorial fact.
Lemma 17.10 [Cayley’s Formula]: The number of distinct labeled trees on k vertices
is kk−2 .
Here a labeled tree on k vertices is one where each vertex is given a distinct num-
ber from 1 to k, and trees that are isomorphic when taking into account the labels are
considered the same. Hence there is one labeled tree on two vertices with one edge
between them – there are two ways of labeling the vertices, but they are isomorphic.
Similarly, there are three labeled trees on three vertices, with one tree for each assign-
ment of a number to the vertex of degree 2. There are many proofs of Cayley’s formula;
one approach is given in Exercise 17.15.
A connected component that has more than one cycle must include
a spanning tree plus at least two additional edges. Let Y_k be a random variable denoting the
number of components with k vertices and at least k + 1 edges. We determine a bound
on E[Yk ] to bound the probability of the existence of such a component. We need only
worry about values of k where k = O(log n) since we already proved that with high
probability the graph has no larger connected components.
Given a set of k vertices that form a component, the k vertices must be connected by
a tree. Suppose we choose a tree of k − 1 edges connecting those vertices. We require
all of the edges in the tree to be part of the graph, and because we allow self-loops
and multi-edges, each of the m = (1 − ǫ)n/2 possible random edges will be a given
specific edge of the tree with probability 2/n². We then must have at least two additional
edges within that component. The two additional edges fall within the component with
probability k²/n². Finally, all the k(n − k) possible edges between vertices in the component
and vertices not in the component must not be in the graph, or we would not have a
component of size k. The following expression overcounts the number of components
somewhat, as the same component may be counted multiple times.
E[Y_k] ≤ \binom{n}{k} k^{k−2} \binom{m}{k+1} (k − 1)! \left(\frac{2}{n^2}\right)^{k−1} \left(\frac{k^2}{n^2}\right)^2 \left(1 − \frac{2k(n − k)}{n^2}\right)^{m−k−1}.
That is, we first choose k vertices from the n vertices, we choose one of the k^{k−2} trees
to connect these vertices, and we choose k + 1 of the m edges to form this tree and add
two additional edges to the component, so there is more than one cycle.
E[Y_k] ≤ \binom{n}{k} k^{k−2} \binom{m}{k+1} (k − 1)! \left(\frac{2}{n^2}\right)^{k−1} \left(\frac{k^2}{n^2}\right)^2 \left(1 − \frac{2k(n − k)}{n^2}\right)^{m−k−1}
≤ \frac{n^k m^{k+1}}{2\,k!} k^{k−2} \left(\frac{2}{n^2}\right)^{k−1} \left(\frac{k^2}{n^2}\right)^2 e^{−2k(n−k)(m−k−1)/n^2}
≤ \frac{1}{n} \frac{k^2 e^k}{8} (1 − ǫ)^{k+1} e^{−2k(n−k)(m−k−1)/n^2}
≤ \frac{1}{n} \frac{k^2}{8} (1 − ǫ)^{k+1} e^{(kn^2 − 2k(n−k)(m−k−1))/n^2}
≤ \frac{k^2}{8n} (1 − ǫ)^k e^{(kn^2 − 2knm)/n^2} e^{4k^2/n}
≤ \frac{k^2}{8n} (1 − ǫ)^k e^{kǫ} e^{4k^2/n}
≤ \frac{k^2}{8n} e^{k(ǫ + \ln(1−ǫ))} e^{4k^2/n}.
To reach the second line in the equations above we have used that \binom{n}{k} < n^k/k! and
1 − x ≤ e^{−x}; to reach the third line we have used k^k/k! ≤ e^k. Because, from our previous
argument, we can assume that k = O(log n), the final term e^{4k²/n} in the final line
can be bounded above by 2 for large enough n. The key term in the final line is
ǫ + ln(1 − ǫ), which is negative, as can be seen using the expansion ln(1 − ǫ) =
−\sum_{i=1}^{∞} ǫ^i/i; the term ǫ + ln(1 − ǫ) is therefore −Θ(ǫ²) as ǫ goes to 0. The final
expression therefore includes a term of the form e^{−Θ(kǫ²)} that is geometrically decreasing
in k. It follows that for any z = O(log n), \sum_{k=1}^{z} E[Y_k] is O(1/n), and hence the
probability that any component contains more than one cycle is O(1/n). We can conclude
that cuckoo hashing successfully places every item with high probability.
One might wonder if we could do better. However, it is also easy to check that a cycle
component occurs with probability Θ(1/n); for example, there is a Θ(1/n) probability
that two items both choose the same bin for both of their choices, or that three items
choose the same distinct pair of bins. We consider ways one might improve this failure
probability in Section 17.5.
Similarly, one might wonder if we could handle loads larger than 1/2, or if the 1/2
is just an outcome of our analysis. In fact, for cuckoo hashing as we have described
it, 1/2 is the limit. With m = (1 + ǫ)n/2 edges, the cuckoo graph looks very different;
a constant fraction of the vertices become joined in a giant component of size Θ(n),
and many of the vertices lie on cycles. We have seen similar threshold behaviors in
random graphs before in Section 6.5.1; the threshold here corresponds directly to the
load that can be handled by cuckoo hashing. However, higher loads are possible for
more complex variations of cuckoo hashing, as we describe in Section 17.5.
Finally, it is worth mentioning that the analysis using Cayley’s formula that we used
to bound the number of components with two or more cycles could also be applied to
bound the expected number of components of each size. There are some subtleties in
the random graph model we have used here, but in Exercise 17.15, we show how to use
this method rather than the branching process method to give an alternative proof that
the largest component size is O(log n) in the Gn,p random graph model.
Consider the k vertices of the component in question: there are \binom{n}{k} ways of choosing these vertices, and m^k ways of choosing the items that correspond
to the edges. After adding the new edge for the inserted item, the k + 1 edges must
form a spanning tree, as well as two additional edges. Finally, there can be no other
edges among the k vertices, or between those k vertices and the other n − k vertices.
Following the same analysis as we have used previously, if E is the event that the new
item cannot be placed, then
Pr(E) ≤ \sum_{k} \frac{k^2 (k + 1)}{4n^2} (1 − ǫ)^k e^{kǫ} e^{4k^2/n} ≤ \sum_{k} \frac{k^2 (k + 1)}{4n^2} e^{k(ǫ + \ln(1−ǫ))} e^{4k^2/n}.
Again, we need only consider k = O(log n). The exponential term decays like e^{−Θ(ǫ²k)},
which gives that Pr(E) is O(1/n²).
Of course, deletion of an item in a cuckoo hash table, like a lookup, takes only constant
time.
This is because there are \binom{m}{3} ways to choose the three balls. With probability 1 − 1/n the
first ball did not choose a self-loop; the other two balls then each choose the same pair
of bins with probability 2/n². We easily observe that when m = Θ(n) this expectation
is Θ(1/n). A calculation of the variance readily yields that the probability that there is such
a triple is also Θ(1/n), using the second moment method.
While the probability of failure for a cuckoo hash table is o(1), the fact that it is
Θ(1/n) remains concerning; this could be very high for many practical situations. One
way to cope with this problem is to allow rehashing. If we ever reach a failure point,
where either we find that we can’t place an item because of cycles, or we simply find
that a component is too large (over c log n for a suitable constant c), then we can choose
a new hash function and rehash all the items into a new cuckoo hash table. A question
is how much impact rehashing will have.
The amount of work to rehash using a new hash function for all items is O(n), and we
only have to do it with probability O(1/n). Even if we have to rehash multiple times
before we reach success, the expected work to hash m items can be bounded. Using
order notation rather loosely, we find the total number of operations is

O(n) + \sum_{k=1}^{∞} k · O(n) · (O(1/n))^k = O(n).
Hence the amortized amount of work per item due to rehashing is only constant in
expectation, and also with high probability, since the probability of rehashing k or more
times is O(1/nk ). However, one can imagine that rehashing might not be a suitable
solution in some practical settings, because it would be undesirable for the system to
have to wait for a complete rehashing of the hash table.
An alternative approach to rehashing that generally does quite well is to set aside a
small amount of memory for a stash. If an item cannot be placed because it creates a
component with more than one cycle, it can be placed in the stash. Usually, the stash
will be empty; however, when it is not empty, it will need to be checked on every lookup.
(Further, if items are deleted, one should check whether an item in the stash can then
be put back into the cuckoo hash table.) We have seen that the use of the stash should
be rare, since the failure probability is only O(1/n). Extending the previous analysis,
one can show that failures behave “nearly independently”; the probability that j items
need to be held in a stash falls like O(1/n j ). Hence, even a very small stash, such as one
that can hold four items, can greatly reduce the failure probability. The use of stashes
is considered further in Exercise 17.17.
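Grafting a stash onto the earlier insertion sketch takes only a few lines (again ours for illustration; the class name and default stash size are arbitrary choices). Every lookup must also scan the stash.

class CuckooWithStash(CuckooHashTable):      # extends the earlier sketch
    def __init__(self, n, max_kicks, stash_size=4):
        super().__init__(n, max_kicks)
        self.stash = []
        self.stash_size = stash_size

    def insert(self, item):
        if super().insert(item):
            return True
        if len(self.stash) < self.stash_size:
            self.stash.append(item)          # failures should be rare
            return True
        return False                         # stash full: rehash everything

    def lookup(self, item):
        return super().lookup(item) or item in self.stash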
If each item has two choices but we allow b > 1 items per bin, then we continue
to have a random graph problem, but now the question is whether cuckoo hashing
can effectively find an orientation with at most b edges pointing away from a vertex.
Allowing more than one item per bin can be very natural; for example, a bin may
correspond to a fixed amount of memory, such as a cache line, that might correspond
to the size of multiple items. One issue is how to choose which item to kick out of a
bin when it is necessary to place an item into a full bin. Natural possibilities include
breadth first search, or a “random walk” style search where at each step a random item
is selected from the bin to be kicked out to make room for the item being placed.
If each item has d > 2 choices but there is just one item per bin, then our problem
involves random hypergraphs, rather than random graphs, where each edge is a collection
of d vertices. When all choices for an item lead to a bin that already contains
an item, we again face the issue of how to choose which item to kick out. One could
again use approaches based on a breadth first search, or a “random walk” style search.
A further variation is to allow different items differing numbers of choices, according to
some distribution, where the number of choices is itself determined by a hash function
on the item.
Of course, one can also combine more than two choices per item and more than one
item per bin. With four choices and one item per bin, the maximum load that can be
achieved with no failures with high probability (as n grows large) is over 0.97, much
more than the 0.5 bound for two choices. Similarly, two choices with up to four items
per bin allow loads over 0.98. Combining multiple choices with multiple items per bin
yields even higher load factors.
The following theorem provides the form of the load threshold as the number of bin
choices and the number of items per bin varies. Its proof is quite complex and beyond
the scope of this book.

Theorem 17.11: Consider a cuckoo hash table with n items, m/ℓ bins that each can
hold up to ℓ items, and k choices per item. We consider a regime where n/m is held
fixed, but n, m → ∞. Let β(c) denote the largest value of β so that

(1/k) · β / (Pr[Po(β) ≥ ℓ])^{k−1} = c,

where Po(x) refers to a discrete Poisson random variable with mean x. Define c_{k,ℓ} to
be the unique value of c that satisfies

(β(c) · Pr[Po(β(c)) ≥ ℓ]) / (k · Pr[Po(β(c)) ≥ ℓ + 1]) = ℓ.
The following results hold for any constant values k ≥ 3 and ℓ ≥ 1, or for k = 2 and
constant ℓ ≥ 2. For every ǫ > 0, for large enough n, we have that if n/m < ck,ℓ − ǫ,
there is a way of placing the items in the hash table that respects their choices and
the limits on the number of items per bin with probability 1 − o(1). If n/m > ck,ℓ + ǫ,
then there is no way to place the items that respects their choices and the limits on the
number of items per bin with probability 1 − o(1).
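The thresholds c_{k,ℓ} can be estimated numerically from the two equations in the theorem. The following sketch is our own illustration (the function names are hypothetical): it finds the β solving the second equation by bisection, assuming, as holds over the range searched for small ℓ, that the left-hand side is increasing in β, and then evaluates c from the first equation.

```python
import math

def po_tail(beta, l):
    """Pr(Po(beta) >= l) for a Poisson random variable with mean beta."""
    term, below = math.exp(-beta), 0.0
    for i in range(l):
        below += term
        term *= beta / (i + 1)
    return 1.0 - below

def threshold(k, l):
    """Numerically estimate c_{k,l} from the two equations of Theorem 17.11."""
    g = lambda b: b * po_tail(b, l) / (k * po_tail(b, l + 1))
    lo, hi = 0.1, 50.0               # brackets the root of g(beta) = l for small l
    for _ in range(100):             # bisection; g is increasing on this range
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if g(mid) < l else (lo, mid)
    beta = (lo + hi) / 2
    return beta / (k * po_tail(beta, l) ** (k - 1))

print(threshold(4, 1))   # roughly 0.977: four choices, one item per bin
print(threshold(2, 4))   # roughly 0.980: two choices, up to four items per bin
```

The two printed values match the loads of "over 0.97" and "over 0.98" quoted above.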
17.6. Exercises
Exercise 17.1: (a) For Theorems 17.1 and 17.4, the statement of the proof is for the
case that ties are broken randomly. Argue informally that, if the bins are numbered from
1 to n and if ties are broken in favor of the lower-numbered bin, then the theorems still
hold.
(b) Argue informally that the theorems apply to any tie-breaking mechanism that
has no knowledge of the bin choices made by balls that have not yet been placed.
Exercise 17.2: Consider the following variant of the balanced allocation paradigm:
n balls are placed sequentially in n bins, with the bins labeled from 0 to n − 1. Each
ball chooses a bin i uniformly at random, and the ball is placed in the least loaded of
bins i, i + 1 mod n, i + 2 mod n, . . . , i + d − 1 mod n. Argue that, when d is a constant,
the maximum load grows as Θ(ln n/ ln ln n). That is, the balanced allocation paradigm
does not yield an O(ln ln n) result in this case.
Exercise 17.3: Explain why, with 2-way chaining, the expected time to insert an item
and to search for an item in a hash table of size n with n items is O(1). Consider two
cases: the search is for an item that is in the table; and the search is for an item that is
not in the table.
Exercise 17.4: Consider the following variant of the balanced allocation paradigm:
n balls are placed sequentially in n bins. Each ball comes with d choices, chosen
independently and uniformly at random from the n bins. When a ball is placed, we
are also allowed to move balls among these d bins to equalize their load as much as
possible. Show that the maximum load is still at least ln ln n/ ln d − O(1) with proba-
bility 1 − o(1/n) in this case.
Exercise 17.5: Suppose that in the balanced allocation setup there are n bins, but the
bins are not chosen uniformly at random. Instead, the bins have two types: 1/3 of the
bins are type A and 2/3 of the bins are type B. When a bin is chosen at random, each
of the type-A bins is chosen with probability 2/n and each of the type-B bins is chosen
with probability 1/(2n). Prove that the maximum load of any bin when each ball has d
bin choices is still at most ln ln n/ ln d + O(1).
Exercise 17.7: We have shown that sequentially throwing n balls into n bins randomly,
using two bin choices for each ball, yields a maximum load of ln ln n/ ln 2 + O(1)
with high probability. Suppose that, instead of placing the balls sequentially, we had
access to all of the 2n choices for the n balls, and suppose we wanted to place each ball
into one of its choices while minimizing the maximum load. In this setting, with high
probability, we can obtain a maximum load that is constant.
Write a program to explore this scenario. Your program should take as input a para-
meter k and implement the following greedy algorithm. At each step, some subset of the
balls are active; initially, all balls are active. Repeatedly find a bin that has at least one
but no more than k active balls that have chosen it, assign these active balls to that bin,
and then remove these balls from the set of active balls. The process stops either when
there are no active balls remaining or when there is no suitable bin. If the algorithm
stops with no active balls remaining, then every bin is assigned no more than k balls.
Try running your program with 10,000 balls and 10,000 bins. What is the smallest
value of k for which the program terminates with no active balls remaining at least four
out of ive times? If your program is fast enough, try experimenting with more trials.
Also, if your program is fast enough, try answering the same question for 100,000 balls
and 100,000 bins.
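One possible skeleton for this program is sketched below (our own design, with hypothetical names): the two random choices per ball are stored as sets, and a work queue peels off bins that currently have between 1 and k active balls. It returns only whether the greedy process cleared all balls.

```python
import random

def greedy_place(n, k):
    """Sketch of the greedy algorithm in Exercise 17.7: repeatedly take a bin
    chosen by 1..k active balls, assign those balls to it, and deactivate them.
    Returns True if no active balls remain at the end."""
    balls_of = [set() for _ in range(n)]      # active balls that chose each bin
    for ball in range(n):
        for _ in range(2):                    # two random choices per ball
            balls_of[random.randrange(n)].add(ball)
    choices_of = [[] for _ in range(n)]
    for b, s in enumerate(balls_of):
        for ball in s:
            choices_of[ball].append(b)
    queue = [b for b in range(n) if 1 <= len(balls_of[b]) <= k]
    while queue:
        b = queue.pop()
        if not (1 <= len(balls_of[b]) <= k):  # stale queue entry; recheck
            continue
        for ball in list(balls_of[b]):        # assign these balls to bin b
            for other in choices_of[ball]:
                balls_of[other].discard(ball)
                if 1 <= len(balls_of[other]) <= k:
                    queue.append(other)
    return all(not s for s in balls_of)       # True iff every ball was assigned
```

Running greedy_place(10000, k) several times for increasing k then answers the question in the exercise.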
Exercise 17.8: The following problem models a simple distributed system where
agents contend for resources and back off in the face of contention. As in Exercise 5.12,
balls represent agents and bins represent resources.
The system evolves over rounds. In the first part of every round, balls are thrown
independently and uniformly at random into n bins. In the second part of each round,
each bin in which at least one ball has landed in that round serves exactly one ball from
that round. The remaining balls are thrown again in the next round. We begin with n
balls in the first round, and we finish when every ball is served.
(a) Show that, with probability 1 − o(1/n), this approach takes at most log₂ log₂ n +
O(1) rounds. (Hint: Let b_k be the number of balls left after k rounds; show that
b_{k+1} ≤ c(b_k)²/n, for a suitable constant c with high probability, as long as b_{k+1} is
sufficiently large.)
(b) Suppose that we modify the system so that a bin accepts a ball in a round if and only
if that ball was the only ball to request that bin in that round. Show that, again with
probability 1 − o(1/n), this approach takes at most log2 log2 n + O(1) rounds.
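A quick empirical check of this doubly exponential decay is easy to run. The sketch below (ours, with the hypothetical name rounds_until_served) simulates one run of the process in part (a).

```python
import random

def rounds_until_served(n):
    """One run of the Exercise 17.8(a) process: every bin hit by at least
    one ball this round serves exactly one of them; the rest are rethrown."""
    balls, rounds = n, 0
    while balls > 0:
        hit = {random.randrange(n) for _ in range(balls)}  # bins with arrivals
        balls -= len(hit)            # each nonempty bin serves one ball
        rounds += 1
    return rounds

# Typical values for, say, n = 10**6 can be compared against log2 log2 n + O(1).
```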
Exercise 17.9: The natural way to simulate experiments with balls and bins is to create
an array that stores the load at each bin. To simulate 1,000,000 balls being placed into
1,000,000 bins would require an array of 1,000,000 counters. An alternative approach
is to keep an array that records in the jth cell the number of bins with load j. Explain
how this could be used to simulate placing 1,000,000 balls into 1,000,000 bins using
the standard balls-and-bins paradigm and the balanced allocation paradigm with much
less space.
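One way to realize this idea is sketched below (our own code, assuming the d choices are made independently with replacement, so that the load of each chosen bin can be sampled directly from the count array; the function names are hypothetical). The space used is proportional to the maximum load rather than to the number of bins.

```python
import random

def max_load(n, d):
    """Place n balls into n bins with d choices each, storing only count[j],
    the number of bins whose load is exactly j (the idea of Exercise 17.9)."""
    count = [n]                        # initially all n bins have load 0
    def sample_load():                 # load of a uniformly random bin
        r = random.randrange(n)
        for j, c in enumerate(count):
            if r < c:
                return j
            r -= c
    for _ in range(n):
        j = min(sample_load() for _ in range(d))   # least loaded of the d picks
        count[j] -= 1
        if j + 1 == len(count):
            count.append(0)
        count[j + 1] += 1
    return len(count) - 1              # highest load present
```

Sampling the d loads independently from count is faithful to choosing d bins independently and uniformly at random, since bins with equal load are interchangeable.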
Exercise 17.10: Write a program to compare the performance of the standard balls-
and-bins paradigm and the balanced allocation paradigm. Run simulations placing n
balls into n bins, with each ball having d = 1, 2, 3, and 4 random choices. You should
457
balanced allocations and cuckoo hashing
try n = 10,000, n = 100,000, and n = 1,000,000. Repeat each experiment at least 100
times and compute the expectation and variance of the maximum load for each value
of d based on your trials. You may wish to use the idea of Exercise 17.9.
Exercise 17.11: Write a simulation showing how the balanced allocation paradigm
can improve performance for distributed queueing systems. Consider a bank of n FIFO
queues with a Poisson arrival stream of customers to the entire bank of rate λn per sec-
ond, where λ < 1. Upon entry a customer chooses a queue for service, and the service
time for each customer is exponentially distributed with mean 1 second. You should
compare two settings: (i) where each customer chooses a queue independently and uni-
formly at random from the n queues for service; and (ii) where each customer chooses
two queues independently and uniformly at random from the n queues and waits at
the queue with fewer customers, breaking ties randomly. Notice that the first setting is
equivalent to having a bank of n M/M/1 FIFO queues, each with Poisson arrivals of rate
λ < 1 per second. You may find the discussion in Exercise 8.27 helpful in constructing
your simulation.
Your simulation should run for t seconds, and it should return the average (over all
customers that have completed service) of the time spent in the system as well as the
average (over all customers that have arrived) of the number of customers found waiting
in the queue they selected for service. You should present results for your simulations
for n = 100 and for t = 10,000 seconds, with λ = 0.5, 0.8, 0.9, and 0.99.
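A compact way to build this simulation is as a continuous-time Markov chain: arrivals occur at total rate λn, and each busy server completes at rate 1. The sketch below is ours, with two small simplifications relative to the exercise's statement: the d queues are sampled without replacement, and ties are broken by index rather than randomly.

```python
import random

def two_choice_bank(n=100, lam=0.9, t_end=10000.0, d=2):
    """Sketch of Exercise 17.11, setting (ii): Poisson arrivals of total rate
    lam*n, join-the-shorter-of-d-queues, exponential(1) service times."""
    queues = [[] for _ in range(n)]    # each queue holds its customers' arrival times
    busy, t = 0, 0.0
    done, total_time, arrivals, total_found = 0, 0.0, 0, 0
    while t < t_end:
        rate = lam * n + busy          # total event rate of the Markov chain
        t += random.expovariate(rate)
        if random.random() < lam * n / rate:           # next event is an arrival
            q = min(random.sample(range(n), d), key=lambda i: len(queues[i]))
            total_found += len(queues[q])              # customers already present
            arrivals += 1
            if not queues[q]:
                busy += 1
            queues[q].append(t)
        else:                                          # departure from a busy queue
            q = random.randrange(n)
            while not queues[q]:                       # resample until nonempty
                q = random.randrange(n)
            total_time += t - queues[q].pop(0)         # FIFO time in system
            done += 1
            if not queues[q]:
                busy -= 1
    return total_time / done, total_found / arrivals
```

Setting (i) corresponds to d = 1 in this sketch.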
Exercise 17.12: Write a program to compare the performance of the following vari-
ation of the standard balls-and-bins paradigm and the balanced allocation paradigm.
Initially n points are placed uniformly at random on the boundary of a circle of circum-
ference 1. These n points divide the circle into n arcs, which correspond to bins. We
now place n balls into the bins as follows: each ball chooses d points on the boundary of
the circle, uniformly at random. These d points correspond to the arcs (or, equivalently,
bins) that they lie on. The ball is placed in the least loaded of the d bins, breaking ties
in favor of the smallest arc.
Run simulations placing n balls into n bins for the cases d = 1 and d = 2. You
should try n = 1,000, n = 10,000, and n = 100,000. Repeat each experiment at least
100 times; for each run, the n initial points should be re-chosen. Give a chart showing
the number of times the maximum load was k, based on your trials for each value of d.
You may note that some arcs are much larger than others, and therefore when d = 1
the maximum load can be rather high. Also, to find which bin each ball is placed in
may require implementing a binary search or some other additional data structure to
quickly map points on the circle boundary to the appropriate bin.
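The arc lookup mentioned in the last paragraph can be handled with Python's bisect module; the following sketch (our own, with hypothetical names) runs one trial and returns the maximum load.

```python
import bisect
import random

def arcs_max_load(n, d):
    """Sketch of Exercise 17.12: bins are the arcs between n random points
    on a circle of circumference 1; bisect maps a point to its arc."""
    cuts = sorted(random.random() for _ in range(n))
    load = [0] * n
    def arc_of(x):                    # arc i lies between cuts[i-1] and cuts[i]
        return bisect.bisect_left(cuts, x) % n   # index 0 is the wrap-around arc
    def arc_len(i):
        return (cuts[i] - cuts[i - 1]) % 1.0     # wraps correctly for i = 0
    for _ in range(n):
        picks = [arc_of(random.random()) for _ in range(d)]
        # least loaded arc, breaking ties in favor of the smaller arc
        best = min(picks, key=lambda i: (load[i], arc_len(i)))
        load[best] += 1
    return max(load)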
Exercise 17.13: There is a small but interesting improvement that can be made to the
balanced allocation scheme we have described. Again we will place n balls into n bins.
We assume here that n is even. Suppose that we divide the n bins into two groups of
size n/2. We call the two groups the left group and the right group. For each ball, we
independently choose one bin uniformly at random from the left and one bin uniformly
at random from the right. We put the ball in the least loaded bin, but if there is a tie we
always put the ball in the bin from the left group. With this scheme, the maximum load
is reduced to ln ln n/(2 ln φ) + O(1), where φ = (1 + √5)/2 is the golden ratio. This
improves the result of Theorem 17.1 by a constant factor. (Note the two changes to our
original scheme: the bins are split into two groups, and ties are broken in a consistent
way; both changes are necessary to obtain the improvement we describe.)
(a) Write a program to compare the performance of the original balanced allocation
paradigm with this variation. Run simulations placing n balls into n bins, with
each ball having d = 2 choices. You should try n = 10,000, n = 100,000, and n =
1,000,000. Repeat each experiment at least 100 times and compute the expectation
and variance of the maximum load based on your trials. Describe the extent of the
improvement of the new variation.
(b) Adapt Theorem 17.1 to prove this result. The key idea in how the theorem's proof
must change is that we now require two sequences, β_i and γ_i. Similar to Theo-
rem 17.1, β_i represents a desired upper bound on the number of bins on the left
with load at least i, and γ_i is a desired upper bound on the number of bins on the
right with load at least i. Argue that choosing
$$\beta_{i+1} = \frac{c_1 \beta_i \gamma_i}{n^2} \quad \text{and} \quad \gamma_{i+1} = \frac{c_2 \beta_{i+1} \gamma_i}{n^2}$$
for some constants c_1 and c_2 is suitable (as long as β_i and γ_i are large enough that
Chernoff bounds may apply).
Now let F_k be the kth Fibonacci number. Apply induction to show that, for suf-
ficiently large i, β_i ≤ n c_3 c_4^{F_{2i}} and γ_i ≤ n c_3 c_4^{F_{2i+1}} for some constants c_3 and c_4. Fol-
lowing Theorem 17.1, use this to prove the ln ln n/(2 ln φ) + O(1) upper bound.
(c) This variation can easily be extended to the case of d > 2 choices by splitting the
n bins into d ordered groups, choosing one bin uniformly at random from each
group, and breaking ties in favor of the group that comes first in the ordering.
Suggest what would be the appropriate upper bound on the maximum load for this
case, and give an argument backing your suggestion. (You need not give a complete
formal proof.)
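For part (a), a minimal simulation of the two-group variant might look like the following sketch (ours; it assumes n is even and reports only the maximum load of one run):

```python
import random

def always_go_left(n):
    """One choice from each half of the bins; ties go to the left group."""
    half = n // 2
    load = [0] * n
    for _ in range(n):
        left = random.randrange(half)
        right = half + random.randrange(half)
        bin_ = left if load[left] <= load[right] else right   # tie goes left
        load[bin_] += 1
    return max(load)
```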
Exercise 17.14: The birthday paradox (discussed in Section 5.1) shows that if balls
are sequentially thrown randomly into n bins, with constant probability there will be a
collision after Θ(√n) balls are thrown.
(a) Suppose that balls are placed sequentially, each ball has two choices of where to be
placed, and a ball will choose a bin that avoids a collision if that is possible. Show
that there are constants c_1 and c_2 so that after c_1 n^{2/3} − o(n^{2/3}) balls are thrown no
collision has occurred with probability at least 1/2, and after c_2 n^{2/3} + o(n^{2/3}) balls
are thrown at least one collision has occurred with probability at least 1/2.
(b) How close can you make the constants c_1 and c_2?
(c) Extend your analysis for more than two choices. Specifically, show that if each ball
has k choices for some constant k, there are constants c_{1,k} and c_{2,k} so that after
c_{1,k} n^{1−1/k} − o(n^{1−1/k}) balls are thrown no collision has occurred with probability
at least 1/2, and after c_{2,k} n^{1−1/k} + o(n^{1−1/k}) balls are thrown at least one collision
has occurred with probability at least 1/2.
(d) How close can you make the constants c_{1,k} and c_{2,k}?
Exercise 17.15: In our analysis for cuckoo hash tables we showed that the largest com-
ponent size was O(log n) with high probability. Here we provide part of an alternative
proof of this result in the G_{n,p} model, using an analysis that makes use of Cayley's
formula. Consider a random graph G chosen from G_{n,p}, with p = c/n for a constant
c < 1.
(a) Let X_k be the number of tree components on exactly k vertices in a random graph
from G_{n,p} with p = c/n for a constant c < 1. A tree component on k vertices will
be connected with k − 1 edges, and will have no edges to the other n − k vertices.
Show that
$$E[X_k] = \binom{n}{k} k^{k-2} \left(\frac{c}{n}\right)^{k-1} \left(1 - \frac{c}{n}\right)^{kn - k(k+3)/2 + 1}.$$
(b) Show that for 1 ≤ k ≤ √n,
$$E[X_k] \le C \, \frac{n}{c k^2} \, e^{(1 - c + \ln c)k}$$
for some constant C for large enough n.
(c) Using the expression for E[X_k], show that
$$\frac{E[X_{k+1}]}{E[X_k]} = (n - k) \left(1 + \frac{1}{k}\right)^{k-1} \frac{c}{n} \left(1 - \frac{c}{n}\right)^{n-k-2},$$
and in turn
$$\frac{E[X_{k+1}]}{E[X_k]} \le \left(1 - \frac{k}{n}\right) c \, e^{1 - c(1 - k/n)} \left(1 - \frac{c}{n}\right)^{-2}.$$
(d) Show that xe^{1−x} ≤ 1 for x > 0, and conclude that
$$\frac{E[X_{k+1}]}{E[X_k]} \le \left(1 - \frac{c}{n}\right)^{-2}.$$
(e) Using the above, argue that the probability that there is any tree component with
more than √n vertices in G is o(1/n), and that therefore the maximum size of a
tree component of G is O(log n) with probability 1 − o(1/n).
Exercise 17.16: Complete the argument from Section 17.5.2 to show that the failure
probability for standard cuckoo hashing is Ω(1/n).
Exercise 17.17: Write code to implement the following experiment. You will build a
cuckoo hash table using two choices per item and one item per bin, with an array of
size 2^20, and you will insert 514,000 items into it. (This is a bit more than 49%
of 2^20.) You have to decide how many moves you will allow before deciding a failure
has occurred; 200 should be sufficient. If during the insertion process an item cannot
be inserted, place the item in a stash and continue inserting the remaining items.
Perform 100,000 trials. How often would you need a stash to hold an item? How
often would a stash that can hold one item suffice? Two items?
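A possible core for this experiment is sketched below; h1 and h2 stand for hash functions you supply, and the random choice of which occupant to evict is a simplification of the usual alternating cuckoo walk, made for brevity.

```python
import random

def insert(table, stash, x, h1, h2, max_moves=200):
    """Try to place x by cuckoo evictions; on failure, put the item in the stash."""
    for _ in range(max_moves):
        a, b = h1(x), h2(x)
        if table[a] is None:
            table[a] = x
            return
        if table[b] is None:
            table[b] = x
            return
        slot = random.choice((a, b))   # evict one occupant and re-place it
        table[slot], x = x, table[slot]
    stash.append(x)                    # rare: j stashed items occur with prob O(1/n^j)

# e.g. table = [None] * 2**20; stash = []; h1, h2 drawn from your hash family
```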
Exercise 17.18: We show here one way to derive Cayley’s formula. A directed rooted
tree is a tree with a special root vertex, and all the edges in the tree are assigned a
direction, with all edges directed away from the root. We count the number of sequences
of directed edges that can lead to a directed rooted tree in two different ways, and use
it to calculate an expression for T (k), the number of distinct labeled trees on k vertices.
(a) We create an ordered triple as follows. We first choose a labeled but undirected
tree. We next choose a vertex as a root, and now we can think of the tree as being a
directed rooted tree. Finally, we choose one of the (k − 1)! possible permutations
of the directed edges. We can think of our choices of labeled tree, root vertex, and
edge permutation as an ordered triple.
Show that there is a one-to-one correspondence between these ordered triples
and sequences of directed edges on k vertices that lead to a directed rooted tree.
Explain why this shows that the number of sequences of directed edges that can
lead to a directed rooted tree on k vertices is (k!) · T (k).
(b) Now suppose instead we start with an empty graph, where we think of each vertex
as initially its own rooted tree (with no edges), and add directed edges one at a
time. At each step we will have a forest of directed edges. After ℓ steps the forest
will have k − ℓ roots, so that after k − 1 edges are added, we will have a directed
rooted tree. At each step we choose an edge to add by first choosing any of the
k vertices in the graph. This vertex will be in one of the trees in the forest. We
then choose a root from another tree to connect to, with the edge directed from
the first vertex to the second. This removes one of the roots from consideration,
so each step reduces the number of roots by one. Show that there is a one-to-one
correspondence between sequences of directed edges that lead to a directed rooted
tree and the sequences of edges that can be chosen in this manner, and show that
there are k^{k−1}(k − 1)! ways of choosing the sequences of edges as above.
(c) Argue from the above steps that T(k) = k^{k−2}.
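For small k, Cayley's formula can also be checked by brute force. The sketch below (ours, with hypothetical names) counts the labeled trees on k vertices by testing every set of k − 1 edges of the complete graph for connectivity; a connected graph with k − 1 edges on k vertices is necessarily a tree.

```python
from itertools import combinations

def count_labeled_trees(k):
    """Brute-force check of T(k) = k^(k-2): count (k-1)-edge subsets of the
    complete graph on k labeled vertices that connect all k vertices."""
    edges = list(combinations(range(k), 2))
    def connected(subset):
        parent = list(range(k))           # union-find over the k vertices
        def find(v):
            while parent[v] != v:
                parent[v] = parent[parent[v]]
                v = parent[v]
            return v
        for u, v in subset:
            parent[find(u)] = find(v)
        return len({find(v) for v in range(k)}) == 1
    return sum(connected(s) for s in combinations(edges, k - 1))

# count_labeled_trees(4) == 16 == 4**2; count_labeled_trees(5) == 125 == 5**3
```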
Exercise 17.19: Suppose we consider the effects of adding a stash that can hold a
single item, with standard cuckoo hashing using two choices and one item per bin.
In this case, we can consider two ways to fail; we might have a single component of
k vertices with at least k + 2 edges, or we might have two disjoint components, one
with k_1 vertices with at least k_1 + 1 edges and one with k_2 vertices with at least k_2 + 1
edges. By extending our previous analysis regarding components and edges, show that
the probability of having a failure with cuckoo hashing when using a stash that can
hold one item is O(1/n^2).
Exercise 17.20: Write code to implement the following experiment. You will build a
cuckoo hash table using four choices per item and one item per bin, with an array of
size 2^20. If all choices are full, choose one of the items to kick out randomly. (You may,
if you like, optimize after the first move on an insertion by not allowing yourself to
choose to place an item in a bin that it has just been kicked out of at the last step.) You
have to decide how many moves you will allow before deciding a failure has occurred;
200 should be sufficient. Load the table until you reach an item that cannot be placed.
Record the load, or the fraction of the array that has been filled; that is, the number
of items divided by 2^20. Repeat the experiment 1000 times. What load level seems
safe with four choices? How does this compare to Theorem 17.11? (Theorem 17.11 is
about the existence of a valid assignment, not about this placement algorithm, and is
an asymptotic result. It is therefore not necessarily expected that the experiment should
achieve the performance suggested by the theorem.)
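A possible core for this experiment is the sketch below (our own; the name is hypothetical, items are modeled simply as tuples of their d slot choices, and the optional "don't go back" optimization is omitted).

```python
import random

def load_at_failure(m=2**20, d=4, max_moves=200):
    """Insert random items with d choices and random-walk eviction until one
    cannot be placed within max_moves; return the fraction of slots filled."""
    table = [None] * m
    items = 0
    while True:
        x = tuple(random.randrange(m) for _ in range(d))  # an item = its d choices
        for _ in range(max_moves):
            empty = [s for s in x if table[s] is None]
            if empty:
                table[random.choice(empty)] = x
                break
            s = random.choice(x)          # random-walk step: evict a random occupant
            table[s], x = x, table[s]
        else:
            return items / m              # insertion failed; report the load
        items += 1
```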
Exercise 17.21: Modify your code above so that you can experiment with varying
numbers of choices per item and varying numbers of items per bin. For different values
of these parameters, determine (approximately) the load where the failure probability
appears to be nontrivial and compare to Theorem 17.11.