CS 473 Homework 5 Solutions Fall 2022
1. Consider a random walk on a path with vertices numbered 1, 2, . . . , n from left to right. At
each step, we flip a coin to decide which direction to walk, moving one step left or one
step right with equal probability. The random walk ends when we fall off one end of the
path, either by moving left from vertex 1 or by moving right from vertex n.
(a) Prove that if we start at vertex 1, the probability that the random walk ends by falling
off the right end of the path is exactly 1/(n + 1).
Solution: Let L(n) be the probability of falling off the Left end of a path of
length n, starting at vertex 1. This function satisfies the recurrence
    L(n) = 1/2 + (1/2) · L(n − 1) · L(n)
The random walk falls off the left end of 1, 2, . . . , n if and only if (1) the first step
is to the left, or (2) the first step is to the right, then we fall off 2, 3, . . . , n to the
left, and finally (recursively) we fall off 1, 2, . . . , n to the left. The base case of
the recurrence is L(1) = 1/2 (or, if you prefer, L(0) = 0).
The closed-form solution L(n) = n/(n + 1) now follows by induction. Specifically, for any n > 1, the inductive hypothesis implies

    L(n) = 1/2 + (1/2) · ((n − 1)/n) · L(n),

from which L(n) = n/(n + 1) follows by straightforward algebra. Finally, because the walk ends with probability 1, the probability of falling off the right end is 1 − L(n) = 1/(n + 1). ■
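As a quick sanity check (our own addition, not part of the graded solution), here is a minimal Monte Carlo sketch in Python; falls_off_right is a hypothetical helper name:

    import random

    def falls_off_right(n, start=1):
        """Simulate one walk on vertices 1..n; return True if the walk
        ends by stepping right from vertex n."""
        k = start
        while 1 <= k <= n:
            k += random.choice((-1, 1))
        return k == n + 1

    # The empirical frequency should approach 1/(n+1).
    n, trials = 5, 100_000
    hits = sum(falls_off_right(n) for _ in range(trials))
    print(hits / trials, 1 / (n + 1))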
Rubric: 2 points = 1 for recurrence + 1 for solution. “See part (b)” is worth 2/3 of the score
for part (b), unless the part (b) solution relies on part (a).
(b) Prove that if we start at vertex k, the probability that the random walk ends by falling
off the right end of the path is exactly k/(n + 1).
Solution: Let’s suppose the path includes vertices 0 and n + 1. Let R(n, k)
denote the probability that our random walk visits vertex n + 1 before it visits
vertex 0, assuming we start at vertex k. We immediately have R(n, 0) = 0 and
R(n, n + 1) = 1.
For all 1 ≤ k ≤ n, the rules of the random walk imply
    R(n, k) = (1/2) · R(n, k − 1) + (1/2) · R(n, k + 1).
In other words, the probabilities R(n, 0), R(n, 1), R(n, 2), . . . , R(n, n), R(n, n + 1)
define an arithmetic sequence; the intermediate values are evenly spaced between
R(n, 0) = 0 and R(n, n + 1) = 1. It follows that R(n, k) = k/(n + 1) for all k. ■
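For readers who want to double-check this argument numerically, here is a sketch (our own cross-check, not part of the solution) that solves the boundary-value recurrence directly with numpy:

    import numpy as np

    def exit_right_probs(n):
        """Solve R(n,k) = R(n,k-1)/2 + R(n,k+1)/2 for k = 1..n,
        with boundary values R(n,0) = 0 and R(n,n+1) = 1."""
        A = np.zeros((n, n))
        b = np.zeros(n)
        for k in range(1, n + 1):
            row = k - 1
            A[row, row] = 1.0
            if k > 1:
                A[row, row - 1] = -0.5   # -R(n,k-1)/2
            if k < n:
                A[row, row + 1] = -0.5   # -R(n,k+1)/2
            else:
                b[row] = 0.5             # R(n,n+1)/2 = 1/2 moves to the right side
        return np.linalg.solve(A, b)

    n = 7
    print(exit_right_probs(n))                      # solved probabilities
    print([k / (n + 1) for k in range(1, n + 1)])   # claimed values k/(n+1)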
Solution: Let’s add vertices 0 and n + 1 to the ends of our path. Let R(n, k)
denote the probability that our random walk visits vertex n + 1 before it visits
vertex 0, assuming we start at vertex k. I claim that R(n, k) = k/(n + 1) for all integers n and k such that n > 0 and 0 ≤ k ≤ n + 1.
Fix arbitrary integers n and k such that n > 0 and 0 ≤ k ≤ n + 1. As an inductive hypothesis, assume R(m, j) = j/(m + 1) for all integers m and j such that 0 < m < n and 0 ≤ j ≤ m + 1.
We immediately have R(n, 0) = 0 and R(n, n + 1) = 1, so suppose 1 ≤ k ≤ n.
Any random walk from vertex k to vertex n + 1 must consist of a random walk
from vertex k to vertex n, followed by an independent random walk from vertex n
to vertex n + 1. Thus,

    R(n, k) = R(n − 1, k) · R(n, n).

The inductive hypothesis implies R(n − 1, k) = k/n, and part (a), reflected left to right, implies R(n, n) = n/(n + 1). We conclude that R(n, k) = (k/n) · (n/(n + 1)) = k/(n + 1). ■
Rubric: 3 points. A proof that relies on part (a) is worth full credit, but only if a standalone
solution is given for part (a).
(c) Prove that if we start at vertex 1, the expected number of steps before the random
walk ends is exactly n.
Solution: Let S(n) be the expected number of steps before the random walk
ends, assuming we start at vertex 1. We immediately observe that S(0) = 0 and
S(1) = 1.
So assume n ≥ 2. In the first step, either the random walk ends immediately,
or it enters the interior path from 2 to n − 1. In the latter case, the random walk
eventually leaves this shorter path, after which we are once again at the end of a
path of length n. Linearity of expectation now implies
    S(n) = (1/2) · 1 + (1/2) · (1 + S(n − 2) + S(n))
or equivalently, S(n) = S(n−2)+2. The closed form S(n) = n follows immediately
by induction. ■
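A short simulation (again our own addition, with hypothetical helper names) agrees with this closed form:

    import random

    def walk_length(n, start=1):
        """Count steps until the walk leaves the path 1..n."""
        k, steps = start, 0
        while 1 <= k <= n:
            k += random.choice((-1, 1))
            steps += 1
        return steps

    # Starting at vertex 1, the empirical average should approach n.
    for n in (2, 5, 10):
        trials = 50_000
        print(n, sum(walk_length(n) for _ in range(trials)) / trials)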
Rubric: 2 points. “See part (d)” is worth 2/3 of the score for part (d), unless the submitted
solution to part (d) relies on part (c).
(d) What is the exact expected length of the random walk if we start at vertex k, as a
function of n and k? Prove your result is correct. (For partial credit, give a tight
Θ-bound for the case k = (n + 1)/2, assuming n is odd.)
Solution: For all integers n and k such that 0 ≤ k ≤ n + 1, let S(n, k) denote
the expected number of steps for the random walk to reach either vertex 0 or
vertex n + 1, assuming we start at vertex k. For all n, we immediately have
S(n, 0) = S(n, n + 1) = 0. (Alternatively, if you prefer, part (c) implies S(n, 1) =
S(n, n) = n.) If 1 ≤ k ≤ n, linearity of expectation implies
    S(n, k) = 1 + (1/2) · S(n, k − 1) + (1/2) · S(n, k + 1),

or equivalently, S(n, k + 1) − S(n, k) = S(n, k) − S(n, k − 1) − 2. Thus, the consecutive differences of the sequence S(n, 0), S(n, 1), . . . , S(n, n + 1) form a decreasing arithmetic sequence, so S(n, k) is a quadratic function of k with leading coefficient −1. The boundary conditions S(n, 0) = S(n, n + 1) = 0 now imply S(n, k) = k(n + 1 − k). ■
Solution: For all integers n and k such that 0 ≤ k ≤ n + 1, let S(n, k) denote
the expected number of steps for the random walk to reach either vertex 0 or
vertex n + 1, assuming we start at vertex k.
Let’s break the random walk starting at k into two phases. The first phase
ends when the walk reaches either vertex 1 or vertex n for the first time; the
second phase is the rest of the walk.
The expected number of steps to reach either 1 or n from k is equal to
the expected number of steps to reach either 0 or n − 1 from k − 1. Thus,
the expected length of the first phase is exactly S(n − 2, k − 1). The expected
length of the second phase is either S(n, 1) or S(n, n), and part (c) implies
S(n, 1) = S(n, n) = n. So we have a simple recurrence:
S(n, k) = S(n − 2, k − 1) + n
To solve the recurrence, there are two cases to consider. If k ≤ n/2, then
inductively expanding the recurrence k times gives us
    S(n, k) = S(n − 2k, 0) + ∑_{j=0}^{k−1} (n − 2j)
            = nk − 2 ∑_{j=0}^{k−1} j
            = nk − k(k − 1)
            = k(n − k + 1),

since S(n − 2k, 0) = 0. If k > n/2, the left-right symmetry of the walk gives S(n, k) = S(n, n + 1 − k), which reduces to the previous case because k(n − k + 1) is invariant under replacing k with n + 1 − k. (If n is odd and k = (n + 1)/2, expanding the recurrence instead bottoms out at the base case S(1, 1) = 1, and the same arithmetic applies.) We conclude that S(n, k) = k(n − k + 1) for all k. ■
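The closed form from either solution can be cross-checked by solving the boundary-value recurrence numerically; the following sketch is our own addition, not part of the solution:

    import numpy as np

    def expected_steps(n):
        """Solve S(n,k) = 1 + S(n,k-1)/2 + S(n,k+1)/2 for k = 1..n,
        with S(n,0) = S(n,n+1) = 0."""
        A = np.zeros((n, n))
        b = np.ones(n)                    # the "+1" step cost at each interior vertex
        for k in range(1, n + 1):
            row = k - 1
            A[row, row] = 1.0
            if k > 1:
                A[row, row - 1] = -0.5
            if k < n:
                A[row, row + 1] = -0.5
        return np.linalg.solve(A, b)

    n = 8
    print(expected_steps(n))                            # solved expectations
    print([k * (n - k + 1) for k in range(1, n + 1)])   # claimed k(n-k+1)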
Rubric: 3 points = 1 for exact solution + 2 for proof. A proof that refers to part (c) is worth
full credit only if a standalone proof is given for part (c). A Θ(n²) bound for the special case
k = (n + 1)/2 is worth 2 points.
2. Tabulated hashing uses tables of random numbers to compute hash values. Suppose
|U| = 2^w × 2^w and m = 2^ℓ, so the items being hashed are pairs of w-bit strings (or 2w-bit
strings broken in half) and hash values are ℓ-bit strings.
Let A[0 .. 2^w − 1] and B[0 .. 2^w − 1] be arrays of independent random ℓ-bit strings, and define the hash function hA,B : U → [m] by setting

    hA,B(x, y) = A[x] ⊕ B[y],

where ⊕ denotes bit-wise exclusive-or. Let H denote the set of all possible functions hA,B.
Filling the arrays A and B with independent random bits is equivalent to choosing a hash
function hA,B ∈ H uniformly at random.
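To make the construction concrete, here is a minimal Python sketch of this hash family; make_tabulation_hash is our own name, not the homework's:

    import random

    def make_tabulation_hash(w, ell):
        """Draw h_{A,B}(x, y) = A[x] XOR B[y] uniformly from H, where A and B
        are tables of 2^w independent uniform ell-bit values."""
        m = 1 << ell
        A = [random.randrange(m) for _ in range(1 << w)]
        B = [random.randrange(m) for _ in range(1 << w)]
        return lambda x, y: A[x] ^ B[y]

    # Example: hash pairs of 8-bit strings to 4-bit values.
    h = make_tabulation_hash(8, 4)
    print(h(17, 200))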
Solution: Fix two distinct pairs (x, y) ≠ (x′, y′) and two hash values i and j. Let a = A[x], b = B[y], a′ = A[x′], and b′ = B[y′], and say that a, b, a′, b′ are good if a ⊕ b = i and a′ ⊕ b′ = j. We need to prove that

    Pr[a, b, a′, b′ are good] = 1/m².
There are three cases to consider.
• Suppose x ≠ x′ and y ≠ y′. Then a, b, a′, b′ are four distinct array entries, and therefore four mutually independent random ℓ-bit strings. There are m⁴ possible values for a, b, a′, b′. If we fix a and a′ arbitrarily, there is exactly one good value of b and exactly one good value of b′, namely, b = a ⊕ i and b′ = a′ ⊕ j. Thus, there are m² good values for a, b, a′, b′. We conclude that the probability that a, b, a′, b′ are good is m²/m⁴ = 1/m².
• Suppose x = x′ and y ≠ y′. Then a = a′, so there are only m³ possible values for a, b, a′, b′. If we fix a = a′ arbitrarily, there is exactly one good value of b and exactly one good value of b′, namely, b = a ⊕ i and b′ = a′ ⊕ j. Thus, there are m good values of a, b, a′, b′. We conclude that the probability that a, b, a′, b′ are good is m/m³ = 1/m².
• The final case x ≠ x′ and y = y′ is symmetric to the previous case. ■
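A brute-force empirical check of this calculation, for tiny w and ℓ, might look like the following sketch; the helper name and test parameters are our own:

    import random

    def pair_prob(w, ell, x, y, xp, yp, i, j, trials=200_000):
        """Estimate Pr[h(x,y) = i and h(xp,yp) = j] over the random
        choice of the tables A and B."""
        m = 1 << ell
        hits = 0
        for _ in range(trials):
            A = [random.randrange(m) for _ in range(1 << w)]
            B = [random.randrange(m) for _ in range(1 << w)]
            if A[x] ^ B[y] == i and A[xp] ^ B[yp] == j:
                hits += 1
        return hits / trials

    # With m = 4, all three cases should give probabilities near 1/m^2 = 1/16.
    print(pair_prob(2, 2, 0, 1, 2, 3, 1, 2))   # x != x' and y != y'
    print(pair_prob(2, 2, 0, 1, 0, 3, 1, 2))   # x == x' and y != y'
    print(pair_prob(2, 2, 0, 1, 2, 1, 1, 2))   # x != x' and y == y'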
Rubric: 3 points = 1 for basic setup + 1 for each case. This is more detail than necessary for
full credit. This is not the only correct solution. “See part (b)” is worth 3/4 of your score for
part (b).
Solution: Fix three distinct pairs (x, y), (x′, y′), and (x′′, y′′), and three hash values i, j, k. Let a = A[x] and b = B[y], and define a′, b′, a′′, b′′ similarly. Say that a, b, a′, b′, a′′, b′′ are good if a ⊕ b = i and a′ ⊕ b′ = j and a′′ ⊕ b′′ = k.
There are three cases to consider.
• Suppose x, x′, x′′ are all different. Then a, a′, a′′ are three mutually independent random ℓ-bit strings. Arbitrarily fix the values b, b′, b′′. There are m³ equally likely values for a, a′, a′′, but only one good value: a = b ⊕ i and a′ = b′ ⊕ j and a′′ = b′′ ⊕ k.
• If x = x ′ = x ′′ , then y, y ′ , y ′′ must be all different, and we can argue exactly
as in the previous case.
• The only remaining case (up to symmetry) is x = x′ ≠ x′′ and y ≠ y′ = y′′. Then there are m⁴ possible values for a, b, b′, a′′. If we fix a arbitrarily, the only good values of the remaining variables are b = a ⊕ i and b′ = a ⊕ j and a′′ = b′ ⊕ k = a ⊕ j ⊕ k. Thus, there are exactly m good values for a, b, b′, a′′.
In all cases, we conclude that Pr[a, b, a′, b′, a′′, b′′ are good] = 1/m³. ■
Rubric: 4 points = 1 for basic setup + 1 for each case. This is not the only correct solution.
Rubric: 3 points. This is more detail than necessary for full credit. This is not the only
correct solution.
3. Suppose we are given a coin that may or may not be biased, and we would like to compute an
accurate estimate of the probability of heads. Specifically, if the actual unknown probability
of heads is p, we would like to compute an estimate p̃ such that |p̃ − p| ≤ ϵ with probability at least 1 − δ. Consider the following subroutine, where the number of flips N is a parameter to be chosen later:
MeanEstimate(ϵ):
count ← 0
for i ← 1 to N
if Flip( ) = Heads
count ← count + 1
return count/N
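In Python, the subroutine might look like the following sketch; we pass the unknown bias p explicitly so that Flip() can be simulated, and the default constant α = 4 anticipates part (b):

    import math
    import random

    def mean_estimate(eps, p, alpha=4):
        """Flip the coin N = ceil(alpha / eps^2) times and return the
        fraction of heads, mirroring MeanEstimate above."""
        N = math.ceil(alpha / eps ** 2)
        count = sum(random.random() < p for _ in range(N))
        return count / N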
(a) Let p̃ denote the estimate returned by MeanEstimate(ϵ). Prove that E[p̃] = p.
Solution: For each index i, let X_i = 1 if the i-th coin flip comes up heads, and X_i = 0 otherwise, so that the final value of count is X = ∑_{i=1}^{N} X_i. Linearity of expectation implies

    E[X] = ∑_{i=1}^{N} Pr[X_i = 1] = Np,

and therefore E[p̃] = E[X]/N = p. ■
Rubric: 3 points.
(b) Prove that if we set N = ⌈α/ϵ²⌉ for some appropriate constant α, then Pr[|p̃ − p| > ϵ] <
1/4. [Hint: Use Chebyshev’s inequality.]
Solution: The coin flips are pairwise independent (in fact, fully independent), so we can apply Chebyshev's inequality. Let X be the final value of count, and recall from part (a) that µ = E[X] = Np. Pairwise independence implies Var[X] = ∑_i Var[X_i] = Np(1 − p) ≤ N/4, so Chebyshev's inequality gives us

    Pr[|p̃ − p| > ϵ] = Pr[|X − µ| > ϵN] ≤ Var[X]/(ϵN)² ≤ 1/(4ϵ²N) ≤ 1/(4α).

In particular, setting α = 4 gives us Pr[|p̃ − p| > ϵ] ≤ 1/16 < 1/4. ■
Rubric: 3 points. We can’t apply the form of Chebyshev’s inequality given in the notes to p̃
directly, because p̃ is not a sum of indicators.
(c) We can increase the previous estimator’s confidence by running it multiple times,
independently, and returning the median of the resulting estimates.
MedianOfMeansEstimate(δ, ϵ):
for j ← 1 to K
estimate[ j] ← MeanEstimate(ϵ)
return Median(estimate[1 .. K])
Let p∗ denote the estimate returned by MedianOfMeansEstimate(δ, ϵ). Prove
that if we set N = ⌈α/ϵ²⌉ (inside MeanEstimate) and K = ⌈β ln(1/δ)⌉, for some
appropriate constants α and β, then Pr[|p∗ − p| > ϵ] < δ. [Hint: Use Chernoff
bounds.]
Solution: Call the j-th mean estimate bad if |estimate[j] − p| > ϵ, let Y_j indicate the event that the j-th mean estimate is bad, and let Y = ∑_j Y_j denote the number of bad mean estimates. Our analysis in part (b) implies that if we set N = ⌈4/ϵ²⌉ (inside MeanEstimate), then Pr[Y_j = 1] < 1/4 for all j, and therefore E[Y] < K/4.
The median estimate p∗ is larger than p + ϵ if and only if at least half of the mean estimates are larger than p + ϵ. Similarly, p∗ < p − ϵ if and only if at least half of the mean estimates are smaller than p − ϵ. In either event, at least K/2 of the mean estimates are bad, so Pr[|p∗ − p| > ϵ] ≤ Pr[Y ≥ K/2].
The indicator variables Y j are mutually independent (because the coin flips inside
MeanEstimate are mutually independent). However, we cannot apply Chernoff
bounds directly to Y , because we would eventually need a lower bound on E[Y ].
Let Z_1, Z_2, . . . , Z_K be mutually independent indicator variables such that Pr[Z_i = 1] = 1/4 for all i, and let Z = ∑_{i=1}^{K} Z_i. We immediately have E[Z] = K/4. Because Pr[Y_j = 1] < 1/4 for all j, the count Y is stochastically dominated by Z; in particular, Pr[Y ≥ K/2] ≤ Pr[Z ≥ K/2]. Chernoff's inequality now implies

    Pr[Z ≥ K/2] = Pr[Z ≥ 2 E[Z]] ≤ exp(−E[Z]/3) = exp(−K/12),

which is at most δ if we set K = ⌈12 ln(1/δ)⌉. Thus, β = 12 suffices. ■
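Putting the pieces together, here is a runnable sketch of the boosted estimator (our own addition, reusing mean_estimate from the earlier sketch, with β = 12 as derived above):

    import math
    import statistics

    def median_of_means_estimate(delta, eps, p):
        """Return the median of K = ceil(12 * ln(1/delta)) independent
        mean estimates, mirroring MedianOfMeansEstimate above."""
        K = math.ceil(12 * math.log(1 / delta))
        return statistics.median(mean_estimate(eps, p) for _ in range(K))

    # Example: estimate p = 0.3 within 0.05, with failure probability 0.01.
    print(median_of_means_estimate(0.01, 0.05, 0.3))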
Rubric: 4 points. −1 for implicitly assuming that E[Y ] = K/4. A perfect solution must
explicitly invoke the fact that the mean estimates are mutually independent. This is more
detail than necessary for full credit.