3 Randomized Binary Search Trees
I thought the following four [rules] would be enough, provided that I made a firm and constant
resolution not to fail even once in the observance of them. The first was never to accept anything
as true if I had not evident knowledge of its being so.... The second, to divide each problem I
examined into as many parts as was feasible, and as was requisite for its better solution. The
third, to direct my thoughts in an orderly way...establishing an order in thought even when the
objects had no natural priority one to another. And the last, to make throughout such complete
enumerations and such general surveys that I might be sure of leaving nothing out.
— René Descartes, Discours de la Méthode (1637)
There are those who think that life has nothing left to chance
A host of holy horrors to direct our aimless dance
— Rush, “Freewill”, Permanent Waves (1980), lyrics by Neil Peart
What is luck?
Luck is probability taken personally.
It is the excitement of bad math.
— Penn Jillette (2001), quoting Chip Denman (1998)
3.1 Treaps
3.1.1 Definitions
A treap is a binary tree in which every node has both a search key and a priority, where the
inorder sequence of search keys is sorted and each node’s priority is smaller than the priorities of
its children.¹ In other words, a treap is simultaneously a binary search tree for the search keys
and a (min-)heap for the priorities. In our examples, we will use letters for the search keys and
numbers for the priorities.
[Figure: A treap. Letters are search keys; numbers are priorities.]
I’ll assume from now on that all the keys and priorities are distinct. Under this assumption,
we can easily prove by induction that the structure of a treap is completely determined by the
¹Sometimes I hate English. Normally, ‘higher priority’ means ‘more important’, but ‘first priority’ is also more
important than ‘second priority’. Maybe ‘posteriority’ would be better; one student suggested ‘unimportance’.
search keys and priorities of its nodes. Since it's a min-heap, the node v with smallest priority must be
the root. Since it's also a binary search tree, any node u with key(u) < key(v) must be in the left
subtree, and any node w with key(w) > key(v) must be in the right subtree. Finally, since the
subtrees are treaps, by induction, their structures are completely determined. The base case is
the trivial empty treap.
Another way to describe the structure is that a treap is exactly the binary search tree that
results by inserting the nodes one at a time into an initially empty tree, in order of increasing
priority, using the standard textbook insertion algorithm. This characterization is also easy to
prove by induction.
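This characterization is easy to check in code. The following Python sketch (my own, not from the notes) inserts keys in order of increasing priority using textbook BST insertion and verifies that the heap property always holds afterward:

    import random

    class Node:
        def __init__(self, key, priority):
            self.key, self.priority = key, priority
            self.left = self.right = None

    def bst_insert(root, node):
        """Standard textbook BST insertion at the bottom of the tree."""
        if root is None:
            return node
        if node.key < root.key:
            root.left = bst_insert(root.left, node)
        else:
            root.right = bst_insert(root.right, node)
        return root

    def is_min_heap(root):
        """Check that every node's priority is smaller than its children's."""
        if root is None:
            return True
        for child in (root.left, root.right):
            if child is not None and child.priority < root.priority:
                return False
        return is_min_heap(root.left) and is_min_heap(root.right)

    # Insert in order of increasing priority; the result is always a treap.
    items = [(k, random.random()) for k in "AGHILMORT"]
    root = None
    for key, pri in sorted(items, key=lambda kp: kp[1]):
        root = bst_insert(root, Node(key, pri))
    assert is_min_heap(root)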
A third description interprets the keys and priorities as the coordinates of a set of points in
the plane. The root corresponds to a T whose joint lies on the topmost point. The T splits the
plane into three parts. The top part is (by definition) empty; the left and right parts are split
recursively. This interpretation has some interesting applications in computational geometry,
which (unfortunately) we won’t have time to talk about.
[Figure: the treap above, drawn as a set of points in the plane; the x-coordinates are the search keys A through T, and the y-coordinates are the priorities 1 through 9.]
Treaps were first discovered by Jean Vuillemin in 1980, but he called them Cartesian trees.²
The word ‘treap’ was first used by Edward McCreight around 1980 to describe a slightly different
data structure, but he later switched to the more prosaic name priority search trees.³ Treaps were
rediscovered and used to build randomized search trees by Cecilia Aragon and Raimund Seidel in
1989.⁴ A different kind of randomized binary search tree, which uses random rebalancing instead
of random priorities, was later discovered and analyzed by Conrado Martínez and Salvador Roura
in 1996.⁵
3.1.2 Treap Operations

The search algorithm is the usual one for binary search trees. The time for a successful search is
proportional to the depth of the node. The time for an unsuccessful search is proportional to the
depth of either its successor or its predecessor.
To insert a new node z, we start by using the standard binary search tree insertion algorithm
to insert it at the bottom of the tree. At this point, the search keys still form a search tree, but the
²J. Vuillemin. A unifying look at data structures. Commun. ACM 23:229–239, 1980.
³E. M. McCreight. Priority search trees. SIAM J. Comput. 14(2):257–276, 1985.
⁴R. Seidel and C. R. Aragon. Randomized search trees. Algorithmica 16:464–497, 1996.
⁵C. Martínez and S. Roura. Randomized binary search trees. J. ACM 45(2):288–323, 1998. The results in this paper
are virtually identical (including the constant factors!) to the corresponding results for treaps, although the analysis
techniques are quite different.
priorities may no longer form a heap. To fix the heap property, as long as z has smaller priority
than its parent, perform a rotation at z, a local operation that decreases the depth of z by one
and increases its parent’s depth by one, while maintaining the search tree property. Rotations
can be performed in constant time, since they only involve simple pointer manipulation.
[Figure: A right rotation at x and a left rotation at y are inverses.]
The overall time to insert z is proportional to the depth of z before the rotations—we have to
walk down the treap to insert z, and then walk back up the treap doing rotations. Another way
to say this is that the time to insert z is roughly twice the time to perform an unsuccessful search
for key(z).
[Figure. Left to right: after inserting S with priority −1, rotate it up to fix the heap property.
Right to left: before deleting S, rotate it down to make it a leaf.]
To delete a node, we just run the insertion algorithm backward in time. Suppose we want to
delete node z. As long as z is not a leaf, perform a rotation at the child of z with smaller priority.
This moves z down a level and its smaller-priority child up a level. The choice of which child to
rotate preserves the heap property everywhere except at z. When z becomes a leaf, chop it off.
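Here is a minimal Python sketch of these operations; the class and function names are mine, not the notes'. Insertion draws a uniform random priority by default, anticipating the randomized treaps defined below; deletion rotates the doomed node down, always promoting the child with smaller priority, and chops it off when it reaches a leaf:

    import random

    class Node:
        def __init__(self, key, priority=None):
            self.key = key
            # Priority drawn uniformly from [0, 1) unless one is supplied.
            self.priority = random.random() if priority is None else priority
            self.left = self.right = None

    def rotate_right(y):
        """Promote y's left child x; preserves the inorder key sequence."""
        x = y.left
        y.left, x.right = x.right, y
        return x

    def rotate_left(x):
        """Promote x's right child y; preserves the inorder key sequence."""
        y = x.right
        x.right, y.left = y.left, x
        return y

    def insert(root, key, priority=None):
        """BST-insert a new leaf, then rotate it up while its priority is
        smaller than its parent's (restoring the min-heap property)."""
        if root is None:
            return Node(key, priority)
        if key < root.key:
            root.left = insert(root.left, key, priority)
            if root.left.priority < root.priority:
                root = rotate_right(root)
        else:
            root.right = insert(root.right, key, priority)
            if root.right.priority < root.priority:
                root = rotate_left(root)
        return root

    def delete(root, key):
        """Insertion in reverse: rotate the doomed node down, promoting
        the child with smaller priority, until it is a leaf; then chop it off."""
        if root is None:
            return None                      # key not present
        if key < root.key:
            root.left = delete(root.left, key)
        elif key > root.key:
            root.right = delete(root.right, key)
        elif root.left is None:
            return root.right                # at most one child: just chop
        elif root.right is None:
            return root.left
        elif root.left.priority < root.right.priority:
            root = rotate_right(root)        # smaller-priority child moves up
            root.right = delete(root.right, key)
        else:
            root = rotate_left(root)
            root.left = delete(root.left, key)
        return root

    root = None
    for key in "MHTGIRALO":
        root = insert(root, key)
    root = delete(root, "G")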
We sometimes also want to split a treap T into two treaps T< and T> along some pivot key π,
so that all the nodes in T< have keys less than π and all the nodes in T> have keys bigger than
π. A simple way to do this is to insert a new node z with key(z) = π and priority(z) = −∞.
After the insertion, the new node is the root of the treap. If we delete the root, the left and right
sub-treaps are exactly the trees we want. The time to split at π is roughly twice the time to
(unsuccessfully) search for π.
Similarly, we may want to join two treaps T< and T> , where every node in T< has a smaller
search key than any node in T> , into one super-treap. Merging is just splitting in reverse—create
a dummy root whose left sub-treap is T< and whose right sub-treap is T> , rotate the dummy
node down to a leaf, and then cut it off.
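In code, split and join are usually written as short top-down recursions rather than via an explicit dummy node; the recursive versions below (a sketch of mine, reusing the Node class from the previous sketch) perform exactly the rotations the dummy-node description implies:

    def split(root, pivot):
        """Split a treap into (keys < pivot, keys > pivot). Equivalent to
        inserting a dummy root with key pivot and priority -infinity and
        then deleting it, expressed as one top-down recursion."""
        if root is None:
            return None, None
        if root.key < pivot:
            less, greater = split(root.right, pivot)
            root.right = less
            return root, greater
        else:
            less, greater = split(root.left, pivot)
            root.left = greater
            return less, root

    def join(less, greater):
        """Join two treaps, assuming every key in `less` is smaller than
        every key in `greater`. Equivalent to hanging both under a dummy
        root and rotating it down to a leaf: the smaller-priority root
        wins each comparison."""
        if less is None:
            return greater
        if greater is None:
            return less
        if less.priority < greater.priority:
            less.right = join(less.right, greater)
            return less
        else:
            greater.left = join(less, greater.left)
            return greater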
The cost of each of these operations is proportional to the depth of some node v in the treap.
• Search: A successful search for key k takes O(depth(v)) time, where v is the node with
key(v) = k. For an unsuccessful search, let v⁻ be the inorder predecessor of k (the node
whose key is just barely smaller than k), and let v⁺ be the inorder successor of k (the
node whose key is just barely larger than k). Since the last node examined by the binary
search is either v⁻ or v⁺, the time for an unsuccessful search is either O(depth(v⁺)) or
O(depth(v⁻)).
• Insert/Delete: Inserting a new node with key k takes either O(depth(v⁺)) time or
O(depth(v⁻)) time, where v⁻ and v⁺ are the predecessor and successor of the new node.
Deletion is insertion in reverse.
• Split/Join: Splitting a treap at pivot value k takes either O(depth(v⁺)) time or O(depth(v⁻))
time, because the split is identical to inserting a new dummy root with search key k and
priority −∞. Merging is splitting in reverse.
In the worst case, the depth of an n-node treap is Θ(n), so each of these operations has a
worst-case running time of Θ(n).
3.1.3 Analysis

A randomized treap is a treap in which the priorities are independently and uniformly distributed
continuous random variables. Whenever we insert a new search key into the treap, we indepen-
dently generate a real number (say) uniformly at random between 0 and 1 and use that number
as the priority of the new node. The precise distribution is unimportant, as long as the same
distribution is used for all nodes, and the probability that two nodes have equal priorities is
zero. (Equal priorities make the analysis slightly messier; in practice, we can choose random
integers from a large range, like 0 to 2³¹ − 1, and break ties arbitrarily; occasional ties have
almost no practical effect on the performance of the data structure.) Also, since the priorities are
independent, each node is equally likely to have the smallest priority.
The cost of all the operations we discussed—search, insert, delete, split, join—is proportional
to the depth of some node in the tree. Here we’ll see that the expected depth of any node
is O(log n), which implies that the expected running time for any of those operations is also
O(log n).
Let x_k denote the node with the kth smallest search key. To simplify notation, let us write
i ↑ k (read “i above k”) to mean that x_i is a proper ancestor of x_k. By definition, the depth of v
is the number of proper ancestors of v, so we can write
$$\mathrm{depth}(x_k) = \sum_{i=1}^{n} \, [i \uparrow k].$$
(Again, we're using Iverson bracket notation.) Now we can express the expected depth of a node
in terms of these indicator variables as follows:
$$\mathrm{E}[\mathrm{depth}(x_k)] = \sum_{i=1}^{n} \mathrm{E}\bigl[[i \uparrow k]\bigr] = \sum_{i=1}^{n} \Pr[i \uparrow k].$$
(Just as in our analysis of matching nuts and bolts, we're using linearity of expectation and the
fact that E[X] = Pr[X = 1] for any zero-one variable X; in this case, X = [i ↑ k].) So to compute
the expected depth of a node, we only need to compute the probabilities that one node is a
proper ancestor of another.
Fortunately, we can do this easily once we prove a simple structural lemma. Let X(i, k) denote
either the subset of treap nodes {x_i, x_{i+1}, ..., x_k} or the subset {x_k, x_{k+1}, ..., x_i}, depending on
whether i < k or i > k. The order of the arguments is unimportant; the subsets X(i, k) and
X(k, i) are identical. The subset X(1, n) = X(n, 1) contains all n nodes in the treap.
Lemma 1. For all i ≠ k, we have i ↑ k if and only if x_i has the smallest priority among all nodes
in X(i, k).
Since each node in X(i, k) is equally likely to have smallest priority, we immediately have the
probability we wanted:
$$\Pr[i \uparrow k] \;=\; \frac{[i \neq k]}{|k - i| + 1} \;=\; \begin{cases} 1/(k-i+1) & \text{if } i < k \\ 0 & \text{if } i = k \\ 1/(i-k+1) & \text{if } i > k \end{cases}$$
To compute the expected depth of a node x_k, we plug this probability into our formula and grind
through the algebra:
$$\begin{aligned} \mathrm{E}[\mathrm{depth}(x_k)] &= \sum_{i=1}^{n} \Pr[i \uparrow k] = \sum_{i=1}^{k-1} \frac{1}{k-i+1} + \sum_{i=k+1}^{n} \frac{1}{i-k+1} \\ &= \sum_{j=2}^{k} \frac{1}{j} + \sum_{j=2}^{n-k+1} \frac{1}{j} \\ &= H_k - 1 + H_{n-k+1} - 1 \\ &< \ln k + \ln(n-k+1) - 2 \\ &< 2 \ln n - 2. \end{aligned}$$
In conclusion, every search, insertion, deletion, split, and join operation in an n-node randomized
binary search tree takes O(log n) expected time.
Since a treap is exactly the binary tree that results when you insert the keys in order of
increasing priority, a randomized treap is the result of inserting the keys in random order. So
our analysis also automatically gives us the expected depth of any node in a binary tree built by
random insertions (without using priorities).
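This suggests a quick empirical sanity check, which needs no priorities at all: build binary search trees by random insertion and compare the average node depth against the 2 ln n − 2 bound. A sketch of such an experiment (mine, not from the notes):

    import math, random

    def total_depth(n):
        """Total node depth of a BST built by inserting 0..n-1 in random order."""
        keys = list(range(n))
        random.shuffle(keys)
        children = {}                  # key -> [left child, right child]
        root, total = keys[0], 0
        for key in keys[1:]:
            node, depth = root, 0
            while True:
                kids = children.setdefault(node, [None, None])
                side = 0 if key < node else 1
                depth += 1
                if kids[side] is None:
                    kids[side] = key
                    break
                node = kids[side]
            total += depth             # depth of the newly inserted node
        return total

    n, trials = 1000, 20
    avg = sum(total_depth(n) for _ in range(trials)) / (trials * n)
    print(f"average depth ~ {avg:.2f}; bound 2 ln n - 2 = {2 * math.log(n) - 2:.2f}")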
We’ve already seen two completely different ways of describing randomized quicksort. The first
is the familiar recursive one: choose a random pivot, partition, and recurse. The second is a
less familiar iterative version: repeatedly choose a new random pivot, partition whatever subset
contains it, and continue. But there's a third⁶ way to describe randomized quicksort, this time in
terms of (generic, off-the-shelf, standard) binary search trees.
⁶See Larry Niven and Jerry Pournelle, The Gripping Hand, Pocket Books, 1994.
RandomizedQuicksort(A[1 .. n]):
T ← an empty binary search tree
for i ← 1 to n in random order
insert A[i] into T
output the inorder sequence of keys in T
Our treap analysis tells us that this algorithm runs in O(n log n) expected time, since each
key is inserted in O(log n) expected time.
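For concreteness, here is a runnable Python version of the pseudocode above; it assumes the keys are distinct, as the notes do throughout:

    import random

    def randomized_quicksort(A):
        """Insert the keys into a binary search tree in random order, then
        read off the inorder sequence. Assumes distinct keys."""
        items = list(A)
        random.shuffle(items)          # "for i <- 1 to n in random order"
        if not items:
            return []
        left, right = {}, {}           # child pointers, keyed by node value
        root = items[0]
        for key in items[1:]:
            node = root
            while True:
                child = left if key < node else right
                if node in child:
                    node = child[node]
                else:
                    child[node] = key
                    break
        result = []
        def inorder(node):
            if node is not None:
                inorder(left.get(node))
                result.append(node)
                inorder(right.get(node))
        inorder(root)
        return result

    print(randomized_quicksort([3, 1, 4, 5, 9, 2, 6]))   # [1, 2, 3, 4, 5, 6, 9]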
Why is this quicksort? As in our analysis of Rawlins' nuts-and-bolts algorithm, all we've done
is rearrange the order of the comparisons. The binary search tree is the recursion tree created
by the normal version of quicksort. In the recursive formulation, we first compare the initial
pivot against everything else and then recurse. In the binary-tree formulation, the first “pivot”
becomes the root of the tree without any comparisons, but then later, as other keys are inserted
into the tree, each new key is compared against the root. Either way, the first pivot chosen is
compared with every other key. The partition splits the remaining items into a left subarray
and a right subarray; in the binary tree version, these are exactly the items that go into the left
subtree and the right subtree. Because both algorithms define the same two subproblems, by
induction, both algorithms perform the same comparisons within those subproblems.
We've even seen the probability 1/(|k − i| + 1) before, in the analysis of Rawlins' algorithm. In the
more familiar setting of sorting an array of numbers, the probability that randomized quicksort
compares the ith largest and kth largest elements is exactly 2/(|k − i| + 1). The binary-tree
version of quicksort compares x_i and x_k if and only if either i ↑ k or k ↑ i, so the probabilities are
exactly the same.
3.2 Skip Lists

Skip lists, first described by William Pugh⁷, are a randomized alternative to balanced binary search
trees. We start with a sorted linked list, with sentinel values −∞ and +∞ at the ends, and add a
second sorted linked list of “shortcuts” containing roughly every other value from the main list.

[Figure: a sorted linked list of values −∞, 0 through 9, +∞, augmented with a shortcut list containing −∞, 0, 1, 3, 6, 7, 9, +∞.]
Now we can find a value x in this augmented structure using a two-stage algorithm. First,
we scan for x in the shortcut list, starting at the −∞ sentinel node. If we find x, we’re done.
Otherwise, we reach some value bigger than x and we know that x is not in the shortcut list. Let
w be the largest item less than x in the shortcut list. In the second phase, we scan for x in the
original list, starting from w. Again, if we reach a value bigger than x, we know that x is not in
the data structure.
⁷William Pugh. Skip lists: A probabilistic alternative to balanced trees. Commun. ACM 33(6):668–676, 1990.
[Figure: searching for a value in the augmented list; the first phase scans the shortcut list, and the second phase scans the original list starting from the largest shortcut value less than the target.]
Since each node appears in the shortcut list with probability 1/2, the expected number of
nodes examined in the first phase is at most n/2. Only one of the nodes examined in the second
phase has a duplicate. The probability that any node is followed by k nodes without duplicates is
2^{−k}, so the expected number of nodes examined in the second phase is at most
$1 + \sum_{k \ge 1} 2^{-k} = 2$.
Thus, by adding these random shortcuts, we've reduced the cost of a search from n to n/2 + 2,
roughly a factor of two in savings.
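A small sketch of the two-phase search, using sorted Python lists to stand in for the two linked lists (all names are mine):

    import random

    def two_phase_search(keys, shortcuts, x):
        """keys and shortcuts are sorted lists. Phase 1 scans the shortcut
        list; phase 2 scans the original list starting from w, the largest
        shortcut value less than x."""
        w = None
        for s in shortcuts:                      # phase 1
            if s == x:
                return True
            if s > x:
                break
            w = s
        # keys.index(w) stands in for the pointer a real implementation
        # would keep from each shortcut node down to its original copy.
        start = 0 if w is None else keys.index(w)
        for k in keys[start:]:                   # phase 2
            if k == x:
                return True
            if k > x:
                break
        return False

    keys = list(range(10))
    shortcuts = sorted(random.sample(keys, len(keys) // 2))  # ~half the keys
    for x in range(-1, 11):
        assert two_phase_search(keys, shortcuts, x) == (x in keys)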
Now there’s an obvious improvement—add shortcuts to the shortcuts, and repeat recursively.
That’s exactly how skip lists are constructed. For each node in the original list, we repeatedly
flip a coin until we get tails. Each time we get heads, we make a new copy of the node. The
duplicates are stacked up in levels, and the nodes on each level are strung together into sorted
linked lists. Each node v stores a search key key(v), a pointer down(v) to its next lower copy,
and a pointer right(v) to the next node in its level.
[Figure: a skip list storing the values 0 through 9; each level contains roughly half the values of the level below, plus the sentinels −∞ and +∞.]
The search algorithm for skip lists is very simple. Starting at the leftmost node L in the
highest level, we scan through each level as far as we can without passing the target value x, and
then proceed down to the next level. The search ends when we either reach a node with search
key x or fail to find x on the lowest level.
SkipListFind(x, L):
  v ← L
  while (v ≠ Null and key(v) ≠ x)
    if key(right(v)) > x
      v ← down(v)
    else
      v ← right(v)
  return v
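Here is a minimal Python sketch of a skip list built from the down/right pointers described above, with the same search logic as SkipListFind; all names are mine, and keys are assumed distinct:

    import random

    class SkipNode:
        def __init__(self, key, right=None, down=None):
            self.key, self.right, self.down = key, right, down

    NEG_INF, POS_INF = float("-inf"), float("inf")

    def empty_skip_list():
        """A single level holding only the sentinels."""
        return SkipNode(NEG_INF, right=SkipNode(POS_INF))

    def find(x, L):
        """SkipListFind from the pseudocode above; L is the top-left sentinel."""
        v = L
        while v is not None and v.key != x:
            v = v.down if v.right.key > x else v.right
        return v

    def insert(x, L):
        """Insert a new key x (assumed not already present). Flip a fair coin
        until tails; x occupies one level per flip. Returns the top-left sentinel."""
        preds, v = [], L              # predecessor of x on each level, top down
        while v is not None:
            while v.right.key < x:
                v = v.right
            preds.append(v)
            v = v.down
        height = 1
        while random.random() < 0.5:  # heads: one more level
            height += 1
        while height > len(preds):    # grow the list with fresh empty levels
            L = SkipNode(NEG_INF, right=SkipNode(POS_INF), down=L)
            preds.insert(0, L)
        below = None
        for p in reversed(preds[-height:]):   # splice copies of x, bottom up
            below = SkipNode(x, right=p.right, down=below)
            p.right = below
        return L

    L = empty_skip_list()
    for k in range(10):
        L = insert(k, L)
    assert find(7, L).key == 7 and find(42, L) is None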
Intuitively, since each level of the skip list has about half the number of nodes as the level
below it, the total number of levels should be about O(log n). Similarly, each time we add another
level of random shortcuts to the skip list, we cut the search time roughly in half, except for a
constant overhead, so O(log n) levels should give us an overall expected search time of O(log n).

[Figure: searching for a value in a skip list; the search path moves rightward and downward through the levels.]
Let’s formalize each of these two intuitive observations.
The actual values of the search keys don’t affect the skip list analysis, so let’s assume the keys
are the integers 1 through n. Let L(x) be the number of levels of the skip list that contain some
search key x, not counting the bottom level. Each new copy of x is created with probability 1/2
from the previous level, essentially by flipping a coin. We can compute the expected value of
L(x) recursively—with probability 1/2, we flip tails and L(x) = 0; and with probability 1/2, we
flip heads, increase L(x) by one, and recurse:
$$\mathrm{E}[L(x)] = \tfrac{1}{2} \cdot 0 + \tfrac{1}{2}\bigl(1 + \mathrm{E}[L(x)]\bigr).$$
Solving this equation gives us E[L(x)] = 1.
In order to analyze the expected worst-case cost of a search, however, we need a bound on
the number of levels L = max_x L(x). Unfortunately, we can't compute the average of a maximum
the way we would compute the average of a sum. Instead, we derive a stronger result: The
depth of a skip list storing n keys is O(log n) with high probability. “High probability” is a
technical term that means the probability is at least 1 − 1/n^c for some constant c ≥ 1; the hidden
constant in the O(log n) bound could depend on c.

In order for a search key x to appear on level ℓ, it must have flipped ℓ heads in a row when it
was inserted, so Pr[L(x) ≥ ℓ] = 2^{−ℓ}. The skip list has at least ℓ levels if and only if L(x) ≥ ℓ for
at least one of the n search keys:
$$\Pr[L \ge \ell] = \Pr\bigl[(L(1) \ge \ell) \lor (L(2) \ge \ell) \lor \cdots \lor (L(n) \ge \ell)\bigr].$$
Using the union bound — Pr[A ∨ B] ≤ Pr[A] + Pr[B] for any random events A and B — we can
simplify this as follows:
$$\Pr[L \ge \ell] \;\le\; \sum_{x=1}^{n} \Pr[L(x) \ge \ell] \;=\; n \cdot \Pr[L(x) \ge \ell] \;=\; \frac{n}{2^{\ell}}.$$
When ℓ ≤ lg n, this bound is trivial. However, for any constant c > 1, we have a strong upper
bound
$$\Pr[L \ge c \lg n] \;\le\; \frac{1}{n^{c-1}}.$$
We conclude that with high probability, a skip list has O(log n) levels.
This high-probability bound indirectly implies a bound on the expected number of levels.
Some simple algebra gives us the following alternate definition for expectation:
$$\mathrm{E}[L] \;=\; \sum_{\ell \ge 0} \ell \cdot \Pr[L = \ell] \;=\; \sum_{\ell \ge 1} \Pr[L \ge \ell].$$
Clearly, if ℓ < ℓ′, then Pr[L(x) ≥ ℓ] > Pr[L(x) ≥ ℓ′]. So we can derive an upper bound on the
expected number of levels as follows:
$$\begin{aligned} \mathrm{E}[L] &= \sum_{\ell \ge 1} \Pr[L \ge \ell] = \sum_{\ell=1}^{\lceil \lg n \rceil} \Pr[L \ge \ell] + \sum_{\ell > \lceil \lg n \rceil} \Pr[L \ge \ell] \\ &\le \sum_{\ell=1}^{\lceil \lg n \rceil} 1 + \sum_{\ell > \lceil \lg n \rceil} \frac{n}{2^{\ell}} \\ &= \lceil \lg n \rceil + \frac{n}{2^{\lceil \lg n \rceil}} \sum_{i \ge 1} \frac{1}{2^{i}} \qquad [i = \ell - \lceil \lg n \rceil] \\ &\le (\lg n + 1) + 1 \;=\; \lg n + 2 \end{aligned}$$
So in expectation, a skip list has at most two more levels than an ideal version where each level
contains exactly half the nodes of the next level below. Notice that this is an additive penalty
over a perfectly balanced structure, as opposed to treaps, where the expected depth is a constant
multiple of the ideal lg n.
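This bound is easy to check empirically; the following sketch (mine, not from the notes) simulates only the coin flips that determine the levels, without building any lists:

    import math, random

    def num_levels(n):
        """L = max over n keys of the number of heads before the first tails."""
        L = 0
        for _ in range(n):
            heads = 0
            while random.random() < 0.5:
                heads += 1
            L = max(L, heads)
        return L

    n, trials = 10_000, 100
    avg = sum(num_levels(n) for _ in range(trials)) / trials
    print(f"E[L] ~ {avg:.2f} vs lg n + 2 = {math.log2(n) + 2:.2f}")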
It’s a little easier to analyze the cost of a search if we imagine running the algorithm backwards.
dniFtsiLpikS takes the output from SkipListFind as input and traces back through the data
structure to the upper left corner. Skip lists don’t really have up and left pointers, but we’ll
pretend that they do so we don’t have to write ‘) v (nwod ← v ’ or ‘) v (thgir ← v ’.⁸
dniFtsiLpikS(v):
  while (level(v) ≠ L)
    if up(v) exists
      v ← up(v)
    else
      v ← left(v)
Now for every node v in the skip list, up(v) exists with probability 1/2. So for purposes of
analysis, dniFtsiLpikS is equivalent to the following algorithm:
FlipWalk(v):
  while (v ≠ L)
    if CoinFlip = Heads
      v ← up(v)
    else
      v ← left(v)
⁸ seirevocsid sih peek ot detnaw eh esuaceb ton tub ,gnitirw-rorrim gnisu seton sih lla etorw icniV ad odranoeL
!dnah thgir sih ni sitirhtra dab yllaer dah tsuj eH .terces
Obviously, the expected number of heads is exactly the same as the expected number of tails.
Thus, the expected running time of this algorithm is twice the expected number of upward
jumps. But we already know that the number of upward jumps is O(log n) with high probability.
It follows that the running time of FlipWalk is O(log n) with high probability (and therefore in
expectation).
Exercises
1. Prove that a treap is exactly the binary search tree that results from inserting the nodes
one at a time into an initially empty tree, in order of increasing priority, using the standard
textbook insertion algorithm.
2. Consider a treap T with n vertices. As in the notes above, identify nodes in T by the ranks
of their search keys; thus, ‘node 5’ means the node with the 5th smallest search key. Let
i, j, and k be integers such that 1 ≤ i ≤ j ≤ k ≤ n.
(a) Prove that the expected number of proper descendants of any node in a treap is
exactly equal to the expected depth of that node.
(b) The left spine of a binary tree is a path starting at the root and following only left-child
pointers. What is the expected number of nodes in the left spine of T ?
(c) What is the expected number of leaves in T ? [Hint: What is the probability that
node k is a leaf?]
(d) What is the expected number of nodes in T with two children?
(e) What is the expected number of nodes in T with exactly one child?
★(f) What is the expected number of nodes in T with exactly one grandchild?
(g) Define the priority rank of a node in T to be one more than the number of nodes with
smaller priority. For example, the root of T always has priority rank 1, and one of the
children of the root has priority rank 2. What is the expected priority rank of node i?
(h) What is the expected priority rank of the left child of the root (given that such a node
exists)?
★(i) What is the expected priority rank of the leftmost grandchild of the root (given that
such a node exists)?
★(j) What is the expected priority rank of a node with depth d?
(k) What is the exact probability that node j is a common ancestor of node i and node k?
(l) What is the exact expected length of the unique path in T from node i to node k?
(m) What is the expected (key) rank of the leftmost leaf in T ?
(n) What is the expected (key) rank of the leftmost node in T with two children (given
that such a node exists)?
(o) What is the probability that T has no nodes with two children?
3. Let X be a set of n real numbers. For any two numbers a ≤ z, let [a, z] denote the closed
interval {x ∈ ℝ | a ≤ x ≤ z}. An interval emptiness query over the set X asks, given two
real numbers a ≤ z, whether the subset X ∩ [a, z] is empty.
(a) Suppose X is stored (as the search keys) in a randomized treap. Describe an algorithm
to answer an emptiness query for any interval [a, z] in O(1 + log(n/w)) time, where
w = 1 + |X ∩ [a, z]|. In particular, if X ∩ [a, z] = ∅, your query algorithm should run
in O(log n) time, and if X ⊆ [a, z], your query algorithm should run in O(1) time.
(b) Consider a weighted version of randomized treaps where each search key k comes
with a positive integer weight w(k), and the priority of k is defined as the minimum
of w(k) independent real random numbers.
Prove that the expected depth of the node storing any key k is at most 1 + 2 ln(W/w(k)),
where W = ∑_i w(i) is the sum of the weights of all nodes. [Hint: Use the identity …]
4. Recall that a priority search tree is a binary tree in which every node has both a search key
and a priority, arranged so that the tree is simultaneously a binary search tree for the keys
and a min-heap for the priorities. A heater is a priority search tree in which the priorities
are given by the user, and the search keys are distributed uniformly and independently at
random in the real interval [0, 1]. Intuitively, a heater is a sort of anti-treap.⁹
The following problems consider an n-node heater T whose priorities are the integers
from 1 to n. We identify nodes in T by their priorities; thus, ‘node 5’ means the node in T
with priority 5. For example, the min-heap property implies that node 1 is the root of T .
Finally, let i and j be integers with 1 ≤ i < j ≤ n.
(a) Prove that in a random permutation of the (i + 1)-element set {1, 2, . . . , i, j}, elements
i and j are adjacent with probability 2/(i + 1).
(b) Prove that node i is an ancestor of node j with probability exactly 2/(i + 1). [Hint:
Use part (a)!]
(c) What is the exact probability that node i is a descendant of node j? [Hint: Don’t use
part (a)!]
(d) What is the exact expected depth of node j?
(e) Describe and analyze an algorithm to insert a new item into a heater. Express the
expected running time of the algorithm in terms of the rank of the newly inserted
item.
(f) Describe an algorithm to delete the minimum-priority item (the root) from an n-node
heater. What is the expected running time of your algorithm?
⁹If you choose not to decide, you still have made a choice.
5. Prove the following basic facts about skip lists, where n is the number of keys.
6. Suppose we are given two skip lists, one storing a set A of m keys, and the other storing
a set B of n keys. Describe and analyze an algorithm to merge these into a single skip
list storing the set A ∪ B in O(n + m) expected time. Do not assume that every key in A is
smaller than every key in B; the two sets could be arbitrarily intermixed. [Hint: Do the
obvious thing.]
7. Any skip list L can be transformed into a binary search tree T (L) as follows. The root of
T (L) is the leftmost node on the highest non-empty level of L; the left and right subtrees
are constructed recursively from the nodes to the left and to the right of the root. Let’s call
the resulting tree T (L) a skip list tree.
(a) Show that any search in T (L) is no more expensive than the corresponding search
in L. (Searching in T (L) could be considerably cheaper—why?)
(b) Describe an algorithm to insert a new search key into a skip list tree in O(log n)
expected time. Inserting key x into the tree T (L) should produce exactly the same
tree as inserting x into the skip list L and then transforming L into a tree. [Hint:
You need to maintain some additional information in the tree nodes.]
(c) Describe an algorithm to delete a search key from a skip list tree in O(log n) expected
time. Again, deleting key x from T (L) should produce exactly the same tree as
deleting x from L and then transforming L into a tree.
8. Consider the following “loose” variant of treaps. Instead of generating priorities uniformly
at random, for each node, we flip an independent fair coin until it comes up heads, and
define the priority of the node to be the number of flips. Thus, for every positive integer k,
a node has priority k with probability 2^{−k}. In addition, we invert the heap property, by
requiring that the priority of any node is not larger than the priority of its parent.
This method creates many nodes with the same priority; in particular, about half the
nodes will have priority 1. Thus, a single set of search keys and priorities may be consistent
with many different loose treaps.
Prove that the expected depth of any node in an n-node “loose treap” is O(log n). To be
pedantic, the expectation is over the random choice of priorities, but for each choice of
priorities, we consider the worst possible loose treap.
[Hint: This is almost exactly the same as the previous question.]
★9. In the usual theoretical presentation of treaps, the priorities are random real numbers
chosen uniformly from the interval [0, 1]. In practice, however, computers have access only
to random bits. This problem asks you to analyze an implementation of treaps that takes
this limitation into account.
Suppose the priority of each node v is a real number between 0 and 1, expressed in binary
as an infinite sequence of random bits π_v[1], π_v[2], π_v[3], .... However, only a finite
number ℓ_v of these bits are actually known at any given time. When a node v is first created,
none of the priority bits are known: ℓ_v = 0. We generate (or “reveal”) new random bits only
when they are necessary to compare priorities. The following algorithm compares the
priorities of any two nodes in O(1) expected time:
LargerPriority(v, w):
  for i ← 1 to ∞
    if i > ℓ_v
      ℓ_v ← i; π_v[i] ← RandomBit
    if i > ℓ_w
      ℓ_w ← i; π_w[i] ← RandomBit
    if π_v[i] > π_w[i]
      return v
    else if π_v[i] < π_w[i]
      return w
Suppose we insert n items one at a time into an initially empty treap. Let L = ∑_v ℓ_v
denote the total number of random bits generated by calls to LargerPriority during these
insertions.
10. A meldable priority queue stores a set of keys from some totally-ordered universe (such
as the integers) and supports the following operations:
A simple way to implement such a data structure is to use a heap-ordered binary tree,
where each node stores a key, along with pointers to its parent and two children. Meld
can be implemented using the following randomized algorithm:
Meld(Q₁, Q₂):
  if Q₁ is empty return Q₂
  if Q₂ is empty return Q₁
  if key(Q₁) > key(Q₂)
    swap Q₁ ↔ Q₂
  with probability 1/2
    left(Q₁) ← Meld(left(Q₁), Q₂)
  else
    right(Q₁) ← Meld(right(Q₁), Q₂)
  return Q₁
(a) Prove that for any heap-ordered binary trees Q₁ and Q₂ (not just those constructed by
the operations listed above), the expected running time of Meld(Q₁, Q₂) is O(log n),
where n = |Q₁| + |Q₂|. [Hint: What is the expected length of a random root-to-leaf
path in an n-node binary tree, where each left/right choice is made with equal
probability?]
(b) Prove that Meld(Q₁, Q₂) runs in O(log n) time with high probability.
(c) Show that each of the other meldable priority queue operations can be implemented
with at most one call to Meld and O(1) additional time. (This implies that every
operation takes O(log n) time with high probability.)
★11. Our probabilistic analysis of treaps and skip lists assumes that the sequence of operations
used to query and update the data structure is independent of the random choices used
to build the data structure. However, if a malicious adversary has additional information
about the data structure, he can force significantly worse performance.